Selected

Outsourcing Training without Uploading Data via Efficient Collaborative Open-Source Sampling

We propose a new privacy-preserving learning framework that outsources training to the cloud without uploading data, providing access to more data without injecting noise into gradients or samples.

Dynamic Privacy Budget Allocation Improves Data Efficiency of Differentially Private Gradient Descent

Protecting privacy in learning while maintaining model performance has become increasingly critical in many applications that involve sensitive data. Private Gradient Descent (PGD) is a commonly used private learning framework, which noises …
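For context on what "noising" the gradient means in PGD-style training, here is a minimal sketch of one noisy gradient-descent step with per-example clipping and Gaussian noise; the names `clip_norm` and `sigma` are illustrative assumptions, not the paper's notation, and this does not implement the dynamic budget allocation proposed in the work.

```python
import numpy as np

def noisy_gradient_step(w, per_example_grads, lr=0.1, clip_norm=1.0, sigma=1.0):
    """One private gradient-descent step: clip each per-example gradient,
    average them, and add Gaussian noise scaled to the clipping norm."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    noise = np.random.normal(0.0, sigma * clip_norm / len(per_example_grads),
                             size=avg.shape)
    return w - lr * (avg + noise)

# Toy usage: three per-example gradients for a 2-D parameter vector.
w = np.zeros(2)
grads = [np.array([0.5, -1.0]), np.array([2.0, 0.3]), np.array([-0.7, 0.9])]
w = noisy_gradient_step(w, grads)
```

The noise scale ties privacy to utility: larger `sigma` gives stronger privacy but degrades the gradient signal, which is the trade-off that allocating the privacy budget dynamically aims to spend more efficiently.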

Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization

Efficient federated learning for heterogeneous clients with different memory sizes.

Federated Adversarial Debiasing for Fair and Transferable Representations

A distributed domain/group debiasing framework for unsupervised domain adaptation and fairness enhancement.

Detecting MCI using real-time, ecologically valid data capture methodology: How to improve scientific rigor in digital biomarker analyses

Early identification and accurate assessment of Mild Cognitive Impairment (MCI) are critical for clinical-trial enrichment as well as early intervention in neurodegenerative disease. Continuous home-based measurements of functions using simple …

Privacy in Collaborative ML

Motivated by concerns about data privacy, we aim to develop algorithms for learning accurate models privately from data.

AI for Dementia Healthcare

We aim to detect dementia early and intervene, leveraging the power of (Generative) AI.