Selected

Revisiting Data-Free Knowledge Distillation with Poisoned Teachers

We uncover the security risk of data-free distillation from a poisoned teacher and propose the first countermeasure.

MECTA: Memory-Economic Continual Test-Time Model Adaptation

Continual Test-time Adaptation (CTA) is a promising approach to securing accuracy gains in continually changing environments. State-of-the-art adaptations improve out-of-distribution model accuracy via computation-efficient online test-time gradient …
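
As background for how online test-time gradient updates typically work, here is a minimal PyTorch sketch of the generic recipe (entropy minimization over a stream of test batches, updating only BatchNorm affine parameters). This illustrates the general setting, not the MECTA algorithm itself; the names `configure_for_tta` and `adapt_on_batch` are illustrative.

```python
# A minimal sketch of generic online test-time adaptation:
# minimize prediction entropy on each unlabeled test batch while
# updating only BatchNorm affine parameters to keep compute low.
# This shows the general recipe, not MECTA specifically.
import torch
import torch.nn as nn

def configure_for_tta(model: nn.Module):
    """Freeze all weights except BatchNorm affine parameters."""
    model.train()  # use batch statistics from the test stream
    params = []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d) and m.affine:
            m.requires_grad_(True)
            params.extend([m.weight, m.bias])
        else:
            for p in m.parameters(recurse=False):
                p.requires_grad_(False)
    return params

def adapt_on_batch(model, x, optimizer):
    """One online step: minimize prediction entropy on test batch x."""
    logits = model(x)
    probs = logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return logits.detach()

# Usage: optimizer = torch.optim.SGD(configure_for_tta(model), lr=1e-3),
# then call adapt_on_batch(model, batch, optimizer) on each test batch.
```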

Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning

Federated learning (FL) has emerged as a popular distributed learning scheme that learns a model from a set of participating users without requiring raw data to be shared. One major challenge of FL comes from heterogeneity in users, which may have …

Holistic Trustworthy ML

Instead of isolated properties, we target holistic trustworthiness, covering all properties in one solution.

Outsourcing Training without Uploading Data via Efficient Collaborative Open-Source Sampling

We propose a new privacy-preserving learning framework that outsources training to the cloud without uploading data, providing more data without injecting noise into gradients or samples.

Dynamic Privacy Budget Allocation Improves Data Efficiency of Differentially Private Gradient Descent

Protecting privacy in learning while maintaining model performance has become increasingly critical in many applications involving sensitive data. Private Gradient Descent (PGD) is a commonly used private learning framework, which noises …
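
A standard PGD/DP-SGD step (per-example gradient clipping plus calibrated Gaussian noise, per Abadi et al.) can be sketched as below; the dynamic budget allocation idea can be read as scheduling the noise scale across iterations rather than fixing it, though this sketch keeps it constant. The names `private_step`, `clip_norm`, and `noise_multiplier` are illustrative.

```python
# A minimal sketch of one private gradient descent step (DP-SGD style):
# clip each per-example gradient to L2 norm clip_norm, sum them, add
# Gaussian noise with std noise_multiplier * clip_norm, average, step.
import torch

def private_step(model, loss_fn, xs, ys, optimizer,
                 clip_norm=1.0, noise_multiplier=1.0):
    params = [p for p in model.parameters() if p.requires_grad]
    grad_sum = [torch.zeros_like(p) for p in params]
    for x, y in zip(xs, ys):  # per-example gradients
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (norm + 1e-8)).clamp(max=1.0)
        for s, g in zip(grad_sum, grads):
            s.add_(g * scale)
    for p, s in zip(params, grad_sum):
        noise = torch.randn_like(s) * noise_multiplier * clip_norm
        p.grad = (s + noise) / len(xs)
    optimizer.step()
```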

Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization

Efficient federated learning for heterogeneous clients with different memory sizes.

Federated Adversarial Debiasing for Fair and Transferable Representations

A distributed domain/group debiasing framework for unsupervised domain adaptation or fairness enhancement.

Data-Free Knowledge Distillation for Heterogeneous Federated Learning

Federated Learning (FL) is a decentralized machine-learning paradigm, in which a global server iteratively averages the model parameters of local users without accessing their data. User heterogeneity has imposed significant challenges on FL, which …
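
The server-side averaging described above is the familiar FedAvg rule, sketched below for context; the function name `federated_average` and the plain state-dict interface are illustrative, not the paper's API.

```python
# A minimal sketch of the server-side averaging described above:
# clients send model parameters (never raw data); the server returns
# a weighted average. Names here are illustrative, not the paper's API.
from typing import Dict, List
import torch

@torch.no_grad()
def federated_average(client_states: List[Dict[str, torch.Tensor]],
                      client_sizes: List[int]) -> Dict[str, torch.Tensor]:
    total = float(sum(client_sizes))
    avg = {k: torch.zeros_like(v, dtype=torch.float32)
           for k, v in client_states[0].items()}
    for state, n in zip(client_states, client_sizes):
        for k, v in state.items():
            avg[k] += (n / total) * v.float()
    return avg

# Each round: the server broadcasts the averaged model, clients train
# locally on their private data, and only parameters are sent back.
```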

Federated Learning

Driven by the need for data privacy and for more data, we strive to combine the knowledge of a large number of users to train powerful deep neural networks without sharing data.