Selected

Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression

A comprehensive trustworthiness assessment of compressed LLMs.

A-CONECT: Designing AI-based Conversational Chatbot for Early Dementia Intervention

We develop a chatbot for early dementia intervention and leverage LLMs to build digital twins for evaluating chatbots.

Safe and Robust Watermark Injection with a Single OoD Image

A new method for safely and robustly injecting a watermark into a trained model without access to the training data.

Shake to Leak: Fine-tuning Diffusion Models Can Amplify the Generative Privacy Risk

We identify a new risk for published generative models: fine-tuning on generated samples can amplify privacy leakage.

DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer

We use local LLMs to engineer privacy-preserving prompts that are transferable to cloud models.

Understanding Deep Gradient Leakage via Inversion Influence Functions

We propose a new metric that efficiently evaluates the privacy risks of gradient inversion and provides new insights.

A Privacy-Preserving Hybrid Federated Learning Framework for Financial Crime Detection

We develop a hybrid federated learning framework for training financial-crime prediction models on both horizontally and vertically partitioned data.

Revisiting Data-Free Knowledge Distillation with Poisoned Teachers

We uncover the security risk of data-free distillation from a poisoned teacher and propose the first countermeasure.

Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning

Federated learning (FL) emerges as a popular distributed learning scheme that learns a model from a set of participating users without requiring raw data to be shared. One major challenge of FL comes from heterogeneity in users, which may have …

Holistic Trustworthy ML

Instead of isolated properties, we target holistic trustworthiness, covering all properties in one solution.