LLM-PBE: Assessing Data Privacy in Large Language Models

A comprehensive privacy assessment of LLMs.

Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression

A comprehensive trustworthiness assessment of compressed LLMs.

Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark

A benchmark of zeroth-order optimization methods for memory-efficient LLM fine-tuning.

On the Generalization Ability of Unsupervised Pretraining

Recent advances in unsupervised learning have shown that unsupervised pre-training, followed by fine-tuning, can improve model generalization. However, a rigorous understanding of how the representation function learned on an unlabeled dataset …

Safe and Robust Watermark Injection with a Single OoD Image

A new method for safely and robustly injecting a watermark after training, without access to the training data.

Shake to Leak: Fine-tuning Diffusion Models Can Amplify the Generative Privacy Risk

We expose a new risk to published generative models: fine-tuning on generated samples can exacerbate privacy leakage.

DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer

We use local LLMs to engineer privacy-preserving prompts that are transferable to cloud models.

Who Leaked the Model? Tracking IP Infringers in Accountable Federated Learning

Tracking intellectual-property leakage in federated learning.

Understanding Deep Gradient Leakage via Inversion Influence Functions

We propose a new metric that efficiently evaluates the privacy risk of gradient inversion and provides new insights.

Revisiting Data-Free Knowledge Distillation with Poisoned Teachers

We uncover the security risk of data-free distillation from a poisoned teacher and propose the first countermeasure.