A comprehensive privacy assessment of LLMs.
We identify a new risk to published generative models: fine-tuning on generated samples can exacerbate privacy leakage.
We use local LLMs to engineer privacy-preserving prompts that are transferable to cloud models, as sketched below.
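As a rough illustration of this pipeline, the sketch below routes a user request through a local model before anything reaches the cloud. The callables `local_rewrite` and `cloud_complete` are hypothetical placeholders for illustration, not the interface used in the paper.

```python
# Hypothetical sketch: a local LLM rewrites a sensitive prompt on-device so that
# only the sanitized version is ever sent to the cloud model.

def sanitize_and_query(user_prompt: str, local_rewrite, cloud_complete) -> str:
    """Ask a local model to strip or abstract private details from the prompt,
    then forward only the rewritten prompt to the cloud model."""
    instruction = (
        "Rewrite the following request so it keeps the task intent but removes "
        "names, identifiers, and other private details:\n" + user_prompt
    )
    safe_prompt = local_rewrite(instruction)   # runs entirely on-device
    return cloud_complete(safe_prompt)         # only the sanitized text leaves the device
```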
We propose a new metric that efficiently evaluates the privacy risks of gradient inversion and provides new insights.
We develop a hybrid federated learning framework for training financial-crime predictive models over both horizontally and vertically partitioned federated data.
Instead of isolated properties, we target holistic trustworthiness, covering all properties in one solution.
We propose a new privacy-preserving learning framework that outsources training to the cloud without uploading data, providing access to more data without injecting noise into gradients or samples.
Protecting privacy in learning while maintaining model performance has become increasingly critical in applications that involve sensitive data. Private Gradient Descent (PGD) is a commonly used private learning framework, which noises …
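For context, here is a minimal sketch of the gradient-noising step typical of PGD/DP-SGD-style training, assuming the standard recipe of per-example clipping followed by calibrated Gaussian noise; the function and parameter names are illustrative only.

```python
import numpy as np

def noisy_gradient(per_example_grads: np.ndarray,
                   clip_norm: float,
                   noise_multiplier: float) -> np.ndarray:
    """One private gradient step: clip each example's gradient to `clip_norm`,
    sum, add Gaussian noise scaled to the clipping bound, and average."""
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / (norms + 1e-12))
    summed = clipped.sum(axis=0)
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)
```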
Protecting privacy in gradient-based learning has become increasingly critical as more sensitive information is used. Many existing solutions seek to protect sensitive gradients by constraining the overall privacy cost within a constant …
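A minimal sketch of such budget accounting under basic sequential composition follows; the per-step cost, the total budget, and the stopping rule are assumptions for illustration, not the accountant used in the paper.

```python
class PrivacyBudget:
    """Track cumulative privacy cost under basic composition: each release
    spends some epsilon, and training stops once the total budget is spent."""

    def __init__(self, total_epsilon: float):
        self.total_epsilon = total_epsilon
        self.spent = 0.0

    def charge(self, step_epsilon: float) -> bool:
        """Record the step's cost if it fits in the remaining budget."""
        if self.spent + step_epsilon > self.total_epsilon:
            return False
        self.spent += step_epsilon
        return True


# Example: stop training once the budget is exhausted.
budget = PrivacyBudget(total_epsilon=8.0)
while budget.charge(step_epsilon=0.05):
    pass  # take one noisy gradient step here
```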