Junyuan Hong

Postdoctoral Fellow

IFML & WNCG at UT Austin

I am a postdoctoral fellow hosted by Dr. Zhangyang Wang at the Institute for Foundations of Machine Learning (IFML) and the Wireless Networking and Communications Group (WNCG) at UT Austin. I obtained my Ph.D. in Computer Science and Engineering from Michigan State University, advised by Dr. Jiayu Zhou. Previously, I earned my B.S. in Physics and M.S. in Computer Science at the University of Science and Technology of China.

My long-term research vision is to establish Holistic Trustworthy AI for Healthcare. My recent research is driven by emergent challenges in AI for dementia healthcare and centers on Privacy-Centric Trustworthy Machine Learning toward Responsible AI, where I pursue fairness, robustness, security, and inclusiveness under privacy constraints. Most of my work considers privacy-preserving scenarios: edge-edge collaboration (federated learning, FL) and edge-cloud collaboration (pre-training and fine-tuning, transfer learning).

Check my curriculum vitae and feel free to drop me an email if you are interested in collaboration.


  • Dec, 2023 ๐Ÿพ Our paper on amplifying privacy risks via fine-tuning (Shake-To-Leak) is accepted to SaTML.
  • Nov, 2023 ๐Ÿ… Grateful to be selected as Top Reviewer at NeurIPS 2023.
  • Dec, 2023 Our new preprint on private prompt engineering for close-source LLMs is online.
  • Dec, 2023 โœˆ๏ธ ๐ŸŽท I will be at New Orleans for presenting our recent work on understanding gradient privacy (NeurIPS'23 ) and tracking IP leakage in FL (NeurIPS-RegML).
  • Nov, 2023 ๐Ÿค– We are releasing a set of compressed LLMs at compressed-llm for public benchmarks.
  • Nov, 2023 Our work on tracking IP leakage in FL is accepted to NeurIPS'23 Workshop on Regulated ML (NeurIPS-RegML).
  • Sep, 2023 Our work on understanding gradient privacy via inversion influence functions is accepted to NeurIPS'23.
  • Sep, 2023 Our new work on watermarking models using one image is online.
  • August, 2023 ๐Ÿ‘ฅ We are organizing a KDD workshop on federated learning for distributed data mining (FL4Data-Mining) on August 7th at Long Beach๐ŸŒด.
  • July, 2023 I am going to travel for ICML 2023 at Hawaii ๐ŸŒบ. Come and talk to me about data-free backdoor!
  • July, 2023 ๐Ÿ… Honored to receive Research Enhancement Award for organizing FL4DataMining workshop! Thank you to MSU Graduate School!
  • July, 2023 ๐ŸŽ“ I successfully defended my thesis. Many thanks to my collaborators, advisor and committees.
  • May, 2023 My new website is online with released junyuan-academic-theme including many cool new features.
  • April, 2023 One paper on data-free backdoor got accepted to ICML'23.
  • March, 2023 ๐Ÿ† Our ILLIDAN Lab team just won the 3rd place in the U.S. PETs prize challenge. Media cover by The White House, MSU EGR news and MSU Office of Research and Innovation.
  • Jan, 2022 Two papers got accepted to ICLR'23: OoD detection by FL (splotlight!), memory-efficient CTA.
  • Sep, 2022 Our work on federated robustness sharing has been accepted to AAAI'23 (oral).
  • Nov, 2022 Two papers got accepted to NeurIPS'22: outsourcing training, backdoor defense.
  • May, 2022 Our work on connection-resilient FL got accepted to ICML'22.

Interests
  • Healthcare
  • Privacy
  • Trustworthy Machine Learning
  • Federated Learning
  • Large Language/Vision Models

Education
  • PhD in CSE, 2023

    Michigan State University

  • MSc in Computer Science, 2018

    University of Science and Technology of China

  • BSc in Physics, minor in CS., 2015

    University of Science and Technology of China


DiRP Trustworthy LLM

Directed Reading Program (DiRP) on trustworthy large language models.

Holistic Trustworthy ML

Instead of isolated properties, we target holistic trustworthiness that covers all properties in one solution.

Federated Learning

Driven by the need for both data privacy and more data, we strive to combine knowledge from many users to train powerful deep neural networks without sharing data.

Privacy in Collaborative ML

Motivated by concerns about data privacy, we aim to develop algorithms that learn accurate models from data privately.

AI for Dementia Healthcare

We aim to detect and intervene in dementia early, leveraging the power of (generative) AI.

Subspace Learning

Supervised learning on subspace data, which can model real-world data such as skeleton motion.


ICML 2024 Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression.
PDF Models Website
ICML 2024 Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark.
PDF Code ๐Ÿ‘จโ€๐ŸซTutorial
LLM Agents 2024 A-CONECT: Designing AI-based Conversational Chatbot for Early Dementia Intervention.
PDF Website 🤖Demo
AISTATS 2024 On the Generalization Ability of Unsupervised Pretraining.
ICLR 2024 Safe and Robust Watermark Injection with a Single OoD Image.
PDF Code
SaTML 2024 Shake to Leak: Fine-tuning Diffusion Models Can Amplify the Generative Privacy Risk.
PDF Code
ICLR (Spotlight) 2024 DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer.
PDF Code
NeurIPS-RegML 2023 Who Leaked the Model? Tracking IP Infringers in Accountable Federated Learning.
NeurIPS 2023 Understanding Deep Gradient Leakage via Inversion Influence Functions.
PDF Code
FL4DM 2023 FedNoisy: A Federated Noisy Label Learning Benchmark.
PDF Code
ICML 2023 Revisiting Data-Free Knowledge Distillation with Poisoned Teachers.
PDF Code Poster
ICLR 2023 MECTA: Memory-Economic Continual Test-Time Model Adaptation.
PDF Code Slides
ICLR (Spotlight) 2023 Turning the Curse of Heterogeneity in Federated Learning into a Blessing for Out-of-Distribution Detection.
PDF Code
Preprint 2022 Precautionary Unfairness in Self-Supervised Contrastive Pre-training.
NeurIPS 2022 Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork.
PDF Code
ICML 2022 Resilient and Communication Efficient Learning for Heterogeneous Federated Systems.
KDD 2021 Federated Adversarial Debiasing for Fair and Transferable Representations.
PDF Code Slides

Professional Activities




I am grateful that our research is supported by multiple programs.

Media Coverage

  • At Summit for Democracy, the United States and the United Kingdom Announce Winners of Challenge to Drive Innovation in Privacy-enhancing Technologies That Reinforce Democratic Values, The White House, 2023
  • Privacy-enhancing Research Earns International Attention, MSU Engineering News, 2023
  • Privacy-Enhancing Research Earns International Attention, MSU Office Of Research And Innovation, 2023


  • ‘Building Conversational AI for Affordable and Accessible Early Dementia Intervention’ @ AI Health Course, The School of Information, UT Austin, April, 2024: [paper]
  • ‘Shake to Leak: Amplifying the Generative Privacy Risk through Fine-Tuning’ @ Good Systems Symposium: Shaping the Future of Ethical AI, UT Austin, March, 2024: [paper]
  • ‘Foundation Models Meet Data Privacy: Risks and Countermeasures’ @ Trustworthy Machine Learning Course, Virginia Tech, Nov, 2023
  • ‘Economizing Mild-Cognitive-Impairment Research: Developing a Digital Twin Chatbot from Patient Conversations’ @ BABUŠKA FORUM, Nov, 2023: [link]
  • ‘Backdoor Meets Data-Free Learning’ @ Hong Kong Baptist University, Sep, 2023: [slides]
  • ‘MECTA: Memory-Economic Continual Test-Time Model Adaptation’ @ Computer Vision Talks, March, 2023: [slides] [video]
  • ‘Split-Mix Federated Learning for Model Customization’ @ TrustML Young Scientist Seminars, July, 2022: [link] [video]
  • ‘Federated Adversarial Debiasing for Fair and Transferable Representations’ @ CSE Graduate Seminar, Michigan State University, October, 2021: [slides]
  • ‘Dynamic Policies on Differential Private Learning’ @ VITA Seminars, UT Austin, Sep, 2020: [slides]



Co-mentored with my advisor: