Junyuan Hong

Postdoctoral Fellow

IFML & WNCG at UT Austin

I am a postdoctoral fellow advised by Dr. Zhangyang Wang at the Institute for Foundations of Machine Learning (IFML) and the Wireless Networking and Communications Group (WNCG) at UT Austin. I obtained my Ph.D. in Computer Science and Engineering from Michigan State University, where I was advised by Dr. Jiayu Zhou. I hold a B.S. in Physics and an M.S. in Computer Science from the University of Science and Technology of China. I was honored as one of the MLCommons Rising Stars in 2024.

My long-term research vision is to establish Holistic Trustworthy AI for Healthcare. My recent research is driven by the emergent challenges in AI for Dementia Healthcare and centers on Privacy-Centric Trustworthy Machine Learning toward Responsible AI, where I pursue fairness, robustness, security, and inclusiveness under privacy constraints. Most of my work (including work on generative AI) follows these principles to trade off efficiency, utility, and privacy through edge-edge collaboration (federated learning) and edge-cloud collaboration (pre-training, fine-tuning, and transfer learning).

Check my curriculum vitae and feel free to drop me an email if you are interested in collaboration.

News

Interests
  • Healthcare
  • Privacy
  • Trustworthy Machine Learning
  • Federated Learning
  • Large Language/Vision Models
Education
  • PhD in CSE, 2023

    Michigan State University

  • MSc in Computer Science, 2018

    University of Science and Technology of China

  • BSc in Physics, minor in CS, 2015

    University of Science and Technology of China

Projects

DiRP Trustworthy LLM

Directed Reading Program (DiRP) on trustworthy large language models.

Holistic Trustworthy ML

Instead of isolated properties, we target holistic trustworthiness, covering all properties in one solution.

Federated Learning

Driven by the need for both data privacy and more data, we strive to combine knowledge from many users to train powerful deep neural networks without sharing data.

Privacy in Collaborative ML

Motivated by concerns over data privacy, we develop algorithms for learning accurate models from data privately.

AI for Dementia Healthcare

We aim to detect and intervene in dementia early, leveraging the power of (generative) AI.

Subspace Learning

Supervised learning on subspace data, which can model real-world data such as skeleton motion.

Publications

VLDB 2024 LLM-PBE: Assessing Data Privacy in Large Language Models.
Competition
ArXiv 2024 GuardAgent: Safeguard LLM Agents by a Guard Agent via Knowledge-Enabled Reasoning.
PDF
ICML 2024 Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression.
PDF Models Website
ICML 2024 Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark.
PDF Code πŸ‘¨β€πŸ«Tutorial
LLM Agents 2024 A-CONECT: Designing AI-based Conversational Chatbot for Early Dementia Intervention.
PDF Website πŸ€–Demo
AISTATS 2024 On the Generalization Ability of Unsupervised Pretraining.
PDF
ICLR 2024 Safe and Robust Watermark Injection with a Single OoD Image.
PDF Code
SaTML 2024 Shake to Leak: Fine-tuning Diffusion Models Can Amplify the Generative Privacy Risk.
PDF Code
ICLR (Spotlight) 2024 DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer.
PDF Code
NeurIPS-RegML 2023 Who Leaked the Model? Tracking IP Infringers in Accountable Federated Learning.
PDF
NeurIPS 2023 Understanding Deep Gradient Leakage via Inversion Influence Functions.
PDF Code
FL4DM 2023 FedNoisy: A Federated Noisy Label Learning Benchmark.
PDF Code
ICML 2023 Revisiting Data-Free Knowledge Distillation with Poisoned Teachers.
PDF Code Poster
ICLR 2023 MECTA: Memory-Economic Continual Test-Time Model Adaptation.
PDF Code Slides
ICLR (Spotlight) 2023 Turning the Curse of Heterogeneity in Federated Learning into a Blessing for Out-of-Distribution Detection.
PDF Code
Preprint 2022 Precautionary Unfairness in Self-Supervised Contrastive Pre-training.
Preprint
NeurIPS 2022 Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork.
PDF Code
ICML 2022 Resilient and Communication Efficient Learning for Heterogeneous Federated Systems.
PDF
KDD 2021 Federated Adversarial Debiasing for Fair and Transferable Representations.
PDF Code Slides

Professional Activities

Experience

Awards

Fundings

I am grateful that our research is supported by multiple programs.

Media Coverage

  • At Summit for Democracy, the United States and the United Kingdom Announce Winners of Challenge to Drive Innovation in Privacy-enhancing Technologies That Reinforce Democratic Values, The White House, 2023
  • Privacy-enhancing Research Earns International Attention, MSU Engineering News, 2023
  • Privacy-Enhancing Research Earns International Attention, MSU Office Of Research And Innovation, 2023

Talks

  • ‘Building Conversational AI for Affordable and Accessible Early Dementia Intervention’ @ AI Health Course, The School of Information, UT Austin, April, 2024: [paper]
  • ‘Shake to Leak: Amplifying the Generative Privacy Risk through Fine-Tuning’ @ Good Systems Symposium: Shaping the Future of Ethical AI, UT Austin, March, 2024: [paper]
  • ‘Foundation Models Meet Data Privacy: Risks and Countermeasures’ @ Trustworthy Machine Learning Course, Virginia Tech, Nov, 2023
  • ‘Economizing Mild-Cognitive-Impairment Research: Developing a Digital Twin Chatbot from Patient Conversations’ @ BABUŠKA FORUM, Nov, 2023: [link]
  • ‘Backdoor Meets Data-Free Learning’ @ Hong Kong Baptist University, Sep, 2023: [slides]
  • ‘MECTA: Memory-Economic Continual Test-Time Model Adaptation’ @ Computer Vision Talks, March, 2023: [slides] [video]
  • ‘Split-Mix Federated Learning for Model Customization’ @ TrustML Young Scientist Seminars, July, 2022: [link] [video]
  • ‘Federated Adversarial Debiasing for Fair and Transferable Representations’, @ CSE Graduate Seminar, Michigan State University, October, 2021: [slides]
  • ‘Dynamic Policies on Differential Private Learning’ @ VITA Seminars, UT Austin, Sep, 2020: [slides]

Teaching

Mentoring

Co-mentored with my advisor:

Services