Junyuan "Jason" Hong

Postdoctoral Fellow

University of Texas at Austin

I am a joint postdoctoral fellow advised by Dr. Zhangyang Wang in the Institute for Foundations of Machine Learning (IFML) and the Wireless Networking and Communications Group (WNCG), and am also affiliated with the UT AI Health Lab and the Good Systems Grand Challenge. I was recognized as one of the MLSys Rising Stars in 2024 and received a Best Paper Nomination at VLDB 2024. My work has been covered by The White House and the MSU Office of Research and Innovation. Part of my work is funded by the OpenAI Researcher Access Program.

Check my curriculum vitae and feel free to drop me an email if you are interested in collaboration.

News

Interests
  • Healthcare
  • Responsible AI
  • Privacy
  • Federated Learning
Education
  • PhD in CSE, 2023

    Michigan State University (Advisor: Jiayu Zhou)

  • MSc in Computer Science, 2018

    University of Science and Technology of China

  • BSc in Physics, minor in CS, 2015

    University of Science and Technology of China

Research

My research vision is to harmonize, understand, and deploy Responsible AI: optimizing AI systems that balance real-world constraints on computational efficiency, data privacy, and ethical norms, through comprehensive threat analysis and the development of integrative, trustworthy, resource-aware collaborative learning frameworks. Guided by this principle, I aim to lead a research group that combines rigorous theoretical foundations with a commitment to developing algorithmic tools that have meaningful real-world impact, particularly in healthcare applications.

T1: Harmonizing Multifaceted Values in AI Trust.

Trust in AI is complex, reflecting an intricate web of social norms and values. Pursuing only one aspect of trustworthiness while neglecting others may lead to unintended consequences. For instance, overzealous privacy protection can come at the price of transparency, robustness, or fairness. To address these challenges, I have developed innovative collaborative learning approaches that balance key aspects of trustworthy AI, including privacy-preserving learning [FL4DM23, PETs23] with fairness guarantees [KDD21, TMLR23], enhanced robustness [AAAI23, ICLR23a], and provable computation and data efficiency [ICLR22, FAccT22, NeurIPS22a, ICLR24]. These methods are designed to create AI systems that uphold individual privacy while remaining efficient, fair, and accountable.

Privacy + Efficiency via Edge-Cloud Collaboration
[ICLR24 Spotlight] DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer
Privacy + Fairness via Federated Transfer
[KDD21] Federated Adversarial Debiasing for Fair and Transferable Representations

T2: Understanding Multi-faceted Emerging Risks in GenAI Trust.

As AI evolves from traditional machine learning to generative AI (GenAI), new privacy and trust challenges arise, yet they remain opaque due to the complexity of AI models. My research aims to anticipate and address these challenges by developing theoretical frameworks that generalize privacy risk analysis across AI architectures [NeurIPS23], introducing novel threat models for generation-driven transfer learning [ICML23] and pre-trained foundation models [SaTML24], and leveraging insights from integrative benchmarks [VLDB24, ICML24]. This deeper understanding of GenAI risks further informs the creation of collaborative or multi-agent learning paradigms that prioritize privacy [ICLR24] and safety [arXiv24].

Benchmark Trust under Compression
[ICML24] Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression
Benchmark Privacy in LLM Lifecycle
[VLDB24] LLM-PBE: Assessing Data Privacy in Large Language Models
Theoretical Risk Analysis
[NeurIPS23] Understanding Deep Gradient Leakage via Inversion Influence Functions

T3: Deploying AI Aligned with Human Norms in Dementia Healthcare.

To ground my research in real-world impact, I am actively exploring applications in healthcare, a domain where trust, privacy, and fairness are paramount. My projects include clinical-protocol-compliant conversational AI for dementia prevention [ICLRW24] and fair, in-home AI-driven early dementia detection [KDD21, AD20]. These initiatives serve as testbeds for responsible AI principles, particularly in ensuring ethical considerations like patient autonomy, data confidentiality, and equitable access to technology, while demonstrating AI's potential to improve lives.

Protocol-Compliant Dementia Intervention
[ICLRW24] A-CONECT: Designing AI-based Conversational Chatbot for Early Dementia Intervention
In-home Dementia Detection
[AD20] Detecting MCI using real-time, ecologically valid data capture methodology: How to improve scientific rigor in digital biomarker analyses

Publications

NeurIPSW 2024 Demo: An Exploration of LLM-Guided Conversation in Reminiscence Therapy.
PDF
VLDB (Best Paper Finalist) 2024 LLM-PBE: Assessing Data Privacy in Large Language Models.
PDF Code 🌍 Website 🏁 Competition 🏆 Best Paper Nomination Finetune Code
arXiv 2024 GuardAgent: Safeguard LLM Agents by a Guard Agent via Knowledge-Enabled Reasoning.
PDF 🏁 Competition
ICML 2024 Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression.
PDF 🤗 Models 🌍 Website
ICML 2024 Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark.
PDF Code 👨‍🏫 Tutorial
LLM Agents 2024 A-CONECT: Designing AI-based Conversational Chatbot for Early Dementia Intervention.
PDF Website 🤖 Demo
AISTATS 2024 On the Generalization Ability of Unsupervised Pretraining.
PDF
ICLR 2024 Safe and Robust Watermark Injection with a Single OoD Image.
PDF Code
SaTML 2024 Shake to Leak: Fine-tuning Diffusion Models Can Amplify the Generative Privacy Risk.
PDF Code
ICLR (Spotlight) 2024 DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer.
PDF Code
NeurIPS-RegML 2023 Who Leaked the Model? Tracking IP Infringers in Accountable Federated Learning.
PDF
NeurIPS 2023 Understanding Deep Gradient Leakage via Inversion Influence Functions.
PDF Code
KDDW 2023 FedNoisy: A Federated Noisy Label Learning Benchmark.
PDF Code
ICML 2023 Revisiting Data-Free Knowledge Distillation with Poisoned Teachers.
PDF Code Poster
TMLR 2023 How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts.
PDF
ICLR 2023 MECTA: Memory-Economic Continual Test-Time Model Adaptation.
PDF Code Slides
ICLR (Spotlight) 2023 Turning the Curse of Heterogeneity in Federated Learning into a Blessing for Out-of-Distribution Detection.
PDF Code
AAAI (Oral) 2023 Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning.
Preprint Code Poster
Preprint 2022 Precautionary Unfairness in Self-Supervised Contrastive Pre-training.
Preprint
NeurIPS 2022 Outsourcing Training without Uploading Data via Efficient Collaborative Open-Source Sampling.
PDF Poster Slides
NeurIPS 2022 Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork.
PDF Code
ICML 2022 Resilient and Communication Efficient Learning for Heterogeneous Federated Systems.
PDF
ICLR 2022 Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization.
PDF Code Slides Video
KDD 2021 Federated Adversarial Debiasing for Fair and Transferable Representations.
PDF Code Slides
ICML 2021 Data-Free Knowledge Distillation for Heterogeneous Federated Learning.
Preprint Code
AAAI 2021 Learning Model-Based Privacy Protection under Budget Constraints.
PDF Slides Video Supplementary
TNNLS 2019 Short Sequence Classification Through Discriminable Linear Dynamical System.
DOI
ECML 2016 Sequential Data Classification in the Space of Liquid State Machines.
PDF Code

Resume

Experience

Awards

Fundings

I am grateful that our research is supported by multiple funding programs.

Media Coverage

  • Texas ECE Student and Postdoc Named MLCommons Rising Stars, UT Austin ECE News, 2024
  • At Summit for Democracy, the United States and the United Kingdom Announce Winners of Challenge to Drive Innovation in Privacy-enhancing Technologies That Reinforce Democratic Values, The White House, 2023
  • Privacy-enhancing Research Earns International Attention, MSU Engineering News, 2023
  • Privacy-Enhancing Research Earns International Attention, MSU Office Of Research And Innovation, 2023

Talks

  • ‘GenAI-Based Chatbot for Early Dementia Intervention’ @ Rising Star Symposium Series, IEEE TCCN Special Interest Group for AI and Machine Learning in Security, September, 2024: [link]
  • ‘Building Conversational AI for Affordable and Accessible Early Dementia Intervention’ @ AI Health Course, The School of Information, UT Austin, April, 2024: [paper]
  • ‘Shake to Leak: Amplifying the Generative Privacy Risk through Fine-Tuning’ @ Good Systems Symposium: Shaping the Future of Ethical AI, UT Austin, March, 2024: [paper]
  • ‘Foundation Models Meet Data Privacy: Risks and Countermeasures’ @ Trustworthy Machine Learning Course, Virginia Tech, Nov, 2023
  • ‘Economizing Mild-Cognitive-Impairment Research: Developing a Digital Twin Chatbot from Patient Conversations’ @ BABUŠKA FORUM, Nov, 2023: [link]
  • ‘Backdoor Meets Data-Free Learning’ @ Hong Kong Baptist University, Sep, 2023: [slides]
  • ‘MECTA: Memory-Economic Continual Test-Time Model Adaptation’ @ Computer Vision Talks, March, 2023: [slides] [video]
  • ‘Split-Mix Federated Learning for Model Customization’ @ TrustML Young Scientist Seminars, July, 2022: [link] [video]
  • ‘Federated Adversarial Debiasing for Fair and Transferable Representations’, @ CSE Graduate Seminar, Michigan State University, October, 2021: [slides]
  • ‘Dynamic Policies on Differential Private Learning’ @ VITA Seminars, UT Austin, Sep, 2020: [slides]

Teaching

Mentoring

  • 2023 - Now: Zhangheng Li, Ph.D. student, University of Texas at Austin
    SaTML 2024 (first author), ICML 2024 (co-first author), ICLR 2024

  • 2023 - Now: Runjin Chen, Ph.D. student, University of Texas at Austin
    ICLR 2025 under review (first author)

  • 2023 - Now: Gabriel Jacob Perin, Undergraduate student, University of São Paulo, Brazil
    EMNLP 2024 (first author), ICLR 2025 under review (co-first author)

  • 2023 - 2024: Jeffrey Tan, Undergraduate student, University of California, Berkeley
    VLDB 2024 (Best Paper Nomination)

  • 2020 - 2023: Shuyang Yu, Ph.D. student, Michigan State University
    ICLR 2024 (first author), ICLR 2023 (spotlight; first author), NeurIPSW 2023 (first author), ICML 2023, KDD 2021

  • 2022 - 2023: Haobo Zhang, Ph.D. student, Michigan State University
    NeurIPS 2023 (first author), KDDW 2023 (first author)
    Team member, 3rd place winner at US-UK PETs (Privacy-enhancing technologies) Prize Challenge, 2023.

  • 2022 - 2023: Siqi Liang, Ph.D. student, Michigan State University
    KDDW 2023 (first author)

Services