I am a postdoctoral fellow advised by Dr. Zhangyang Wang at the Institute for Foundations of Machine Learning (IFML) and the Wireless Networking and Communications Group (WNCG) at UT Austin. I obtained my Ph.D. in Computer Science and Engineering from Michigan State University, where I was advised by Dr. Jiayu Zhou. I hold a B.S. in Physics and an M.S. in Computer Science from the University of Science and Technology of China.
I was honored as one of the MLCommons Rising Stars in 2024 and as a VLDB 2024 Best Paper finalist.
My long-term research vision is to establish Holistic Trustworthy AI for Healthcare.
My recent research is driven by emergent challenges in AI for Dementia Healthcare and centers on Privacy-Centric Trustworthy Machine Learning toward Responsible AI, where I pursue fairness, robustness, security, and inclusiveness under privacy constraints.
Most of my work (including generative AI) follows these principles to trade off efficiency, utility, and privacy: edge-edge collaboration (federated learning, FL) and edge-cloud (EC) collaboration (pre-training, fine-tuning, and transfer learning).
- Dementia [Healthcare] ✨ Generative AI:
- [Privacy] ✨ Collaborative ML ✨ Large Vision/Language Models:
- [Trustworthy ML] ✨ Privacy:
Check my curriculum vitae and feel free to drop me an email if you are interested in collaboration.
News
- August, 2024 Our privacy benchmark paper (LLM-PBE) was selected as a Best Paper finalist at VLDB 2024.
- July, 2024 We are excited to organize the GenAI for Health: Potential, Trust and Policy Compliance workshop (GenAI4Health 2024) at NeurIPS 2024.
- June, 2024 π New benchmark on LLM privacy is accepted to VLDB!
- June, 2024 New paper on safeguarding LLM agents is online!
- June, 2024 Thrilled to co-organize The LLM and Agent Safety Competition 2024 at NeurIPS 2024!
- June, 2024 Grateful to receive API grants from OpenAI's Researcher Access Program!
- May, 2024 Thrilled to co-organize The NeurIPS 2024 LLM Privacy Challenge! Join us for the competition!
- May, 2024 I am honored to be selected as one of the 2024 ML and Systems Rising Stars by MLCommons for my work in health and trustworthy ML!
- May, 2024 Two benchmark papers are accepted at ICML 2024: how to obtain trustworthy compressed LLMs (models at huggingface) and how to optimize LLMs with less memory.
- April, 2024 I am honored to give an invited talk on conversational AI for dementia health at the UT School of Information.
- March, 2024 I am honored to give a talk on the new privacy risks of GenAI at the UT Good Systems Symposium 2024.
- March, 2024 We are excited to organize the International Joint Workshop on Federated Learning for Data Mining and Graph Analytics (FedKDD 2024).
- March, 2024 Our benchmark work, Decoding Compressed Trust, has been accepted to SET LLM @ ICLR. A curated set of compressed models is available at huggingface.
- Feb, 2024 New benchmark preprint on zeroth-order optimization for LLMs.
- Jan, 2024 Three papers are accepted: the first local privacy-preserving prompt tuning (Spotlight at ICLR), robust watermarking from one image (poster at ICLR), and the generalization of unsupervised pretraining (AISTATS)!
- Dec, 2023 Our paper on amplifying privacy risks via fine-tuning (Shake-To-Leak) is accepted to SaTML.
- Dec, 2023 Our new preprint on private prompt engineering for closed-source LLMs is online.
- Dec, 2023 I will be in New Orleans to present our recent work on understanding gradient privacy (NeurIPS'23) and tracking IP leakage in FL (NeurIPS-RegML).
- Nov, 2023 Grateful to be selected as a Top Reviewer at NeurIPS 2023.
- Nov, 2023 We are releasing a set of compressed LLMs at compressed-llm for public benchmarks.
- Nov, 2023 Our work on tracking IP leakage in FL is accepted to the NeurIPS'23 Workshop on Regulated ML (NeurIPS-RegML).
- Sep, 2023 Our work on understanding gradient privacy via inversion influence functions is accepted to NeurIPS'23.
- Sep, 2023 Our new work on watermarking models using one image is online.
- August, 2023 We are organizing a KDD workshop on federated learning for distributed data mining (FL4Data-Mining) on August 7th in Long Beach.
- July, 2023 I am traveling to ICML 2023 in Hawaii. Come and talk to me about data-free backdoor!
- July, 2023 Honored to receive a Research Enhancement Award for organizing the FL4Data-Mining workshop! Thanks to the MSU Graduate School!
- July, 2023 I successfully defended my thesis. Many thanks to my collaborators, advisor, and committee.
- May, 2023 My new website is online, built with the released junyuan-academic-theme, which includes many cool new features.
- April, 2023 One paper on data-free backdoor got accepted to ICML'23.
- March, 2023 Our ILLIDAN Lab team won 3rd place in the U.S. PETs Prize Challenge. Media coverage by The White House, MSU EGR News, and the MSU Office of Research and Innovation.
- Jan, 2023 Two papers got accepted to ICLR'23: OoD detection by FL (spotlight!), memory-efficient CTA.
- Nov, 2022 Two papers got accepted to NeurIPS'22: outsourcing training, backdoor defense.
- Sep, 2022 Our work on federated robustness sharing has been accepted to AAAI'23 (oral).
- May, 2022 Our work on connection-resilient FL got accepted to ICML'22.