I am a postdoctoral fellow hosted by Dr. Zhangyang Wang at the Institute for Foundations of Machine Learning (IFML) and the Wireless Networking and Communications Group (WNCG) at UT Austin.
I obtained my Ph.D. in Computer Science and Engineering from Michigan State University, advised by Dr. Jiayu Zhou.
Previously, I earned my B.S. in Physics and M.S. in Computer Science at the University of Science and Technology of China.
My long-term research vision is to establish Holistic Trustworthy AI for Healthcare.
My recent research is driven by emerging challenges in AI for Dementia Healthcare and centers on Privacy-Centric Trustworthy Machine Learning toward Responsible AI, where I pursue fairness, robustness, security, and inclusiveness under privacy constraints.
Most of my work considers privacy-preserving scenarios: edge-edge (federated learning, FL) and edge-cloud (pre-training/fine-tuning, transfer learning, EC) collaboration.
- Dementia [Healthcare] ✨ Generative AI:
- [Privacy] ✨ Collaborative ML ✨ Large Vision/Language Models:
- [Trustworthy ML] ✨ Privacy:
Check my curriculum vitae and feel free to drop me an email if you are interested in collaboration.
News
- April, 2024 I am honored to give an invited talk on conversational AI for dementia health at the UT School of Information.
- March, 2024 I am honored to give a talk on the new privacy risks of GenAI at the UT Good Systems Symposium 2024.
- March, 2024 We are excited to organize the International Joint Workshop on Federated Learning for Data Mining and Graph Analytics (FedKDD 2024).
- March, 2024 Our benchmark work, Decoding Compressed Trust, has been accepted to SeT LLM @ ICLR. A curated set of compressed models is available on Hugging Face.
- Feb, 2024 Our new benchmark preprint on zeroth-order optimization for LLMs is online.
- Jan, 2024 Three papers accepted: the first local privacy-preserving prompt tuning (Spotlight at ICLR), robust watermarking from one image (poster at ICLR), and the generalization of unsupervised pre-training (AISTATS)!
- Dec, 2023 Our paper on amplifying privacy risks via fine-tuning (Shake-To-Leak) is accepted to SaTML.
- Nov, 2023 Grateful to be selected as a Top Reviewer at NeurIPS 2023.
More
- Dec, 2023 Our new preprint on private prompt engineering for closed-source LLMs is online.
- Dec, 2023 I will be in New Orleans presenting our recent work on understanding gradient privacy (NeurIPS'23) and tracking IP leakage in FL (NeurIPS-RegML).
- Nov, 2023 We are releasing a set of compressed LLMs at compressed-llm for public benchmarks.
- Nov, 2023 Our work on tracking IP leakage in FL is accepted to NeurIPS'23 Workshop on Regulated ML (NeurIPS-RegML).
- Sep, 2023 Our work on understanding gradient privacy via inversion influence functions is accepted to NeurIPS'23.
- Sep, 2023 Our new work on watermarking models using one image is online.
- August, 2023 We are organizing a KDD workshop on federated learning for distributed data mining (FL4Data-Mining) on August 7th at Long Beach.
- July, 2023 I will travel to ICML 2023 in Hawaii. Come and talk to me about data-free backdoors!
- July, 2023 Honored to receive a Research Enhancement Award for organizing the FL4Data-Mining workshop! Thank you to the MSU Graduate School!
- July, 2023 I successfully defended my thesis. Many thanks to my collaborators, advisor, and committee.
- May, 2023 My new website is online, built with the released junyuan-academic-theme, which includes many cool new features.
- April, 2023 One paper on data-free backdoors got accepted to ICML'23.
- March, 2023 Our ILLIDAN Lab team won 3rd place in the U.S. PETs Prize Challenge. Media coverage by The White House, MSU EGR News, and the MSU Office of Research and Innovation.
- Jan, 2023 Two papers got accepted to ICLR'23: OoD detection by FL (spotlight!), memory-efficient CTA.
- Nov, 2022 Two papers got accepted to NeurIPS'22: outsourcing training, backdoor defense.
- Sep, 2022 Our work on federated robustness sharing has been accepted to AAAI'23 (oral).
- May, 2022 Our work on connection-resilient FL got accepted to ICML'22.