I am a joint postdoctoral fellow advised by Dr. Zhangyang Wang at the Institute for Foundations of Machine Learning (IFML) and the Wireless Networking and Communications Group (WNCG), and I am also affiliated with the UT AI Health Lab and the Good Systems Grand Challenge. I was recognized as one of the MLSys Rising Stars in 2024 and received a Best Paper Nomination at VLDB 2024. My work has been covered by The White House and the MSU Office of Research and Innovation, and part of it is funded by the OpenAI Researcher Access Program.
Check out my curriculum vitae and feel free to drop me an email if you are interested in collaboration.
Our research on engaging chatbots, based on A-CONECT, is supported by the NAIRR Pilot Program!
Our LLM-PBE benchmark [VLDB24] was selected as a Best Paper finalist, was covered by UT ECE News, and serves as the basis for our NeurIPS 2024 LLM Privacy Challenge!
Co-organized the GenAI4Health workshop at NeurIPS 2024.
Co-organized the LLM and Agent Safety Competition at NeurIPS 2024!
Our GenAI for Dementia Health project (A-CONECT) is supported by OpenAI's Researcher Access Program!
DP-OPT [ICLR24] (private prompt tuning) was selected as a Spotlight.
I was selected as an MLSys Rising Star for my work on health and trustworthy ML, covered by UT ECE News.
Invited talk on GenAI Privacy [SaTML24] at the UT Good Systems Symposium.
Our ILLIDAN Lab team won 3rd place in the U.S. PETs Prize Challenge, covered by The White House and the MSU Office of Research and Innovation.
PhD in CSE, 2023
Michigan State University (Advisor: Jiayu Zhou)
Committee: Anil K. Jain, Sijia Liu, Atlas Wang, Jiayu Zhou
MSc in Computer Science, 2018
University of Science and Technology of China
BSc in Physics, minor in CS, 2015
University of Science and Technology of China
My research vision is to harmonize, understand, and deploy Responsible AI: optimizing AI systems that balance real-world constraints on computational efficiency, data privacy, and ethical norms, through comprehensive threat analysis and the development of integrative, trustworthy, resource-aware collaborative learning frameworks. Guided by this principle, I aim to lead a research group that combines rigorous theoretical foundations with a commitment to developing algorithmic tools that have meaningful real-world impact, particularly in healthcare applications.
Trust in AI is complex, reflecting the intricate web of social norms and values. Pursuing one aspect of trustworthiness while neglecting others may lead to unintended consequences; for instance, overzealous privacy protection can come at the price of transparency, robustness, or fairness. To address these challenges, I have developed innovative collaborative learning approaches that balance key aspects of trustworthy AI, including privacy-preserving learning [FL4DM23, PETs23] with fairness guarantees [KDD21, TMLR23], enhanced robustness [AAAI23, ICLR23a], and provable computation and data efficiency [ICLR22, FAccT22, NeurIPS22a, ICLR24]. These methods are designed to create AI systems that uphold individual privacy while remaining efficient, fair, and accountable.
As AI evolves from traditional machine learning to generative AI (GenAI), new privacy and trust challenges arise, yet they remain opaque due to the complexity of AI models. My research aims to anticipate and address these challenges by developing theoretical frameworks that generalize privacy risk analysis across AI architectures [NeurIPS23], introducing novel threat models for generation-driven transfer learning [ICML23] and pre-trained foundation models [SaTML24], and leveraging insights from integrative benchmarks [VLDB24, ICML24]. This deeper understanding of GenAI risks further informs the creation of collaborative and multi-agent learning paradigms that prioritize privacy [ICLR24] and safety [arXiv24].
To ground my research in real-world impact, I am actively exploring applications in healthcare, a domain where trust, privacy, and fairness are paramount. My projects include clinical-protocol-compliant conversational AI for dementia prevention [ICLRW24] and fair, in-home, AI-driven early dementia detection [KDD21, AD20]. These initiatives serve as testbeds for responsible AI principles, particularly in upholding ethical considerations such as patient autonomy, data confidentiality, and equitable access to technology, while demonstrating AI's potential to improve lives.
Mentored Students:
Zhangheng Li (2023 - Now), Ph.D. student, University of Texas at Austin
SaTML 2024 (first author), ICML 2024 (co-first author), ICLR 2024
Runjin Chen (2023 - Now), Ph.D. student, University of Texas at Austin
ICLR 2025 under review (first author)
Gabriel Jacob Perin (2023 - Now), Undergraduate student, University of São Paulo, Brazil
EMNLP 2024 (first author), ICLR 2025 under review (co-first author)
Jeffrey Tan (2023 - 2024), Undergraduate student, University of California, Berkeley
VLDB 2024 (Best Paper Nomination)
Shuyang Yu (2020 - 2023), Ph.D. student, Michigan State University
ICLR 2024 (first author), ICLR 2023 (spotlight; first author), NeurIPSW 2023 (first author), ICML 2023, KDD 2021
Haobo Zhang (2022 - 2023), Ph.D. student, Michigan State University
NeurIPS 2023 (first author), KDDW 2023 (first author)
Team member, 3rd-place winner at the US-UK PETs (Privacy-Enhancing Technologies) Prize Challenge, 2023.
Siqi Liang (2022 - 2023), Ph.D. student, Michigan State University
KDDW 2023 (first author)