I am a joint postdoctoral fellow advised by Dr. Zhangyang Wang at the Institute for Foundations of Machine Learning (IFML) and the Wireless Networking and Communications Group (WNCG), and I am also affiliated with the UT AI Health Lab and the Good Systems Challenge. I was recognized as one of the MLSys Rising Stars in 2024 and received a Best Paper Nomination at VLDB 2024. My work has been covered by The White House and the MSU Office of Research and Innovation. Part of my work is funded by the OpenAI Researcher Access Program.
Check my curriculum vitae and feel free to drop me an email if you are interested in collaboration.
PhD in CSE, 2023
Michigan State University (Advisor: Jiayu Zhou)
MSc in Computer Science, 2018
University of Science and Technology of China
BSc in Physics, minor in CS, 2015
University of Science and Technology of China
My research vision is to harmonize, understand, and deploy Responsible AI: optimizing AI systems that balance real-world constraints on computational efficiency, data privacy, and ethical norms through comprehensive threat analysis and the development of integrative, trustworthy, and resource-aware collaborative learning frameworks. Guided by this principle, I aim to lead a research group that combines rigorous theoretical foundations with a commitment to developing algorithmic tools with meaningful real-world impact, particularly in healthcare applications.
Trust in AI is complex, reflecting the intricate web of social norms and values. Pursuing only one aspect of trustworthiness while neglecting others may lead to unintended consequences. For instance, overzealous privacy protection can come at the price of transparency, robustness, or fairness. To address these challenges, I have developed innovative collaborative learning approaches that balance key aspects of trustworthy AI, including privacy-preserving learning [FL4DM23 & PETs23] with fairness guarantees [KDD21, TMLR23], enhanced robustness [AAAI23, ICLR23a], and provable computation and data efficiency [ICLR22, FAccT22, NeurIPS22a, ICLR24]. These methods are designed to create AI systems that uphold individual privacy while remaining efficient, fair, and accountable.
As AI evolves from traditional machine learning to generative AI (GenAI), new privacy and trust challenges arise, yet remain opaque due to the complexity of AI models. My research aims to anticipate and address these challenges by developing theoretical frameworks that generalize privacy risk analysis across AI architectures [NeurIPS23], introducing novel threat models for generation-driven transfer learning [ICML23] and pre-trained foundation models [SaTML24], and leveraging insights from integrative benchmarks [VLDB24, ICML24]. This deeper understanding of GenAI risks further informs the creation of collaborative or multi-agent learning paradigms that prioritize privacy [ICLR24] and safety [arXiv24].
To ground my research in real-world impact, I am actively exploring applications in healthcare, a domain where trust, privacy, and fairness are paramount. My projects include clinical-protocol-compliant conversational AI for dementia prevention [ICLRW24] and fair, in-home AI-driven early dementia detection [KDD21, AD20]. These initiatives serve as testbeds for responsible AI principles, particularly in upholding ethical considerations such as patient autonomy, data confidentiality, and equitable access to technology, while demonstrating AI's potential to improve lives.
I am grateful that our research is supported by multiple programs.
2023 - Now: Zhangheng Li, Ph.D. student, University of Texas at Austin
SaTML 2024 (first author), ICML 2024 (co-first author), ICLR 2024
2023 - Now: Runjin Chen, Ph.D. student, University of Texas at Austin
ICLR 2025 under review (first author)
2023 - Now: Gabriel Jacob Perin, Undergraduate student, University of São Paulo, Brazil
EMNLP 2024 (first author), ICLR 2025 under review (co-first author)
2023 - 2024: Jeffrey Tan, Undergraduate student, University of California, Berkeley
VLDB 2024 (Best Paper Nomination)
2020 - 2023: Shuyang Yu, Ph.D. student, Michigan State University
ICLR 2024 (first author), ICLR 2023 (spotlight; first author), NeurIPSW 2023 (first author), ICML 2023, KDD 2021
2022 - 2023: Haobo Zhang, Ph.D. student, Michigan State University
NeurIPS 2023 (first author), KDDW 2023 (first author)
Team member, 3rd place winner at the US-UK PETs (Privacy-Enhancing Technologies) Prize Challenge, 2023.
2022 - 2023: Siqi Liang, Ph.D. student, Michigan State University
KDDW 2023 (first author)