Junyuan Hong

Postdoctoral Fellow

IFML & WNCG at UT Austin

I am a postdoctoral fellow hosted by Dr. Zhangyang Wang in the VITA group, Institute for Foundations of Machine Learning (IFML) and Wireless Networking and Communications Group (WNCG) at UT Austin. I obtained my Ph.D. in Computer Science and Engineering from Michigan State University (MSU), where I worked in the ILLIDAN Lab advised by Dr. Jiayu Zhou. Previously, I earned my B.S. in Physics and M.S. in Computer Science at the University of Science and Technology of China (USTC).

My long-term research vision is to build a Holistic Trustworthy ML system that spans fairness, robustness, security, and privacy. My recent research centers on Privacy-Centric Trustworthy Machine Learning, where I pursue trustworthiness under privacy constraints, e.g., in federated learning and differentially private learning.

I am on the job market! Check my curriculum vitae and feel free to drop me an email if you are interested.

News

  • September, 2023 Our work on understanding deep gradient leakage via inversion influence functions was accepted to NeurIPS'23.
  • August, 2023 We are organizing a KDD workshop on federated learning for distributed data mining (FL4Data-Mining) on August 7th in Long Beach 🌴.
  • July, 2023 πŸ… Honored to receive Research Enhancement Award for organizing FL4DataMining workshop! Thank you to MSU Graduate School!
  • July, 2023 I am traveling to ICML 2023 in Hawaii 🌺. Come and talk to me about data-free backdoors!
  • July, 2023 🎓 I successfully defended my thesis. Many thanks to my collaborators, advisor, and committee.
  • May, 2023 My new website is online, built with the newly released junyuan-academic-theme and many cool new features.
  • April, 2023 One paper on data-free backdoors was accepted to ICML'23.
  • March, 2023 πŸ† Our ILLIDAN Lab team just won the 3rd place in the U.S. PETs prize challenge. Media cover by The White House, MSU EGR news and MSU Office of Research and Innovation.
More
  • Jan, 2023 Two papers were accepted to ICLR'23: OoD detection by FL (spotlight!), memory-efficient CTA.
  • Nov, 2022 Our work on federated robustness sharing was accepted to AAAI'23 (oral).
  • Sep, 2022 Two papers were accepted to NeurIPS'22: outsourcing training, backdoor defense.
  • May, 2022 Our work on connection-resilient FL was accepted to ICML'22.
Interests
  • Foundation (Language/Vision) Models
  • Trustworthy Machine Learning
  • Privacy
  • Federated Learning
Education
  • PhD in CSE, 2023

    Michigan State University

  • MSc in Computer Science, 2018

    University of Science and Technology of China

  • BSc in Physics, minor in CS, 2015

    University of Science and Technology of China

Projects

Holistic Trustworthy ML

Instead of isolated properties, we target holistic trustworthiness that covers all properties in one solution.

Federated Learning

Driven by the need for data privacy and for more data, we strive to combine knowledge from many users to train powerful deep neural networks without sharing their data.

Differentially Private Learning

Motivated by data-privacy concerns, we aim to develop algorithms that learn accurate models from data privately.

Subspace Learning

Supervised learning on subspace data, which can model real-world data such as skeletal motion.

Publications

NeurIPS 2023 Understanding Deep Gradient Leakage via Inversion Influence Functions.
PDF Code
Preprint 2023 Safe and Robust Watermark Injection with a Single OoD Image.
PDF
FL4DM 2023 A Privacy-Preserving Hybrid Federated Learning Framework for Financial Crime Detection.
PDF Code
FL4DM 2023 FedNoisy: A Federated Noisy Label Learning Benchmark.
PDF Code
ICML 2023 Revisiting Data-Free Knowledge Distillation with Poisoned Teachers.
PDF Code Poster
ICLR 2023 MECTA: Memory-Economic Continual Test-Time Model Adaptation.
PDF Code Slides
ICLR (spotlight) 2023 Turning the Curse of Heterogeneity in Federated Learning into a Blessing for Out-of-Distribution Detection.
PDF Code
Preprint 2022 Precautionary Unfairness in Self-Supervised Contrastive Pre-training.
Preprint
NeurIPS 2022 Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork.
PDF Code
ICML 2022 Resilient and Communication Efficient Learning for Heterogeneous Federated Systems.
PDF
KDD (Oral) 2021 Federated Adversarial Debiasing for Fair and Transferable Representations.
PDF Code Slides

Professional Activities

Talks

  • ‘MECTA: Memory-Economic Continual Test-Time Model Adaptation’ @ Computer Vision Talks, March, 2023: [slides] [video]
  • ‘Split-Mix Federated Learning for Model Customization’ @ TrustML Young Scientist Seminars, July, 2022: [link] [video]
  • ‘Federated Adversarial Debiasing for Fair and Transferable Representations’, @ CSE Graduate Seminar, Michigan State University, October, 2021: [slides]
  • ‘Dynamic Policies on Differentially Private Learning’ @ VITA Seminars, UT Austin, Sep, 2020: [slides]

Media Coverage

  • At Summit for Democracy, the United States and the United Kingdom Announce Winners of Challenge to Drive Innovation in Privacy-enhancing Technologies That Reinforce Democratic Values, The White House, 2023
  • Privacy-enhancing Research Earns International Attention, MSU Engineering News, 2023
  • Privacy-Enhancing Research Earns International Attention, MSU Office Of Research And Innovation, 2023

Teaching Assistant

  • CSE 847: Machine Learning, 2021
  • CSE 404: Introduction to Machine Learning, 2020

Awards

  • Research Enhancement Award, MSU, 2023
  • The 3rd place in the U.S. PETs prize challenge, 2023
  • Dissertation Completion Fellowship, MSU, 2023
  • Carl V. Page Memorial Graduate Fellowship, MSU, 2021
  • Student Travel Award, KDD, 2018
  • Outstanding Freshman Scholarship, USTC, 2015

Services

  • Program Chair: FL4Data-Mining Workshop @ KDD 2023 (Lead Chair)
  • External Reviewer: NeurIPS 2023, ICML 2023, ECML-PKDD 2023, KDD 2023, AISTATS 2023, ICLR 2023, NeurIPS 2022, ICML 2022, KDD 2022, WSDM 2022, AISTATS 2022, AAAI 2022, AAAI 2021, IJCAI 2019, Neurocomputing, TKDD, TKDE
  • Volunteer: KDD 2018, KDD 2021