Junyuan Hong

CSE PhD Student

Michigan State University

Biography

I am a final-year Ph.D. student in Computer Science and Engineering at the ILLIDAN Lab @ Michigan State University (MSU), advised by Dr. Jiayu Zhou. Previously, I obtained my B.S. in Physics and M.S. in Computer Science from the University of Science and Technology of China (USTC). I am also fortunate to work closely with Dr. Zhangyang Wang.

My research centers on privacy-preserving learning and extends to inclusive and trustworthy machine learning. I am interested in enhancing trustworthiness (fairness, robustness, and security) under privacy constraints, e.g., in federated learning and differentially private learning.

I am on the job market! Please feel free to contact me and check my curriculum vitae.

News

  • [Jan, 2023] Two papers got accepted to ICLR'23: OoD detection by FL (spotlight!), memory-efficient CTA.
  • [Nov, 2022] Two papers got accepted to NeurIPS'22: outsourcing training, backdoor defense.
  • [Sep, 2022] Our work on federated robustness sharing has been accepted to AAAI'23 (oral).
  • [May, 2022] Our work on connection-resilient FL got accepted to ICML'22.
  • [March, 2022] We are going to organize a KDD workshop on federated learning for distributed data mining. More details are coming soon.
Interests
  • Trustworthy Machine Learning
  • Privacy
  • Federated Learning
Education
  • PhD in CSE, 2023 (expected)

    Michigan State University

  • MSc in Computer Science, 2018

    University of Science and Technology of China

  • BSc in Physics, minor in CS, 2015

    University of Science and Technology of China

Publications

How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts. TMLR, 2023.
A Privacy-Preserving Hybrid Federated Learning Framework for Financial Crime Detection. Preprint, 2023.
MECTA: Memory-Economic Continual Test-Time Model Adaptation. ICLR, 2023.
Turning the Curse of Heterogeneity in Federated Learning into a Blessing for Out-of-Distribution Detection. ICLR (spotlight), 2023.
Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning. AAAI (oral), 2023.
Precautionary Unfairness in Self-Supervised Contrastive Pre-training. Preprint, 2022.
Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork. NeurIPS, 2022.
Resilient and Communication Efficient Learning for Heterogeneous Federated Systems. ICML, 2022.
Federated Adversarial Debiasing for Fair and Transferable Representations. KDD (oral), 2021.

Professional Activities

Talks

  • ‘Split-Mix Federated Learning for Model Customization’ @ TrustML Young Scientist Seminars, July 2022: [link] [video]
  • ‘Federated Adversarial Debiasing for Fair and Transferable Representations’ @ CSE Graduate Seminar, Michigan State University, October 2021: [slides]
  • ‘Dynamic Policies on Differentially Private Learning’ @ VITA Seminars, UT Austin, September 2020: [slides]

Teaching Assistant

  • CSE 847: Machine Learning, 2021
  • CSE 404: Introduction to Machine Learning, 2020

Awards

  • Dissertation Completion Fellowship, MSU, 2023
  • Carl V. Page Memorial Graduate Fellowship, MSU, 2021
  • Student Travel Award, KDD, 2018
  • Outstanding Freshman Scholarship, USTC, 2015

Service

  • External Reviewer: ICML 2023, KDD 2023, ICLR 2023, NeurIPS 2022, ICML 2022, KDD 2022, WSDM 2022, AISTATS 2022, AAAI 2022, AAAI 2021, IJCAI 2019, Neurocomputing, TKDD
  • Volunteer: KDD 2018, KDD 2021