Junyuan Hong
Page not found
Perhaps you were looking for one of these?
Latest
Example Talk
LLMs Can Get "Brain Rot"!
AD-VF: LLM-Automatic Differentiation Enables Fine-Tuning-Free Robot Planning from Formal Methods Feedback
LoX: Low-Rank Extrapolation Robustifies LLM Safety Against Fine-tuning
More is Less: The Pitfalls of Multi-Model Synthetic Preference Data in DPO Safety Alignment
Scaling Textual Gradients via Sampling-Based Momentum
GuardAgent: Safeguard LLM Agents by a Guard Agent via Knowledge-Enabled Reasoning
SEAL: Steerable Reasoning Calibration of Large Language Models for Free
MedHallu: A Comprehensive Benchmark for Detecting Medical Hallucinations in Large Language Models
Extracting and Understanding the Superficial Knowledge in Alignment