
The Last Human-Written Paper: Agent-Native Research Artifacts

A machine-executable format that replaces traditional papers with structured, agent-consumable research artifacts capturing logic, code, exploration, and evidence.

CATNIP: LLM Unlearning via Calibrated and Tokenized Negative Preference Alignment

A token-level confidence-calibrated negative preference alignment method for LLM unlearning that removes undesirable knowledge without requiring retention data or contrastive pairs.
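A minimal sketch of the idea, assuming an NPO-style objective applied per token, with each token's negative-preference term weighted by the model's confidence on that token; the function name, the confidence weighting, and all hyperparameters are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def token_calibrated_npo_loss(logits, ref_logits, labels, beta=0.1):
    """Sketch of a token-level, confidence-weighted negative-preference loss.

    logits, ref_logits: (batch, seq, vocab) from the unlearning model and a
    frozen reference model; labels: (batch, seq) token ids of the forget text.
    """
    logp = F.log_softmax(logits, dim=-1)
    ref_logp = F.log_softmax(ref_logits, dim=-1)
    # Per-token log-ratio: log pi_theta(y_t | .) - log pi_ref(y_t | .)
    tok_logp = logp.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    tok_ref = ref_logp.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    log_ratio = tok_logp - tok_ref
    # NPO-style term per token: -(2 / beta) * log sigmoid(-beta * log_ratio)
    per_tok = -(2.0 / beta) * F.logsigmoid(-beta * log_ratio)
    # Calibration weight (illustrative assumption): the model's confidence on
    # each forget token, so confidently memorized tokens dominate the signal.
    w = tok_logp.exp().detach()
    return (w * per_tok).sum() / w.sum().clamp_min(1e-8)
```

Because the loss is defined over the forget text alone, no retention data or contrastive pairs enter the objective, matching the setting the summary describes.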

LLMs Can Get "Brain Rot"!

We find that LLMs can develop "Brain Rot" just as humans do after prolonged exposure to mindless social media content.

DeepOSets: Non-Autoregressive In-Context Learning of Supervised Learning Operators

A non-autoregressive architecture combining DeepONets with DeepSets for in-context operator learning, achieving orders-of-magnitude parameter reduction and stronger noise robustness over transformer baselines.
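A minimal sketch of how the two components might be combined, assuming a DeepSets encoder pools the in-context (x, u(x)) pairs into branch coefficients for a DeepONet, while a trunk net encodes the query location; layer sizes and names are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DeepOSet(nn.Module):
    """Sketch: a permutation-invariant DeepSets encoder of in-context pairs
    feeds the branch of a DeepONet; a trunk net encodes query locations."""

    def __init__(self, p=32, hidden=64):
        super().__init__()
        # phi is applied to each (x_i, u(x_i)) pair, then mean-pooled (DeepSets)
        self.phi = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        # rho maps the pooled set embedding to p branch coefficients
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, p))
        # trunk evaluates p basis functions at each query location y
        self.trunk = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                   nn.Linear(hidden, p))

    def forward(self, ctx, y):
        # ctx: (batch, n_ctx, 2) in-context pairs; y: (batch, m, 1) queries
        b = self.rho(self.phi(ctx).mean(dim=1))   # (batch, p) branch coeffs
        t = self.trunk(y)                         # (batch, m, p) trunk basis
        return (t * b.unsqueeze(1)).sum(-1)       # (batch, m) predicted G(u)(y)
```

A single forward pass maps the whole prompt to predictions at all query points at once, which is what makes the in-context learner non-autoregressive.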

A-CONECT: Designing AI-based Conversational Chatbot for Early Dementia Intervention

We develop a chatbot for early dementia intervention and leverage LLMs to build digital twins for evaluating such chatbots.

A Privacy-Preserving Hybrid Federated Learning Framework for Financial Crime Detection

We develop a hybrid federated learning framework that learns financial-crime predictive models from both horizontally and vertically partitioned data.

FedNoisy: A Federated Noisy Label Learning Benchmark

A benchmark for federated noisy label learning, providing standardized settings for studying label noise in federated scenarios.

Precautionary Unfairness in Self-Supervised Contrastive Pre-training

Self-supervised contrastive pre-training has recently become the de facto regime, enabling efficient downstream fine-tuning. Meanwhile, its fairness issues remain barely studied, despite having drawn great attention from the machine learning …