A machine-executable format that replaces traditional papers with structured, agent-consumable research artifacts capturing logic, code, exploration, and evidence.
A token-level confidence-calibrated negative preference alignment method for LLM unlearning that removes undesirable knowledge without requiring retention data or contrastive pairs.
We find that LLMs can develop "Brain Rot" just like humans after ingesting large volumes of low-quality social media content.
A non-autoregressive architecture combining DeepONets with DeepSets for in-context operator learning, achieving orders-of-magnitude parameter reduction and stronger noise robustness over transformer baselines.
We develop a chatbot for early dementia prevention and leverage LLMs to build digital twins for evaluating chatbots.
We develop a hybrid federated learning framework for training financial-crime predictive models over horizontally and vertically partitioned federated data.
The recent decade has witnessed a surge in financial crimes across the public and private sectors, with scams costing financial institutions an average of $102m in 2022. Developing a mechanism for battling financial crimes is an impending …
Recently, self-supervised contrastive pre-training has become the de facto regime, allowing for efficient downstream fine-tuning. Meanwhile, its fairness issues are barely studied, though they have drawn great attention from the machine learning …