Combinatorial Rising Bandits
From raw interaction to reusable knowledge: Rethinking memory for AI agents
It seems counterintuitive: giving AI agents more memory can make them less effective. As interaction logs accumulate, they grow large, fill with irrelevant content, and become increasingly difficult to use. More memory means that agents must…
How people use Copilot for Health
Efficient Distributed Orthonormal Optimizers for Large-Scale Training
Kwangjun delivered a 50-minute technical talk on recent advances in orthonormal update methods for large-scale AI model training. This topic has been rapidly gaining attention in the community, emerging as a strong successor to AdamW…
Research Intern – AI Safety and Security
Protecting large language models (LLMs) from malicious inputs is critical. LLMs can also be used to protect users from malicious attacks. The Deep Learning Team in Microsoft Research – Redmond is seeking Research Interns interested…
Phi-4-reasoning-vision and the lessons of training a multimodal reasoning model
We are pleased to announce Phi-4-reasoning-vision-15B, a 15-billion-parameter open-weight multimodal reasoning model, available through Microsoft Foundry, HuggingFace, and GitHub. Phi-4-reasoning-vision-15B is…
Dion2: A simple new method to shrink matrices in Muon
Dion2 reduces the cost of Muon’s orthonormalization step by orthonormalizing only a small, selected submatrix at each iteration. This lightweight approach preserves Muon’s strong performance while significantly improving the optimizer’s scalability.
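The core idea in the excerpt above can be illustrated with a minimal sketch. Note the details here are assumptions for illustration: the function name `orthonormalize_submatrix`, the random column selection, and the use of QR decomposition as a stand-in for Muon’s Newton–Schulz orthonormalization are not taken from the Dion2 announcement.

```python
import numpy as np

def orthonormalize_submatrix(G, k, seed=None):
    """Illustrative sketch: orthonormalize only a k-column submatrix of the
    update matrix G, rather than the full matrix, to cut per-step cost.

    QR decomposition stands in here for Muon's Newton-Schulz iteration;
    the column-sampling scheme is a simplifying assumption.
    """
    rng = np.random.default_rng(seed)
    # Select a small random subset of columns to orthonormalize this step.
    cols = rng.choice(G.shape[1], size=k, replace=False)
    sub = G[:, cols]
    # Orthonormalize just the selected submatrix (Q has orthonormal columns).
    Q, _ = np.linalg.qr(sub)
    out = G.copy()
    out[:, cols] = Q
    return out

# Example: orthonormalize 3 of the 6 columns of a random update matrix.
G = np.random.default_rng(0).standard_normal((8, 6))
G_new = orthonormalize_submatrix(G, k=3, seed=0)
```

Because only `k` columns are touched per iteration, the cost of the orthonormalization step scales with the submatrix size rather than the full matrix size, which is the scalability benefit the excerpt describes.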