White ICLR logo to the left of the first page of the accepted paper, “Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs” on a purple background.
Microsoft Research Blog

LLM profiling guides KV cache optimization 

May 8, 2024 | Liyuan Liu and Jianfeng Gao

LLMs rely on memory-intensive mechanisms like the key-value (KV) cache to store and quickly retrieve data. FastGen optimizes KV cache usage, reducing LLM memory demands by up to 50% while maintaining performance.

LoftQ paper at ICLR 2024
Microsoft Research Blog

LoftQ: Reimagining LLM fine-tuning with smarter initialization 

May 7, 2024 | Nikos Karampatziakis, Chen Liang, Weizhu Chen, Yixiao Li, Yifan Yu, and Tuo Zhao

LoftQ boosts LLM efficiency by streamlining the fine-tuning process, reducing computational demands while preserving high performance. Innovations like this can help make AI technology more energy-efficient.

In the news | Politico

Microsoft goes from bad boy to top cop in the age of AI 

May 7, 2024

Alongside efforts to use artificial intelligence to find new cures for cancer and combat climate change, the small engineering team at Microsoft's AI for Good Lab has another job: figuring out how to detect the AI-powered deepfake videos, audio clips, and images bombarding…

Dongqi Han
Articles

Dongqi Han: An interdisciplinary odyssey with AI and other fields 

May 6, 2024

Deciding between fundamental and applied research is a dilemma that confronts many in the scientific community. Dongqi Han, on the cusp of graduation, aspired to bridge this divide by pursuing both avenues of research in his future endeavors. After…

Stylized microphone and sound waves illustration.
Microsoft Research Podcast

Abstracts: May 6, 2024 

May 6, 2024 | Michel Galley and Gretchen Huizinga

Researcher Michel Galley explores how he and fellow researchers combined new and existing data to create MathVista, an open-source benchmark for measuring the mathematical reasoning capabilities of foundation models in scenarios that involve text and images.

New Future of Work: appropriate reliance for trust in AI. AI-generated graphic of a small and large ball balanced on a bar
Articles

Appropriate Reliance Research Initiative 

May 2, 2024

The Appropriate Reliance research initiative focuses on advancing research and creating practical solutions for fostering appropriate reliance on AI. Through appropriate reliance, we aim to help people who use AI systems find a balance between over-trusting…

Articles

AI and Productivity Research Initiative 

May 2, 2024

Large language model-powered tools like Copilot have the potential to increase labor productivity more than any technology in a generation. Motivated by the significance of this moment, researchers from across Microsoft have come together to measure and improve the productivity…

AI-generated image of a woman in front of a computer screen
Articles

AI and Software Engineering Research Initiative 

May 2, 2024

Examining how AI can and should influence software engineering, including its effects on developers, how it improves developer efficiency, how it can assist in keeping software safe, and what the potential risks are.

Microsoft Research Blog

Research Focus: Week of April 29, 2024 

May 2, 2024

In this edition: Can LLMs transform natural language into formal method postconditions; Semantically aligned question + code generation for automated insight generation; Explaining CLIP performance disparities on blind/low vision data; plus recent news.
