Microsoft Research Blog

Research Focus: Week of May 13, 2024 

May 15, 2024
Welcome to Research Focus, a series of blog posts that highlights notable publications, events, code/datasets, new hires and other milestones from across the research community at Microsoft. Large language models (LLMs) have shown remarkable performance in generating text similar to that created by people, proving…

Recent Posts

  1. Research Focus: Week of May 13, 2024

  2. Microsoft at CHI 2024: Innovations in human-centered design

    May 15, 2024

    From immersive virtual experiences to interactive design tools, Microsoft Research is at the frontier of exploring how people engage with technology. Discover our latest breakthroughs in human-computer interaction research at CHI 2024.

  3. MatterSim: A deep-learning model for materials under real-world conditions

    May 13, 2024 | Han Yang, Jielan Li, Hongxia Hao, and Ziheng Lu

    Predicting material properties under realistic conditions has been a long-standing challenge in the digital transformation of materials design. MatterSim models atomic interactions from the fundamental principles of quantum mechanics.

  4. LLM profiling guides KV cache optimization

    May 8, 2024 | Liyuan Liu and Jianfeng Gao

    LLMs rely on memory-intensive mechanisms like the key-value (KV) cache to store and quickly retrieve data. FastGen, introduced in the ICLR paper "Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs," optimizes KV cache usage, reducing LLM memory demands by up to 50% while maintaining performance.

  5. Research Focus: Week of April 29, 2024

    May 2, 2024

    In this edition: Can LLMs transform natural language into formal method postconditions; Semantically aligned question + code generation for automated insight generation; Explaining CLIP performance disparities on blind/low vision data; plus recent news.

  6. Research Focus: Week of April 15, 2024

    April 17, 2024

    In this issue: New research on appropriate reliance on generative AI; Power management opportunities for LLMs in the cloud; LLMLingua-2 improves task-agnostic prompt compression; Enhancing COMET to embrace under-resourced African languages.
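The KV cache mentioned in the FastGen item above caches each past token's key and value vectors so attention never recomputes them, and compression works by deciding which cached entries to keep. Below is a minimal, hedged sketch of that idea using a simple sliding-window eviction policy; the class name, window policy, and all function names are illustrative assumptions for this sketch, not FastGen's actual adaptive method, which profiles attention structure to choose per-head policies.

```python
import numpy as np

def attend(q, K, V):
    # Scaled dot-product attention for a single query vector q
    # against cached keys K (n, d) and values V (n, d).
    scores = K @ q / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

class SlidingWindowKVCache:
    """Toy KV cache bounded by a fixed window (illustrative only)."""

    def __init__(self, window):
        self.window = window
        self.keys, self.values = [], []

    def append(self, k, v):
        # Cache the new token's key/value so later steps reuse them.
        self.keys.append(k)
        self.values.append(v)
        # Evict the oldest entries beyond the window to bound memory.
        while len(self.keys) > self.window:
            self.keys.pop(0)
            self.values.pop(0)

    def attend(self, q):
        return attend(q, np.stack(self.keys), np.stack(self.values))

cache = SlidingWindowKVCache(window=4)
rng = np.random.default_rng(0)
for _ in range(10):  # simulate ten decoding steps
    cache.append(rng.standard_normal(8), rng.standard_normal(8))
print(len(cache.keys))  # → 4
```

After ten decoding steps the cache holds only the four most recent entries, so memory stays constant with sequence length; the trade-off is that evicted tokens can no longer be attended to, which is why adaptive policies like FastGen's matter.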

Explore More

  • Events & conferences

    Meet our community of researchers, learn about exciting research topics, and grow your network

  • Podcasts

    Ongoing conversations at the cutting edge of research

  • Microsoft Research Forum

    Join us for a continuous exchange of ideas about research in the era of general AI