Microsoft Research Blog

Microsoft at FAccT 2024: Advancing responsible AI research and practice 

June 5, 2024
From studying how to identify gender bias in Hindi to uncovering AI-related risks for workers, Microsoft is making key contributions toward advancing the state of the art in responsible AI research. Check out this work at ACM FAccT 2024.

Recent Posts

  1. Research Focus: Week of May 27, 2024

    May 29, 2024

    How generative AI tools can represent less common identities and narratives; whether LLMs can help players participate in game narratives; using LLMs to improve geospatial demographic data; a Graph RAG approach to query-focused summarization; and more.

  2. Research Focus: Week of May 13, 2024

    May 15, 2024

    Welcome to Research Focus, a series of blog posts that highlights notable publications, events, code/datasets, new hires and other milestones from across the research community at Microsoft. Large language models (LLMs) have shown remarkable performance in generating text similar to that created by people, proving…

  3. Microsoft at CHI 2024: Innovations in human-centered design

    May 15, 2024

    From immersive virtual experiences to interactive design tools, Microsoft Research is at the frontier of exploring how people engage with technology. Discover our latest breakthroughs in human-computer interaction research at CHI 2024.

  4. MatterSim: A deep-learning model for materials under real-world conditions

    May 13, 2024 | Han Yang, Jielan Li, Hongxia Hao, and Ziheng Lu

    Property prediction for materials under realistic conditions has been a long-standing challenge in the digital transformation of materials design. MatterSim models atomic interactions from first principles of quantum mechanics.

  5. LLM profiling guides KV cache optimization

    May 8, 2024 | Liyuan Liu and Jianfeng Gao

    LLMs rely on memory-intensive mechanisms like the key-value (KV) cache to store and quickly retrieve data. FastGen, introduced in the ICLR 2024 paper "Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs," optimizes KV cache usage, reducing LLM memory demands by up to 50% while maintaining performance; a minimal sketch of the mechanism follows this list.
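To make the KV cache teaser above concrete, here is a minimal, hypothetical Python sketch of the mechanism being optimized: a KV cache stores each past token's key/value projections so attention over a new token reuses them instead of recomputing them, and capping the cache size is what trades memory for fidelity. The class name, the naive recency-based eviction rule, and the dimensions are illustrative assumptions, not FastGen's actual adaptive, per-head compression policy.

```python
# Minimal, illustrative KV cache for autoregressive attention.
# NOT FastGen: its adaptive compression profiles attention heads and applies
# per-head eviction policies; this sketch only shows why a KV cache exists
# and how capping it reduces memory.
import numpy as np

class KVCache:
    def __init__(self, max_entries=None):
        self.keys, self.values = [], []   # one (d,) vector per past token
        self.max_entries = max_entries    # None = unbounded (full cache)

    def append(self, k, v):
        self.keys.append(k)
        self.values.append(v)
        # Naive recency-based eviction stands in for an adaptive policy.
        if self.max_entries is not None and len(self.keys) > self.max_entries:
            self.keys.pop(0)
            self.values.pop(0)

    def attend(self, q):
        # Attention over cached keys/values: past projections are reused,
        # never recomputed, which is the memory/compute trade the cache makes.
        K, V = np.stack(self.keys), np.stack(self.values)
        scores = K @ q / np.sqrt(q.size)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ V

rng = np.random.default_rng(0)
d, cache = 16, KVCache(max_entries=8)    # cap the cache at 8 tokens
for _ in range(20):                      # decode 20 tokens
    k, v, q = rng.normal(size=(3, d))
    cache.append(k, v)
    out = cache.attend(q)
print(len(cache.keys), out.shape)        # -> 8 (16,)
```

With the cap in place, memory stays constant as decoding proceeds; choosing which entries to keep (rather than simply the most recent, as here) is the profiling problem the post describes.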

Explore More

  • Events & conferences

    Meet our community of researchers, learn about exciting research topics, and grow your network

  • Podcasts

    Ongoing conversations at the cutting edge of research

  • Microsoft Research Forum

    Join us for a continuous exchange of ideas about research in the era of general AI