Microsoft Research Blog

Steering at the Frontier: Extending the Power of Prompting 

December 12, 2023 | Eric Horvitz, Harsha Nori, and Yin Tat Lee

We’re seeing exciting capabilities of frontier foundation models, including intriguing powers of abstraction, generalization, and composition across numerous areas of knowledge and expertise. Even seasoned AI researchers have been impressed with the ability to steer the models with straightforward, zero-shot prompts. Beyond…

Microsoft Research Blog

Phi-2: The surprising power of small language models 

December 12, 2023 | Mojan Javaheripi and Sébastien Bubeck

Phi-2 is now accessible on the Azure model catalog. Its compact size and new innovations in model scaling and training data curation make it ideal for exploration around mechanistic interpretability, safety improvements, and fine-tuning experimentation on a variety of tasks.

In the news | VentureBeat

Microsoft releases Phi-2, a small language model AI that outperforms Llama 2, Mistral 7B 

December 12, 2023

The rapid pace of generative AI news and announcements isn’t slowing down, even as we reach the final stretches of 2023 and the traditional winter holiday quiet period. Just take a look at Microsoft Research, the blue sky division of…

Microsoft Research Podcast

Abstracts: December 11, 2023 

December 11, 2023 | Gretchen Huizinga and Alessandro Sordoni

By treating language models as layers in a network and prompts as learnable parameters, researchers aim for more adaptable, reusable LLM architectures. Check out the work in the “Abstracts” podcast series with guest Alessandro Sordoni and at #NeurIPS2023:

Microsoft Research Blog

NeurIPS 2023 highlights breadth of Microsoft’s machine learning innovation 

December 11, 2023

We’re proud to have 100+ accepted papers at NeurIPS 2023, plus 18 workshops. Several submissions were chosen as oral presentations and spotlight posters, reflecting groundbreaking concepts, methods, or applications. Here’s an overview of those submissions.

In the news | NEJM AI

Multimodal Generative AI for Precision Health 

December 11, 2023

The dream of precision health is to develop a continuous learning health system where new health information is instantly incorporated to optimize care delivery and accelerate biomedical discovery. Multimodal generative AI has the potential to drastically accelerate progress toward precision…

Articles

AI Synergy with Society and Science: The Potential and Challenges of Interdisciplinary Research 

December 8, 2023

Editor’s note: 2023 marks the 25th anniversary of Microsoft Research Asia. To mark the occasion, we launched the “智汇对话” dialogue series, inviting leading experts and scholars from around the world to discuss research culture, explore interdisciplinary innovation, and look ahead to the future of technology. On November 14, Microsoft Research Asia and the University of Tokyo jointly hosted the 2023 AI forum on the theme “AI Synergy: Society and Science” (watch the replay video). During the roundtable discussion, the University of Tokyo’s next-generation intelligent science...

Microsoft Research Blog

MatterGen: Property-guided materials design 

December 7, 2023 | Andrew Fowler, Matthew Horton, Ryota Tomioka, Robert Pinsler, Tian Xie, Claudio Zeni, and Daniel Zügner

The central problem in materials science is to discover materials with desired properties. MatterGen enables broad property-guided materials design.

Microsoft Research Blog

LLMLingua: Innovating LLM efficiency with prompt compression 

December 7, 2023 | Huiqiang Jiang, Qianhui Wu, Chin-Yew Lin, Yuqing Yang, and Lili Qiu

Advanced prompting techniques for LLMs can produce excessively long prompts, driving up cost and latency. Learn how LLMLingua compresses prompts by up to 20x while maintaining quality, reducing latency, and improving the user experience.
