Articles

Customer conversations are more valuable than ever in the post-COVID world 

August 10, 2021

Direct customer interaction is key to understanding users’ changing needs. Over the past 18 months, researchers have adapted the methods they use to meet with customers in real time, and there are also many asynchronous methods that researchers use to practice…

Articles

StarTrack Shines (铸星闪耀) | Xiao Li (肖立): Using artificial intelligence to decode the research puzzles of biology and physics

August 9, 2021

Editor’s note: Microsoft Research Asia’s StarTrack Program (铸星计划) aims to discover and support a new generation of young scholars, helping them become academic leaders with outstanding research and innovation capabilities at the forefront of world science and technology. Whether it is the opportunity to collaborate with top researchers in the field, the latest and richest datasets and powerful supporting resources, or real-world application scenarios unique to industry, all of these attract talented young researchers to Microsoft Research Asia to explore the frontiers of their fields. Youth, pioneering spirit, and exploration are the keywords of the StarTrack Program; collaboration, innovation, and achievement mark the path every rising academic star must travel to shine. Through Microsoft Research Asia’s “…

In the news | Microsoft Azure AI Blog

Azure Health Bot expands its template catalog to amplify the patient voice through PRO collection 

August 9, 2021

In recent years, there has been an increasing focus on placing patients at the center of healthcare decisions, improving safety, enhancing the experience, and maximizing the value of the care provided. Self-reporting of daily functioning and health outcomes from the patient, rather than the caregiver…

In the news | Microsoft AI - Cognitive Services Blog

Summarize text with Text Analytics API 

August 9, 2021

The extractive summarization feature in Text Analytics uses natural language processing techniques to locate key sentences in an unstructured text document. These sentences collectively convey the main idea of the document. This feature is provided as an API for developers.…
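As an illustration of how a developer might call this from Python, here is a minimal sketch assuming the preview azure-ai-textanalytics SDK (the 5.2.0b releases) that exposed extractive summarization through ExtractSummaryAction; the endpoint, key, and sample document are placeholders, and max_sentence_count simply caps how many key sentences are returned.

```python
# Minimal sketch, assuming the azure-ai-textanalytics preview SDK (5.2.0b*)
# that shipped extractive summarization as ExtractSummaryAction.
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient, ExtractSummaryAction

# Placeholder endpoint and key for a Text Analytics (Language) resource.
client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

documents = ["<a long unstructured document to summarize>"]

# Ask the service for the three most representative sentences.
poller = client.begin_analyze_actions(
    documents,
    actions=[ExtractSummaryAction(max_sentence_count=3)],
)

for doc_results in poller.result():        # one list of action results per document
    for result in doc_results:
        if not result.is_error:
            for sentence in result.sentences:  # extracted key sentences
                print(sentence.text)
```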

In the news | Analytics India Magazine

Top 10 AI Innovations Of 2021 So Far 

August 6, 2021

Listed as one of the top 10 AI innovations of 2021 by Analytics India Magazine, Microsoft’s FLAML is a Python package that identifies the best-fitting ML model at low computational cost. It helps eliminate the manual process of choosing…
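As a quick illustration of the kind of automation FLAML provides, here is a minimal sketch using its AutoML interface; the scikit-learn toy dataset and the 60-second time budget are arbitrary choices made for the example.

```python
# Minimal sketch: let FLAML pick a learner and hyperparameters under a small
# compute budget, using a scikit-learn toy dataset purely for illustration.
from flaml import AutoML
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

automl = AutoML()
automl.fit(X_train, y_train, task="classification", time_budget=60)  # seconds

print(automl.best_estimator)   # the learner FLAML settled on, e.g. an LGBM model
print(automl.best_config)      # its tuned hyperparameters
print(automl.predict(X_test)[:5])
```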

Technical diagram of the MEB model. MEB is a sparse neural network composed of an input layer that takes in binary features, a feature embedding layer that transforms each binary feature into a 15-dimensional vector, and a sum pooling layer applied to each of 49 feature groups; the pooled vectors are concatenated into a 735-dimensional vector, which is then passed through two dense layers to produce a click probability. The features shown in this figure are generated from the example query “Microsoft Windows” and the document www.microsoft.com/en-us/windows.
Microsoft Research Blog

Make Every feature Binary: A 135B parameter sparse neural network for massively improved search relevance 

August 4, 2021 | Junyan Chen, Frédéric Dubut, Jason (Zengzhong) Li, and Rangan Majumder

Recently, Transformer-based deep learning models like GPT-3 have been getting a lot of attention in the machine learning world. These models excel at understanding semantic relationships, and they have contributed to large improvements in Microsoft Bing’s search experience…
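To make the architecture described in the figure caption above concrete, here is a toy PyTorch sketch of that shape (binary features → 15-dimensional embeddings → sum pooling over each of 49 feature groups → a 735-dimensional concatenation → two dense layers → a click probability); the vocabulary size, hidden width, shared embedding table, and random inputs are invented for illustration and are nowhere near the scale of the real 135-billion-parameter model.

```python
# Toy sketch of the MEB shape from the figure caption; sizes marked as
# assumptions are not from the source.
import torch
import torch.nn as nn

NUM_FEATURE_GROUPS = 49
EMBED_DIM = 15
POOLED_DIM = NUM_FEATURE_GROUPS * EMBED_DIM  # 735, as in the caption
VOCAB_SIZE = 1_000_000   # assumption: tiny stand-in for the real feature space
HIDDEN_DIM = 256         # assumption: hidden width of the two dense layers

class MEBSketch(nn.Module):
    """Binary features -> 15-d embeddings -> per-group sum pooling ->
    735-d concatenation -> two dense layers -> click probability."""

    def __init__(self):
        super().__init__()
        # EmbeddingBag with mode="sum" embeds each active binary feature and
        # sum-pools them; here one shared table serves every feature group.
        self.embedding = nn.EmbeddingBag(VOCAB_SIZE, EMBED_DIM, mode="sum")
        self.dense = nn.Sequential(
            nn.Linear(POOLED_DIM, HIDDEN_DIM),
            nn.ReLU(),
            nn.Linear(HIDDEN_DIM, 1),
        )

    def forward(self, group_feature_ids):
        # group_feature_ids: list of 49 LongTensors, one per feature group,
        # each holding the indices of the binary features active in that group.
        pooled = [self.embedding(ids.unsqueeze(0)) for ids in group_feature_ids]
        x = torch.cat(pooled, dim=-1)         # shape (1, 735)
        return torch.sigmoid(self.dense(x))   # click probability

model = MEBSketch()
fake_groups = [torch.randint(0, VOCAB_SIZE, (3,)) for _ in range(NUM_FEATURE_GROUPS)]
print(model(fake_groups))
```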

Articles

Jobs to be done: A useful framework for driving customer value 

August 4, 2021

The Jobs-to-be-Done (JTBD) framework is gaining more and more traction among our research community. If you’re new to JTBD, you may be taken aback by the different approaches to defining a job and the different schools of thought, and Jim Kalbach’s “The Jobs to be Done Playbook” can be…

Microsoft Research Podcast

New Future of Work: Redefining workspaces as hybrid and remote work become more prevalent with Jaime Teevan and Ginger Hudson 

August 4, 2021

In this episode of The New Future of Work series, Chief Scientist Jaime Teevan and Principal User Research Manager Ginger Hudson share how people evolved their home office setups throughout the COVID-19 pandemic, and they explore how information workers used…

Graphormer
Articles

Transformer stands out as the best graph learner: Researchers from Microsoft Research Asia win the KDD Cup 2021 Graph Prediction Track

August 3, 2021

Transformer is widely acknowledged as the most powerful neural network for modelling sequential data, such as natural language and speech. Model variants built upon Transformer have also shown great performance in computer vision and programming languages. However, Transformer…
