Microsoft Research Blog

The Microsoft Research blog provides in-depth views and perspectives from our researchers, scientists and engineers, plus information about noteworthy events and conferences, scholarships, and fellowships designed for academic and scientific communities.

  1. Insights on the future of video calling in emergency situations

    For most of us, a call to emergency services is a rare act, or one that, thankfully, we’ve never had to make. Anyone who has had to make that call will often, in a calmer future moment, reflect on what transpired during it: the quality of the service, the professionalism and empathy of the call takers, and, perhaps most importantly, how information crucial to the outcome was conveyed. The communication aspect of emergency calls can…

    May 16th, 2018

  2. Not lost in translation with Arul Menezes

    Episode 24, May 16, 2018 - Menezes talks about how the advent of deep learning has enabled exciting advances in machine translation, including applications for people with disabilities, and gives us an inside look at the recent “human parity” milestone at Microsoft Research, where machines translated a news dataset from Chinese to English with the same accuracy and quality as a person.

    May 16th, 2018

  3. Sounding the Future: Microsoft Research brings its best to ICASSP 2018 in Calgary

    Speech technology has come a long way since Alexander Graham Bell's famous "Mr. Watson – come here – I want to see you" became the first speech heard over the telephone in 1876. Today, speech technology has moved into realms such as VoIP, teleconferencing systems, and home automation. Its importance has grown rapidly with the emergence of mobile and wearable devices and many existing and upcoming Microsoft services, devices and…

    May 14th, 2018

  4. Rapid Adaptation and Metalearning with Conditionally Shifted Neurons

    The Machine Comprehension team at MSR-Montreal recently developed a neural mechanism for metalearning that we call conditionally shifted neurons. Conditionally shifted neurons (CSNs) adapt their activation values rapidly to new data to help neural networks solve new tasks. They do this with task-specific, additive shifts retrieved from a key-value memory module populated from just a few examples. Intuitively, the process is as follows: first, the model stores shift vectors that correspond to demonstrated class labels…

    May 11th, 2018
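
    The CSN mechanism described above can be sketched in a few lines. Everything here (the class name, the attention form, the shapes) is an illustrative assumption, not the paper's implementation: a neuron's pre-activation is offset by an additive shift retrieved by soft attention over a key-value memory populated from a few demonstrations.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(x, 0.0)

    class CSNLayer:
        """Toy layer whose activations are shifted by values retrieved
        from a key-value memory written from a few examples."""

        def __init__(self, in_dim, out_dim):
            self.W = rng.normal(0, 0.1, (in_dim, out_dim))
            self.keys = []    # memory keys (example embeddings)
            self.values = []  # memory values (shift vectors)

        def write(self, key, shift):
            # Store a task-specific shift vector keyed by an example.
            self.keys.append(key)
            self.values.append(shift)

        def forward(self, x):
            pre = x @ self.W
            if self.keys:
                K = np.stack(self.keys)      # (m, in_dim)
                V = np.stack(self.values)    # (m, out_dim)
                scores = K @ x               # similarity to each stored key
                attn = np.exp(scores - scores.max())
                attn /= attn.sum()
                pre = pre + attn @ V         # conditionally shifted pre-activation
            return relu(pre)

    layer = CSNLayer(4, 3)
    x = rng.normal(size=4)
    base = layer.forward(x)
    layer.write(key=x, shift=np.ones(3))  # one "demonstration" shifts activations
    adapted = layer.forward(x)
    ```

    No weights change between the two forward passes; adaptation comes entirely from the retrieved additive shift, which is the intuition the teaser describes.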

  5. Clouds, catapults and life after the end of Moore’s Law with Dr. Doug Burger

    Episode 23, May 9, 2018 - Dr. Burger talks about how advances in AI and deep machine learning have placed new acceleration demands on current hardware and computer architecture, offers some observations about the demise of Moore’s Law, and shares his vision of what life might look like in a post-CPU, post-von-Neumann computing world.

    May 9th, 2018

  6. Customized neural machine translation with Microsoft Translator

    Released in preview this week at Build 2018, the new Microsoft Translator custom feature lets users customize neural machine translation systems. These customizations can be applied to both text and speech translation workflows. Microsoft Translator released neural machine translation (NMT) in 2016. NMT provided major advances in translation quality over the then industry-standard statistical machine translation (SMT) technology. Because NMT better captures the context of full sentences before translating them, it provides higher quality, more…

    May 7th, 2018

  7. Machine learning and the incredible flying robot with Dr. Ashish Kapoor

    Episode 22, May 2, 2018 - Dr. Kapoor talks about how cutting-edge machine learning techniques are empowering a new generation of autonomous vehicles, and tells us all about AirSim, an innovative platform that’s helping bridge the simulator-to-reality gap, paving the way for safer, more robust real-world AI systems of all kinds.

    May 2nd, 2018

  8. Learning from Source Code

    Over the last five years, deep learning-based methods have revolutionised a wide range of applications, for example those requiring understanding of pictures, speech and natural language. For computer scientists, a natural question arises: can computers learn to understand source code? At first glance this appears trivial, because programming languages are designed to be understood by computers. However, many software bugs are in fact instances of Do what I mean,…

    May 1st, 2018

  9. Boundary-seeking GANs: A new method for adversarial generation of discrete data

    Generative models address an important class of machine learning tasks that require realistic and statistically accurate generation of target data. Among available generative models, generative adversarial networks (GANs) have recently emerged as a leading, state-of-the-art method, particularly for image generation. While highly successful with continuous data, generating discrete data with GANs remains a challenging problem, limiting their applications in language and other important domains. In this post, we…

    April 30th, 2018
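
    The core obstacle the post addresses is that sampling a discrete token (an argmax) is non-differentiable, so generator gradients cannot flow through it. A minimal sketch of one common workaround, the Gumbel-softmax relaxation (not the post's boundary-seeking method; shown only to illustrate the problem):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def gumbel_softmax(logits, tau=0.5):
        """Differentiable relaxation of sampling from a categorical."""
        g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel noise
        y = (logits + g) / tau
        y = np.exp(y - y.max())
        return y / y.sum()  # soft one-hot; gradients can flow through logits

    logits = np.array([1.0, 2.0, 0.5])
    hard = np.eye(3)[np.argmax(logits)]  # discrete sample: gradient is zero a.e.
    soft = gumbel_softmax(logits)        # relaxed sample: usable in backprop
    ```

    The hard sample is a one-hot vector with no useful gradient; the soft sample is a distribution over tokens that a generator can be trained through.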

  10. Neural-Guided Deductive Search: A best of both worlds approach to program synthesis

    Program synthesis — automatically generating a program that satisfies a given specification — is a major challenge in AI. In addition to changing the way we design software, it has the potential to revolutionize task automation. End users without programming skills can easily provide input-output examples of the desired program behavior. The Flash Fill feature in Microsoft Excel, a particularly successful application of this technology, demonstrates that a single example is often sufficient to generate…

    April 27th, 2018
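
    Programming by example, as in the Flash Fill scenario above, can be illustrated with a toy enumerative search over a tiny string DSL. The DSL and search strategy here are hypothetical simplifications for illustration, not the Neural-Guided Deductive Search algorithm:

    ```python
    from itertools import product

    # DSL: a program is a sequence of primitive string operations.
    PRIMITIVES = {
        "lower": str.lower,
        "upper": str.upper,
        "strip": str.strip,
        "first_word": lambda s: s.split()[0] if s.split() else s,
    }

    def run(program, s):
        """Execute a program (a tuple of primitive names) on a string."""
        for op in program:
            s = PRIMITIVES[op](s)
        return s

    def synthesize(examples, max_len=2):
        """Return the shortest program consistent with all
        input-output examples, searching length-first."""
        for n in range(1, max_len + 1):
            for program in product(PRIMITIVES, repeat=n):
                if all(run(program, i) == o for i, o in examples):
                    return program
        return None

    # A single example often suffices to pin down the intended program.
    prog = synthesize([("  Jane Doe ", "JANE")])
    ```

    Even this brute-force version shows why guidance matters: the search space grows exponentially with program length, which is the scaling problem neural guidance is meant to tame.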

  11. AI, machine learning and the reasoning machine with Dr. Geoff Gordon

    Episode 21, April 25, 2018 - Dr. Gordon gives us a brief history of AI, including his assessment of why we might see a break in the weather-pattern of AI winters, talks about how collaboration is essential to innovation in machine learning, shares his vision of the mindset it takes to tackle the biggest questions in AI, and reveals his life-long quest to make computers less… well, less computer-like.

    April 25th, 2018