Microsoft Research Blog

The Microsoft Research blog provides in-depth views and perspectives from our researchers, scientists and engineers, plus information about noteworthy events and conferences, scholarships, and fellowships designed for academic and scientific communities.

  1. Competition win a steppingstone in the greater journey to create sustainable farming

The cucumber plants, their leaves wide and green and veiny, stood tall in neat rows, basking in the Netherlands sunlight shining through the glass panes of their greenhouses. Hopes were high for the plants—a bountiful crop in just four months using as few resources as possible. With the right amount and type of care, they’d produce vegetables for consumers to enjoy. To the casual observer, though, it might have seemed like the plants had been…

    December 18th, 2018

  2. Soundscaping the world with Amos Miller

    Episode 54, December 12, 2018 - Amos Miller is a product strategist on the Microsoft Research NeXT Enable team, and he’s played a pivotal role in bringing some of MSR’s most innovative research to users with disabilities. He also happens to be blind, so he can appreciate, perhaps in ways others can’t, the value of the technologies he works on, like Soundscape, an app which enhances mobility independence through audio and sound.

    December 12th, 2018

  3. First TextWorld Problems—Microsoft Research Montreal’s latest AI competition is really cooking

    This week, Microsoft Research threw down the gauntlet with the launch of a competition challenging researchers around the world to develop AI agents that can solve text-based games. Conceived by the Machine Reading Comprehension team at Microsoft Research Montreal, the competition—First TextWorld Problems: A Reinforcement and Language Learning Challenge—runs from December 8, 2018 through May 31, 2019. First TextWorld Problems is built on the TextWorld framework. TextWorld was released to the public in July 2018…

    December 11th, 2018

  4. A Deep Learning Theory: Global minima and over-parameterization

One empirical finding in deep learning is that simple methods such as stochastic gradient descent (SGD) have a remarkable ability to fit training data. From a capacity perspective, this may not be surprising—modern neural networks are heavily over-parameterized, with the number of parameters much larger than the number of training samples. In principle, there exist parameters to achieve 100% accuracy. Yet, from a theory perspective, why and how SGD finds global minima over the…
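    The over-parameterization claim above can be seen in a toy experiment: a two-layer ReLU network whose width exceeds the number of training samples, trained by gradient descent, drives the training loss toward zero. This is a minimal, hypothetical sketch in NumPy (plain full-batch gradient descent rather than SGD, with illustrative sizes and learning rate; it is not the paper's construction):

```python
import numpy as np

# Over-parameterized two-layer ReLU network: width h = 64 far exceeds
# the n = 8 training samples, so plain gradient descent can fit the
# (random) targets almost exactly. All sizes are illustrative.
rng = np.random.default_rng(0)
n, d, h = 8, 3, 64
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

W1 = rng.standard_normal((h, d)) / np.sqrt(d)   # first layer
w2 = rng.standard_normal(h) / np.sqrt(h)        # second layer

def loss():
    return 0.5 * np.mean((np.maximum(X @ W1.T, 0.0) @ w2 - y) ** 2)

initial = loss()
lr = 0.02
for _ in range(2000):
    z = X @ W1.T                         # pre-activations, shape (n, h)
    a = np.maximum(z, 0.0)               # ReLU activations
    err = (a @ w2 - y) / n               # per-sample residual, shape (n,)
    grad_w2 = a.T @ err                  # gradient w.r.t. second layer
    grad_W1 = (np.outer(err, w2) * (z > 0)).T @ X  # gradient w.r.t. first layer
    w2 -= lr * grad_w2
    W1 -= lr * grad_W1

final = loss()  # orders of magnitude below the initial loss
```

    Despite the targets being pure noise, the training loss collapses, which is exactly the empirical puzzle the post describes.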

    December 10th, 2018

  5. Fast, accurate, stable and tiny – Breathing life into IoT devices with an innovative algorithmic approach

In the larger quest to make the Internet of Things (IoT) a reality for people everywhere, building devices that can be both ultrafunctional and beneficial isn’t a simple matter. Particularly in the arena of resource-constrained, real-time scenarios, the hurdles are significant. The challenges for devices that require quick responsiveness—say, smart implants that warn of impending epileptic seizures or smart spectacles providing navigation for low-vision people—are multifold.

    December 6th, 2018

  6. Learning to teach: Mutually enhanced learning and teaching for artificial intelligence

    Teaching is super important. From an individual perspective, a student learning on his or her own is never ideal; a student needs a teacher's guidance and perspective to be more effectively educated. Taking the societal perspective, teaching enables civilization to be passed on to the next generation. Human teachers have three concrete responsibilities: providing students with qualified teaching material (for example, textbooks); defining the appropriate skill set to be mastered by the students (for example,…

    December 5th, 2018

  7. Chasing convex bodies and other random topics with Dr. Sébastien Bubeck

    Episode 53, December 5, 2018 - Dr. Sébastien Bubeck explains the difficulty of the multi-armed bandit problem in the context of a parameter- and data-rich online world. He also discusses a host of topics from randomness and convex optimization to metrical task systems and log n competitiveness to the surprising connection between Gaussian kernels and what he calls some of the most beautiful objects in mathematics.

    December 5th, 2018

  8. Unlikely research area reveals surprising twist in non-smooth optimization

    Modern machine learning is characterized by two key features: high-dimensional models and very large datasets. Each of these features presents its own unique challenges, from basic issues such as storing and accessing all of the data to more intricate mathematical quests such as finding good algorithms to search through the high-dimensional space of models. In our recent work, which we’re happy to announce received a best paper award at this year’s Conference on Neural Information…

    December 4th, 2018

  9. Getting into a conversational groove: New approach encourages risk-taking in data-driven neural modeling

    Microsoft Research’s Natural Language Processing group has set an ambitious goal for itself: to create a neural model that can engage in the full scope of conversational capabilities, providing answers to requests while also bringing the value of additional information relevant to the exchange and—in doing so—sustaining and encouraging further conversation. Take the act of renting a car at the airport, for example. Across from you at the counter is the company representative, entering your…

    December 3rd, 2018

  10. The Microsoft Simple Encrypted Arithmetic Library goes open source

    Today we are extremely excited to announce that our Microsoft Simple Encrypted Arithmetic Library (Microsoft SEAL), an easy-to-use homomorphic encryption library developed by researchers in the Cryptography Research group at Microsoft, is open source on GitHub under an MIT License for free use. The library has already been adopted by Intel to implement the underlying cryptography functions in HE-Transformer, the homomorphic encryption back end to its neural network compiler nGraph. As we increasingly move our…
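    As a conceptual illustration of what homomorphic encryption enables, computing on data while it stays encrypted, here is a toy additively homomorphic scheme in the style of Paillier. Note that this is not Microsoft SEAL's scheme or API: SEAL implements lattice-based schemes such as BFV, and real keys are thousands of bits, not these toy primes.

```python
import math
import random

# Toy Paillier-style additively homomorphic encryption (Python 3.9+).
# Multiplying two ciphertexts yields an encryption of the SUM of the
# plaintexts, so a server can add encrypted numbers it cannot read.
p, q = 293, 433                 # toy primes; real keys use ~2048-bit moduli
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)            # valid since L(g^lam mod n^2) = lam mod n

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

a, b = encrypt(20), encrypt(22)
total = decrypt(a * b % n2)     # ciphertext product decrypts to 20 + 22
```

    SEAL goes much further than this sketch, supporting both addition and multiplication on encrypted data, which is what makes applications like encrypted neural network inference possible.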

    December 3rd, 2018

  11. ReDial: Recommendation dialogs for bridging the gap between chit-chat and goal-oriented chatbots

Chatbots come in many flavors, but most can be placed in one of two categories: goal-oriented chatbots and chit-chat chatbots. Goal-oriented chatbots behave like a natural language interface for function calls, where the chatbot asks for and confirms all required parameter values and then executes a function. The Cortana chat interface is a classic example of a goal-oriented chatbot. For example, you can ask about the weather for a specific location or let Cortana walk…
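    The slot-filling behavior described above, asking for each required parameter and then executing a function, can be sketched in a few lines. Everything here (the `weather` function, slot names, and canned replies) is hypothetical, not Cortana's or ReDial's actual interface:

```python
# Minimal sketch of a goal-oriented chatbot: gather required slots,
# then make the function call they parameterize.
def weather(location, date):
    # Stand-in for a real weather API call.
    return f"Forecast for {location} on {date}: sunny"

REQUIRED = ["location", "date"]

def run_dialog(answers):
    """answers maps each missing slot name to the user's reply."""
    slots = {}
    transcript = []
    for slot in REQUIRED:
        transcript.append(f"Bot: What {slot} would you like?")
        slots[slot] = answers[slot]
    # All required parameters collected: execute the function call.
    transcript.append("Bot: " + weather(**slots))
    return transcript

dialog = run_dialog({"location": "Seattle", "date": "tomorrow"})
```

    Chit-chat chatbots, by contrast, have no fixed slots or terminal function call, which is the gap ReDial's recommendation dialogs aim to bridge.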

    November 30th, 2018

  12. Discovering the best neural architectures in the continuous space

    If you’re a deep learning practitioner, you may find yourself faced with the same critical question on a regular basis: Which neural network architecture should I choose for my current task? The decision depends on a variety of factors and the answers to a number of other questions. What operations should I choose for this layer—convolution, depth separable convolution, or max pooling? What is the kernel size for convolution? 3x3 or 1x1? And which previous…

    November 30th, 2018