Microsoft Research Blog

The Microsoft Research blog provides in-depth views and perspectives from our researchers, scientists and engineers, plus information about noteworthy events and conferences, scholarships, and fellowships designed for academic and scientific communities.

Most recent

  1. First TextWorld Problems—Microsoft Research Montreal’s latest AI competition is really cooking

    This week, Microsoft Research threw down the gauntlet with the launch of a competition challenging researchers around the world to develop AI agents that can solve text-based games. Conceived by the Machine Reading Comprehension team at Microsoft Research Montreal, the competition—First TextWorld Problems: A Reinforcement and Language Learning Challenge—runs from December 8, 2018 through May 31, 2019. First TextWorld Problems is built on the TextWorld framework. TextWorld was released to the public in July 2018…

    December 11th, 2018

  2. A Deep Learning Theory: Global minima and over-parameterization

    One empirical finding in deep learning is that simple methods such as stochastic gradient descent (SGD) have a remarkable ability to fit training data. From a capacity perspective, this may not be surprising: modern neural networks are heavily over-parameterized, with many more parameters than training samples, so in principle there exist parameter settings that achieve 100% training accuracy (a minimal sketch illustrating this appears after the list). Yet, from a theory perspective, why and how SGD finds global minima over the…

    December 10th, 2018

  3. Fast, accurate, stable and tiny – Breathing life into IoT devices with an innovative algorithmic approach

    In the larger quest to make the Internet of Things (IoT) a reality for people everywhere, building devices that can be both ultrafunctional and beneficent isn’t a simple matter. Particularly in the arena of resource-constrained, real-time scenarios, the hurdles are significant. The challenges for devices that require quick responsiveness—say, smart implants that warn of impending epileptic seizures or smart spectacles providing navigation for low-vision people—are manifold.

    December 6th, 2018

  4. Learning to teach: Mutually enhanced learning and teaching for artificial intelligence

    Teaching is critically important. From an individual perspective, a student learning entirely on his or her own is rarely ideal; a student needs a teacher’s guidance and perspective to learn more effectively. From a societal perspective, teaching enables civilization to be passed on to the next generation. Human teachers have three concrete responsibilities: providing students with qualified teaching material (for example, textbooks); defining the appropriate skill set to be mastered by the students (for example,…

    December 5th, 2018

  5. Chasing convex bodies and other random topics with Dr. Sébastien Bubeck

    Episode 53, December 5, 2018 - Dr. Sébastien Bubeck explains the difficulty of the multi-armed bandit problem in the context of a parameter- and data-rich online world. He also discusses a host of topics from randomness and convex optimization to metrical task systems and log n competitiveness to the surprising connection between Gaussian kernels and what he calls some of the most beautiful objects in mathematics.

    December 5th, 2018

  6. Unlikely research area reveals surprising twist in non-smooth optimization

    Modern machine learning is characterized by two key features: high-dimensional models and very large datasets. Each of these features presents its own unique challenges, from basic issues such as storing and accessing all of the data to more intricate mathematical quests such as finding good algorithms to search through the high-dimensional space of models. In our recent work, which we’re happy to announce received a best paper award at this year’s Conference on Neural Information…

    December 4th, 2018

  7. Getting into a conversational groove: New approach encourages risk-taking in data-driven neural modeling

    Microsoft Research’s Natural Language Processing group has set an ambitious goal for itself: to create a neural model that can engage in the full scope of conversational capabilities, providing answers to requests while also offering additional information relevant to the exchange and, in doing so, sustaining and encouraging further conversation. Take the act of renting a car at the airport, for example. Across from you at the counter is the company representative, entering your…

    December 3rd, 2018

  8. The Microsoft Simple Encrypted Arithmetic Library goes open source

    Today we are extremely excited to announce that our Microsoft Simple Encrypted Arithmetic Library (Microsoft SEAL), an easy-to-use homomorphic encryption library developed by researchers in the Cryptography Research group at Microsoft, is now open source on GitHub under the MIT License for free use. The library has already been adopted by Intel to implement the underlying cryptography functions in HE-Transformer, the homomorphic encryption back end to its neural network compiler nGraph. As we increasingly move our…

    December 3rd, 2018

  9. ReDial: Recommendation dialogs for bridging the gap between chit-chat and goal-oriented chatbots

    Chatbots come in many flavors, but most can be placed in one of two categories: goal-oriented chatbots and chit-chat chatbots. Goal-oriented chatbots behave like a natural language interface for function calls, where the chatbot asks for and confirms all required parameter values and then executes a function (a minimal slot-filling sketch appears after the list). The Cortana chat interface is a classic example of a goal-oriented chatbot. For example, you can ask about the weather for a specific location or let Cortana walk…

    November 30th, 2018

  10. Discovering the best neural architectures in the continuous space

    If you’re a deep learning practitioner, you may find yourself faced with the same critical question on a regular basis: Which neural network architecture should I choose for my current task? The decision depends on a variety of factors and the answers to a number of other questions. What operations should I choose for this layer—convolution, depthwise separable convolution, or max pooling? What kernel size should I use for the convolution: 3x3 or 1x1? And which previous…

    November 30th, 2018

  11. Minimizing trial and error in the drug discovery process

    In 1928, Alexander Fleming accidentally let his petri dishes go moldy, a mistake that would lead to the breakthrough discovery of penicillin and save the lives of countless people. From these haphazard beginnings, the pharmaceutical industry has grown into one of the most technically advanced and valuable sectors, driven by incredible progress in chemistry and molecular biology. Nevertheless, a great deal of trial and error still exists in the drug discovery process. With an estimated…

    November 29th, 2018

  12. Machine learning and the learning machine with Dr. Christopher Bishop

    Episode 52, November 28, 2018 - Dr. Christopher Bishop discusses the past, present and future of AI research, explains the No Free Lunch Theorem, describes the modern view of machine learning (or how he learned to stop worrying and love uncertainty), and tells how the real excitement in the next few years will be the growth in our ability to create new technologies not by programming machines but by teaching them to learn.

    November 28th, 2018
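
Below is a minimal sketch of the over-parameterized fitting described in item 2. It is not code from the post; the architecture, sizes, and hyperparameters are illustrative assumptions. A small ReLU network with far more parameters than training samples is trained with plain SGD on randomly labeled data, and it typically reaches 100% training accuracy, which is only possible by memorizing the training set.

```python
# Illustrative sketch (not from the post): an over-parameterized MLP memorizing
# randomly labeled data with plain SGD. All sizes and hyperparameters are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

n_samples, dim, n_classes = 64, 20, 2
X = torch.randn(n_samples, dim)
y = torch.randint(0, n_classes, (n_samples,))  # random labels: nothing to generalize from

# Roughly 12,000 parameters versus 64 training samples.
model = nn.Sequential(nn.Linear(dim, 512), nn.ReLU(), nn.Linear(512, n_classes))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    train_acc = (model(X).argmax(dim=1) == y).float().mean().item()
print(f"training accuracy: {train_acc:.2%}")  # typically reaches 100% after enough steps
```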
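
As a companion to item 9, here is a minimal sketch of the “natural language interface for function calls” view of a goal-oriented chatbot. It is not the ReDial system or Cortana; the weather function, slot names, and prompts are hypothetical stand-ins. The bot asks for each missing parameter value (slot) in turn and executes the underlying function only once every required slot has been filled.

```python
# Illustrative sketch of a goal-oriented (slot-filling) chatbot turn.
# get_weather, REQUIRED_SLOTS, and the prompts are hypothetical, not a real API.
def get_weather(location: str, date: str) -> str:
    # A real bot would call a weather service here.
    return f"Forecast for {location} on {date}: sunny, 3°C."

REQUIRED_SLOTS = ["location", "date"]

def goal_oriented_turn(slots: dict) -> str:
    """Ask for the next missing parameter; execute the function once all are confirmed."""
    for slot in REQUIRED_SLOTS:
        if slot not in slots:
            return f"Sure - which {slot} should I check the weather for?"
    return get_weather(**slots)

# Example exchange: the bot keeps prompting until every required slot is filled.
print(goal_oriented_turn({}))                                            # asks for location
print(goal_oriented_turn({"location": "Montreal"}))                      # asks for date
print(goal_oriented_turn({"location": "Montreal", "date": "tomorrow"}))  # executes the call
```

A chit-chat chatbot, by contrast, has no fixed slot schema or backend function to call, which is part of the gap the ReDial work aims to bridge.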