Microsoft Research Blog

  1. [Diagram: payload exchange between the server (on Worker 0) and clients (on Workers 2 and 3). The server pushes the central ML model and the clients' data to Workers 2 and 3; each client trains the model on its local data; the clients then send the pseudo-gradients of the updated model back to the server, which aggregates them into a new global model. One such round is sketched below.]

    FLUTE: A scalable federated learning simulation platform

    Federated learning has become a major area of machine learning (ML) research in recent years due to its versatility in training complex models over massive amounts of data without the need to share that data with a centralized entity. However, despite this flexibility and the…
    May 16, 2022
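
The round-trip in the diagram above maps onto the standard federated averaging loop: broadcast, local training, pseudo-gradient aggregation. Below is a minimal sketch of one simulated round under that division of labor; the function names and the toy least-squares objective are illustrative assumptions, not FLUTE's API. (Shipping client data to workers is a simulation-only convenience; in a real deployment, as the post notes, the data never leaves the client.)

```python
# Minimal sketch of one simulated federated round: broadcast the global
# model, train locally on each client, aggregate pseudo-gradients.
# Illustrative only -- not FLUTE's actual API.
import numpy as np

def local_train(w, data, lr=0.1, steps=5):
    """Toy client update: a few SGD steps on a local least-squares objective."""
    X, y = data
    w = w.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)  # gradient of 0.5*mean((Xw - y)^2)
    return w

def server_round(global_w, client_data, server_lr=1.0):
    """One round: broadcast the model, train locally, aggregate pseudo-gradients."""
    # Pseudo-gradient = global weights minus the locally trained weights.
    pseudo_grads = [global_w - local_train(global_w, d) for d in client_data]
    # Plain average here; FedAvg-style aggregation weights by client data size.
    return global_w - server_lr * np.mean(pseudo_grads, axis=0)

# Two simulated clients with private data; only weights cross the wire.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(2):
    X = rng.normal(size=(64, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=64)))

w = np.zeros(2)
for _ in range(20):
    w = server_round(w, clients)
print(w)  # converges toward true_w = [2, -1]
```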

  1. [Diagram: Ekya's architecture. Video data flows from a series of cameras into specialized, lightweight inference models and shared resource pools at the edge.]

    Don’t let data drift derail edge compute machine learning models

    Edge computing has come of age, with deployments enabling many applications that process data from IoT sensors and cameras. In 2017, we identified the symbiotic relationship between edge computing and video analytics in an article, noting that live video analytics is the “killer app” for…
    April 19, 2022
  2. [Flowchart: inputs are pre-processed and fed to large language models such as GPT-3 and Codex; the post-processed output is returned to the user for verification. If the user finds the output incorrect, they edit it, and that edit is fed back to improve the pre- and post-processing mechanisms. A sketch of this loop follows the list.]

    Jigsaw fixes bugs in machine-written software

    Large pre-trained language models such as GPT-3, Codex, and others can be tuned to generate code from natural language specifications of programmer intent. Such automated models have the potential to improve productivity for every programmer in the world. But since the models can struggle to…
    March 31, 2022
  3. [Figure 1: COMPASS is a general-purpose pretraining pipeline trained on multimodal data, including RGB images, segmentation, depth, and optical flow. The pretrained COMPASS model can be deployed to downstream autonomous-system tasks; here it is transferred to drone navigation, car racing, and visual odometry, which run in very different environments and application scenarios. A contrastive-pretraining sketch follows the list.]

    COMPASS: COntrastive Multimodal Pretraining for AutonomouS Systems

    Figure 1: COMPASS is a general-purpose pretraining pipeline trained on multimodal data, including RGB images, depth, and optical flow. The pretrained COMPASS model can be deployed on various downstream autonomous-system tasks. In this work, we test COMPASS on simulated drone navigation, car…
    February 23, 2022
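
The Jigsaw flowchart (item 2) describes a feedback loop wrapped around a code model. The sketch below is a hypothetical rendering of that loop: `FeedbackLoop`, the stubbed model, and the string-replacement repair table are illustrative stand-ins; Jigsaw's actual post-processing applies program-analysis-based transformations, not raw string edits.

```python
# Hypothetical sketch of a Jigsaw-style loop: pre-process a prompt, call a
# code model, post-process the output, and learn simple repair rules from
# user edits. The model is stubbed; the rule table stands in for Jigsaw's
# structural (program-level) transformations.
from typing import Callable

class FeedbackLoop:
    def __init__(self, model: Callable[[str], str]):
        self.model = model
        self.repair_rules: dict[str, str] = {}  # learned from past user edits

    def pre_process(self, intent: str) -> str:
        # Real pre-processing selects few-shot examples; here we just prefix.
        return f"# Task: {intent}\n"

    def post_process(self, code: str) -> str:
        # Apply every repair rule learned from earlier corrections.
        for wrong, fixed in self.repair_rules.items():
            code = code.replace(wrong, fixed)
        return code

    def run(self, intent: str) -> str:
        return self.post_process(self.model(self.pre_process(intent)))

    def feedback(self, produced: str, corrected: str) -> None:
        # The user edited the output: remember the fix so the same
        # mistake is repaired automatically next time.
        if produced != corrected:
            self.repair_rules[produced] = corrected

# Stub model that makes a common pandas bug: dropna(inplace=True) returns
# None, so assigning its result to df clobbers the dataframe.
stub = lambda prompt: "df = df.dropna(axis=0, inplace=True)"
loop = FeedbackLoop(stub)
out = loop.run("drop rows with missing values")
loop.feedback(out, "df = df.dropna(axis=0)")       # user's correction
print(loop.run("drop rows with missing values"))   # now auto-repaired
```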
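For item 3, the "contrastive multimodal" part of COMPASS can be illustrated with a standard InfoNCE objective between embeddings of aligned modalities. The PyTorch sketch below is a toy under stated assumptions (linear encoders, 32×32 inputs, RGB and depth only), not the COMPASS architecture; it shows only the shape of the pretraining signal: matching cross-modal pairs should score higher than mismatched ones.

```python
# Toy contrastive multimodal pretraining step: encode two aligned
# modalities (RGB and depth) and pull matching pairs together with a
# symmetric InfoNCE loss. Encoders and shapes are illustrative.
import torch
import torch.nn.functional as F

rgb_encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
depth_encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(1 * 32 * 32, 128))

def info_nce(z_a, z_b, temperature=0.07):
    """Symmetric InfoNCE: the i-th RGB frame should match the i-th depth frame."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.T / temperature   # (batch, batch) similarity matrix
    targets = torch.arange(len(z_a))     # positives lie on the diagonal
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

rgb = torch.randn(16, 3, 32, 32)    # a batch of aligned RGB/depth frames
depth = torch.randn(16, 1, 32, 32)
loss = info_nce(rgb_encoder(rgb), depth_encoder(depth))
loss.backward()  # pretrain; the encoders can later transfer downstream
```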