{"id":842272,"date":"2022-05-16T09:00:00","date_gmt":"2022-05-16T16:00:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=842272"},"modified":"2022-08-17T09:05:39","modified_gmt":"2022-08-17T16:05:39","slug":"flute-a-scalable-federated-learning-simulation-platform","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/flute-a-scalable-federated-learning-simulation-platform\/","title":{"rendered":"FLUTE: A scalable federated learning simulation platform"},"content":{"rendered":"\n<figure class=\"wp-block-image alignwide size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"2560\" height=\"1441\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-scaled.jpg\" alt=\"This diagram shows a payload exchange between a server, inside Worker 0, and clients that live inside Workers 2 and 3. First, the server pushes the central ML model plus the clients\u2019 data to Workers 2 and 3. Then, each client trains the model with their local data. 
Finally, the clients send the pseudo-gradients of this new model back to the server for aggregation and the creation of a new global model.\" class=\"wp-image-842308\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-scaled.jpg 2560w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-300x169.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-1024x577.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-768x432.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-1536x865.jpg 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-2048x1153.jpg 2048w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-1066x600.jpg 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-343x193.jpg 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-240x135.jpg 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-scaled-960x540.jpg 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-1280x720.jpg 1280w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-1920x1080.jpg 1920w\" sizes=\"auto, (max-width: 2560px) 100vw, 2560px\" 
\/><\/figure>\n\n\n\n<p>Federated learning has become a major area of machine learning (ML) research in recent years due to its versatility in training complex models over massive amounts of data without the need to share that data with a centralized entity. However, despite this flexibility and the amount of research already conducted, it\u2019s difficult to implement due to its many moving parts\u2014a significant deviation from traditional ML pipelines.<\/p>\n\n\n\n<p>The challenges in working with federated learning result from the diversity of local data and end-node hardware, privacy concerns, and optimization constraints. These are compounded by the sheer volume of federated learning clients and their data, and they necessitate a wide skill set, significant interdisciplinary research efforts, and major engineering resources to manage. In addition, federated learning applications often need to scale the learning process to millions of clients to simulate a real-world environment. All of these challenges underscore the need for a simulation platform, one that enables researchers and developers to perform proof-of-concept implementations and validate performance before building and deploying their ML models.<\/p>\n\n\n\n<h2 id=\"a-versatile-framework-for-federated-learning\">A versatile framework for federated learning<\/h2>\n\n\n\n<p>Today, the <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/group\/privacy-in-ai\/\">Privacy in AI<\/a> team at Microsoft Research is thrilled to introduce <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/github.com\/microsoft\/msrflute\">Federated Learning Utilities and Tools for Experimentation<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> (FLUTE) as a framework for running large-scale offline federated learning simulations, which we discuss in detail in the paper, \u201c<a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/flute-a-scalable-extensible-framework-for-high-performance-federated-learning-simulations\/\">FLUTE: A Scalable, Extensible Framework for High-Performance Federated Learning Simulations<\/a>.\u201d In creating FLUTE, our goal was to develop a high-performance simulation platform that enables quick prototyping of federated learning research and makes it easier to implement federated learning applications.<\/p>\n\n\n\n<div class=\"wp-block-buttons is-content-justification-center is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-16018d1d wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button\"><a data-bi-type=\"button\" class=\"wp-block-button__link\" href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/flute-a-scalable-extensible-framework-for-high-performance-federated-learning-simulations\/\" target=\"_blank\" rel=\"noreferrer noopener\">Read the paper<\/a><\/div>\n\n\n\n<div class=\"wp-block-button is-style-fill-github\"><a data-bi-type=\"button\" class=\"wp-block-button__link\" href=\"https:\/\/github.com\/microsoft\/msrflute\" target=\"_blank\" rel=\"noreferrer noopener\">Download software<\/a><\/div>\n<\/div>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p>There has been a lot of research in the last few years directed at tackling the many challenges in working with federated learning, including setting up learning environments, providing privacy guarantees, implementing model-client updates, and lowering communication costs. FLUTE addresses many of these while providing enhanced customization and enabling new research on a realistic scale. 
It also allows developers and researchers to test and experiment with certain scenarios, such as data privacy, communication strategies, and scalability, before implementing their ML model in a production framework.<\/p>\n\n\n\n<p>One of FLUTE\u2019s main benefits is its native integration with Azure ML workspaces, leveraging the platform\u2019s features to manage and track experiments, parameter sweeps, and model snapshots. FLUTE is built on Python and PyTorch for distributed execution, and its flexibly designed client-server architecture helps researchers and developers quickly prototype novel approaches to federated learning. However, FLUTE\u2019s key innovation and technological differentiator is the ease with which new experimental scenarios in core areas of active research can be implemented in a robust, high-performance simulator.<\/p>\n\n\n\n<p>FLUTE offers a platform where all clients are implemented as isolated object instances, as shown in Figure 1. 
The interface between the server and the remaining workers relies on messages that contain client IDs and training information, with <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.open-mpi.org\/\">MPI<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> as the main communication protocol. Local data on each client stays within local storage boundaries and is never aggregated with other local sources. Clients communicate only the pseudo-gradients to the central server.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1818\" height=\"975\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/FLUTE_Fig1.png\" alt=\"This diagram shows server-client communication under FLUTE\u2019s architecture. Worker 0 acts as the server and contains the global model, client training data, the configuration, and the optimizer. Worker i receives a copy of the global model plus the task configuration. It also contains clients that are composed of the trainer and the optimizer. Each client sends the payload back to Worker 0. 
\" class=\"wp-image-842311\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/FLUTE_Fig1.png 1818w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/FLUTE_Fig1-300x161.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/FLUTE_Fig1-1024x549.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/FLUTE_Fig1-768x412.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/FLUTE_Fig1-1536x824.png 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/FLUTE_Fig1-710x380.png 710w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/FLUTE_Fig1-240x129.png 240w\" sizes=\"auto, (max-width: 1818px) 100vw, 1818px\" \/><figcaption>Figure 1: FLUTE\u2019s client-server architecture and workflow. First, the server pushes the initial global model to the clients and sends training information. Then, the clients train their instances of the global model with locally available data. Finally, all clients return the information to the server to aggregate the pseudo-gradients and produce a new global model that will be updated to the clients. 
This three-step process repeats for all rounds of training.<\/figcaption><\/figure>\n\n\n\n<p>The following features contribute to FLUTE\u2019s versatile framework and enable experimentation with new federated learning approaches:\u202f<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><strong>Scalability<\/strong>: Scale is a critical factor in understanding practical metrics, such as convergence and privacy-utility tradeoffs.\u202fResearchers and developers can run large-scale experiments using tens of thousands of clients with a reasonable turnaround time.&nbsp;<\/li><li><strong>Flexibility<\/strong>: FLUTE supports diverse federated learning configurations, including standardized implementations such as <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/federated-transfer-learning-with-dynamic-gradient-aggregation\/\">DGA<\/a> and <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/1602.05629\">FedAvg<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/li><li><strong>Versatility<\/strong>: FLUTE\u2019s generic API helps researchers and developers easily implement new models, datasets, metrics, and experimentation features, while its open architecture helps them add new algorithms in such areas as optimization, privacy, and robustness.<\/li><\/ul>\n\n\n\n<h2 id=\"available-as-an-open-source-platform\">Available as an open-source platform<\/h2>\n\n\n\n<p>As part of this announcement, we\u2019re making FLUTE available as a versatile <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/github.com\/microsoft\/msrflute\">open-source platform<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> for rapid prototyping and experimentation.\u202fIt comes with a set of basic tools to help kickstart experiments. 
We hope researchers and developers take advantage of this framework by exploring new approaches to federated learning.<\/p>\n\n\n\n<h2 id=\"looking-ahead\">Looking ahead<\/h2>\n\n\n\n<p>FLUTE\u2019s innovative framework offers a new paradigm for implementing federated learning algorithms at scale, and this is just the beginning. 
We\u2019re making improvements with a view toward making FLUTE the standard federated learning simulation platform. Future releases will include algorithmic enhancements in optimization and support for additional communication protocols. We\u2019re also adding features to make it easier to set up experiments with tailored features for new tasks, along with the ability to incorporate FLUTE as a library into Azure ML pipelines.<\/p>\n\n\n\n<h2 id=\"additional-resources\">Additional resources<\/h2>\n\n\n\n<p>Check out this <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/aka.ms\/msrflute-tutorial\" target=\"_blank\" rel=\"noopener noreferrer\">video<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> for a deep dive into FLUTE\u2019s architecture and a tutorial on how to use it. Our <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/microsoft.github.io\/msrflute\/index.html\" target=\"_blank\" rel=\"noopener noreferrer\">documentation<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> also explains how to implement FLUTE.<\/p>\n\n\n\n<p>You can learn more about the FLUTE project by visiting our <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/project-ftl\/\" target=\"_blank\" rel=\"noreferrer noopener\">project page<\/a>, and discover more about our current federated learning research as well as other projects related to privacy in AI on our <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/group\/privacy-in-ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">group page<\/a>.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"FLUTE: Breaking Barriers for Federated Learning Research at Scale\" width=\"500\" height=\"281\" 
src=\"https:\/\/www.youtube-nocookie.com\/embed\/ixwQh8sQthM?feature=oembed&rel=0\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<div class=\"wp-block-group is-layout-flow wp-block-group-is-layout-flow\">\n<h4 id=\"explore-more\">Explore More<\/h4>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold text-neutral-300 small\">Publication<\/span>\n\t\t\t<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/flute-a-scalable-extensible-framework-for-high-performance-federated-learning-simulations\/\" data-bi-cN=\"FLUTE: A Scalable, Extensible Framework for High-Performance Federated Learning Simulations\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>FLUTE: A Scalable, Extensible Framework for High-Performance Federated Learning Simulations<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"annotations \" data-bi-aN=\"citation\">\n\t<article class=\"annotations__list card depth-16 bg-body p-4 \">\n\t\t<div class=\"annotations__list-item\">\n\t\t\t\t\t\t<span class=\"annotations__type d-block text-uppercase font-weight-semibold 
text-neutral-300 small\">Tool<\/span>\n\t\t\t<a href=\"https:\/\/github.com\/microsoft\/msrflute\" data-bi-cN=\"FLUTE\" data-external-link=\"false\" data-bi-aN=\"citation\" data-bi-type=\"annotated-link\" class=\"annotations__link font-weight-semibold text-decoration-none\"><span>FLUTE<\/span>&nbsp;<span class=\"glyph-in-link glyph-append glyph-append-chevron-right\" aria-hidden=\"true\"><\/span><\/a>\t\t\t\t\t\t\t<p class=\"annotations__caption text-neutral-400 mt-2\">FLUTE (Federated Learning Utilities for Testing and Experimentation) is a platform for conducting high-performance federated learning simulations.<\/p>\n\t\t\t\t\t<\/div>\n\t<\/article>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Federated learning has become a major area of machine learning (ML) research in recent years due to its versatility in training complex models over massive amounts of data without the need to share that data with a centralized entity. However, despite this flexibility and the amount of research already conducted, it\u2019s difficult to implement due [&hellip;]<\/p>\n","protected":false},"author":37583,"featured_media":842308,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":[{"type":"user_nicename","value":"Dimitrios Dimitriadis","user_id":"37521"},{"type":"user_nicename","value":"Mirian Hipolito Garcia","user_id":"40483"},{"type":"user_nicename","value":"Daniel Eduardo Madrigal Diaz","user_id":"40480"},{"type":"user_nicename","value":"Andre Manoel","user_id":"40504"},{"type":"user_nicename","value":"Robert 
Sim","user_id":"36650"}],"msr_hide_image_in_river":0,"footnotes":""},"categories":[1],"tags":[],"research-area":[13556,13558,13547],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[243984],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[],"class_list":["post-842272","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-research-blog","msr-research-area-artificial-intelligence","msr-research-area-security-privacy-cryptography","msr-research-area-systems-and-networking","msr-locale-en_us","msr-post-option-blog-homepage-featured"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[756487,793670],"related-projects":[658488],"related-events":[],"related-researchers":[{"type":"user_nicename","value":"Mirian Hipolito Garcia","user_id":40483,"display_name":"Mirian Hipolito Garcia","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mirianh\/\" aria-label=\"Visit the profile page for Mirian Hipolito Garcia\">Mirian Hipolito Garcia<\/a>","is_active":false,"last_first":"Hipolito Garcia, Mirian","people_section":0,"alias":"mirianh"},{"type":"user_nicename","value":"Daniel Eduardo Madrigal Diaz","user_id":40480,"display_name":"Daniel Eduardo Madrigal Diaz","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/danielmad\/\" aria-label=\"Visit the profile page for Daniel Eduardo Madrigal Diaz\">Daniel Eduardo Madrigal Diaz<\/a>","is_active":false,"last_first":"Madrigal Diaz, Daniel Eduardo","people_section":0,"alias":"danielmad"},{"type":"user_nicename","value":"Robert Sim","user_id":36650,"display_name":"Robert Sim","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/rsim\/\" aria-label=\"Visit the profile page for 
Robert Sim\">Robert Sim<\/a>","is_active":false,"last_first":"Sim, Robert","people_section":0,"alias":"rsim"}],"msr_type":"Post","featured_image_thumbnail":"<img width=\"960\" height=\"540\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-scaled-960x540.jpg\" class=\"img-object-cover\" alt=\"This diagram shows a payload exchange between a server, inside Worker 0, and clients that live inside Workers 2 and 3. First, the server pushes the central ML model plus the clients\u2019 data to Workers 2 and 3. Then, each client trains the model with their local data. Finally, the clients send the pseudo-gradients of this new model back to the server for aggregation and the creation of a new global model.\" decoding=\"async\" loading=\"lazy\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-scaled-960x540.jpg 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-300x169.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-1024x577.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-768x432.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-1536x865.jpg 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-2048x1153.jpg 2048w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-1066x600.jpg 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-343x193.jpg 343w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-240x135.jpg 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-1280x720.jpg 1280w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/05\/1400x788_Flute_Blog_blog_hero-1920x1080.jpg 1920w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/>","byline":"","formattedDate":"May 16, 2022","formattedExcerpt":"Federated learning has become a major area of machine learning (ML) research in recent years due to its versatility in training complex models over massive amounts of data without the need to share that data with a centralized entity. However, despite this flexibility and the&hellip;","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/842272","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/users\/37583"}],"replies":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/comments?post=842272"}],"version-history":[{"count":21,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/842272\/revisions"}],"predecessor-version":[{"id":870564,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/842272\/revisions\/870564"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/842308"}],"wp:attachment":[{"href":"https:\/\/www.micros
oft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=842272"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/categories?post=842272"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/tags?post=842272"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=842272"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=842272"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=842272"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=842272"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=842272"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=842272"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=842272"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=842272"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}