<h1 class="wp-block-heading" id="quantifying-and-mitigating-emerging-risks-in-multi-agent-collaboration">Quantifying and Mitigating Emerging Risks in Multi-Agent Collaboration</h1>

<p>This project investigates critical safety challenges in large-scale deployments of AI agents, focusing on privacy leakage and collusion risks in multi-agent environments. As agents collaborate and negotiate on complex tasks, they may unintentionally expose sensitive information or coordinate in ways that misalign with human values. The research develops a simulation testbed to analyze these behaviors, introduces dynamic privacy protocols, and examines how scaling agent interactions amplifies risk. 
Outcomes include a taxonomy of collusion patterns, mitigation strategies, and design principles for safer, more transparent, and trustworthy multi-agent systems, informing future AI safety standards and governance.</p>

<p>This research is conducted via the <a href="https://www.microsoft.com/en-us/research/academic-program/agentic-ai-research-and-innovation/">Agentic AI Research and Innovation</a> (AARI) Initiative, which focuses on the next frontier of agentic systems through <em>Grand Challenges</em> with the academic community and Microsoft Research.</p>

<p>Project team: Jianxun Lian, Beibei Shi, Yule Wen, Xing Xie, Diyi Yang, Xiaoyuan Yi, Yanzhe Zhang.</p>
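<p>To make the idea of a simulation testbed for privacy leakage concrete, the following is a purely illustrative sketch; none of these class names or checks come from the project itself. It tags each agent with a set of private facts and scans inter-agent messages for verbatim leaks:</p>

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A minimal simulated agent with private facts and a message transcript."""
    name: str
    private_facts: set = field(default_factory=set)
    transcript: list = field(default_factory=list)

    def send(self, recipient: "Agent", text: str) -> str:
        # Deliver a message; the testbed logs it on the recipient side.
        recipient.transcript.append((self.name, text))
        return text

def leaked_facts(agents: list, message: str) -> set:
    """Return (owner, fact) pairs for any private fact quoted verbatim in a message."""
    leaks = set()
    for agent in agents:
        for fact in agent.private_facts:
            if fact.lower() in message.lower():
                leaks.add((agent.name, fact))
    return leaks

alice = Agent("alice", private_facts={"SSN 123-45-6789"})
bob = Agent("bob")
msg = alice.send(bob, "My identifier is SSN 123-45-6789, please file the form.")
print(leaked_facts([alice, bob], msg))  # flags alice's private fact as leaked
```

<p>A real testbed would need far more than verbatim string matching, e.g., semantic checks for paraphrased leaks and protocols that adapt what each agent may reveal per recipient, which is where the dynamic privacy protocols described above would come in.</p>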