{"id":1160075,"date":"2026-01-20T15:07:28","date_gmt":"2026-01-20T23:07:28","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-event&#038;p=1160075"},"modified":"2026-01-20T15:07:29","modified_gmt":"2026-01-20T23:07:29","slug":"computer-vision-in-the-wild-workshop-at-cvpr-2026","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/computer-vision-in-the-wild-workshop-at-cvpr-2026\/","title":{"rendered":"Computer Vision in the Wild Workshop at CVPR 2026"},"content":{"rendered":"\n\n\n\n\n<h2 class=\"wp-block-heading\" id=\"about-the-workshop\">About the workshop<\/h2>\n\n\n\n<p><strong>Full workshop title<\/strong>: The 5th Workshop on Computer Vision in the Wild (CVinW): Towards Unified Multimodal Agents for Reasoning in the Wild<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><em><strong>Note<\/strong>: The date of this workshop is tentative, so please <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/computer-vision-in-the-wild.github.io\/cvpr-2026\/\" target=\"_blank\" rel=\"noopener noreferrer\">check the official workshop page for the final agenda<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/em><\/li>\n<\/ul>\n\n\n\n<p><strong>Host conference<\/strong>: <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/cvpr.thecvf.com\/Conferences\/2026\" target=\"_blank\" rel=\"noopener noreferrer\">The Conference on Computer Vision and Pattern Recognition (CVPR)<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> | June 3-4, 2026<\/p>\n\n\n\n<p><strong>Workshop organizers<\/strong>:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tanreuben\/\">Reuben Tan<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/zhengyang\/\">Zhengyuan Yang<\/a><\/p>\n\n\n\n<p><strong>Workshop scientific advisor<\/strong>: <a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jfgao\/\">Jianfeng Gao<\/a><\/p>\n\n\n\n<p><strong>Speakers<\/strong>:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Kate Saenko, Meta AGI Foundations<\/li>\n\n\n\n<li>Chelsea Finn, Stanford & PI<\/li>\n\n\n\n<li>Manling Li, Northwestern University<\/li>\n\n\n\n<li>Xiaolong Wang, UCSD & Nvidia<\/li>\n\n\n\n<li>Mohit Bansal, University of North Carolina at Chapel Hill<\/li>\n<\/ul>\n\n\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button is-style-outline is-style-outline--1\"><a data-bi-type=\"button\" class=\"wp-block-button__link wp-element-button\">Official workshop agenda<\/a><\/div>\n<\/div>\n\n\n\n<p>The 5th CVinW workshop brings together researchers building <em>multimodal AI agents<\/em> that can <em>perceive, reason, and act<\/em> in digital and physical environments. The workshop focuses on capabilities where today\u2019s agentic models still struggle, including but not limited to <em>fine-grained spatiotemporal reasoning, causal inference, long-horizon planning & memory, and robust tool-use<\/em>, and it convenes both academia and industry to discuss approaches, datasets, and benchmarks for robust agents that complete complex tasks \u201cin the wild.\u201d<\/p>\n\n\n\n<p>This year\u2019s edition emphasizes the intersection of <em>LMMs and VLA models<\/em> and the full loop from representation to inference to decision-making, including structured reasoning strategies (e.g., chain-\/tree-of-thought, program-aided reasoning), long-horizon planning\/memory, and evaluation protocols that diagnose reasoning (not just recognition).<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"challenges\">Challenges<\/h2>\n\n\n\n<p>To measure progress with <em>fine-grained evaluations and public leaderboards<\/em>, the workshop proposes two challenges:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>MindCube (Spatial Mental Models under Partial 
Observability)<\/strong>\n<ul class=\"wp-block-list\">\n<li>Evaluates whether VLMs can form robust spatial mental models from limited viewpoints by capturing <em>positions<\/em>, <em>orientations<\/em>, and <em>counterfactual \u201cwhat-if\u201d dynamics<\/em>.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>SITE (Standardized, Cross-modal Spatial Intelligence Thorough Evaluation)<\/strong>\n<ul class=\"wp-block-list\">\n<li>Evaluates spatial intelligence across <em>single-image<\/em>, <em>multi-image<\/em>, and <em>video<\/em> modalities and across spatial factors (scale, visualization vs. orientation, intrinsic vs. extrinsic frames, static vs. dynamic).<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"call-for-contributions\">Call for contributions<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Submissions of published and unpublished works are welcome.<\/li>\n\n\n\n<li>Accepted works will be presented as posters and spotlights at the workshop.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"important-dates\">Important dates<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Workshop papers\n<ul class=\"wp-block-list\">\n<li>Paper submission deadline: <strong>April 21, 2026<\/strong><\/li>\n\n\n\n<li>Notification: <strong>May 19, 2026<\/strong><\/li>\n\n\n\n<li>Camera-ready deadline: <strong>June 2, 2026<\/strong><\/li>\n<\/ul>\n<\/li>\n\n\n\n<li>Challenges\n<ul class=\"wp-block-list\">\n<li>Start date: January 21, 2026<\/li>\n\n\n\n<li>End date: June 2, 2026<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n\n\n<h2 class=\"wp-block-heading\" id=\"agenda-tentative\">Agenda (tentative)<\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table><thead><tr><th>Time<\/th><th>Description<\/th><\/tr><\/thead><tbody><tr><td>13:00-13:30<\/td><td><strong>Invited talk: Spatial Intelligence and Embodied AI<\/strong><br>Manling Li, Northwestern 
University<\/td><\/tr><tr><td>13:30-14:00<\/td><td><strong>Invited talk: Robotic perception, planning, and reasoning<\/strong><br>Chelsea Finn, Stanford & PI<\/td><\/tr><tr><td>14:00-14:30<\/td><td><strong>Workshop paper presentations<\/strong><\/td><\/tr><tr><td>14:30-15:00<\/td><td><strong>Afternoon break + poster session<\/strong><\/td><\/tr><tr><td>15:00-15:30<\/td><td><strong>Invited talk: Test-time scaling and reinforcement learning<\/strong><br>Xiaolong Wang, UCSD & Nvidia<\/td><\/tr><tr><td>15:30-16:00<\/td><td><strong>Invited talk: Multimodal reasoning and reward-driven video understanding<\/strong><br>Mohit Bansal, University of North Carolina at Chapel Hill<\/td><\/tr><tr><td>16:00-16:30<\/td><td><strong>Invited talk: SAM 3 promptable concept segmentation<\/strong><br>Kate Saenko, Meta AGI Foundations<\/td><\/tr><tr><td>16:30-17:30<\/td><td><strong>Panel discussion and closing remarks<\/strong><br>Moderators: Zhengyuan Yang, Jianfeng Gao<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n","protected":false},"excerpt":{"rendered":"<p>Full workshop title: The 5th Workshop on Computer Vision in the Wild (CVinW): Towards Unified Multimodal Agents for Reasoning in the Wild Host conference: The Conference on Computer Vision and Pattern Recognition (CVPR) (opens in new tab) | June 3-4, 2026 Workshop organizers:\u00a0Reuben Tan, Zhengyuan Yang Workshop scientific advisor: Jianfeng Gao Speakers: The 5th CVinW [&hellip;]<\/p>\n","protected":false},"featured_media":0,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr_startdate":"2026-06-03","msr_enddate":"2026-06-03","msr_location":"Denver, Colorado, USA","msr_expirationdate":"","msr_event_recording_link":"","msr_event_link":"","msr_event_link_redirect":false,"msr_event_time":"Mountain Standard Time (UTC 
-7)","msr_hide_region":false,"msr_private_event":false,"msr_hide_image_in_river":null,"footnotes":""},"research-area":[13562],"msr-region":[197900],"msr-event-type":[210063],"msr-video-type":[],"msr-locale":[268875],"msr-program-audience":[],"msr-post-option":[269148,269142],"msr-impact-theme":[],"class_list":["post-1160075","msr-event","type-msr-event","status-publish","hentry","msr-research-area-computer-vision","msr-region-north-america","msr-event-type-workshop","msr-locale-en_us","msr-post-option-approved-for-river","msr-post-option-include-in-river"],"msr_about":"<!-- wp:msr\/event-details {\"title\":\"Computer Vision in the Wild at CVPR 2026\",\"hasSubtitle\":true,\"subTitle\":\"Towards Unified Multimodal Agents for Reasoning in the Wild\",\"image\":{\"id\":1034910,\"url\":\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/05\/CVPR_WebBanner_1920x720.png\",\"alt\":\"teal background triangular pattern\"}} \/-->\n\n<!-- wp:msr\/content-tabs -->\n<!-- wp:msr\/content-tab -->\n<!-- wp:heading -->\n<h2 class=\"wp-block-heading\" id=\"about-the-workshop\">About the workshop<\/h2>\n<!-- \/wp:heading -->\n\n<!-- wp:paragraph -->\n<p><strong>Full workshop title<\/strong>: The 5th Workshop on Computer Vision in the Wild (CVinW): Towards Unified Multimodal Agents for Reasoning in the Wild<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:list -->\n<ul class=\"wp-block-list\"><!-- wp:list-item -->\n<li><em><strong>Note<\/strong>: The date of this workshop is tentative, so please <a href=\"https:\/\/computer-vision-in-the-wild.github.io\/cvpr-2026\/\" target=\"_blank\" rel=\"noreferrer noopener\">check the official workshop page for the final agenda<\/a>.<\/em><\/li>\n<!-- \/wp:list-item --><\/ul>\n<!-- \/wp:list -->\n\n<!-- wp:paragraph -->\n<p><strong>Host conference<\/strong>: <a href=\"https:\/\/cvpr.thecvf.com\/Conferences\/2026\" target=\"_blank\" rel=\"noreferrer noopener\">The Conference on Computer Vision and Pattern Recognition (CVPR)<\/a> | June 
3-4, 2026<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p><strong>Workshop organizers<\/strong>:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tanreuben\/\">Reuben Tan<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/zhengyang\/\">Zhengyuan Yang<\/a><\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p><strong>Workshop scientific advisor<\/strong>: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jfgao\/\">Jianfeng Gao<\/a><\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p><strong>Speakers<\/strong>:<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:list -->\n<ul class=\"wp-block-list\"><!-- wp:list-item -->\n<li>Kate Saenko, Meta AGI Foundations<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li>Chelsea Finn, Stanford &amp; PI<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li>Manling Li, Northwestern University<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li>Xiaolong Wang, UCSD &amp; Nvidia<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li>Mohit Bansal, University of North Carolina at Chapel Hill<\/li>\n<!-- \/wp:list-item --><\/ul>\n<!-- \/wp:list -->\n\n<!-- wp:buttons -->\n<div class=\"wp-block-buttons\"><!-- wp:button {\"className\":\"is-style-outline\"} -->\n<div class=\"wp-block-button is-style-outline\"><a class=\"wp-block-button__link wp-element-button\">Official workshop agenda<\/a><\/div>\n<!-- \/wp:button --><\/div>\n<!-- \/wp:buttons -->\n\n<!-- wp:paragraph -->\n<p>The 5th CVinW workshop brings together researchers building <em>multimodal AI agents<\/em> that can <em>perceive, reason, and act<\/em> in digital and physical environments. 
The workshop focuses on capabilities where today\u2019s agentic models still struggle, including but not limited to <em>fine-grained spatiotemporal reasoning, causal inference, long-horizon planning &amp; memory, and robust tool-use<\/em>, and it convenes both academia and industry to discuss approaches, datasets, and benchmarks for robust agents that complete complex tasks \u201cin the wild.\u201d<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>This year\u2019s edition emphasizes the intersection of <em>LMMs and VLA models<\/em> and the full loop from representation to inference to decision-making, including structured reasoning strategies (e.g., chain-\/tree-of-thought, program-aided reasoning), long-horizon planning\/memory, and evaluation protocols that diagnose reasoning (not just recognition).<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:heading -->\n<h2 class=\"wp-block-heading\" id=\"challenges\">Challenges<\/h2>\n<!-- \/wp:heading -->\n\n<!-- wp:paragraph -->\n<p>To measure progress with <em>fine-grained evaluations and public leaderboards<\/em>, the workshop proposes two challenges:<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:list -->\n<ul class=\"wp-block-list\"><!-- wp:list-item -->\n<li><strong>MindCube (Spatial Mental Models under Partial Observability)<\/strong><!-- wp:list -->\n<ul class=\"wp-block-list\"><!-- wp:list-item -->\n<li>Evaluates whether VLMs can form robust spatial mental models from limited viewpoints by capturing <em>positions<\/em>, <em>orientations<\/em>, and <em>counterfactual \u201cwhat-if\u201d dynamics<\/em>.<\/li>\n<!-- \/wp:list-item --><\/ul>\n<!-- \/wp:list --><\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li><strong>SITE (Standardized, Cross-modal Spatial Intelligence Thorough Evaluation)<\/strong><!-- wp:list -->\n<ul class=\"wp-block-list\"><!-- wp:list-item -->\n<li>Evaluates spatial intelligence across <em>single-image<\/em>, <em>multi-image<\/em>, and <em>video<\/em> modalities and across spatial factors 
(scale, visualization vs. orientation, intrinsic vs. extrinsic frames, static vs. dynamic).<\/li>\n<!-- \/wp:list-item --><\/ul>\n<!-- \/wp:list --><\/li>\n<!-- \/wp:list-item --><\/ul>\n<!-- \/wp:list -->\n\n<!-- wp:heading -->\n<h2 class=\"wp-block-heading\" id=\"call-for-contributions\">Call for contributions<\/h2>\n<!-- \/wp:heading -->\n\n<!-- wp:list -->\n<ul class=\"wp-block-list\"><!-- wp:list-item -->\n<li>Submissions of published and unpublished works are welcome.<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li>Accepted works will be presented as posters and spotlights at the workshop.<\/li>\n<!-- \/wp:list-item --><\/ul>\n<!-- \/wp:list -->\n\n<!-- wp:heading {\"level\":3} -->\n<h3 class=\"wp-block-heading\" id=\"important-dates\">Important dates<\/h3>\n<!-- \/wp:heading -->\n\n<!-- wp:list -->\n<ul class=\"wp-block-list\"><!-- wp:list-item -->\n<li>Workshop papers<!-- wp:list -->\n<ul class=\"wp-block-list\"><!-- wp:list-item -->\n<li>Paper submission deadline: <strong>April 21, 2026<\/strong><\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li>Notification: <strong>May 19, 2026<\/strong><\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li>Camera-ready deadline: <strong>June 2, 2026<\/strong><\/li>\n<!-- \/wp:list-item --><\/ul>\n<!-- \/wp:list --><\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li>Challenges<!-- wp:list -->\n<ul class=\"wp-block-list\"><!-- wp:list-item -->\n<li>Start date: January 21, 2026<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li>End date: June 2, 2026<\/li>\n<!-- \/wp:list-item --><\/ul>\n<!-- \/wp:list --><\/li>\n<!-- \/wp:list-item --><\/ul>\n<!-- \/wp:list -->\n\n<!-- wp:spacer {\"height\":\"30px\"} -->\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n<!-- \/wp:spacer -->\n<!-- \/wp:msr\/content-tab -->\n\n<!-- wp:msr\/content-tab {\"title\":\"Sessions\"} -->\n<!-- wp:heading -->\n<h2 class=\"wp-block-heading\" 
id=\"agenda-tentative\">Agenda (tentative)<\/h2>\n<!-- \/wp:heading -->\n\n<!-- wp:table {\"hasFixedLayout\":false} -->\n<figure class=\"wp-block-table\"><table><thead><tr><th>Time<\/th><th>Description<\/th><\/tr><\/thead><tbody><tr><td>13:00-13:30<\/td><td><strong>Invited talk: Spatial Intelligence and Embodied AI<\/strong><br>Manling Li, Northwestern University<\/td><\/tr><tr><td>13:30-14:00<\/td><td><strong>Invited talk: Robotic perception, planning, and reasoning<\/strong><br>Chelsea Finn, Stanford &amp; PI<\/td><\/tr><tr><td>14:00-14:30<\/td><td><strong>Workshop paper presentations<\/strong><\/td><\/tr><tr><td>14:30-15:00<\/td><td><strong>Afternoon break + poster session<\/strong><\/td><\/tr><tr><td>15:00-15:30<\/td><td><strong>Invited talk: Test-time scaling and reinforcement learning<\/strong><br>Xiaolong Wang, UCSD &amp; Nvidia<\/td><\/tr><tr><td>15:30-16:00<\/td><td><strong>Invited talk: Multimodal reasoning and reward-driven video understanding<\/strong><br>Mohit Bansal, University of North Carolina at Chapel Hill<\/td><\/tr><tr><td>16:00-16:30<\/td><td><strong>Invited talk: SAM 3 promptable concept segmentation<\/strong><br>Kate Saenko, Meta AGI Foundations<\/td><\/tr><tr><td>16:30-17:30<\/td><td><strong>Panel discussion and closing remarks<\/strong><br>Moderators: Zhengyuan Yang, Jianfeng Gao<\/td><\/tr><\/tbody><\/table><\/figure>\n<!-- \/wp:table -->\n\n<!-- wp:spacer {\"height\":\"30px\"} -->\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n<!-- \/wp:spacer -->\n<!-- \/wp:msr\/content-tab -->\n<!-- \/wp:msr\/content-tabs -->","tab-content":[],"msr_startdate":"2026-06-03","msr_enddate":"2026-06-03","msr_event_time":"Mountain Standard Time (UTC -7)","msr_location":"Denver, Colorado, USA","msr_event_link":"","msr_event_recording_link":"","msr_startdate_formatted":"Upcoming: June 3, 2026","msr_register_text":"Register 
now","msr_cta_link":"","msr_cta_text":"","msr_cta_bi_name":"","featured_image_thumbnail":null,"event_excerpt":"Full workshop title: The 5th Workshop on Computer Vision in the Wild (CVinW): Towards Unified Multimodal Agents for Reasoning in the Wild Host conference: The Conference on Computer Vision and Pattern Recognition (CVPR) (opens in new tab) | June 3-4, 2026 Workshop organizers:\u00a0Reuben Tan, Zhengyuan Yang Workshop scientific advisor: Jianfeng Gao Speakers: The 5th CVinW workshop brings together researchers building multimodal AI agents that can perceive, reason, and act in digital and physical environments. The&hellip;","msr_research_lab":[],"related-researchers":[],"msr_impact_theme":[],"related-academic-programs":[],"related-groups":[],"related-projects":[],"related-opportunities":[],"related-publications":[],"related-videos":[],"related-posts":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/1160075","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-event"}],"version-history":[{"count":18,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/1160075\/revisions"}],"predecessor-version":[{"id":1160608,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/1160075\/revisions\/1160608"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=1160075"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=1160075"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=1160075"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/
en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=1160075"},{"taxonomy":"msr-video-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video-type?post=1160075"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=1160075"},{"taxonomy":"msr-program-audience","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-program-audience?post=1160075"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=1160075"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=1160075"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}