{"id":598369,"date":"2019-07-23T09:13:16","date_gmt":"2019-07-23T16:13:16","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=598369"},"modified":"2019-08-14T10:19:28","modified_gmt":"2019-08-14T17:19:28","slug":"the-knowref-coreference-corpus-a-resource-for-training-and-evaluating-common-sense-in-ai","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/the-knowref-coreference-corpus-a-resource-for-training-and-evaluating-common-sense-in-ai\/","title":{"rendered":"The KnowRef Coreference Corpus: a resource for training and evaluating common sense in AI"},"content":{"rendered":"<p>&nbsp;<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/07\/The-KnowRef_Site_07_2019_1400x788_VerbEdit.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-599130 size-large\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/07\/The-KnowRef_Site_07_2019_1400x788_VerbEdit-e1563832356720-1024x396.png\" alt=\"\" width=\"1024\" height=\"396\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/07\/The-KnowRef_Site_07_2019_1400x788_VerbEdit-e1563832356720-1024x396.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/07\/The-KnowRef_Site_07_2019_1400x788_VerbEdit-e1563832356720-300x116.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/07\/The-KnowRef_Site_07_2019_1400x788_VerbEdit-e1563832356720-768x297.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/07\/The-KnowRef_Site_07_2019_1400x788_VerbEdit-e1563832356720.png 1400w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><\/p>\n<p>AI has made major strides in the last decade, from <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" 
href=\"https:\/\/deepmind.com\/research\/alphago\/\">beating the world champion of Go<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, to learning <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/deepcoder-learning-write-programs\/\">how to program<\/a>, to telling <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/openai.com\/blog\/better-language-models\/\">fantastical short stories<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>. However, a basic human trait continues to elude machines: common sense. Common sense is a big term with plenty of baggage, but it typically includes shared background knowledge (I know certain facts about the world, like &#8220;the sky is blue,&#8221; and I know that you know them too), elements of logic, and the ability to infer what is plausible. It looms large as one of the hardest and most central problems in AI. Machines can seem glaringly <em>un<\/em>intelligent when they lack common sense.<\/p>\n<p>This is especially true when it comes to language because language is ambiguous. Common sense enables us to fill in the semantic blanks when a statement doesn\u2019t fully specify what it describes. Imagine telling a machine:<\/p>\n<p style=\"padding-left: 40px;\"><em>The firemen arrived after the police because they were coming from so far away.<\/em><\/p>\n<p>Does the machine recognize who was coming from so far away in this scenario? Only if it understands common concepts of distance and time; that is, that being more distant from a thing means taking more time to reach it. Humans acquire this knowledge from experience and learn to utilize and refer to it at will. 
The question is: How do we endow machines with similar abilities and, just as important, how do we measure progress towards this goal?<\/p>\n<p>Mila\/McGill University researchers and students collaborated with Microsoft researchers on a recent paper, &#8220;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/the-knowref-coreference-corpus-removing-gender-and-number-cues-for-difficult-pronominal-anaphora-resolution\/\">The KnowRef Coreference Corpus: Removing Gender and Number Cues for Difficult Pronominal Anaphora Resolution<\/a>,&#8221; which attempts to answer this question. It will appear at the <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"http:\/\/www.acl2019.org\/EN\/index.xhtml\">2019 Annual Meeting of the Association for Computational Linguistics (ACL) in Florence, Italy<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>. The paper introduces a new resource for training and evaluating common sense in machines, the KnowRef coreference corpus. This benchmark contains over 8,000 annotated text passages from the web that exhibit natural, knowledge-oriented instances of <strong>pronominal coreference<\/strong>.<\/p>\n<h3>The challenge of pronominal coreference for AI<\/h3>\n<p>The language problem given above is an example of pronominal coreference. There is a statement with two antecedents (<em>the firemen<\/em> and <em>the police<\/em>) followed by an ambiguous pronoun (<em>they<\/em>) that refers uniquely to one antecedent. The challenge is to figure out which antecedent the pronoun refers to. Not every instance of pronominal coreference is tricky, though. Oftentimes, there are lexical giveaways like number and gender that make the solution obvious. 
Consider this slight reformulation of the previous problem:<\/p>\n<p style=\"padding-left: 40px;\"><em>The firemen arrived after the police <strong>officer<\/strong> because they were coming from so far away.<\/em><\/p>\n<p>Because the second antecedent is singular, it\u2019s more obvious that <em>they<\/em> refers to <em>the firemen<\/em>. Common sense understanding of distance and time is no longer necessary. You can imagine similar examples where gendered words and pronouns (<em>fireman\/he<\/em>) resolve ambiguity.<\/p>\n<h3>The previous test for common sense in coreference<\/h3>\n<p>The <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.aaai.org\/ocs\/index.php\/KR\/KR12\/paper\/view\/4492\/4924\">Winograd Schema Challenge<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> (WSC) is a benchmark made up of the trickier kind of coreference, where lexical cues don\u2019t reveal the answer. It was the direct inspiration for KnowRef. The WSC has been called an \u201calternative Turing test,\u201d garnering considerable attention in the natural language processing and AI communities as a measure of common sense in machines. 
The last year has seen exciting new approaches to the WSC, from large-scale training of massive language models, like <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/1810.04805\">BERT<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> and <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.techbooky.com\/wp-content\/uploads\/2019\/02\/Better-Language-Models-and-Their-Implications.pdf\">GPT-2<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, to <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.aclweb.org\/anthology\/N18-4004\">our own approach<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> based on knowledge hunting on the web. None has yet come close to human-level performance.<\/p>\n<p>Beyond the desired challenge to common sense, however, the WSC presents several difficulties. First, it isn\u2019t large enough (fewer than 300 instances) to form a proper train\/test split nor to measure results with high confidence. Because the WSC is so small, it\u2019s likely that a significant proportion of recent progress can be <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/1811.01778\">attributed to chance and word-association exploits<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>. Second, the Winograd schemas were authored mainly by two expert linguists with a specific goal in mind: stumping machines. 
Since they don\u2019t occur naturally in text, it\u2019s unclear whether a system that aces the WSC will generalize to less-contrived scenarios.<\/p>\n<h3>Building on the Winograd Schema Challenge with KnowRef<\/h3>\n<p>What if we could automatically identify WSC-like instances in natural text to compile a much larger dataset? Could this be used to train and more confidently evaluate state-of-the-art models? That\u2019s the approach we tested with KnowRef. To construct our corpus, we basically reversed the reformulation process seen above. We found text snippets on the web with two antecedents and a coreferential pronoun where the pronoun\u2019s resolution was clear\u2014it matched in number or gender with only one of the antecedents\u2014and then modified the non-matching antecedent so that it would match.<\/p>\n<p>By forcing the antecedents to correspond in number and gender, we prevent machines from exploiting these cues, and we hope the only thing left to exploit will be common sense. We developed a suite of automatic methods to enact the reformulation process (see our paper for details) and used it to gather data on Wikipedia and Reddit. It\u2019s easiest when the antecedents are gendered proper nouns, like \u201cAlice\u201d and \u201cBob,\u201d since we can swap out one name with another that matches the pronoun\u2019s gender.<\/p>\n<h3>KnowRef results and the human-machine performance gap<\/h3>\n<p>Our experiments show that KnowRef is a challenging benchmark. We demonstrate that various systems, whether rule-based, feature-rich, or neural, perform significantly worse than humans on the task. 
See Table 1 for numbers comparing the performance of humans, BERT, a state-of-the-art neural coreference system, and other models.<\/p>\n<table class=\"aligncenter\" style=\"height: 500px; width: 100%; border-collapse: separate; border-spacing: inherit;\" border=\"1\" cellspacing=\"inherit\" cellpadding=\"6\">\n<tbody>\n<tr style=\"height: 52px;\">\n<td style=\"padding: 6px; border: 1px solid; width: 50%; height: 30px; text-align: left;\" width=\"312\"><strong>Model<\/strong><\/td>\n<td style=\"padding: 6px; border: 1px solid; width: 50%; height: 30px; text-align: left;\" width=\"312\"><strong>Task Accuracy<\/strong><\/td>\n<\/tr>\n<tr style=\"height: 55px;\">\n<td style=\"padding: 6px; border: 1px solid; width: 50%; height: 30px;\" width=\"312\">Random<\/td>\n<td style=\"padding: 6px; border: 1px solid; width: 50%; height: 30px;\" width=\"312\">0.50<\/td>\n<\/tr>\n<tr style=\"height: 55px;\">\n<td style=\"padding: 6px; border: 1px solid; width: 50%; height: 55px;\" width=\"312\">Human<\/td>\n<td style=\"padding: 6px; border: 1px solid; width: 50%; height: 55px;\" width=\"312\">0.92<\/td>\n<\/tr>\n<tr style=\"height: 55px;\">\n<td style=\"padding: 6px; border: 1px solid; width: 50%; height: 55px;\" width=\"312\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www-cs.stanford.edu\/people\/nc\/pubs\/emnlp2010-sieve-coref.pdf\">Rule<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/td>\n<td style=\"padding: 6px; border: 1px solid; width: 50%; height: 55px;\" width=\"312\">0.52<\/td>\n<\/tr>\n<tr style=\"height: 55px;\">\n<td style=\"padding: 6px; border: 1px solid; width: 50%; height: 55px;\" width=\"312\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/nlp.stanford.edu\/pubs\/clark-manning-acl15-entity.pdf\">Stat<span class=\"sr-only\"> (opens in 
new tab)<\/span><\/a><\/td>\n<td style=\"padding: 6px; border: 1px solid; width: 50%; height: 55px;\" width=\"312\">0.50<\/td>\n<\/tr>\n<tr style=\"height: 55px;\">\n<td style=\"padding: 6px; border: 1px solid; width: 50%; height: 55px;\" width=\"312\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/1609.08667\">Deep-RL<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/td>\n<td style=\"padding: 6px; border: 1px solid; width: 50%; height: 55px;\" width=\"312\">0.49<\/td>\n<\/tr>\n<tr style=\"height: 55px;\">\n<td style=\"padding: 6px; border: 1px solid; width: 50%; height: 55px;\" width=\"312\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/1804.05392\">E2E<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/td>\n<td style=\"padding: 6px; border: 1px solid; width: 50%; height: 55px;\" width=\"312\">0.58<\/td>\n<\/tr>\n<tr style=\"height: 55px;\">\n<td style=\"padding: 6px; border: 1px solid; width: 50%; height: 55px;\" width=\"312\">E2E (trained on CoNLL only)<\/td>\n<td style=\"padding: 6px; border: 1px solid; width: 50%; height: 55px;\" width=\"312\">0.60<\/td>\n<\/tr>\n<tr style=\"height: 55px;\">\n<td style=\"padding: 6px; border: 1px solid; width: 50%; height: 55px;\" width=\"312\">E2E (KnowRef + CoNLL training)<\/td>\n<td style=\"padding: 6px; border: 1px solid; width: 50%; height: 55px;\" width=\"312\"><strong>0.65<\/strong><\/td>\n<\/tr>\n<tr style=\"height: 55px;\">\n<td style=\"padding: 6px; border: 1px solid; width: 50%; height: 55px; text-align: left;\" width=\"312\">BERT<\/td>\n<td style=\"padding: 6px; border: 1px solid; width: 50%; height: 55px; text-align: left;\" width=\"312\">0.61<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p class=\"wp-caption-text\">Table 1: Performance of various systems on 
the KnowRef test set.<\/p>\n<p>Human subjects displayed strong inter-annotator agreement and judged KnowRef\u2019s passages to be resistant to lexical giveaways. Looking into the human-machine performance gap more closely, our analysis shows that on KnowRef, even state-of-the-art models fail to capture semantic context; they base many decisions on the gender or number of antecedents rather than any common sense. In neural models, these cues are wrapped up in the dimensions of standard and contextualized word embeddings. Let\u2019s look at an example:<\/p>\n<p style=\"padding-left: 40px;\"><em>Peter didn&#8217;t realize how old Henry was until he saw his daughter.<\/em><\/p>\n<p>In this KnowRef instance, even BERT fails to resolve <em>he<\/em> to <em>Peter<\/em>. This changes if <em>Henry<\/em> is replaced with the name <em>Harriet<\/em> (and <em>his<\/em> with <em>her<\/em>).<\/p>\n<p>Can we more strongly discourage this lexical focus in models? We devised a data-augmentation trick for KnowRef and similar datasets called <strong>antecedent switching<\/strong> for this purpose. In antecedent switching, we duplicate each KnowRef instance but switch the antecedents\u2019 positions. In the vast majority of cases, this should switch the correct answer as well. 
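The duplication step can be sketched in a few lines of Python. This is a minimal illustration, not the actual KnowRef tooling: the field names (`text`, `antecedents`, `answer`) are hypothetical, and the string swap assumes each antecedent appears exactly once in the passage.

```python
def switch_antecedents(instance):
    """Swap the two antecedents' positions in the text; the answer flips.

    The context (e.g. "arrived after ... from so far away") still selects
    the same *position*, so whichever name now occupies that position
    becomes the correct answer.
    """
    a, b = instance["antecedents"]
    # Swap the two mentions via a placeholder so they don't clobber each other.
    swapped = (instance["text"]
               .replace(a, "\x00")
               .replace(b, a)
               .replace("\x00", b))
    return {
        "text": swapped,
        "antecedents": [b, a],
        "answer": b if instance["answer"] == a else a,
    }

def augment(dataset):
    """Duplicate every instance alongside its antecedent-switched twin."""
    return dataset + [switch_antecedents(x) for x in dataset]
```

Applied to the earlier example, "Peter didn't realize how old Henry was until he saw his daughter" (answer: Peter) yields "Henry didn't realize how old Peter was until he saw his daughter" (answer: Henry), so a model keying on the names themselves is wrong on exactly one of the pair.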
Our thought was that exposing models to both duplicates would teach them to redirect their focus from the candidates themselves (with their gender and number cues) to the context around them (where common sense should kick in).<\/p>\n<table class=\"aligncenter\" style=\"width: 99.85%; border-collapse: separate; border-spacing: inherit;\" border=\"1\" cellspacing=\"inherit\" cellpadding=\"6\">\n<tbody>\n<tr>\n<td style=\"padding: 6px; border: 1px solid;\" width=\"208\"><strong>Model<\/strong><\/td>\n<td style=\"padding: 6px; border: 1px solid; width: 33.28%;\" width=\"208\"><strong>Accuracy<\/strong><\/td>\n<td style=\"padding: 6px; border: 1px solid; width: 33.28%;\" width=\"208\"><strong>\u0394<\/strong><\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 6px; border: 1px solid; width: 33.28%;\" width=\"208\">BERT<\/td>\n<td style=\"padding: 6px; border: 1px solid; width: 33.28%;\" width=\"208\">0.71<\/td>\n<td style=\"padding: 6px; border: 1px solid; width: 33.28%;\" width=\"208\">+10%<\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 6px; border: 1px solid; width: 33.28%;\" width=\"208\">E2E<\/td>\n<td style=\"padding: 6px; border: 1px solid; width: 33.28%;\" width=\"208\">0.61<\/td>\n<td style=\"padding: 6px; border: 1px solid; width: 33.28%;\" width=\"208\">+3%<\/td>\n<\/tr>\n<tr>\n<td style=\"padding: 6px; border: 1px solid; width: 33.28%;\" width=\"208\">E2E (KnowRef + CoNLL training)<\/td>\n<td style=\"padding: 6px; border: 1px solid; width: 33.28%;\" width=\"208\">0.66<\/td>\n<td style=\"padding: 6px; border: 1px solid; width: 33.28%;\" width=\"208\">+1%<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p class=\"wp-caption-text\">Table 2: Accuracy and gain for several models on the KnowRef test set after augmenting the training set.<\/p>\n<p>We found that models trained on the augmented data performed much better, as you can see in Table 2. We also show that, promisingly, antecedent switching yields improvements on other tasks as well. 
We use it to achieve state-of-the-art accuracy and gender balance on the <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/ai.google\/tools\/datasets\/gap-coreference\/\">GAP coreference task<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, which was designed explicitly to test gender bias. Fine-tuning BERT on an augmented version of the GAP training data improves test performance by 1.9 F1 points.<\/p>\n<p>The flurry of recent progress on the WSC shows that common sense remains an important frontier for AI. We hope that our KnowRef corpus will spur further progress and provide researchers with a more reliable means to benchmark results. We look forward to seeing you at ACL 2019 to discuss this work in more detail!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>&nbsp; AI has made major strides in the last decade, from beating the world champion of Go, to learning how to program, to telling fantastical short stories. However, a basic human trait continues to elude machines: common sense. 
Common sense is a big term with plenty of baggage, but it typically includes shared background knowledge [&hellip;]<\/p>\n","protected":false},"author":38022,"featured_media":599130,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":null,"msr_hide_image_in_river":0,"footnotes":""},"categories":[194467],"tags":[],"research-area":[13556],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[],"class_list":["post-598369","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artifical-intelligence","msr-research-area-artificial-intelligence","msr-locale-en_us"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[],"related-projects":[],"related-events":[599466],"related-researchers":[{"type":"guest","value":"ali-emami","user_id":"598519","display_name":"Ali  Emami","author_link":"<a href=\"https:\/\/scholar.google.ca\/citations?user=Pjdq8cUAAAAJ&hl=en&oi=sra\" aria-label=\"Visit the profile page for Ali  Emami\">Ali  Emami<\/a>","is_active":true,"last_first":"Emami, Ali ","people_section":0,"alias":"ali-emami"},{"type":"guest","value":"paul-trichelair","user_id":"598950","display_name":"Paul  Trichelair","author_link":"<a href=\"https:\/\/mila.quebec\/personne\/paul-trichelair\/\" aria-label=\"Visit the profile page for Paul  Trichelair\">Paul  Trichelair<\/a>","is_active":true,"last_first":"Trichelair, Paul 
","people_section":0,"alias":"paul-trichelair"},{"type":"guest","value":"jackie-chi-kit-cheung","user_id":"598522","display_name":"Jackie Chi Kit Cheung","author_link":"<a href=\"https:\/\/www.cs.mcgill.ca\/~jcheung\/\" aria-label=\"Visit the profile page for Jackie Chi Kit Cheung\">Jackie Chi Kit Cheung<\/a>","is_active":true,"last_first":"Cheung, Jackie Chi Kit","people_section":0,"alias":"jackie-chi-kit-cheung"},{"type":"user_nicename","value":"Hannes Schulz","user_id":37188,"display_name":"Hannes Schulz","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/haschulz\/\" aria-label=\"Visit the profile page for Hannes Schulz\">Hannes Schulz<\/a>","is_active":false,"last_first":"Schulz, Hannes","people_section":0,"alias":"haschulz"}],"msr_type":"Post","featured_image_thumbnail":"<img width=\"960\" height=\"540\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/07\/The-KnowRef_Site_07_2019_1400x788_VerbEdit-e1563832356720-960x540.png\" class=\"img-object-cover\" alt=\"\" decoding=\"async\" loading=\"lazy\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/07\/The-KnowRef_Site_07_2019_1400x788_VerbEdit-e1563832356720-960x540.png 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/07\/The-KnowRef_Site_07_2019_1400x788_VerbEdit-e1563832356720-655x368.png 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/07\/The-KnowRef_Site_07_2019_1400x788_VerbEdit-e1563832356720-343x193.png 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/07\/The-KnowRef_Site_07_2019_1400x788_VerbEdit-e1563832356720-640x360.png 640w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/>","byline":"","formattedDate":"July 23, 2019","formattedExcerpt":"&nbsp; AI has made major strides in the last decade, from beating the world champion of Go, to learning how to program, to telling fantastical short stories. 
However, a basic human trait continues to elude machines: common sense. Common sense is a big term with&hellip;","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/598369","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/users\/38022"}],"replies":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/comments?post=598369"}],"version-history":[{"count":31,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/598369\/revisions"}],"predecessor-version":[{"id":599289,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/598369\/revisions\/599289"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/599130"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=598369"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/categories?post=598369"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/tags?post=598369"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=598369"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=598369"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=598369"},{"taxonomy":"ms
r-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=598369"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=598369"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=598369"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=598369"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=598369"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}