{"id":670083,"date":"2020-06-30T09:01:49","date_gmt":"2020-06-30T16:01:49","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=670083"},"modified":"2020-06-30T09:08:34","modified_gmt":"2020-06-30T16:08:34","slug":"newly-discovered-principle-reveals-how-adversarial-training-can-perform-robust-deep-learning","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/newly-discovered-principle-reveals-how-adversarial-training-can-perform-robust-deep-learning\/","title":{"rendered":"Newly discovered principle reveals how adversarial training can perform robust deep learning"},"content":{"rendered":"<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-670473 size-full\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/1400x788_Adversarial_Training_NoLogo.gif\" alt=\"\" width=\"1400\" height=\"788\" \/><\/p>\n<p>In machine learning, adversarial examples usually refer to natural inputs plus small, specially crafted perturbations that can fool the model into making mistakes. In recent years, adversarial examples have been repeatedly discovered in deep learning applications, raising public concerns about AI safety. An illustration of adversarial examples on the image classification task is given below, where an image of a panda is misclassified as a papillon dog after small adversarial perturbations are added, but one should keep in mind that adversarial examples can cause far more destructive problems. 
For example, adversarial examples have been shown to cause autopilot malfunctions on cars.<\/p>\n<div id=\"attachment_670086\" style=\"width: 547px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-670086\" class=\"wp-image-670086\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purification-_-figure-1-300x104.jpg\" alt=\"A photo of a panda eating bamboo, labeled \u201cpanda.\u201d Below the label, text reads \u201c81.97% confidence.\u201d To the right of the image, it reads \u201c+0.01 x,\u201d followed by an abstract, brightly multicolored collage representing adversarial perturbations. Finally, to the right of this an equals sign is followed by the same image of the panda, but the label below reads \u201cpapillon dog\u201d and \u201c99.56% confidence level.\u201d \" width=\"537\" height=\"187\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purification-_-figure-1-300x104.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purification-_-figure-1-768x267.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purification-_-figure-1.jpg 896w\" sizes=\"auto, (max-width: 537px) 100vw, 537px\" \/><p id=\"caption-attachment-670086\" class=\"wp-caption-text\">Figure 1: an illustration of adversarial perturbations (image by Hadi Salman, Research Engineer at Microsoft)<\/p><\/div>\n<p>In a paper titled <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/feature-purification-how-adversarial-training-performs-robust-deep-learning\/\">\u201cFeature Purification: How can Adversarial Training Perform Robust Deep Learning,<\/a>\u201d researchers from Microsoft Research and <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" 
href=\"https:\/\/www.cmu.edu\/\">Carnegie Mellon University<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> propose the first framework toward understanding the math behind adversarial examples in deep learning. They discovered a principle called \u201cfeature purification\u201d that formally shows why a well-trained, well-generalizing ReLU neural network can still be vulnerable to adversarial perturbations on certain datasets and how adversarial training provably defends against them.<\/p>\n<h3>Background: Mysteries about adversarial examples and adversarial training<\/h3>\n<p>Why do we have adversarial examples? Deep learning models consist of large-scale neural networks with millions of parameters. Due to the inherent complexity of these networks, one school of researchers believes in a \u201ccursed\u201d result: deep learning models tend to fit the data in an overly complicated way so that, for every training or testing example, there exist small perturbations that change the network output drastically. This is illustrated in Figure 2. In contrast, another school of researchers holds that the high complexity of the network is a \u201cblessing\u201d: robustness against small perturbations can only be achieved when high-complexity, non-convex neural networks are used instead of traditional linear models. This is illustrated in Figure 3. It remains unclear whether the high complexity of neural networks is a \u201ccurse\u201d or a \u201cblessing\u201d for the purpose of robust machine learning. Nevertheless, both schools agree that adversarial examples are ubiquitous, even for well-trained, well-generalizing neural networks.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-670089\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purification-figure-2-1024x347.jpg\" alt=\"On left, robust to small perturbations. A linear model decision boundary is represented by a vertical yellow line. 
Small red dots to the left of the yellow line are shown to be moving toward the boundary with arrows. Small black dots on the right side of the line are also shown to be moving toward the line with arrows. On right, non-robust to small perturbations. A neural network decision boundary is shown as a jagged yellow line. Small red dots to the left of the boundary move toward the yellow line, with some of the arrows crossing over the line. To the right of the line, small black dots are moving toward the line, with some of the blue arrows moving over the line.\" width=\"819\" height=\"278\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purification-figure-2-1024x347.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purification-figure-2-300x102.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purification-figure-2-768x261.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purification-figure-2.jpg 1365w\" sizes=\"auto, (max-width: 819px) 100vw, 819px\" \/>On the other hand, since the discovery of adversarial examples, designing algorithms that make neural networks \u201crobust\u201d to such adversarial examples has been a prevailing research topic. Among the proposed methods, one of the most successful techniques to date is adversarial training, where the algorithm iteratively trains the network with adversarial examples (instead of natural, clean examples) associated with the training set. Nowadays, adversarial training has become a standard empirical tool to improve neural network robustness. 
Yet it remains unclear why, in principle, such adversarial examples exist for well-trained, well-generalizing networks on the original natural dataset, and what adversarial training does to networks to defend them against malicious perturbations.<\/p>\n<h3>Feature purification: Unraveling the math behind adversarial examples and adversarial training<\/h3>\n<div id=\"attachment_670092\" style=\"width: 829px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-670092\" class=\"wp-image-670092\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purifcation-figure-4-v2-1024x296.jpg\" alt=\"Approximately 30 thumbnail images labeled \u201coriginal CIFAR-10 images,\u201d a range of various animals and various vehicles, and their counterparts as sparsity decreases. Each image shows sparsity at 4.05%, 2.89%, 1.90%, 1.10%, and 0.44%. A grayscale layer becomes more prominent in each image as sparsity value decreases. 
\" width=\"819\" height=\"237\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purifcation-figure-4-v2-1024x296.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purifcation-figure-4-v2-300x87.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purifcation-figure-4-v2-768x222.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purifcation-figure-4-v2-1536x444.jpg 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purifcation-figure-4-v2.jpg 1563w\" sizes=\"auto, (max-width: 819px) 100vw, 819px\" \/><p id=\"caption-attachment-670092\" class=\"wp-caption-text\">Figure 4: sparse reconstruction of CIFAR-10 images using features from the first layer of AlexNet<\/p><\/div>\n<p>The researchers discovered that when the data has a generic structure commonly known as the \u201csparse coding model,\u201d a model widely used to capture real-world images and texts (see Figure 4), then during iterative training the ReLU network will accumulate, in each neuron, a small \u201cdense mixture direction\u201d that not only has high correlation with the average of its inputs but also has low correlation with any particular input. 
In symbols:<\/p>\n<div id=\"attachment_670512\" style=\"width: 829px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-670512\" class=\"wp-image-670512\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purification-equation-fig-1024x272.jpg\" alt=\"math equation\" width=\"819\" height=\"218\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purification-equation-fig-1024x272.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purification-equation-fig-300x80.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purification-equation-fig-768x204.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purification-equation-fig.jpg 1041w\" sizes=\"auto, (max-width: 819px) 100vw, 819px\" \/><p id=\"caption-attachment-670512\" class=\"wp-caption-text\">In this equation, <em>d<\/em> is the dimension, and <em>k\/d<\/em> is the sparsity of the sparse coding model.<\/p><\/div>\n<p>The researchers showed that these \u201cdense mixtures\u201d are a crux of why adversarial examples exist. Due to the sparse coding structure, the \u201cdense mixtures\u201d have a negligible effect on the original clean data but leave the network extremely vulnerable to adversarial perturbations. Small perturbations along these \u201cdense mixture directions\u201d can alter the network output, for instance from \u201cpanda\u201d to \u201cpapillon dog\u201d as shown in Figure 1. 
The researchers also verified their theory experimentally: as shown in the middle row of Figure 5, the adversarial perturbations for cleanly trained models are visibly \u201cdense\u201d (that is, they appear like random noise).<\/p>\n<div id=\"attachment_670095\" style=\"width: 829px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-670095\" class=\"wp-image-670095\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purifcation-figure-5-v2-1024x293.jpg\" alt=\"A row of the CIFAR-10 images, various animals and vehicles. An arrow from this row points to a row below that, a row of abstract, multicolored dots. Caption for this row reads: adversarial perturbations for clean models (= attacking using \u201cdense mixtures\u201d). An arrow also points from the CIFAR-10 image set to another row below, which shows the same images that are slightly blurred and gray, representing adversarial training. Caption for row reads: adversarial perturbations for adversarially trained models (= attacking using \u201cpure signals\u201d).\" width=\"819\" height=\"234\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purifcation-figure-5-v2-1024x293.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purifcation-figure-5-v2-300x86.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purifcation-figure-5-v2-768x219.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purifcation-figure-5-v2.jpg 1491w\" sizes=\"auto, (max-width: 819px) 100vw, 819px\" \/><p id=\"caption-attachment-670095\" class=\"wp-caption-text\">Figure 5<\/p><\/div>\n<p>Next, the researchers identified one of the main purposes of adversarial training\u2014to purify those small \u201cdense mixtures\u201d in the features of the network. 
The researchers call this \u201cthe principle of feature purification.\u201d To illustrate the theory, Figure 6 visualizes neurons at a given layer of a deep residual network, before and after adversarial training. From this figure one can observe that the visualizations of individual neurons also become purified after adversarial training. As another consequence, after adversarial training, one should expect the (new) adversarial perturbations to become more \u201cpure,\u201d and this is indeed verified by the last row of Figure 5.<\/p>\n<div id=\"attachment_670101\" style=\"width: 829px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-670101\" class=\"wp-image-670101\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purifcation-figure-6-1024x305.jpg\" alt=\"A block of about 32 thumbnail images from ResNet show brightly colored abstractions of images. To the right of this block, an arrow labeled \u201cadversarial training\u201d points to a block of the same images after they have been adversarially trained. An image of a horse is circled in both blocks. An arrow between the two horse images reads: \u201chorse + mixtures\u201d to \u201cpure horse.\u201d An image of a car is circled in both blocks. 
An arrow between the two car images reads: \u201ccar + mixtures\u201d to \u201cpure car.\u201d \" width=\"819\" height=\"244\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purifcation-figure-6-1024x305.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purifcation-figure-6-300x89.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purifcation-figure-6-768x229.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-purifcation-figure-6.jpg 1512w\" sizes=\"auto, (max-width: 819px) 100vw, 819px\" \/><p id=\"caption-attachment-670101\" class=\"wp-caption-text\">Figure 6: ResNet feature visualization before and after adversarial training<\/p><\/div>\n<p>Finally, the researchers prove that linear models (and several other simple ones), although they can sometimes achieve 100% accuracy on the original dataset, lack the power to defend against reasonably sized adversarial perturbations. Only high-complexity models, such as ReLU networks, are \u201cblessed\u201d with the capacity to defend against these attacks, though only after adversarial training.<\/p>\n<p>To sum up, the researchers have made a first step toward understanding how the features in a neural network are learned during the training process, why after clean training these provably well-generalizing features are still provably non-robust, and why adversarial training can fix them to make the model more robust. The researchers acknowledge that their findings are still preliminary and suggest many extensions. For instance, natural images have much richer structures than sparsity; therefore, those \u201cnon-robust mixtures\u201d accumulated by clean training might also carry structural properties other than density. 
Also, they would like to extend their mathematical theorem to capture deeper neural networks, possibly with \u201chierarchical feature purification\u201d processes, in the spirit of hierarchical learning from their <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2001.04413\">prior work.<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In machine learning, adversarial examples usually refer to natural inputs plus small, specially crafted perturbations that can fool the model into making mistakes. In recent years, adversarial examples have been repeatedly discovered in deep learning applications, causing public concerns about AI safety. An illustration of adversarial examples on the image classification task is given below, [&hellip;]<\/p>\n","protected":false},"author":38838,"featured_media":670470,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":null,"msr_hide_image_in_river":0,"footnotes":""},"categories":[1],"tags":[],"research-area":[13561,13556],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[],"class_list":["post-670083","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-research-blog","msr-research-area-algorithms","msr-research-area-artificial-intelligence","msr-locale-en_us"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-
groups":[],"related-projects":[],"related-events":[],"related-researchers":[{"type":"guest","value":"yuanzhi-li","user_id":"670110","display_name":"Yuanzhi  Li","author_link":"<a href=\"https:\/\/www.andrew.cmu.edu\/user\/yuanzhil\/\" aria-label=\"Visit the profile page for Yuanzhi  Li\">Yuanzhi  Li<\/a>","is_active":true,"last_first":"Li, Yuanzhi ","people_section":0,"alias":"yuanzhi-li"}],"msr_type":"Post","featured_image_thumbnail":"<img width=\"960\" height=\"540\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-Purification_Hero-Still-960x540.png\" class=\"img-object-cover\" alt=\"\" decoding=\"async\" loading=\"lazy\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-Purification_Hero-Still-960x540.png 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-Purification_Hero-Still-300x169.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-Purification_Hero-Still-1024x576.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-Purification_Hero-Still-768x432.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-Purification_Hero-Still-1066x600.png 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-Purification_Hero-Still-655x368.png 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-Purification_Hero-Still-343x193.png 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-Purification_Hero-Still-640x360.png 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-Purification_Hero-Still-1280x720.png 1280w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Feature-Purification_Hero-Still.png 1470w\" sizes=\"auto, (max-width: 960px) 100vw, 
960px\" \/>","byline":"Zeyuan Allen-Zhu and <a href=\"https:\/\/www.andrew.cmu.edu\/user\/yuanzhil\/\" title=\"Go to researcher profile for Yuanzhi  Li\" aria-label=\"Go to researcher profile for Yuanzhi  Li\" data-bi-type=\"byline author\" data-bi-cN=\"Yuanzhi  Li\">Yuanzhi  Li<\/a>","formattedDate":"June 30, 2020","formattedExcerpt":"In machine learning, adversarial examples usually refer to natural inputs plus small, specially crafted perturbations that can fool the model into making mistakes. In recent years, adversarial examples have been repeatedly discovered in deep learning applications, causing public concerns about AI safety. An illustration of&hellip;","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/670083","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/users\/38838"}],"replies":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/comments?post=670083"}],"version-history":[{"count":8,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/670083\/revisions"}],"predecessor-version":[{"id":670536,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/670083\/revisions\/670536"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/670470"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=670083"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/categories?post=670083"},{"taxonomy":"post_tag",
"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/tags?post=670083"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=670083"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=670083"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=670083"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=670083"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=670083"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=670083"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=670083"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=670083"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}