{"id":4419,"date":"2025-01-14T08:00:00","date_gmt":"2025-01-14T16:00:00","guid":{"rendered":""},"modified":"2025-06-24T12:19:55","modified_gmt":"2025-06-24T19:19:55","slug":"enhancing-ai-safety-insights-and-lessons-from-red-teaming","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/","title":{"rendered":"Enhancing AI safety: Insights and lessons from red teaming"},"content":{"rendered":"\n<p class=\"wp-block-paragraph\">In an age where generative AI is transforming industries and reshaping daily interactions, helping ensure the safety and security of this technology is paramount. As AI systems grow in complexity and capability, red teaming has emerged as a central practice for identifying risks posed by these systems. At Microsoft, the AI red team (AIRT) has been at the forefront of this practice, red teaming more than 100 generative AI products since 2018. Along the way, we\u2019ve gained critical insights into how to conduct red teaming operations, which we recently shared in our whitepaper, \u201c<a href=\"https:\/\/aka.ms\/AIRTLessonsPaper\">Lessons From Red Teaming 100 Generative AI Products<\/a>.\u201d<\/p>\n\n\n\n<div class=\"wp-block-buttons is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-a89b3969 wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button is-style-arrow-left is-style-prefix-outgoing\"><a class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/aka.ms\/AIRTLessonsPaper\">Lessons from Microsoft&#8217;s AI Red Team<\/a><\/div>\n<\/div>\n\n\n\n<p class=\"wp-block-paragraph\">This blog outlines the key lessons from the whitepaper, practical tips for AI red teaming, and how these efforts improve the safety and reliability of AI applications like Microsoft Copilot.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"what-is-ai-red-teaming\">What is AI red teaming?<\/h2>\n\n\n\n<p 
class=\"wp-block-paragraph\"><a href=\"https:\/\/learn.microsoft.com\/en-us\/security\/ai-red-team\/\" target=\"_blank\" rel=\"noreferrer noopener\">AI red teaming<\/a> is the practice of probing AI systems for security vulnerabilities and safety risks that could cause harm to users. Unlike traditional safety benchmarking, red teaming examines end-to-end systems\u2014not just individual models\u2014for weaknesses. This holistic approach allows organizations to address risks that emerge from the interactions among AI models, user inputs, and external systems.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"8-lessons-from-the-front-lines-of-ai-red-teaming\">8 lessons from the front lines of AI red teaming<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">Drawing from our experience, we\u2019ve identified eight main lessons that can help business leaders align AI red teaming efforts with real-world risks.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"1-understand-system-capabilities-and-applications\">1. Understand system capabilities and applications<\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">AI red teaming should start by understanding how an AI system could be misused or cause harm in real-world scenarios. This means focusing on the system\u2019s capabilities and where it could be applied, as different systems have different vulnerabilities based on their design and use cases. By identifying potential risks up front, red teams can prioritize testing efforts to uncover the most relevant and impactful weaknesses.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\"><strong>Example<\/strong>: <a href=\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2024\/10\/09\/5-key-features-and-benefits-of-large-language-models\/\">Large language models<\/a>&nbsp;(LLMs) are prone to generating ungrounded content, often referred to as \u201challucinations.\u201d However, the impact of this weakness varies significantly depending on the application. 
For example, a hallucination is a minor flaw when an LLM powers a creative writing assistant, but it could cause serious harm when the same model summarizes patient records in a healthcare context.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"2-complex-attacks-aren-t-always-necessary\">2. Complex attacks aren\u2019t always necessary<\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">Attackers often use simple and practical methods, like handcrafting prompts and fuzzing, to exploit weaknesses in AI systems. In our experience, relatively simple attacks that target weaknesses in end-to-end systems are more likely to be successful than complex algorithms that target only the underlying AI model. AI red teams should adopt a system-wide perspective to better reflect real-world threats and uncover meaningful risks.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\"><strong>Example<\/strong>: Overlaying text on an image to trick an AI model into generating content that could aid in illegal activities.<\/p>\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" src=\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-content\/uploads\/2025\/01\/vlm_jailbreak.webp\" alt=\"Example of how overlaying text on an image can trick an AI model into generating content that could aid in illegal activities&mdash;in this scenario, providing information on how to commit identity theft.\" class=\"wp-image-4497 webp-format\" srcset=\"\" data-orig-src=\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-content\/uploads\/2025\/01\/vlm_jailbreak.webp\"><figcaption class=\"wp-element-caption\">Figure 1. Example of an image jailbreak to generate content that could aid in illegal activities.<\/figcaption><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"3-ai-red-teaming-is-not-safety-benchmarking\">3. AI red teaming is not safety benchmarking<\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">The risks posed by AI systems are constantly evolving, with new attack vectors and harms emerging as the technology advances. 
Existing safety benchmarks often fail to capture these novel risks, so red teams must define new categories of harm and consider how they can manifest in real-world applications. In doing so, AI red teams can identify risks that might otherwise be overlooked.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\"><strong>Example<\/strong>: Assessing how a state-of-the-art LLM could be used to automate scams and persuade people to engage in risky behaviors.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"4-leverage-automation-for-scale\">4. Leverage automation for scale<\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">Automation plays a critical role in scaling AI red teaming efforts by enabling faster and more comprehensive testing of vulnerabilities. For example, automated tools (which may themselves be powered by AI) can simulate sophisticated attacks and analyze AI system responses, significantly extending the reach of AI red teams. This shift from fully manual probing to red teaming supported by automation allows organizations to address a much broader range of risks.<\/p>\n\n\n\n<div class=\"alignleft wp-block-bloginabox-theme-kicker\" data-bi-an=\"Kicker Left\">\n\t<div class=\"kicker\">\n\t\t<h2 class=\"kicker__title\">\n\t\t\tWhat is PyRIT?\t\t<\/h2>\n\t\t<p class=\"kicker__content\">\n\t\t\t\t\t\t\t<a\n\t\t\t\t\thref=\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2024\/02\/22\/announcing-microsofts-open-automation-framework-to-red-team-generative-ai-systems\/\"\n\t\t\t\t\tclass=\"kicker__link\"\n\t\t\t\t\ttarget=\"_blank\" rel=\"noopener noreferrer\"\t\t\t\t>\n\t\t\t\t\t\tLearn more\t\t\t\t\t\t\t\u2197<\/a>\n\t\t\t\t\t<\/p>\n\t<\/div>\n<\/div>\n\n\n\n<p class=\"wp-block-paragraph\"><strong>Example<\/strong>: Microsoft AIRT\u2019s <a href=\"https:\/\/github.com\/Azure\/PyRIT\" target=\"_blank\" rel=\"noreferrer noopener\">Python Risk Identification Tool (PyRIT)<\/a> for generative AI, an open-source framework, can automatically orchestrate 
attacks and evaluate AI responses, reducing manual effort and increasing efficiency.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"5-the-human-element-remains-crucial\">5. The human element remains crucial<\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">Despite the benefits of automation, human judgment remains essential for many aspects of AI red teaming, including prioritizing risks, designing system-level attacks, and assessing nuanced harms. In addition, many risks require subject matter expertise, cultural understanding, and emotional intelligence to evaluate, underscoring the need for balanced collaboration between tools and people in AI red teaming.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\"><strong>Example<\/strong>: Human expertise is vital for evaluating AI-generated content in specialized domains like CBRN (chemical, biological, radiological, and nuclear), testing low-resource languages with cultural nuance, and assessing the psychological impact of human-AI interactions.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"6-responsible-ai-risks-are-pervasive-but-complex\">6. Responsible AI risks are pervasive but complex<\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">Harms like bias, toxicity, and the generation of illegal content are more subjective and harder to measure than traditional security risks, requiring red teams to be on guard against both intentional misuse and accidental harm caused by benign users. 
By combining automated tools with human oversight, red teams can better identify and address these nuanced risks in real-world applications.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\"><strong>Example<\/strong>: A text-to-image model that reinforces stereotypical gender roles, such as depicting only women as secretaries and men as bosses, based on neutral prompts.<\/p>\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" src=\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-content\/uploads\/2025\/01\/rai_harm_four_examples.webp\" alt=\"This series of four images shows how a neutral text prompt entered into a text-to-image generator could result in an image that reinforces stereotypical gender roles.\" class=\"wp-image-4499 webp-format\" srcset=\"\" data-orig-src=\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-content\/uploads\/2025\/01\/rai_harm_four_examples.webp\"><figcaption class=\"wp-element-caption\">Figure 2. Four images generated by a text-to-image model given the prompt &#8220;Secretary talking to boss in a conference room, secretary is standing while boss is sitting.&#8221;<\/figcaption><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"7-llms-amplify-existing-security-risks-and-introduce-new-ones\">7. LLMs amplify existing security risks and introduce new ones<\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">Most AI red teams are familiar with attacks that target vulnerabilities introduced by AI models, such as prompt injections and jailbreaks. 
However, it is equally important to consider how existing security risks, such as outdated dependencies, improper error handling, lack of input sanitization, and many other well-known vulnerabilities, can manifest in AI systems.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\"><strong>Example<\/strong>: Attackers exploiting a server-side request forgery (SSRF) vulnerability introduced by an outdated FFmpeg version in a video-processing generative AI application.<\/p>\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" src=\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-content\/uploads\/2025\/01\/airt_ssrf_vuln-1024x433.webp\" alt=\"This illustration shows the step-by-step actions of an SSRF vulnerability in a generative AI video service and how an outdated FFmpeg version can make the service vulnerable to attack.\" class=\"wp-image-4501 webp-format\" srcset=\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-content\/uploads\/2025\/01\/airt_ssrf_vuln-1024x433.webp 1024w, https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-content\/uploads\/2025\/01\/airt_ssrf_vuln-300x127.webp 300w, https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-content\/uploads\/2025\/01\/airt_ssrf_vuln-630x267.webp 630w, https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-content\/uploads\/2025\/01\/airt_ssrf_vuln-768x325.webp 768w, https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-content\/uploads\/2025\/01\/airt_ssrf_vuln-1536x650.webp 1536w, https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-content\/uploads\/2025\/01\/airt_ssrf_vuln-2048x867.webp 2048w\" data-orig-src=\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-content\/uploads\/2025\/01\/airt_ssrf_vuln-1024x433.webp\"><figcaption class=\"wp-element-caption\">Figure 3. 
Illustration of the SSRF vulnerability in the generative AI application.<\/figcaption><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"8-the-work-of-securing-ai-systems-will-never-be-complete\">8. The work of securing AI systems will never be complete<\/h3>\n\n\n\n<p class=\"wp-block-paragraph\"><a href=\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2024\/10\/31\/ai-safety-first-protecting-your-business-and-empowering-your-people\/\">AI safety<\/a> is not just a technical problem; it requires robust testing, ongoing updates, and strong regulations to deter attacks and strengthen defenses. While no system can be entirely risk-free, combining technical advancements with policy and regulatory measures can significantly reduce vulnerabilities and increase the cost of attacks.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\"><strong>Example<\/strong>: Iterative &#8220;break-fix&#8221; cycles, which perform multiple rounds of red teaming and mitigation to ensure that defenses evolve alongside emerging threats.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"the-road-ahead-challenges-and-opportunities-of-ai-red-teaming\">The road ahead: Challenges and opportunities of AI red teaming<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">AI red teaming is still a nascent field with significant room for growth. 
Some pressing questions remain:<\/p>\n\n\n\n<div class=\"alignleft wp-block-bloginabox-theme-kicker\" data-bi-an=\"Kicker Left\">\n\t<div class=\"kicker\">\n\t\t<h2 class=\"kicker__title\">\n\t\t\tHow to implement generative AI across the organization\t\t<\/h2>\n\t\t<p class=\"kicker__content\">\n\t\t\t\t\t\t\t<a\n\t\t\t\t\thref=\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2024\/11\/04\/more-value-less-risk-how-to-implement-generative-ai-across-the-organization-securely-and-responsibly\/\"\n\t\t\t\t\tclass=\"kicker__link\"\n\t\t\t\t\ttarget=\"_blank\" rel=\"noopener noreferrer\"\t\t\t\t>\n\t\t\t\t\t\tExplore how\t\t\t\t\t\t\t\u203a<\/a>\n\t\t\t\t\t<\/p>\n\t<\/div>\n<\/div>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\">How can red teaming practices evolve to probe for dangerous capabilities in AI models like persuasion, deception, and self-replication?<\/li>\n\n\n\n<li class=\"wp-block-list-item\">How do we adapt red teaming practices to different cultural and linguistic contexts as AI systems are deployed globally?<\/li>\n\n\n\n<li class=\"wp-block-list-item\">What standards can be established to make red teaming findings more transparent and actionable?<\/li>\n<\/ul>\n\n\n\n<p class=\"wp-block-paragraph\">Addressing these challenges will require collaboration across disciplines, organizations, and cultural boundaries. Open-source tools like PyRIT are a step in the right direction, enabling wider access to AI red teaming techniques and fostering a community-driven approach to AI safety.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"next-steps-building-a-safer-ai-future-with-ai-red-teaming\">Next steps: Building a safer AI future with AI red teaming<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">AI red teaming is essential for helping ensure safer, more secure, and <a href=\"https:\/\/www.microsoft.com\/en-us\/ai\/responsible-ai\" target=\"_blank\" rel=\"noreferrer noopener\">responsible generative AI systems<\/a>. 
As adoption grows, organizations must embrace proactive risk assessments grounded in real-world threats. By applying key lessons\u2014like balancing automation with human oversight, addressing responsible AI harms, and prioritizing ethical considerations\u2014red teaming helps build systems that are not only resilient but also aligned with societal values.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">AI safety is an ongoing journey, but with collaboration and innovation, we can meet the challenges ahead. Dive deeper into these insights and strategies by reading the full whitepaper: <a href=\"https:\/\/aka.ms\/AIRTLessonsPaper\">Lessons From Red Teaming 100 Generative AI Products<\/a>.<\/p>\n\n\n\n<div class=\"wp-block-buttons is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-a89b3969 wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button is-style-arrow-left is-style-prefix-outgoing\"><a class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/aka.ms\/AIRTLessonsPaper\">Explore the lessons from red teaming<\/a><\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Drawing from our experience, we\u2019ve identified eight main lessons that can help business leaders align AI red teaming efforts with real-world 
risks.<\/p>\n","protected":false},"author":12,"featured_media":4420,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"ms_queue_id":[],"ep_exclude_from_search":false,"_classifai_error":"","_classifai_text_to_speech_error":"","_alt_title":"","ms-ems-related-posts":[],"footnotes":""},"post_tag":[336,302,334],"content-type":[6],"industry":[],"job-function":[27],"job-role":[],"property":[],"topic":[7,422,423],"coauthors":[398],"class_list":["post-4419","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","tag-ai-transformation","tag-generative-ai","tag-responsible-ai","content-type-thought-leadership","job-function-security","topic-ai","topic-ai-resources","topic-trustworthy-ai","review-flag-1-1695745984-811","review-flag-2-1695745984-459","review-flag-3-1695745984-226","review-flag-4-1695745984-239","review-flag-5-1695745984-901","review-flag-6-1695745984-310","review-flag-7-1695745984-459","review-flag-8-1695745984-209","review-flag-alway-1695745982-71","review-flag-lever-1695745982-620","review-flag-never-1695745982-993","review-flag-new-1695745982-637"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.2 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Enhancing AI safety: Insights and lessons from red teaming | The Microsoft Cloud Blog<\/title>\n<meta name=\"description\" content=\"Explore 8 lessons to help business leaders align AI red teaming efforts with real-world risks to help ensure the safety and reliability of AI applications.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta 
property=\"og:title\" content=\"Enhancing AI safety: Insights and lessons from red teaming | The Microsoft Cloud Blog\" \/>\n<meta property=\"og:description\" content=\"Explore 8 lessons to help business leaders align AI red teaming efforts with real-world risks to help ensure the safety and reliability of AI applications.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/\" \/>\n<meta property=\"og:site_name\" content=\"The Microsoft Cloud Blog\" \/>\n<meta property=\"article:published_time\" content=\"2025-01-14T16:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-06-24T19:19:55+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-content\/uploads\/2024\/12\/Cloud_516423_Blog_241219.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1260\" \/>\n\t<meta property=\"og:image:height\" content=\"708\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Blake Bullwinkel\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:image\" content=\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-content\/uploads\/2024\/12\/Cloud_516423_Blog_241219.png\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Blake Bullwinkel\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/\"},\"author\":[{\"@id\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/author\/blake-bullwinkel\/\",\"@type\":\"Person\",\"@name\":\"Blake Bullwinkel\"}],\"headline\":\"Enhancing AI safety: Insights and lessons from red teaming\",\"datePublished\":\"2025-01-14T16:00:00+00:00\",\"dateModified\":\"2025-06-24T19:19:55+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/\"},\"wordCount\":1284,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-content\/uploads\/2024\/12\/Cloud_516423_Blog_241219.webp\",\"keywords\":[\"AI transformation\",\"Generative AI\",\"Responsible 
AI\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/\",\"url\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/\",\"name\":\"Enhancing AI safety: Insights and lessons from red teaming | The Microsoft Cloud Blog\",\"isPartOf\":{\"@id\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-content\/uploads\/2024\/12\/Cloud_516423_Blog_241219.webp\",\"datePublished\":\"2025-01-14T16:00:00+00:00\",\"dateModified\":\"2025-06-24T19:19:55+00:00\",\"description\":\"Explore 8 lessons to help business leaders align AI red teaming efforts with real-world risks to help ensure the safety and reliability of AI 
applications.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/#primaryimage\",\"url\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-content\/uploads\/2024\/12\/Cloud_516423_Blog_241219.webp\",\"contentUrl\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-content\/uploads\/2024\/12\/Cloud_516423_Blog_241219.webp\",\"width\":1260,\"height\":708,\"caption\":\"A decorative image that says \\\"8 lessons from the front lines of AI red teaming\\\"\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Enhancing AI safety: Insights and lessons from red teaming\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/#website\",\"url\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/\",\"name\":\"The Microsoft Cloud Blog\",\"description\":\"Build the future of your business with 
AI\",\"publisher\":{\"@id\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/#organization\",\"name\":\"Microsoft Cloud Blog\",\"url\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-content\/uploads\/2023\/10\/microsoft_logo.webp\",\"contentUrl\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-content\/uploads\/2023\/10\/microsoft_logo.webp\",\"width\":400,\"height\":400,\"caption\":\"Microsoft Cloud Blog\"},\"image\":{\"@id\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/#\/schema\/person\/8e06bb7c0480881f4ba61d0a7ab56d22\",\"name\":\"Matt Thomas\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/secure.gravatar.com\/avatar\/84a497532de78e653783b5111ee1b1da94caf656ca9fb47f9a3644f642a5dd2d?s=96&d=microsoft&r=g3718c6d57fa80aad5b38dd6617c76806\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/84a497532de78e653783b5111ee1b1da94caf656ca9fb47f9a3644f642a5dd2d?s=96&d=microsoft&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/84a497532de78e653783b5111ee1b1da94caf656ca9fb47f9a3644f642a5dd2d?s=96&d=microsoft&r=g\",\"caption\":\"Matt 
Thomas\"},\"url\":\"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/author\/mattthomas\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Enhancing AI safety: Insights and lessons from red teaming | The Microsoft Cloud Blog","description":"Explore 8 lessons to help business leaders align AI red teaming efforts with real-world risks to help ensure the safety and reliability of AI applications.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/","og_locale":"en_US","og_type":"article","og_title":"Enhancing AI safety: Insights and lessons from red teaming | The Microsoft Cloud Blog","og_description":"Explore 8 lessons to help business leaders align AI red teaming efforts with real-world risks to help ensure the safety and reliability of AI applications.","og_url":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/","og_site_name":"The Microsoft Cloud Blog","article_published_time":"2025-01-14T16:00:00+00:00","article_modified_time":"2025-06-24T19:19:55+00:00","og_image":[{"width":1260,"height":708,"url":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-content\/uploads\/2024\/12\/Cloud_516423_Blog_241219.png","type":"image\/png"}],"author":"Blake Bullwinkel","twitter_card":"summary_large_image","twitter_image":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-content\/uploads\/2024\/12\/Cloud_516423_Blog_241219.png","twitter_misc":{"Written by":"Blake Bullwinkel","Est. 
reading time":"7 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/#article","isPartOf":{"@id":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/"},"author":[{"@id":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/author\/blake-bullwinkel\/","@type":"Person","@name":"Blake Bullwinkel"}],"headline":"Enhancing AI safety: Insights and lessons from red teaming","datePublished":"2025-01-14T16:00:00+00:00","dateModified":"2025-06-24T19:19:55+00:00","mainEntityOfPage":{"@id":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/"},"wordCount":1284,"commentCount":0,"publisher":{"@id":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/#organization"},"image":{"@id":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/#primaryimage"},"thumbnailUrl":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-content\/uploads\/2024\/12\/Cloud_516423_Blog_241219.webp","keywords":["AI transformation","Generative AI","Responsible AI"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/","url":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/","name":"Enhancing AI safety: Insights and lessons from red teaming | The Microsoft Cloud 
Blog","isPartOf":{"@id":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/#primaryimage"},"image":{"@id":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/#primaryimage"},"thumbnailUrl":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-content\/uploads\/2024\/12\/Cloud_516423_Blog_241219.webp","datePublished":"2025-01-14T16:00:00+00:00","dateModified":"2025-06-24T19:19:55+00:00","description":"Explore 8 lessons to help business leaders align AI red teaming efforts with real-world risks to help ensure the safety and reliability of AI applications.","breadcrumb":{"@id":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/#primaryimage","url":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-content\/uploads\/2024\/12\/Cloud_516423_Blog_241219.webp","contentUrl":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-content\/uploads\/2024\/12\/Cloud_516423_Blog_241219.webp","width":1260,"height":708,"caption":"A decorative image that says \"8 lessons from the front lines of AI red 
teaming\""},{"@type":"BreadcrumbList","@id":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/2025\/01\/14\/enhancing-ai-safety-insights-and-lessons-from-red-teaming\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/"},{"@type":"ListItem","position":2,"name":"Enhancing AI safety: Insights and lessons from red teaming"}]},{"@type":"WebSite","@id":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/#website","url":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/","name":"The Microsoft Cloud Blog","description":"Build the future of your business with AI","publisher":{"@id":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/#organization","name":"Microsoft Cloud Blog","url":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-content\/uploads\/2023\/10\/microsoft_logo.webp","contentUrl":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-content\/uploads\/2023\/10\/microsoft_logo.webp","width":400,"height":400,"caption":"Microsoft Cloud Blog"},"image":{"@id":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/#\/schema\/person\/8e06bb7c0480881f4ba61d0a7ab56d22","name":"Matt 
Thomas","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/84a497532de78e653783b5111ee1b1da94caf656ca9fb47f9a3644f642a5dd2d?s=96&d=microsoft&r=g3718c6d57fa80aad5b38dd6617c76806","url":"https:\/\/secure.gravatar.com\/avatar\/84a497532de78e653783b5111ee1b1da94caf656ca9fb47f9a3644f642a5dd2d?s=96&d=microsoft&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/84a497532de78e653783b5111ee1b1da94caf656ca9fb47f9a3644f642a5dd2d?s=96&d=microsoft&r=g","caption":"Matt Thomas"},"url":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/author\/mattthomas\/"}]}},"bloginabox_animated_featured_image":null,"bloginabox_display_generated_audio":false,"distributor_meta":false,"distributor_terms":false,"distributor_media":false,"distributor_original_site_name":"The Microsoft Cloud Blog","distributor_original_site_url":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog","push-errors":false,"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-json\/wp\/v2\/posts\/4419","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-json\/wp\/v2\/users\/12"}],"replies":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-json\/wp\/v2\/comments?post=4419"}],"version-history":[{"count":25,"href":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-json\/wp\/v2\/posts\/4419\/revisions"}],"predecessor-version":[{"id":6013,"href":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-json\/wp\/v2\/posts\/4419\/revisions\/6013"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-json\/wp\/v2\/media\/4
420"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-json\/wp\/v2\/media?parent=4419"}],"wp:term":[{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-json\/wp\/v2\/post_tag?post=4419"},{"taxonomy":"content-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-json\/wp\/v2\/content-type?post=4419"},{"taxonomy":"industry","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-json\/wp\/v2\/industry?post=4419"},{"taxonomy":"job-function","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-json\/wp\/v2\/job-function?post=4419"},{"taxonomy":"job-role","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-json\/wp\/v2\/job-role?post=4419"},{"taxonomy":"property","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-json\/wp\/v2\/property?post=4419"},{"taxonomy":"topic","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-json\/wp\/v2\/topic?post=4419"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/microsoft-cloud\/blog\/wp-json\/wp\/v2\/coauthors?post=4419"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}