{"id":995412,"date":"2023-12-27T22:56:21","date_gmt":"2023-12-28T06:56:21","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&#038;p=995412"},"modified":"2025-12-01T22:38:07","modified_gmt":"2025-12-02T06:38:07","slug":"societal-ai","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/societal-ai\/","title":{"rendered":"Societal AI"},"content":{"rendered":"<section class=\"mb-3 moray-highlight\">\n\t<div class=\"card-img-overlay mx-lg-0\">\n\t\t<div class=\"card-background  has-background-catalina-blue card-background--full-bleed\">\n\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"2940\" height=\"1136\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/societal-ai.png\" class=\"attachment-full size-full\" alt=\"Societal AI banner\" style=\"\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/societal-ai.png 2940w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/societal-ai-300x116.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/societal-ai-1024x396.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/societal-ai-768x297.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/societal-ai-1536x594.png 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/societal-ai-2048x791.png 2048w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/societal-ai-240x93.png 240w\" sizes=\"auto, (max-width: 2940px) 100vw, 2940px\" \/>\t\t<\/div>\n\t\t<!-- Foreground -->\n\t\t<div class=\"card-foreground d-flex mt-md-n5 my-lg-5 px-g px-lg-0\">\n\t\t\t<!-- Container -->\n\t\t\t<div class=\"container d-flex mt-md-n5 my-lg-5 \">\n\t\t\t\t<!-- Card wrapper -->\n\t\t\t\t<div class=\"w-100 w-lg-col-5\">\n\t\t\t\t\t<!-- Card -->\n\t\t\t\t\t<div 
class=\"card material-md-card py-5 px-md-5\">\n\t\t\t\t\t\t<div class=\"card-body \">\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\n<h1 class=\"wp-block-heading\" id=\"societal-ai\">Societal AI<\/h1>\n\n\n\n<p><\/p>\n\n\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/div>\n<\/section>\n\n\n\n\n\n<h3 class=\"wp-block-heading\" id=\"background\">Background:<\/h3>\n\n\n\n<p>Emerging <strong>general-purpose<\/strong> AI models (e.g., LLMs) have shown the potential to enhance <strong>productivity, creative expression, and scientific research<\/strong> with capabilities that <strong>approach human levels<\/strong>.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"challenge\">Challenge:<\/h3>\n\n\n\n<p>As Brad Smith noted, \u201c<em>The more powerful the tool, the greater the benefit or damage it can cause.<\/em>\u201d Despite these benefits, such models raise significant technical and social challenges, including the need for <strong>new research paradigms,<\/strong> the emergence of <strong>unforeseeable risks, the fair and inclusive use of AI technologies,<\/strong> and the need for <strong>new regulatory frameworks,<\/strong> all of which must be carefully addressed.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"mission\">Mission:<\/h3>\n\n\n\n<p>The mission of Societal AI is to bridge this gap by considering AI not just as a technical tool but as a technology that requires careful socio-technical integration. This research initiative aims to develop new paradigms for evaluating AI capabilities while addressing the regulatory, ethical, and accessibility challenges that come with AI\u2019s growing influence in society. 
To achieve this goal, we emphasize an <strong>interdisciplinary effort<\/strong> with social scientists to responsibly manage AI\u2019s challenges and risks.<\/p>\n\n\n\n<p>Specifically, we have devoted ourselves to cutting-edge research on:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Innovating paradigms to <strong>evaluate AI\u2019s capability and performance<\/strong> in new, unforeseen tasks and environments, enabling a more comprehensive understanding<\/li>\n\n\n\n<li><strong>Aligning AI with diverse human values and ethical principles<\/strong> so that systems respect and reflect a broad spectrum of human values, with ethical considerations integrated throughout the development process<\/li>\n\n\n\n<li>Developing robust frameworks to ensure the <strong>safety, reliability, and controllability<\/strong> of increasingly autonomous AI models<\/li>\n<\/ul>\n\n\n\n<p>We are also actively exploring further Societal AI research directions, including human-AI collaboration, AI interpretability, AI\u2019s societal impact, and personalized AI.<\/p>\n\n\n\n<div style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n\n\n<div class=\"wp-block-media-text has-vertical-margin-small  has-vertical-padding-none  is-stacked-on-mobile\" style=\"grid-template-columns:22% auto\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"743\" height=\"1024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/cover-743x1024.png\" alt=\"cover\" class=\"wp-image-1134608 size-full\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/cover-743x1024.png 743w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/cover-218x300.png 218w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/cover-768x1059.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/cover-1114x1536.png 
1114w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/cover-1485x2048.png 1485w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/cover-131x180.png 131w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/cover.png 1958w\" sizes=\"auto, (max-width: 743px) 100vw, 743px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<h2 class=\"wp-block-heading\" id=\"societal-ai-research-challenges-and-opportunities\">Societal AI: Research Challenges and Opportunities<\/h2>\n\n\n\n<p>This whitepaper explores the most critical challenges and opportunities at the intersection of AI and society and fosters interdisciplinary collaboration to address these issues effectively.<\/p>\n\n\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button is-style-fill-download\"><a data-bi-type=\"button\" class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/03\/Societal-AI-Research-Challenges-and-Opportunities.pdf\">Download the report<\/a><\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:61%\">\n<h2 class=\"wp-block-heading\" id=\"societal-ai-research-agenda-1\">Societal AI Research Agenda<\/h2>\n\n\n\n<p>To achieve a harmonious, synergistic, and resilient integration of AI into society with minimal side effects, we propose a societal AI research agenda emphasizing multidisciplinary collaboration.<\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"697\" 
src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/Framework-01-1024x697.png\" alt=\"chart, diagram\" class=\"wp-image-1135007\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/Framework-01-1024x697.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/Framework-01-300x204.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/Framework-01-768x522.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/Framework-01-1536x1045.png 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/Framework-01-2048x1393.png 2048w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/Framework-01-240x163.png 240w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div>\n<\/div>\n\n\n\n<div style=\"height:42px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"societal-ai-research-questions\">Societal AI Research Questions<\/h2>\n\n\n\n<p>The 10 key societal AI research questions reflect deep interdisciplinary collaboration and insights from diverse experts. 
These questions highlight evolving challenges and opportunities, with ongoing refinement to keep the research agenda relevant and impactful.<\/p>\n\n\n\n<div style=\"height:42px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"480\" height=\"480\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/01.png\" alt=\"logo, icon\" class=\"wp-image-1134614\" style=\"width:113px;height:auto\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/01.png 480w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/01-300x300.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/01-150x150.png 150w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/01-180x180.png 180w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/01-360x360.png 360w\" sizes=\"auto, (max-width: 480px) 100vw, 480px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<p>How can AI be aligned with diverse human values and ethical principles?<\/p>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow 
wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"480\" height=\"480\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/02.png\" alt=\"logo, icon\" class=\"wp-image-1134634\" style=\"width:113px;height:auto\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/02.png 480w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/02-300x300.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/02-150x150.png 150w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/02-180x180.png 180w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/02-360x360.png 360w\" sizes=\"auto, (max-width: 480px) 100vw, 480px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<p>How can AI systems be designed to ensure fairness and inclusiveness across different cultures, regions, and demographic groups?<\/p>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"480\" height=\"480\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/03.png\" alt=\"icon\" class=\"wp-image-1134635\" style=\"width:113px;height:auto\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/03.png 480w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/03-300x300.png 300w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/03-150x150.png 150w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/03-180x180.png 180w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/03-360x360.png 360w\" sizes=\"auto, (max-width: 480px) 100vw, 480px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<p>How can we ensure AI systems are safe, reliable, and controllable, especially as they become more autonomous?<\/p>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"480\" height=\"480\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/04.png\" alt=\"icon\" class=\"wp-image-1134636\" style=\"width:113px;height:auto\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/04.png 480w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/04-300x300.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/04-150x150.png 150w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/04-180x180.png 180w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/04-360x360.png 360w\" sizes=\"auto, (max-width: 480px) 100vw, 480px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<p>How can human-AI collaboration be optimized to enhance human 
abilities?<\/p>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"480\" height=\"480\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/05.png\" alt=\"icon\" class=\"wp-image-1134637\" style=\"width:113px;height:auto\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/05.png 480w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/05-300x300.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/05-150x150.png 150w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/05-180x180.png 180w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/05-360x360.png 360w\" sizes=\"auto, (max-width: 480px) 100vw, 480px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<p>How can we effectively evaluate AI&#8217;s capability and performance in new, unforeseen tasks and environments?<\/p>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.31%\">\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"480\" height=\"480\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/06.png\" alt=\"logo\" 
class=\"wp-image-1134638\" style=\"width:113px\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/06.png 480w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/06-300x300.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/06-150x150.png 150w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/06-180x180.png 180w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/06-360x360.png 360w\" sizes=\"auto, (max-width: 480px) 100vw, 480px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.69%\">\n<p class=\"has-text-align-left\">How can we enhance AI interpretability to ensure transparency in its decision-making processes?<\/p>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.31%\">\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"480\" height=\"480\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/07.png\" alt=\"icon\" class=\"wp-image-1134639\" style=\"width:113px\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/07.png 480w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/07-300x300.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/07-150x150.png 150w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/07-180x180.png 180w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/07-360x360.png 360w\" sizes=\"auto, (max-width: 480px) 100vw, 480px\" 
\/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.69%\">\n<p class=\"has-text-align-left\">How will AI reshape human cognition, learning, and creativity, and what new capabilities might it unlock?<\/p>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.31%\">\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"480\" height=\"480\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/08.png\" alt=\"icon\" class=\"wp-image-1134640\" style=\"width:113px\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/08.png 480w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/08-300x300.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/08-150x150.png 150w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/08-180x180.png 180w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/08-360x360.png 360w\" sizes=\"auto, (max-width: 480px) 100vw, 480px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.69%\">\n<p class=\"has-text-align-left\">How will AI redefine the nature of work, collaboration, and the future of global business models?<\/p>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.31%\">\n<figure class=\"wp-block-image 
aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"480\" height=\"480\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/09.png\" alt=\"icon\" class=\"wp-image-1134641\" style=\"width:113px\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/09.png 480w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/09-300x300.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/09-150x150.png 150w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/09-180x180.png 180w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/09-360x360.png 360w\" sizes=\"auto, (max-width: 480px) 100vw, 480px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.69%\">\n<p class=\"has-text-align-left\">How will AI transform research methodologies in the social sciences, and what new insights might it enable?<\/p>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.31%\">\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"480\" height=\"480\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/10.png\" alt=\"icon\" class=\"wp-image-1134642\" style=\"width:113px\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/10.png 480w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/10-300x300.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/10-150x150.png 150w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/10-180x180.png 180w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/10-360x360.png 360w\" sizes=\"auto, (max-width: 480px) 100vw, 480px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.69%\">\n<p class=\"has-text-align-left\">How should regulatory frameworks evolve to govern AI development responsibly and foster global cooperation?<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n<div style=\"height:42px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-media-text has-vertical-padding-none  is-stacked-on-mobile is-image-fill-element is-style-gray-background is-style-spectrum is-style-border is-style-offset-media--top is-style-offset-media--offset-\" style=\"grid-template-columns:30% auto\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"683\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/Image-7-1024x683.jpg\" alt=\"lidong zhou\" class=\"wp-image-1134817 size-full\" style=\"object-position:31% 20%\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/Image-7-1024x683.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/Image-7-300x200.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/Image-7-768x512.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/Image-7-1536x1024.jpg 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/Image-7-2048x1365.jpg 2048w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/Image-7-240x160.jpg 240w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure><div 
class=\"wp-block-media-text__content\">\n<p>At Microsoft Research Asia, we believe AI is not just a technological advancement but a transformative force reshaping societies, economies, and daily lives. Developing more powerful AI models is not enough; we must examine how AI interacts with human values, institutions, and diverse cultural contexts. Our Societal AI team has actively engaged in global collaboration with leading social scientists to create a shared vision of AI that is responsible, inclusive, and beneficial. This white paper outlines key challenges and opportunities in Societal AI, presenting a research agenda to ensure AI evolves in a way that drives progress while mitigating risks and unintended consequences. We look forward to collaborating with more experts to promote the harmonious development of AI and society.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<p>\u2014\u2014Lidong Zhou, Corporate Vice President, Chief Scientist of Microsoft Asia Pacific R&D Group, Managing Director of Microsoft Research Asia<\/p>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-media-text has-vertical-padding-none  has-media-on-the-right is-stacked-on-mobile is-image-fill-element is-style-spectrum is-style-border is-style-offset-media--top is-style-offset-media--offset-\" style=\"grid-template-columns:auto 30%\"><div class=\"wp-block-media-text__content\">\n<p>This white paper from Microsoft Research Asia represents a critical first step in addressing the societal implications of AI, especially large language models (LLMs), as they become more prevalent and are used for a broader range of purposes. It outlines ten essential research questions at the intersection of AI and society, covering areas such as value alignment, fairness, and human-AI collaboration. The paper\u2019s emphasis on the harmonious, synergistic, and resilient integration of AI into society is particularly noteworthy. 
It highlights the need for not only technical excellence but also a deep understanding of cultural differences, human cognition, and societal structures. Looking ahead, it will be essential to develop robust oversight systems and ensure AI alignment with human safety and prosperity. This white paper lays a solid foundation for future work and sets a positive direction for responsible AI development.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<p>\u2014\u2014James A. Evans, Max Palevsky Professor, Director, Knowledge Lab, The University of Chicago; Faculty Director, Masters Program in Computational Social Science, The University of Chicago; External Professor, Santa Fe Institute<\/p>\n<\/div><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"937\" height=\"1024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/james-937x1024.jpg\" alt=\"james\" class=\"wp-image-1134818 size-full\" style=\"object-position:61% 9%\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/james-937x1024.jpg 937w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/james-274x300.jpg 274w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/james-768x840.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/james-1405x1536.jpg 1405w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/james-1873x2048.jpg 1873w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/james-165x180.jpg 165w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/james.jpg 2035w\" sizes=\"auto, (max-width: 937px) 100vw, 937px\" \/><\/figure><\/div>\n\n\n\n<p><\/p>\n\n\n\n\n\n\n\n\n\n\n\n<figure class=\"wp-block-table\"><table><thead><tr><th style=\"width:10%\">PI<\/th><th style=\"width:40%\">Title<\/th><th style=\"width:20%\">Entity<\/th><th 
style=\"width:30%\">Project<\/th><\/tr><\/thead><tbody><tr><td>Fang Luo<\/td><td>Professor, Department of Psychology<\/td><td>Beijing Normal University<\/td><td rowspan=\"2\">The convergence of assessing human and big model capabilities<\/td><\/tr><tr><td>Luning Sun<\/td><td>Research Director of The Psychometrics Centre and Research Associate in the Operations and Technology Management Group<\/td><td>Cambridge Judge Business School<\/td><\/tr><tr><td>Masashi Sugiyama<\/td><td>Professor, Department of Complexity Science and Engineering, Graduate School of Frontier Sciences<\/td><td>The University of Tokyo<\/td><td>Recent advances in robust machine learning<\/td><\/tr><tr><td>Tianguang Meng<\/td><td>Associate Dean, School of Social Sciences; Professor, Department of Political Science; Deputy Director, Laboratory of Computational Social Science and State Governance; Executive Director, Center for Data Governance<\/td><td>Tsinghua University<\/td><td>Measuring the occupational impact of LLM: Evidence from China<\/td><\/tr><tr><td>Rui Guo<\/td><td>Associate Professor of Law<\/td><td>Renmin University of China<\/td><td rowspan=\"2\">Enhancing Compliance of Large Language Models (LLMs) with Legal Requirements through Effective Communication<\/td><\/tr><tr><td>Haijun Jin<\/td><td>Associate Professor of Law<\/td><td>Renmin University of China<\/td><\/tr><tr><td>JinYeong Bak<\/td><td>Assistant Professor, College of Computing and Informatics<\/td><td>Sungkyunkwan University<\/td><td>Integrating Human Values into Language Models: Generating Human Value-Aligned Arguments<\/td><\/tr><tr><td>Yong Lim<\/td><td>Associate Professor, School of Law<\/td><td>Seoul National University<\/td><td rowspan=\"2\">A <em>Fair<\/em> Amount of Stereotypical Bias in Pretrained Language Models<\/td><\/tr><tr><td>Sangchul Park<\/td><td>Assistant Professor, School of Law<\/td><td>Seoul National University<\/td><\/tr><tr><td>Steven Euijong Whang<\/td><td>Associate Professor<\/td><td>Korea Advanced 
Institute of Science and Technology<\/td><td>Privacy, Fairness, and Robustness Techniques for Holistic Responsible AI<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n\n\n\n\n<figure class=\"wp-block-table\"><table><thead><tr><th style=\"width:10%\">PI<\/th><th style=\"width:40%\">Title<\/th><th style=\"width:20%\">Entity<\/th><th style=\"width:30%\">Project<\/th><\/tr><\/thead><tbody><tr><td>Xu Chen<\/td><td>Associate Professor, Gaoling School of AI<\/td><td>Renmin University of China<\/td><td>Social Simulation with Humanoid Large&nbsp;Language Models<\/td><\/tr><tr><td>Weiran Huang<\/td><td>Associate Professor, Qing Yuan Research Institute<\/td><td>Shanghai Jiao Tong University<\/td><td>LLM&nbsp;Evaluation Based on Matrix Entropy<\/td><\/tr><tr><td>Linus Huang<\/td><td>Research Assistant Professor, Humanities<\/td><td>Hong Kong University of Science and&nbsp;Technology<\/td><td>Dynamic Value Alignment: Enhancing&nbsp;Ethical AI through User-Centric Adaptability<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button\"><a data-bi-type=\"button\" class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/www.microsoft.com\/en-us\/research\/articles\/shaping-the-future-with-societal-ai-2024-microsoft-research-asia-startrack-scholars-program-highlights-ai-ethics-and-interdisciplinary-integration\/\">Learn more<\/a><\/div>\n<\/div>\n\n\n\n\n\n<figure class=\"wp-block-table\"><table><thead><tr><th style=\"width:10%\">PI<\/th><th style=\"width:40%\">Title and Entity<\/th><th style=\"width:50%\">Project<\/th><\/tr><\/thead><tbody><tr><td>Jos\u00e9 Hern\u00e1ndez-Orallo<\/td><td>Professor at the Universitat Polit\u00e8cnica de Val\u00e8ncia, Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence, University of Cambridge<\/td><td>Evaluation of Foundation Models as a Predictive Problem<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n\n\n<figure 
class=\"wp-block-table\"><table><thead><tr><th style=\"width:30%\">Challenge<\/th><th style=\"width:10%\">Fellow<\/th><th style=\"width:60%\">Title and Entity<\/th><\/tr><\/thead><tbody><tr><td rowspan=\"2\">Copyright Protection for User Data in the Era of LLMs<\/td><td>Ryan Whalen<\/td><td>Associate Professor, University of Hong Kong Faculty of Law, Director, HKU Centre for Interdisciplinary Legal Studies (CILS)<\/td><\/tr><tr><td>Rita Matulionyte<\/td><td>Associate Professor, Director of Bachelor of Laws (Honours) Program,<br>Macquarie Law School<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button\"><a data-bi-type=\"button\" class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/www.microsoft.com\/en-us\/research\/academic-program\/ai-society-fellows\/overview\/\">Learn more<\/a><\/div>\n<\/div>\n\n\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><strong>Collaboration Contact<\/strong>: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/besh\/\">BeiBei Shi<\/a>, Senior Research PM<\/p>\n\n\n\n\n\n<p><strong>Lecture 1: Responsibility, Interpretability and Controllability of Human Behaviors<\/strong><\/p>\n\n\n\n<p><strong>Date: 2022\/11\/21<\/strong><\/p>\n\n\n\n\n\n<p>On the invitation of Microsoft Research Asia, Professor Wan Xiaohong from the State Key Laboratory of Cognitive Neuroscience and Learning at Beijing Normal University, and a researcher at the IDG McGovern Institute for Brain Research, delivered an online lecture on the theme of &#8220;Responsibility, Explainability, and Controllability of Human Behavior.&#8221; This lecture marked the first installment of the &#8220;Responsible Artificial Intelligence&#8221; series, chaired by Senior Researcher Wang Xiting from Microsoft Research Asia.<\/p>\n\n\n\n<p>Professor Wan Xiaohong discussed the criteria used 
to assess whether an individual should be held accountable for their actions, focusing on the explainability and controllability of behavior. She highlighted the inherent challenges in evaluating the intangible cognitive processes and internal states of the brain, which are difficult for both external observers and the individuals themselves to assess in terms of detailed states and causal relationships. Many human behaviors are driven by rapid, intuitive processes that leave room for unfounded post hoc rationalizations. Even controlled processes often come with explanations that are largely vague and biased.<\/p>\n\n\n\n<p>To address these issues, Professor Wan approached the topic from the perspective of the neural mechanisms of human behavior, expanding on the mechanisms and algorithms of human-human and human-machine joint decision-making. She proposed research paradigms and theoretical models to advance the field of human-machine hybrid intelligence.<\/p>\n\n\n\n<p>The lecture was met with an enthusiastic response and active interaction from the researchers. Questions were raised by researchers such as Li Dongsheng, Xie Xing, and Yi Xiaoyuan on topics including the technical means of activating neurons, experimental frequency and costs, and the ethics of animal experiments. 
Further discussions were held on whether this brain function has played a positive role in human evolution and whether moral decision-making belongs to System 1 or System 2.<\/p>\n\n\n\n<p>This lecture was the first in the &#8220;Responsible Artificial Intelligence&#8221; series, with subsequent talks on the theme to be announced.<\/p>\n\n\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:15%\">\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"276\" height=\"333\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image001-66f65d8f5e071.png\" alt=\"a man wearing glasses and smiling at the camera\" class=\"wp-image-1088571\" style=\"width:180px\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image001-66f65d8f5e071.png 276w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image001-66f65d8f5e071-249x300.png 249w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image001-66f65d8f5e071-149x180.png 149w\" sizes=\"auto, (max-width: 276px) 100vw, 276px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20px\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\">\n<p><strong>Wan Xiaohong<\/strong><\/p>\n\n\n\n<p>Professor Wan Xiaohong<strong>,<\/strong> Professor at the State Key Laboratory of Cognitive Neuroscience and Learning at Beijing Normal University and Researcher at the IDG McGovern Institute for Brain Research.<\/p>\n<\/div>\n<\/div>\n\n\n\n\n\n<p><strong>Lecture 2: Towards a Holistic Framework for Responsible AI<\/strong><\/p>\n\n\n\n<p><strong>Date: 
2022\/11\/29<\/strong><\/p>\n\n\n\n\n\n<p>At the invitation of Microsoft Research Asia, Associate Professor Steven Euijong Whang from the Korea Advanced Institute of Science and Technology (KAIST) delivered an online lecture on &#8220;Building a Comprehensive Framework for Responsible Artificial Intelligence.&#8221; Whang emphasized that to ensure the responsibility of AI, it is essential to not only enhance model accuracy during training but also to ensure fairness, robustness, explainability, and privacy protection. He highlighted that these considerations apply to all machine learning steps, beginning with data, necessitating a holistic framework that supports these goals.<\/p>\n\n\n\n<p>During the lecture, Whang presented a range of research outcomes from his team on AI fairness, robustness, explainability, and privacy. He also proposed several potential collaboration directions, expressing a desire to further academic cooperation with Microsoft Research Asia on these topics. The session was interactive, with active participation from researchers including Wang Jindong and Wu Fangzhao, who asked questions about variable control in AI fairness and robustness research and data selection for training. 
Whang provided detailed answers, contributing to a lively online discussion.<\/p>\n\n\n\n<p>This lecture was the second in the &#8220;Responsible Artificial Intelligence&#8221; series, hosted by Chief Researcher Xie Xing from Microsoft Research Asia, indicating an ongoing commitment to exploring and promoting responsible AI practices within the tech community and beyond.<\/p>\n\n\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:15%\">\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"271\" height=\"291\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image003-66f66094384c1.png\" alt=\"a man wearing glasses and smiling at the camera\" class=\"wp-image-1088598\" style=\"width:180px\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image003-66f66094384c1.png 271w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image003-66f66094384c1-168x180.png 168w\" sizes=\"auto, (max-width: 271px) 100vw, 271px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20px\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\">\n<p><strong>Steven Euijong Whang<\/strong><\/p>\n\n\n\n<p>Steven Euijong Whang, Associate Professor, Korea Advanced Institute of Science and Technology<\/p>\n<\/div>\n<\/div>\n\n\n\n\n\n<p><strong>Lecture 3: Towards Holistic Adversarial Robustness for Deep Learning<\/strong><\/p>\n\n\n\n<p><strong>Date: 2022\/12\/13<\/strong><\/p>\n\n\n\n\n\n<p>Invited by Microsoft Research Asia, Chief Scientist Pin-Yu Chen from IBM Thomas J. 
Watson Research Center presented an online talk on &#8220;AI Model Detectors: Toward Holistic Adversarial Robustness in Deep Learning&#8221; as part of the &#8220;Responsible AI&#8221; lecture series. This session, the third in the series, was moderated by Senior Researcher Wang Jindong from Microsoft Research Asia.<\/p>\n\n\n\n<p>Chen offered insights into the field of adversarial machine learning, focusing on his significant research contributions. These include the development of optimization-driven adversarial attacks, their implications for model explainability and scientific discovery, the implementation of versatile defense strategies for model rectification, robustness assessment techniques that are independent of specific attacks, and efficient transfer learning through model reprogramming.<\/p>\n\n\n\n<p>The interactive event saw active engagement from participants, with researchers like Zhu Bin, Xie Xing, Zhang Huishuai, and Yi Xiaoyuan posing questions on various aspects of Chen&#8217;s research. Dr. 
Chen responded with detailed explanations, enhancing the understanding of the audience.<\/p>\n\n\n\n<p>This lecture series underscores the importance of integrating adversarial robustness into AI development to ensure the creation of secure and reliable intelligent systems, fostering further dialogue and collaboration in the realm of responsible AI.<\/p>\n\n\n\n<p><\/p>\n\n\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:15%\">\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"231\" height=\"262\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image005-66f66098721e2.png\" alt=\"a man wearing glasses and smiling at the camera\" class=\"wp-image-1088601\" style=\"width:180px\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image005-66f66098721e2.png 231w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image005-66f66098721e2-159x180.png 159w\" sizes=\"auto, (max-width: 231px) 100vw, 231px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20px\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\">\n<p><strong>Pin-Yu Chen<\/strong><\/p>\n\n\n\n<p>Dr. Pin-Yu Chen is a principal research scientist at IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA. 
He is also the chief scientist of the RPI-IBM AI Research Collaboration and PI of ongoing MIT-IBM Watson AI Lab projects.<\/p>\n<\/div>\n<\/div>\n\n\n\n\n\n<p><strong>Lecture 4: Recent Advances in Robust Machine Learning<\/strong><\/p>\n\n\n\n<p><strong>Date: 2023\/1\/18<\/strong><\/p>\n\n\n\n\n\n<p>When machine learning systems are trained and deployed in the real world, we face various types of uncertainty. For example, training data at hand may contain insufficient information, label noise, and bias. In this talk, I will give an overview of our recent advances in robust machine learning, including weakly supervised classification (positive-unlabeled classification, positive-confidence classification, complementary-label classification, etc.), noisy label learning (noise transition estimation, instance-dependent noise, clean sample selection, etc.), and domain adaptation (joint importance-predictor learning for covariate shift adaptation, dynamic importance-predictor learning for full distribution shift, etc.).<\/p>\n\n\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:15%\">\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"234\" height=\"271\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image007-66f6609bb5544.png\" alt=\"a man wearing glasses and smiling at the camera\" class=\"wp-image-1088604\" style=\"width:180px\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image007-66f6609bb5544.png 234w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image007-66f6609bb5544-155x180.png 155w\" sizes=\"auto, (max-width: 234px) 100vw, 234px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow 
wp-block-column-is-layout-flow\" style=\"flex-basis:20px\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\">\n<p><strong>Professor Masashi Sugiyama<\/strong><\/p>\n\n\n\n<p>Masashi Sugiyama received a Ph.D. in Computer Science from Tokyo Institute of Technology in 2001. He has been a Professor at the University of Tokyo since 2014 and concurrently Director of the RIKEN Center for Advanced Intelligence Project (AIP) since 2016.<\/p>\n<\/div>\n<\/div>\n\n\n\n\n\n<p><strong>Lecture 5: Ethical Concerns of AI Applications and Their Governance Logic<\/strong><\/p>\n\n\n\n<p><strong>Date: 2023\/03\/16<\/strong><\/p>\n\n\n\n\n\n<p>Artificial Intelligence (AI) is propelling society into an era of intelligence at an unprecedented pace, transforming both production and lifestyle. Alongside these transformations, AI has given rise to ethical challenges such as manipulation, &#8220;black box&#8221; operations, discrimination, privacy concerns, and accountability dilemmas. Invited by Microsoft Research Asia, Professor Meng Tianguang, Vice Dean and Tenured Professor at the School of Social Sciences, Tsinghua University, presented a lecture titled &#8220;AI Ethics Concerns and Governance Logic.&#8221;<\/p>\n\n\n\n<p>In this lecture, Professor Meng approached the topic from an interdisciplinary perspective, providing an insightful discussion on the eight dimensions of AI ethics: personal information protection, fairness, transparency, safety, responsibility, authenticity, human dignity, and human autonomy. Utilizing this framework, he conducted public opinion surveys and social media data mining to meticulously analyze societal concerns regarding AI ethics and their characteristics. 
He further explored solutions to AI ethical governance from the dimensions of rights, industry, and profession.<\/p>\n\n\n\n<p>The seminar was vibrant, with participants actively engaging with the presentation and delving into discussions with Professor Meng on topics such as copyright of AI-generated content, self-regulation of AI, and societal supervision. This exchange underscored the importance of addressing ethical considerations in the development and implementation of AI technologies.<\/p>\n\n\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:15%\">\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"199\" height=\"223\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image009.png\" alt=\"a man wearing a suit and tie smiling at the camera\" class=\"wp-image-1088607\" style=\"width:180px\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image009.png 199w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image009-161x180.png 161w\" sizes=\"auto, (max-width: 199px) 100vw, 199px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20px\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\">\n<p><strong>Meng Tianguang<\/strong><\/p>\n\n\n\n<p>Meng Tianguang, Vice Dean and Tenured Professor at the School of Social Sciences, Tsinghua University.<\/p>\n<\/div>\n<\/div>\n\n\n\n\n\n<p><strong>Lecture 6: Shifting Winds, Changing Tides: Emerging Issues for Market Competition in the Next Phase of AI Evolution<\/strong><\/p>\n\n\n\n<p><strong>Date: 2023\/5\/18<\/strong><\/p>\n\n\n\n\n\n<p>The recent splash 
made by Transformer-based generative AI has spurred new discourse surrounding its potential impact on market competition. There are those who argue that we are heading towards natural monopolies in relevant markets due to the resource-intensive nature of training and moderating generative AI and other related intelligent systems. Others see glimmers of hope that AI-based innovation will manage to upend current dominance in digital markets; yet some take a more nuanced view, arguing that market disruption, if any, will primarily come from incumbents, thereby signifying a shift in focus from accuracy to computational efficiency.<\/p>\n\n\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:15%\">\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"237\" height=\"234\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image011.png\" alt=\"a person posing for the camera\" class=\"wp-image-1088610\" style=\"width:180px\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image011.png 237w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image011-182x180.png 182w\" sizes=\"auto, (max-width: 237px) 100vw, 237px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20px\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\">\n<p><strong>Yong LIM<\/strong><\/p>\n\n\n\n<p>Yong LIM is an Associate Professor at Seoul National University, School of Law, where he also served as Associate Dean of Student Affairs until 2020.<\/p>\n<\/div>\n<\/div>\n\n\n\n\n\n<p><strong>Lecture 7: Heterogeneity of AI-Induced Societal Harms 
and the Failure of Omnibus AI Laws<\/strong><\/p>\n\n\n\n<p><strong>Date: 2023\/5\/18<\/strong><\/p>\n\n\n\n\n\n<p>Trustworthy AI discourses postulate the homogeneity of AI systems, aim to derive common causes regarding the harms they generate, and demand uniform human interventions. Such &#8220;AI monism&#8221; has spurred legislation for omnibus AI laws requiring any &#8220;high-risk&#8221; AI systems to comply with a full, uniform package of rules on fairness, transparency, accountability, human oversight, accuracy, robustness, and security, as demonstrated by the EU&#8217;s draft AI Regulation, the U.S.&#8217;s draft Algorithmic Accountability Act of 2022, and Korea&#8217;s AI Bill.<\/p>\n\n\n\n<p>However, it is irrational to require &#8220;high-risk&#8221; or critical AIs to comply with the full package of safety, fairness, accountability, and privacy regulations when it is possible to separate AIs into those entailing safety risks (robots and other intelligent agents), biases (discriminative models), infringements (generative models), and accuracy\/robustness\/privacy problems (cognitive models). 
Alternatively, I propose the following four initial categorizations, subject to ongoing empirical reassessments:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Intelligent Agents<\/strong>: For self-driving cars, safety regulations must be adapted to address incremental accident risks arising from autonomous behavior through test tracking, safety warnings, event data recording, and kill switches.<\/li>\n\n\n\n<li><strong>Discriminative Models<\/strong>: For models like credit-scoring or hiring AI, law must focus on mitigating allocative harms and disclosing the marginal effects of immutable features such as race and gender.<\/li>\n\n\n\n<li><strong>Generative Models<\/strong>: For language models or AI-powered content creation, law should optimize developers&#8217; liability for data mining and content generation, balancing potential social harms arising from infringing content and the negative impact of excessive moderation.<\/li>\n\n\n\n<li><strong>Identification and AI Diagnostics<\/strong>: Quality of service related to safety should effectively address privacy, surveillance, and security issues.<\/li>\n<\/ol>\n\n\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:15%\">\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"123\" height=\"138\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image013.png\" alt=\"a person posing for the camera\" class=\"wp-image-1088613\" style=\"width:180px\"\/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20px\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\">\n<p><strong>Sangchul 
PARK<\/strong><\/p>\n\n\n\n<p>Sangchul PARK is an Assistant Professor at Seoul National University, School of Law, with joint appointments at the Interdisciplinary Program in AI and the Department of Mathematical Science.<\/p>\n<\/div>\n<\/div>\n\n\n\n\n\n<p><strong>Lecture 8: Computing Based on Societal Thinking<\/strong><\/p>\n\n\n\n<p><strong>Date: 2023\/11\/9<\/strong><\/p>\n\n\n\n\n\n<p>This talk discusses how to integrate big data and AI methods with a focus on social issues. We will explore two cases of research and practical application in this area. The first case examines how the yin-yang theory in Chinese philosophy can be applied to understand the emergence of collective intelligence within teams in an enterprise. Two factors\u2014high knowledge diversity and tight collaboration networks within the team\u2014are inherently contradictory: a dense network tends to bring about homogeneity, while diversity fosters a sparse network. However, in actual big data analysis, we can observe that the impact of these two factors on the emergence of collective intelligence is sometimes reinforcing and sometimes restraining. We will address the question of how to promote reinforcement and avoid restraint.<\/p>\n\n\n\n<p>Another case explores how large language models can be used in local governance to assist with &#8216;one-click reporting&#8217; and provide &#8216;one-click assessments&#8217; of various NGOs and social workers. 
The goal is to use AI to replace form-filling and truly reduce the workload in community governance.<\/p>\n\n\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:15%\">\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"183\" height=\"196\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image015.png\" alt=\"a man smiling for the camera\" class=\"wp-image-1088616\" style=\"width:180px\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image015.png 183w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image015-168x180.png 168w\" sizes=\"auto, (max-width: 183px) 100vw, 183px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20px\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\">\n<p><strong>Jar-Der Luo<\/strong><\/p>\n\n\n\n<p>Jar-Der Luo is a Joint Appointed Professor at Tsinghua University (Beijing), Chief Editor of the Journal of Social Computing, and PI at the Tsinghua U. 
Computational Social Sciences & National Governance Lab.<\/p>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:15%\">\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"192\" height=\"205\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image017.png\" alt=\"a person posing for the camera\" class=\"wp-image-1088628\" style=\"width:180px\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image017.png 192w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image017-169x180.png 169w\" sizes=\"auto, (max-width: 192px) 100vw, 192px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20px\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\">\n<p><strong>Yuanyi Zhen<\/strong><\/p>\n\n\n\n<p>Yuanyi Zhen is a Ph.D. candidate in the Department of Sociology at Tsinghua University, focusing on the Science of Science, Social Computing, and Complex Social Theories.<\/p>\n<\/div>\n<\/div>\n\n\n\n\n\n<p><strong>Lecture 9: Can Large Language Models Transform Computational Social Science?<\/strong><\/p>\n\n\n\n<p><strong>Date: 2024\/7\/5<\/strong><\/p>\n\n\n\n\n\n<p>Large language models (LLMs) provide great opportunities for analyzing text data at scale and have transformed the way humans interact with AI systems in a wide range of fields and disciplines. 
This talk shares two distinct approaches to how LLMs can influence and potentially transform computational social science research.<\/p>\n\n\n\n<p>The first part analyzes the zero-shot performance of 13 LLMs on 24 representative computational social science benchmarks to provide a roadmap for using LLMs as computational social science tools. The second part explores social skill training with LLMs, presenting how we use LLMs to teach conflict resolution skills through simulated practice. We conclude by discussing concerns about using LLMs in the social sciences and offering recommendations on how to address them.<\/p>\n\n\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:15%\">\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"220\" height=\"250\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image019.png\" alt=\"a person smiling for the camera\" class=\"wp-image-1088631\" style=\"width:180px\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image019.png 220w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image019-158x180.png 158w\" sizes=\"auto, (max-width: 220px) 100vw, 220px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20px\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\">\n<p><strong>Diyi Yang<\/strong><\/p>\n\n\n\n<p>Diyi Yang is an assistant professor in the Computer Science Department at Stanford University, affiliated with the Stanford NLP Group, Stanford HCI Group, and Stanford Human Centered AI Institute.<\/p>\n<\/div>\n<\/div>\n\n\n\n\n\n<p><strong>Lecture 10: From 
Leaderboards to Operating Conditions<\/strong><\/p>\n\n\n\n<p><strong>Date: 2024\/7\/10<\/strong><\/p>\n\n\n\n\n\n<p>AI evaluation is much more than benchmarks, metrics, and leaderboards. It should also be much more, and much better, than &#8216;evals&#8217;. This talk will cover the state of AI evaluation through three major obstacles:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Diverse Paradigms<\/strong>: There are very different paradigms and communities that often talk past each other: 1) the TEVV (testing, evaluation, verification, and validation) school, 2) the benchmark school, 3) the &#8216;evals&#8217; school, and 4) the cognitive school.<\/li>\n\n\n\n<li><strong>Understanding Capability<\/strong>: There is limited understanding of what capability means and how to measure it, as opposed to performance.<\/li>\n\n\n\n<li><strong>Predictability Focus<\/strong>: There is little explicit recognition that AI evaluation is mostly about predictability: shifting from the question &#8220;is it accurate or safe in general?&#8221; to &#8220;will it work for this operating condition?&#8221;<\/li>\n<\/ol>\n\n\n\n<p>Understanding AI evaluation as a prediction problem clarifies research challenges and opportunities, leading to the goal of making Predictable AI a reality.<\/p>\n\n\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:15%\">\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"253\" height=\"280\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image021.png\" alt=\"a man looking at the camera\" class=\"wp-image-1088634\" style=\"width:180px\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image021.png 253w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image021-163x180.png 163w\" sizes=\"auto, (max-width: 253px) 100vw, 253px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20px\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\">\n<p><strong>Jos\u00e9 Hern\u00e1ndez-Orallo<\/strong><\/p>\n\n\n\n<p>Jos\u00e9 Hern\u00e1ndez-Orallo is Professor at the Universitat Polit\u00e8cnica de Val\u00e8ncia, Spain, and Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence, University of Cambridge, UK.<\/p>\n<\/div>\n<\/div>\n\n\n\n\n\n<p><strong>Lecture 11: The Promise and Peril of AI Stand-ins for Social Agents and Interactions<\/strong><\/p>\n\n\n\n<p><strong>Date: 2024\/9\/6<\/strong><\/p>\n\n\n\n\n\n<p>Large Language Models (LLMs), through their exposure to massive collections of online text, learn the ability to reproduce the perspectives and linguistic styles of diverse social and cultural groups. This capability suggests a powerful social scientific application\u2014the simulation of empirically realistic, culturally situated human subjects.<\/p>\n\n\n\n<p>Synthesis of recent research in artificial intelligence and computational social science outlines a methodological foundation for simulating human subjects and their social interactions. I then identify nine characteristics of current models that impair realistic human simulation, including atemporality, social acceptability bias, response uniformity, and poverty of sensory experience. For each of these areas, I explore promising approaches to overcome their associated shortcomings. I conclude with a discussion of technological implications and ethical considerations. 
Given the rapid changes in these models, I advocate for an ongoing methodological program on the simulation of human subjects and collectives that keeps pace with technical progress.<\/p>\n\n\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:15%\">\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"184\" height=\"208\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image023.png\" alt=\"James Evans wearing a suit and tie\" class=\"wp-image-1088637\" style=\"width:180px\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image023.png 184w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image023-159x180.png 159w\" sizes=\"auto, (max-width: 184px) 100vw, 184px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20px\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\">\n<p><strong>James Evans<\/strong><\/p>\n\n\n\n<p>James Evans is the Director of the Knowledge Lab and a Professor of Sociology at the University of Chicago, also serving as Faculty Director of the Computational Social Science program.<\/p>\n<\/div>\n<\/div>\n\n\n\n<p><\/p>\n\n\n\n\n\n<p><strong>Lecture 12: The Social Impact of Generative LLM-Based AI<\/strong><\/p>\n\n\n\n<p><strong>Date: 2025\/3\/21<\/strong><\/p>\n\n\n\n\n\n<p>We are likely to enter a new phase of human history in which Artificial Intelligence (AI) will dominate economic production and social life \u2013 the AI Revolution. Before the actual arrival of the AI Revolution, it is time for us to speculate on how AI will impact the social world. 
In recent years, we have focused on the social impact of generative LLM-based AI (GELLMAI), discussing societal factors that contribute to its technological development and its potential roles in enhancing both between-country and within-country social inequality. There are good indications that the US and China will lead the field and will be the main competitors for dominance in AI worldwide. We conjecture that the AI Revolution will likely give rise to a post-knowledge society in which knowledge per se will become less important than in today\u2019s world. Instead, individual relationships and social identity will become more important. So will soft skills.<\/p>\n\n\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:15%\">\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"484\" height=\"484\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image-1.png\" alt=\"Yu Xie\" class=\"wp-image-1135275\" style=\"width:180px\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image-1.png 484w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image-1-300x300.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image-1-150x150.png 150w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image-1-180x180.png 180w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image-1-360x360.png 360w\" sizes=\"auto, (max-width: 484px) 100vw, 484px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20px\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow 
wp-block-column-is-layout-flow\">\n<p><strong>Yu Xie<\/strong><\/p>\n\n\n\n<p>Yu Xie, a sociologist, is a member of the US National Academy of Sciences, the American Academy of Arts and Sciences, and Academia Sinica. He is also the Bert G. Kerstetter &#8217;66 University Professor of Sociology and PIIRS at Princeton University, and Chair Professor and Director of the Center for Social Research at Peking University.<\/p>\n<\/div>\n<\/div>\n\n\n\n\n\n<p><strong>Lecture 13: Exploring the Intersection of Computational Science and Communication Research<\/strong><\/p>\n\n\n\n<p><strong>Date: 2025\/3\/27<\/strong><\/p>\n\n\n\n\n\n<p>As computational science advances, its integration with social sciences is becoming increasingly vital in tackling complex societal issues. Social problems often involve high-dimensional data, abstract concepts, and significant uncertainties\u2014challenges that demand interdisciplinary collaboration. This talk explores the use of computational methods for collective sentiment mining, framed within key questions in communication studies. 
Additionally, the speaker will discuss unresolved issues in communication research, providing some insights and perspectives.<\/p>\n\n\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:15%\">\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"656\" height=\"656\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image-2.png\" alt=\"Xiaokun Wu\" class=\"wp-image-1135276\" style=\"width:180px\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image-2.png 656w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image-2-300x300.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image-2-150x150.png 150w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image-2-180x180.png 180w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image-2-360x360.png 360w\" sizes=\"auto, (max-width: 656px) 100vw, 656px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20px\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\">\n<p><strong>Xiaokun Wu<\/strong><\/p>\n\n\n\n<p>Xiaokun Wu is a Professor in the School of Journalism & Communication at Renmin University of China.<\/p>\n<\/div>\n<\/div>\n\n\n\n\n\n<p><strong>Lecture 14: Heads-Up Computing: Towards the Next Interaction Paradigm for Wearable Intelligent Assistants<\/strong><\/p>\n\n\n\n<p><strong>Date: 2025\/5\/20<\/strong><\/p>\n\n\n\n\n\n<p>Heads-up computing, an emerging paradigm in human-computer interaction (HCI), aims to create seamless interactions with technology through 
wearable intelligent assistants. This vision relies on three crucial components: (1) bodily compatible hardware, (2) multimodal complementary interactions, and (3) interfaces that accommodate fragmented attention and are aware of potential resources. Recent advancements in large language models (LLMs) have significantly accelerated progress in these areas, enabling more natural, context-aware, and proactive systems. These developments are pushing heads-up computing beyond simple notifications to complex, multi-modal interactions that blend seamlessly with our environment and daily activities, allowing for efficient information processing in everyday life. However, as we integrate these AI-driven assistants more deeply into our lives, we must carefully consider ethical implications such as privacy and cognitive load. Balancing technological advancement with human-centered principles is crucial to create systems that enhance productivity while respecting user autonomy and well-being, ultimately augmenting human capabilities without compromising fundamental values.<\/p>\n\n\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:15%\">\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"613\" height=\"613\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/societal-ai-lecture-shengdong-zhao.jpg\" alt=\"a person posing for the camera\" class=\"wp-image-1156646\" style=\"width:180px\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/societal-ai-lecture-shengdong-zhao.jpg 613w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/societal-ai-lecture-shengdong-zhao-300x300.jpg 300w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/societal-ai-lecture-shengdong-zhao-150x150.jpg 150w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/societal-ai-lecture-shengdong-zhao-180x180.jpg 180w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/societal-ai-lecture-shengdong-zhao-360x360.jpg 360w\" sizes=\"auto, (max-width: 613px) 100vw, 613px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20px\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\">\n<p><strong>Shengdong Zhao<\/strong><\/p>\n\n\n\n<p>Shengdong Zhao is a Professor in the School of Creative Media and the Department of Computer Science at City University of Hong Kong. He established and led the Synteraction (formerly NUS-HCI) research lab in 2009 at the National University of Singapore.<\/p>\n<\/div>\n<\/div>\n\n\n\n\n\n<p><strong>Lecture 15: The Social Structure of Scientific Evaluation: AI, Benchmarking, and the Deep Learning Monoculture<\/strong><\/p>\n\n\n\n<p><strong>Date: 2025\/6\/27<\/strong><\/p>\n\n\n\n\n\n<p>Evaluation systems in science do more than assess past work\u2014they shape the future direction of research. Most scientific fields are primarily organized around \u201corganic\u201d evaluation systems (e.g., peer review, citation) that holistically weigh contributions across multiple epistemic values. However, artificial intelligence (AI) has diverged sharply from this model. Drawing on interviews, computational analyses, and archival materials spanning AI\u2019s history (1956\u20132021), this paper examines how AI evolved from a fragmented discipline with weak organic evaluation into a field driven by benchmarking\u2014a \u201cformal\u201d evaluation system that defines progress quantitatively as accuracy on commercial tasks. 
Benchmarking\u2019s emphasis on accuracy nurtured the rise of deep learning because it uniquely excelled under this vision of progress. Yet this success came at a cost: benchmarking\u2019s narrow focus discouraged the development of alternative technologies that could address deep learning\u2019s limitations (e.g., theoretical opacity, inefficiency, low interpretability), fostering an \u201cepistemic monoculture.\u201d At the same time, deep learning\u2019s opacity made organic evaluation increasingly difficult, further entrenching benchmarking as the field\u2019s dominant evaluative institution. We discuss how the spread of AI technologies will bring benchmarking\u2019s influence into other disciplines (and domains of social life). While this shift will undoubtedly transform sciences from biology to sociology, AI\u2019s success raises a deeper question: is organic evaluation an essential feature of science?<\/p>\n\n\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:15%\">\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"529\" height=\"528\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/societal-ai-lecture-bernard-koch.jpg\" alt=\"a man smiling\" class=\"wp-image-1156644\" style=\"width:180px\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/societal-ai-lecture-bernard-koch.jpg 529w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/societal-ai-lecture-bernard-koch-300x300.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/societal-ai-lecture-bernard-koch-150x150.jpg 150w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/societal-ai-lecture-bernard-koch-180x180.jpg 180w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/societal-ai-lecture-bernard-koch-360x360.jpg 360w\" sizes=\"auto, (max-width: 529px) 100vw, 529px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20px\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\">\n<p><strong>Bernard Koch<\/strong><\/p>\n\n\n\n<p>Bernard Koch is a computational social scientist and Assistant Professor of Sociology at the University of Chicago.<\/p>\n<\/div>\n<\/div>\n\n\n\n\n\n<p><strong>Lecture 16: Making AI Systems Work for Imperfect Humans<\/strong><\/p>\n\n\n\n<p><strong>Date: 2025\/7\/4<\/strong><\/p>\n\n\n\n\n\n<p>General-purpose AI systems are increasingly envisioned to support users on any task. However, a prerequisite for this vision is that the user\u2019s need is clearly \u201ccommunicated\u201d to the AI, which is itself a nontrivial step: users often begin with vague or unformed goals, and even when they have a clear idea of what they want, their instructions may be ambiguous or misaligned with how the AI interprets them. Simply put, humans are not perfect oracles of their own intentions. How can we design AI systems that better support imperfect users? In this talk, I will share some of our recent work aimed at making AI more practically useful. 
This involves reflections on the right representations and metrics to capture user needs and task utility, as well as methods for improving goal capture, either by training the model to better infer the user\u2019s needs or by training users to express themselves more clearly.<\/p>\n\n\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:15%\">\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"572\" height=\"572\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/societal-ai-lecture-sherry-wu.jpg\" alt=\"a person smiling\" class=\"wp-image-1156642\" style=\"width:180px\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/societal-ai-lecture-sherry-wu.jpg 572w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/societal-ai-lecture-sherry-wu-300x300.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/societal-ai-lecture-sherry-wu-150x150.jpg 150w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/societal-ai-lecture-sherry-wu-180x180.jpg 180w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/societal-ai-lecture-sherry-wu-360x360.jpg 360w\" sizes=\"auto, (max-width: 572px) 100vw, 572px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20px\"><\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\">\n<p><strong>Sherry Wu<\/strong><\/p>\n\n\n\n<p>Sherry Wu is an Assistant Professor in the Human-Computer Interaction Institute at Carnegie Mellon University.<\/p>\n<\/div>\n<\/div>\n\n\n\n\n\n\n\n<p>English Blogs: 
<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/news.microsoft.com\/tools-and-weapons-podcast\/\">Tackling the toughest challenges at the intersection of tech and society<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/lab\/microsoft-research-asia\/articles\/shaping-the-future-with-societal-ai-2024-microsoft-research-asia-startrack-scholars-program-highlights-ai-ethics-and-interdisciplinary-integration\/?msockid=01324a2ab63f67df05db5ec5b76266ee\">Shaping the Future with Societal AI: 2024 Microsoft Research Asia StarTrack Scholars Program Highlights AI Ethics and Interdisciplinary Integration &#8211; Microsoft Research<\/a><\/li>\n<\/ul>\n\n\n\n<p>Chinese Articles:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/articles\/value-compass-benchmarks\/\">Value Compass Benchmarks\u81ea\u8fdb\u5316\u8bc4\u6d4b\u6846\u67b6\uff0c\u6df1\u5ea6\u5256\u6790\u5927\u6a21\u578b\u201c\u4e09\u89c2\u201d<\/a><\/li>\n\n\n\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/dl.ccf.org.cn\/article\/articleDetail.html?type=xhtx_thesis&_ack=1&id=7517784791336960\">CCF&nbsp;\u901a\u8baf\u6587\u7ae0:&nbsp;\u4ef7\u503c\u89c2\u7f57\u76d8\u8bc4\u4f30\u4e2d\u5fc3\uff1a\u9762\u5411\u4eba\u673a\u4ea4\u4e92\u7684\u5927\u6a21\u578b\u4ef7\u503c\u89c2\u8bc4\u6d4b\u5e73\u53f0<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li>\n\n\n\n<li><a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/articles\/societal-ai-understanding-evaluating-big-models-for-human-intelligence-and-learning\/\">\u5927\u6a21\u578b\u65f6\u4ee3\uff0c\u5982\u4f55\u8bc4\u4f30\u4eba\u5de5\u667a\u80fd\u4e0e\u4eba\u7c7b\u667a\u80fd\uff1f<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/articles\/societal-ai-society-and-advancements-in-technology\/\">AI\u5c06\u600e\u6837\u5f71\u54cd\u4eba\u7c7b\u793e\u4f1a\uff1f<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/articles\/societal-ai-legal-and-ethical-governance\/\">\u77e5\u8bc6\u4ea7\u6743\u3001\u9690\u79c1\u548c\u6280\u672f\u6ee5\u7528\uff1a\u5982\u4f55\u9762\u5bf9\u5927\u6a21\u578b\u65f6\u4ee3\u7684\u6cd5\u5f8b\u4e0e\u4f26\u7406\u6311\u6218? <\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/articles\/xing-xie-societal-ai-2\/\">\u8de8\u5b66\u79d1\u5408\u4f5c\u6784\u5efa\u5177\u6709\u793e\u4f1a\u8d23\u4efb\u7684\u4eba\u5de5\u667a\u80fd<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/articles\/xing-xie-societal-ai\/\">\u8c22\u5e78\uff1a\u505a\u7ecf\u5f97\u8d77\u65f6\u95f4\u68c0\u9a8c\u7684\u7814\u7a76\uff0c\u6253\u9020\u8d1f\u8d23\u4efb\u7684\u4eba\u5de5\u667a\u80fd<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/articles\/xing-xie-societal-ai-2\/\">\u8ba9AI\u62e5\u6709\u4eba\u7c7b\u7684\u4ef7\u503c\u89c2\uff0c\u548c\u8ba9AI\u62e5\u6709\u4eba\u7c7b\u667a\u80fd\u540c\u6837\u91cd\u8981<\/a><\/li>\n\n\n\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/mp.weixin.qq.com\/s\/3TA41LtBn9khgpzZ04ogKg\">\u5168\u7403\u4e13\u5bb6\u9f50\u805a\uff0c\u9080\u60a8\u5171\u540c\u63a2\u8ba8 \u201c\u8d1f\u8d23\u4efb\u7684\u4eba\u5de5\u667a\u80fd\u201d <span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li>\n\n\n\n<li><a 
class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/mp.weixin.qq.com\/s\/OyUYGLwANQ0UP40J4Gnw2A\">\u7814\u8ba8\u4f1a\u76f4\u64ad\uff1a\u63a2\u8ba8\u201c\u8d1f\u8d23\u4efb\u7684\u4eba\u5de5\u667a\u80fd\u201d\u65f6\uff0c\u5168\u7403\u5404\u9886\u57df\u4e13\u5bb6\u5728\u5173\u6ce8\u4ec0\u4e48\uff1f <span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li>\n\n\n\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/dl.ccf.org.cn\/article\/articleDetail.html?id=7298573780109312&type=xhtx_thesis\">CCF\u901a\u8baf\u6587\u7ae0\uff1a \u4ef7\u503c\u89c2\u53f8\u5357\uff1a\u4ece\u5177\u4f53AI\u98ce\u9669\u5230\u5927\u6a21\u578b\u57fa\u672c\u4ef7\u503c\u89c2<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li>\n\n\n\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/mp.weixin.qq.com\/s\/B-CeM3_oJahLcn5_RITtVQ\">\u96c6\u667a\uff1a \u4ef7\u503c\u89c2\u7f57\u76d8\uff1a\u5982\u4f55\u8ba9\u5927\u6a21\u578b\u4e0e\u4eba\u7c7b\u4ef7\u503c\u89c2\u5bf9\u9f50? 
<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li>\n\n\n\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/mp.weixin.qq.com\/s\/i8-2ScUwQHkpbfiHw9ViYg\">\u8ba9AI\u62e5\u6709\u4eba\u7c7b\u7684\u4ef7\u503c\u89c2\uff0c\u548c\u8ba9AI\u62e5\u6709\u4eba\u7c7b\u667a\u80fd\u540c\u6837\u91cd\u8981<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li>\n<\/ul>\n\n\n\n<p>Podcasts and Talks:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/sstd2025.github.io\/keynotes.html\" target=\"_blank\" rel=\"noopener noreferrer\">Keynote Talk at International Symposium on Spatial and Temporal Data Conference:&nbsp;Mapping Cultural Values Across Regions for AI Alignment<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li>\n\n\n\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.far.ai\/events\/sessions\/xiaoyuan-yi-value-compass-leaderboard-platform-for-llms-value-evaluation\" target=\"_blank\" rel=\"noopener noreferrer\">Alignment Workshop: Value Compass Leaderboard-Platform for LLMs\u2019 Value Evaluation<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/denevil-towards-deciphering-and-navigating-the-ethical-values-of-large-language-models-via-instruction-learning\/\">Denevil: Towards Deciphering and Navigating the Ethical Values of Large Language Models via Instruction Learning &#8211; Microsoft Research<\/a><\/li>\n\n\n\n<li>TEDxBeijing<br><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/mp.weixin.qq.com\/s\/AUPIvVAN-KEbMHyugBAYRQ\">Talk Introduction<span class=\"sr-only\"> (opens 
in new tab)<\/span><\/a> | <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.youtube.com\/watch?v=EOmuxol26N8&t=2s\">Talk Video<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li>\n\n\n\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/dl.ccf.org.cn\/video\/videoDetail.html?id=6806153663596544\">CCF \u5b66\u751f\u9886\u822a\u8ba1\u5212Talk: \u4eceAI\u5b89\u5168\u5230\u57fa\u672c\u4ef7\u503c\u89c2\u5bf9\u9f50<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li>\n<\/ul>\n\n\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The emerging general-purpose AI models (e.g., LLMs) have shown potential to enhance productivity, creative expression, and scientific research with their capabilities that are close to humans. As Brad Smith noted, \u201cThe more powerful the tool, the greater the benefit or damage it can cause.\u201d Despite the benefits, their significant technical and social challenges, such as 
[&hellip;]<\/p>\n","protected":false},"featured_media":1114188,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","footnotes":""},"research-area":[13556],"msr-locale":[268875],"msr-impact-theme":[],"msr-pillar":[],"class_list":["post-995412","msr-project","type-msr-project","status-publish","has-post-thumbnail","hentry","msr-research-area-artificial-intelligence","msr-locale-en_us","msr-archive-status-active"],"msr_project_start":"","related-publications":[1134093,1167325],"related-downloads":[],"related-videos":[],"related-groups":[],"related-events":[1096245,1099011,939423,931014,918618,875082],"related-opportunities":[],"related-posts":[1136599,1138012,1138048],"related-articles":[],"tab-content":[],"slides":[],"related-researchers":[{"type":"user_nicename","display_name":"Xing Xie","user_id":34906,"people_section":"Section name 0","alias":"xingx"},{"type":"user_nicename","display_name":"Jianxun Lian","user_id":38470,"people_section":"Section name 0","alias":"jialia"},{"type":"user_nicename","display_name":"Fangzhao Wu","user_id":36473,"people_section":"Section name 0","alias":"fangzwu"},{"type":"user_nicename","display_name":"Jing Yao","user_id":41925,"people_section":"Section name 0","alias":"jingyao"},{"type":"user_nicename","display_name":"Xiaoyuan Yi","user_id":40768,"people_section":"Section name 0","alias":"xiaoyuanyi"},{"type":"user_nicename","display_name":"Beibei Shi","user_id":42162,"people_section":"Section name 0","alias":"besh"},{"type":"user_nicename","display_name":"Haotian Li","user_id":43593,"people_section":"Section name 0","alias":"haotianli"},{"type":"user_nicename","display_name":"Yang Ou","user_id":37742,"people_section":"Section name 
0","alias":"yaou"}],"msr_research_lab":[199560],"msr_impact_theme":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/995412","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-project"}],"version-history":[{"count":121,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/995412\/revisions"}],"predecessor-version":[{"id":1157199,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/995412\/revisions\/1157199"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/1114188"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=995412"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=995412"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=995412"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=995412"},{"taxonomy":"msr-pillar","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-pillar?post=995412"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}