{"id":993510,"date":"2023-12-18T01:15:58","date_gmt":"2023-12-18T09:15:58","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-blog-post&#038;p=993510"},"modified":"2023-12-18T19:05:08","modified_gmt":"2023-12-19T03:05:08","slug":"shaping-the-future-with-societal-ai-2024-microsoft-research-asia-startrack-scholars-program-highlights-ai-ethics-and-interdisciplinary-integration","status":"publish","type":"msr-blog-post","link":"https:\/\/www.microsoft.com\/en-us\/research\/articles\/shaping-the-future-with-societal-ai-2024-microsoft-research-asia-startrack-scholars-program-highlights-ai-ethics-and-interdisciplinary-integration\/","title":{"rendered":"Shaping the Future with Societal AI: 2024 Microsoft Research Asia StarTrack Scholars Program Highlights AI Ethics and Interdisciplinary Integration"},"content":{"rendered":"\n<p>The rapid development of Generative Pre-trained Transformer (GPT) technologies and the advent of the large model era have significantly impacted every facet of the information world. As AI steps into the complex web of human society, it is transitioning from a mere technological tool to a social entity with considerable influence. In the third installment of our exclusive series on the 2024 Microsoft Research Asia StarTrack Scholars Program, we explore the critical role of AI in society, emphasizing the need, advocated by the Societal AI team at Microsoft Research Asia, for AI to understand and adhere to human societal values. 
To explore the full scope of the 2024 program, visit our official website: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/academic-program\/microsoft-research-asia-startrack-program\/\">Microsoft Research Asia StarTrack Scholars Program &#8211; Microsoft Research<\/a>.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"navigating-the-intersection-of-ai-and-society\">Navigating the Intersection of AI and Society<\/h4>\n\n\n\n<p>Over the past year, artificial intelligence has exhibited remarkable advancements, surpassing previously held expectations. Amidst the excitement, a crucial question arises: Is technology itself neutral in terms of values? After all, the intelligence of Large Language Models (LLMs) is based on human-generated corpora, which inevitably are embedded with human biases and values, influencing the reasoning and judgment of machines.<\/p>\n\n\n\n<p>\u201cThe rapid development of artificial intelligence is increasingly impacting human society,\u201d said Xing Xie, Senior Principal Research Manager at Microsoft Research Asia. \u201cTo ensure that AI evolves as a socially responsible technology, our research is directed towards \u2018Societal AI.\u2019 This approach involves interdisciplinary collaboration with social sciences, including psychology, sociology, and law, to explore how AI can understand and adhere to the mainstream values of human society. Our goal is to enable AI to make decisions aligned with human expectations and develop more accurate evaluation models to precisely gauge its actual value orientations and level of intelligence.\u201d<\/p>\n\n\n\n<p>To ensure that AI adheres to the principle of benefiting humanity, Xing Xie and his colleagues at Microsoft Research Asia believe it\u2019s imperative to not only develop technologies aligned with this objective but also to establish rules and methodologies that extend beyond the technological realm. 
Their area of study involves value orientations as well as AI safety, verifiability, copyright, and model evaluation, which are all closely related to social responsibility.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"preparing-for-greater-impact\">Preparing for Greater Impact<\/h4>\n\n\n\n<p>Years ago, Microsoft identified \u201cResponsible AI\u201d as a core principle in AI research and development, encompassing aspects such as privacy, security, fairness, and explainability. This foresight has become increasingly relevant with AI\u2019s explosive growth over the past year, making Societal AI a forward-looking research direction.<\/p>\n\n\n\n<p>As AI\u2019s capabilities increase and its societal impact expands, even a minor misalignment in its values could potentially trigger significant consequences. As Microsoft President Brad Smith suggests in his book <em>Tools and Weapons: The Promise and the Peril of the Digital Age<\/em>, the more powerful the tool, the greater the benefit or damage it can cause. Therefore, in pursuing more powerful AI, it is crucial to simultaneously focus on AI\u2019s role in social responsibility and prepare for any potential impacts on human society. The aim of Societal AI is to ensure that AI becomes a technology accountable to society.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"setting-value-based-guardrails-for-artificial-intelligence\">Setting Value-Based Guardrails for Artificial Intelligence<\/h4>\n\n\n\n<p>Xing Xie and his colleagues believe that in building Societal AI, they should consider the following: value alignment, data and model safety, correctness or verifiability, model evaluation, and interdisciplinary collaboration.<\/p>\n\n\n\n<p>Value alignment, a nascent field, has already gained widespread recognition for its importance in both industry and academia. 
In simple terms, it means ensuring that AI, when cooperating with humans and society, follows the same mainstream values as humans and achieves goals consistent with human expectations. This approach helps avoid unexpected outcomes from AI automation or the misuse of AI in ways that are detrimental to human welfare. Traditional practices such as reinforcement learning from human feedback (RLHF) are being reevaluated. In Societal AI research, the team\u2019s goal is to elevate AI from merely following human instructions and preferences to embracing basic human values, allowing AI to assess its own actions based on these values. To achieve this, the team has initiated the Value Compass Project, which focuses on directly aligning AI models with human values established in sociology, ethics, and other areas.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"562\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image001-65800a946e757-1024x562.jpg\" alt=\"pic\" class=\"wp-image-993525\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image001-65800a946e757-1024x562.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image001-65800a946e757-300x165.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image001-65800a946e757-768x422.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image001-65800a946e757-240x132.jpg 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image001-65800a946e757.jpg 1295w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>According to the team, the challenge they are faced with in this endeavor involves three parts: first, translating abstract human values into concrete, measurable, and practical definitions for AI; second, technically regulating 
AI behavior with these value definitions; and third, effectively evaluating AI to demonstrate its alignment with genuine human values.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"ensuring-ai-remains-within-human-oversight\">Ensuring AI Remains within Human Oversight<\/h4>\n\n\n\n<p>As AI\u2019s intelligence leaps ahead, its evaluation faces new challenges. Traditional task-oriented machine learning allows for quantifiable evaluation standards, but as AI\u2019s work types diversify, new methodologies are needed. To address this, Xing Xie and his team have developed an evaluation route based on the PromptBench architecture, which covers infrastructure, various tasks and scenarios, and evaluation protocols.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"361\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image002-1024x361.png\" alt=\"pic\" class=\"wp-image-993528\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image002-1024x361.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image002-300x106.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image002-768x270.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image002-240x85.png 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/12\/image002.png 1403w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>In terms of specific evaluation methods, they are exploring two approaches. One is a dynamic and developmental evaluation system. Current static public benchmarks have limitations, such as an inability to accurately evaluate the improving intelligence of large models and running the risk of being fully mastered by them, akin to memorizing a whole exam database. 
Since a dynamic, evolving system is key to fair and accurate AI evaluation, the team developed the DyVal algorithm for the dynamic evaluation of large language models; it generates test samples through a directed acyclic graph and allows for scalable complexity.<\/p>\n\n\n\n<p>The other approach views AI as a general intelligence agent similar to humans and uses methodologies from social sciences such as psychology and education for AI evaluation. The team has initiated interdisciplinary collaboration with experts in psychometrics and believes that the methodologies used for evaluating unique human functions can apply to general AI, offering capabilities that traditional benchmarks lack. Their latest paper details the feasibility and potential of psychometrics in AI evaluation.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"cross-industry-and-cross-disciplinary-collaboration\">Cross-Industry and Cross-Disciplinary Collaboration<\/h4>\n\n\n\n<p>Just as methodologies from psychology are essential for AI testing, blending Societal AI with other disciplines, especially social sciences, is critical. Key areas such as value alignment, safety, and model evaluation in AI require integration with social sciences, since computer science alone cannot fully address many of the challenges.<\/p>\n\n\n\n<p>Unlike previous interdisciplinary collaborations in computer science, Societal AI presents unique challenges, such as bridging significant disciplinary divides, and requires new approaches. It not only needs to integrate the arts and sciences but also needs to reposition computer technology as an entity that is being empowered rather than one that empowers. 
Social sciences provide fresh perspectives and tools, necessitating the construction of new theoretical frameworks and methodologies from scratch.<\/p>\n\n\n\n<p>While researchers in engineering, biology, physics, chemistry, and mathematics have begun integrating AI into their studies, there is a significant dearth of talent capable of supporting interdisciplinary research, particularly in social sciences like sociology and law. Balancing and combining the fast-paced, iterative approach of computer science with the long-term research and observational methods of social sciences remains an area of exploration.<\/p>\n\n\n\n<p>In addressing these unresolved and challenging issues, the Microsoft Research Asia StarTrack Scholars Program advocates an open attitude, encouraging dialogue and joint experimentation with researchers from various disciplines to discover viable solutions.<\/p>\n\n\n\n<p>As we delve deeper into the realms of Societal AI, we increasingly recognize the need for fresh perspectives and innovative minds to tackle the intricate challenges that lie at the convergence of technology and human society. If you are an aspiring young researcher with a zeal for exploring how AI can be made to align with human societal values and are eager to contribute to groundbreaking work in AI safety, verifiability, and value alignment, we invite you to apply to the Microsoft Research Asia StarTrack Scholars Program. Join us in this exciting journey to shape AI into a responsible, value-driven technology that resonates with and enhances human society. Applications are now open for the 2024 program. Apply now and become a part of this transformative endeavor. 
For more details and to submit your registration, visit our official website: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/academic-program\/microsoft-research-asia-startrack-program\/\">Microsoft Research Asia StarTrack Scholars Program &#8211; Microsoft Research<\/a>.<\/p>\n\n\n\n<p><strong>References:<\/strong><\/p>\n\n\n\n<p>1. Yao, J., Yi, X., et al. (2023). \u201cFrom Instructions to Intrinsic Human Values &#8212; A Survey of Alignment Goals for Big Models.\u201d arXiv. <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2308.12014\">Access the paper<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p>2. Yi, X., & Xie, X. (2023). \u201cUnpacking the Ethical Value Alignment in Big Models.\u201d Journal of Computer Research and Development, 60(9), 1926-1945. DOI: 10.7544\/issn1000-1239.202330553. <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/crad.ict.ac.cn\/cn\/article\/doi\/10.7544\/issn1000-1239.202330553\">Access the paper<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p>3. Zhu, K., Wang, J., et al. (2023). \u201cPromptBench: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts.\u201d arXiv. <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2306.04528\">Access the paper<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p>4. Microsoft. PromptBench. GitHub repository. 
<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/github.com\/microsoft\/promptbench\">Access PromptBench<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p>5. Zhu, K., Chen, J., et al. (2023). \u201cDyVal: Graph-informed Dynamic Evaluation of Large Language Models.\u201d arXiv. <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2309.17167\">Access the paper<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p>6. Wang, X., Jiang, L., et al. (2023). \u201cEvaluating General-Purpose AI with Psychometrics.\u201d arXiv. <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/2310.16379\">Access the paper<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p>7. Xie, X. \u201cAligning AI with human values is as important as making AI intelligent (\u8ba9AI\u62e5\u6709\u4eba\u7c7b\u7684\u4ef7\u503c\u89c2\uff0c\u548c\u8ba9AI\u62e5\u6709\u4eba\u7c7b\u667a\u80fd\u540c\u6837\u91cd\u8981).\u201d Microsoft Research Asia WeChat Account, October 26, 2023, 5:02 PM, Beijing. <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/mp.weixin.qq.com\/s\/i8-2ScUwQHkpbfiHw9ViYg\">Access the article<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p>8. Smith, B. (2023, May 30). Governing AI: A blueprint for our future. In <em>Tools and Weapons Podcast<\/em> (Season 2, Episode 6). Microsoft News. 
<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/news.microsoft.com\/tools-and-weapons-podcast\/\">Access the podcast<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p>9. Smith, B., & Browne, C. (2019). <em>Tools and Weapons: The Promise and the Peril of the Digital Age<\/em>. Penguin Press. <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/news.microsoft.com\/on-the-issues\/tools-and-weapons\/\">Access Microsoft&#8217;s introduction to the book<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p>10. Microsoft Research Asia. \u201cIntellectual Property, Privacy, and Technology Misuse: How to Face the Legal and Ethical Challenges of the Large Model Era? (\u77e5\u8bc6\u4ea7\u6743\u3001\u9690\u79c1\u548c\u6280\u672f\u6ee5\u7528\uff1a\u5982\u4f55\u9762\u5bf9\u5927\u6a21\u578b\u65f6\u4ee3\u7684\u6cd5\u5f8b\u4e0e\u4f26\u7406\u6311\u6218\uff1f).\u201d Microsoft Research Asia WeChat Account, August 17, 2023, 5:01 PM, Beijing. 
<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/mp.weixin.qq.com\/s\/FVQR9oZWJlSJXHMBeUMZZw\">Access the article<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p><strong>Theme Team:<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xingx\/\">Xing Xie<\/a>&nbsp;(Engaging Lead), Senior Principal Research Manager, Microsoft Research Asia<\/p>\n\n\n\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/fangzwu\/\">Fangzhao Wu<\/a>, Principal Researcher, Microsoft Research Asia<\/p>\n\n\n\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jialia\/\">Jianxun Lian<\/a>, Senior Researcher, Microsoft Research Asia<\/p>\n\n\n\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jindwang\/\">Jindong Wang<\/a>, Senior Researcher, Microsoft Research Asia<\/p>\n\n\n\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xiaoyuanyi\/\">Xiaoyuan Yi<\/a>, Senior Researcher, Microsoft Research Asia<\/p>\n\n\n\n<p><em>If you have any questions, please email Ms. Beibei Shi, program manager of the Microsoft Research Asia StarTrack Scholars Program, at besh@microsoft.com<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The rapid development of Generative Pre-trained Transformer (GPT) technologies and the advent of the large model era have significantly impacted every facet of the information world. As AI steps into the complex web of human society, it is transitioning from a mere technological tool to a social entity with significant influence. 
In the third installment [&hellip;]<\/p>\n","protected":false},"author":34512,"featured_media":0,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-content-parent":199560,"msr_hide_image_in_river":0,"footnotes":""},"research-area":[],"msr-locale":[268875],"msr-post-option":[],"class_list":["post-993510","msr-blog-post","type-msr-blog-post","status-publish","hentry","msr-locale-en_us"],"msr_assoc_parent":{"id":199560,"type":"lab"},"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-blog-post\/993510","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-blog-post"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-blog-post"}],"author":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/users\/34512"}],"version-history":[{"count":4,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-blog-post\/993510\/revisions"}],"predecessor-version":[{"id":993948,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-blog-post\/993510\/revisions\/993948"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=993510"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=993510"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=993510"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=993510"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}