Research challenges
The Microsoft Research AI & Society Fellows program is seeking eminent scholars and leading experts in various fields to support the following research challenges. We invite you to consider each of the challenges below.
You can submit a proposal via the “How to apply” tab or beneath each research challenge description. An individual may submit to multiple research challenges (a maximum of three), provided they are eligible. A candidate will be selected to join only one research challenge as a “fellow”.
AI in Organizational Settings
Principal Investigator: danah boyd
Additional Microsoft researcher(s): Nancy Baym, Neha Shah
Seeking collaborators from: Academia, non-academia
Preferred geographies: Africa, Australia, Canada, Europe, Hong Kong, India, Japan, Korea, Latin America, New Zealand, Singapore, Taiwan, United States
Description: Experts are speculating that tools based on large language models and other forms of generative AI are going to play a significant disruptive role in various industries, organizations, and institutions. But what does this look like on the ground? And how does this vary across different sectors? We are looking to work with ethnographically oriented scholars who are well-positioned to conduct fieldwork that asks critical questions about how organizations are responding to and being reconfigured by the introduction of AI. How are these technologies being embraced or rejected? How are senior leaders, managers, and frontline workers envisioning and deploying these technologies? How does the hype intersect with on-the-ground realities? What visions of the future are already underway within organizations, and how are they shaping organizational planning and practice? What kinds of organizational transformations are unfolding in reaction to AI? What do sector-wide responses to AI say about organizational resilience?
We are looking for proposals for empirical qualitative projects that ask critical questions about how new AI technologies are playing out in organizational settings. While we are interested in organizational studies from a range of sectors and industries, we are particularly interested in studies situated in one of the following sectors:
- Agriculture
- Energy
- Financial services
- Government
- Insurance
- Manufacturing
- Retail or supply chains
Ideal candidate
The ideal submission will pursue a novel perspective on questions at the intersection of AI and organizations in at least one of the highlighted sectors and industries. This fellowship is for qualitative social scientists. Scholars may come from a range of social science disciplines, including but not limited to organizational behavior, sociology, management, anthropology, and communication. The target recipient is a professor or civil society researcher. (Students and postdocs may apply alongside a faculty/researcher collaborator.) The expected output of this project should be academic publications, talks, and other public-facing contributions.
Proposed studies should be both critical and empirical in orientation and grapple with the relationship between technology and organizations holistically. We are specifically looking for scholars who are oriented towards a systems or structural approach to analyzing sociotechnical phenomena in organizational life. For this particular fellowship, we are seeking projects that go beyond analyzing the impact on individual workers to considering transformations in organizations, industries, and institutional arrangements.
Prerequisite considerations
- Because of the time frame of this fellowship, access to field sites should already be pre-negotiated or easily accessible. We expect that most candidates for this fellowship will be extending existing work. We recognize that most fieldwork-based studies are conducted by individual scholars, but we welcome collaborative projects. Collaborative proposals may include up to two scholars, whether across universities or a faculty member and a student.
- Applicants should also desire to participate in a cohort of other scholars working on adjacent projects and relish opportunities to share findings in a way that may lead towards collaborations. For example, fellows would be expected to participate in regular meetings with danah boyd and other researchers. The goal of these “research jams” would be to share insights across individual projects, request and offer feedback on each other’s studies, and iteratively work together towards shared insights. Depending on the particular proposal and scholarly desire, deeper collaborations could unfold.
- Given that danah and others are doing adjacent work, this might mean conducting interviews together, weaving together insights from two distinct field studies asking similar questions, and co-authoring papers. Any collaboration would need to be mutually beneficial.
- Applicants should articulate what kind of collaboration with danah boyd or other MSR researchers would make this an enriching experience.
- When filling out the “proposal package,” please make sure to include a detailed description of the research question, the field site, and relevant literature in the proposal summary and flag any related publications.
*Note: This challenge is accepting collaborative proposals whereby up to two (maximum) candidates can submit one proposal jointly. To submit a collaborative proposal with another candidate, please submit a single proposal to this research challenge. In the submission, please indicate as part of your “Statement of Interest” (document uploaded during submission) that this is a “collaborative proposal” and include the additional collaborator name, organization/affiliation, and how this individual will contribute to this work. Please contact us if you have any questions.
AI in the Production of Culture, Media, and the Arts
Principal Investigator: Tarleton Gillespie
Seeking collaborators from: Academia
Preferred geographies: Africa, Canada, Europe, United States
Description:
Generative AI promises to transform cultural production: fiction, photography, film, music, journalism. But as they are taken up by creators, the particular tendencies of generative AI tools will shape the forms of culture produced: what stories get told, what images and video look like, what music and voices sound like, what is presented as important or newsworthy, how problems are framed, how narratives are structured. While proponents of generative AI tools promise they will soon produce “anything” in any form, evidence suggests that is not yet the case. We find it important to investigate which cultural forms these tools do and do not approximate. What makes some cultural forms more difficult to generate than others? And what are the implications, if some forms of culture are easier to generate and others much less so?
We seek scholars who are investigating the forms, media, and genres of culture that generative AI tools are producing or failing to produce. This is a multi-modal concern that should look beyond text and images to other forms of culture and media, such as music, video, gaming, or animation.
Ideal candidate
We seek scholars who are investigating the forms, media, and genres of culture that generative AI tools are producing or failing to produce. We see this as a multi-modal concern that should look beyond text and images to other forms of culture and media, such as music, video, gaming, or animation. Researchers may come from the social sciences or the humanities, so long as their interest bridges the sociocultural and the technical. The aim is to produce a white paper on the impact of AI on the production of culture and to host a workshop gathering scholars examining these issues.
Additional details
Generative AI promises to transform cultural production: fiction, photography, film, music, journalism. Amid their rapid public introduction, early public debates have focused on the capacities and shortcomings of the tools, and the warnings from existing industries fearful of being disrupted or replaced. But as they are taken up by creators, the particular shape, tendencies, and biases of generative AI tools will matter also for the forms of culture produced: what stories get told, what images and video look like, what music and voices sound like, what is presented as important or newsworthy, how problems are framed, how narratives are structured. Like other production tools, from the paintbrush to the camera lens, generative AI will subtly change what can be made, and what will.
While proponents of the latest generative AI tools promise they will soon produce “anything” in any form, evidence suggests that is not the case, at least not now or in the immediate future. Advances in the areas of text and image overshadow the fact that sound and video have been much more difficult to generate to the same degree and quality thus far. AI tools – trained on content stripped of its context, to produce language distributions that value association, not meaning – may be an ill fit for cultural production, where the meaning of a narrative or image exists both within and beyond the text itself. We find it important to investigate which cultural forms these tools do and do not approximate. What makes some cultural forms more difficult to approximate than others? What parts of cultural texts are desiccated when they are treated as content and produced by systems trained on data? And what are the implications, if some forms of culture are easier to generate and others much less so?
AI Powered Community Micro-Grid for Resiliency and Equitability
Principal Investigator: Peeyush Kumar
Additional Microsoft researcher(s): Vaishnavi Ranganathan, Shivkumar Kalyanaraman, Srikanth Kandula, Srinivasan Iyengar, Swati Sharma, Weiwei Yang, Bodhi Priyantha, Asta Roseway
Seeking collaborators from: Academia, non-academia
Preferred geographies: Canada, India, United States
Description:
The rise of affordable small-scale renewable energy, particularly rooftop solar, is revolutionizing energy systems around the world. Traditional large-scale electric grids often pose inefficiency and equity issues that more acutely affect marginalized communities. Marginalized communities often bear the brunt of “energy poverty”, spending a significant portion of household income, up to 5x the average rate, on energy needs. This model further excludes these communities from energy decisions, often resulting in inequitable service during disasters and unrepresentative rate structures. AI-powered microgrids offer a way to address these issues: small, smarter versions of the grid that emphasize individuals as proactive contributors. These community-based, AI-powered microgrids can enhance energy resilience, notably when infrastructure failures or natural calamities disrupt power. They encourage local self-reliance, increasing energy efficiency by minimizing transmission distances. Integrable with renewable energy and scalable, microgrids promote sustainability and can reinvigorate local economies through job creation and technological advances. More importantly, they hold the potential to democratize energy distribution, combating energy poverty. In embracing this challenge, we envision collaborating with external fellows from academic institutions, public and private labs, government agencies, and grassroots non-profits, all of whom share an interest in community-driven microgrid development. Our goals encompass generating cross-disciplinary research insights with tangible outcomes in AI analytics, operations research, systems design, social innovation, ethics in AI, energy economics, climate action, and urban planning. Specific outcomes will be co-defined with fellows during collaboration.
Ideal candidate
Ideal collaborators will be multi-disciplinary, including grassroots innovators or academic experts from Urban Planning, Environmental Justice, Social Work, Community Psychology, Economics, and/or Computer Science.
Fellowship candidates are still being considered for this research challenge, and we may announce additional fellows during the month of February. If you have applied to this challenge, you can expect to receive your proposal status notification by the end of February 2024. For questions, please contact msfellow@microsoft.com.
Additional details
The rise of affordable small-scale renewable energy, particularly rooftop solar, is revolutionizing energy systems around the world. Traditional large-scale electric grids often pose inefficiency and equity issues that more acutely affect marginalized communities. Marginalized communities often bear the brunt of “energy poverty”, where a significant portion of a household’s income is spent on energy needs. Large utility providers frequently fail to account for income disparities in their rate structures, imposing flat rates that disproportionately burden lower-income households. These cost structures fold in operational costs driven by high energy consumption in richer neighborhoods and normalize them across all customers, so marginalized communities end up paying for the higher energy usage of wealthier ones. These communities are also vulnerable during disaster events, often being the last to have service restored after blackouts. This centralized model presents challenges to the integration of diverse, local renewable energy sources and can contribute to high and unpredictable energy prices. Energy decisions are typically made by distant entities, which can leave marginalized communities unrepresented and their unique needs and challenges neglected. Lastly, traditional utilities often lack the local focus required to provide job growth, community empowerment, and energy democracy.
Microgrids offer a way to address these issues. As the name implies, community microgrids are small-scale versions of the grid, except that they are smarter and put more focus on individuals as active participants instead of as passive consumers. Community-centered microgrids can contribute to energy resilience, especially in the face of frequent natural disasters and aging infrastructure that may leave communities without power. They empower local regions by reducing reliance on distant power plants and increasing energy efficiency, as energy does not have to travel long distances. Furthermore, microgrids are highly scalable and can integrate with renewable energy sources, making them a sustainable choice for communities. They also spur local economic growth by creating jobs and fostering technological innovation. By establishing localized control over energy, communities can also ensure more equitable distribution and pricing, thereby reducing energy poverty.
This is a newly developing field with rising research and grassroots interest in community-supported microgrids. Cross-disciplinary research will yield insights and research papers across multiple domains. Ideal collaborators will be multi-disciplinary, including experts in Urban Planning, Social Work, Community Psychology, Economics, Computer Science, and Electrical and Civil Engineering.
This work has the potential to develop insights and publish academic papers in the fields of AI (for data-driven analytics and decision making), operations research (for optimization), systems (for design, development, and implementation), social innovation (on the intersection of tech innovation and community engagement), ethics and AI (to address systemic AI challenges affecting marginalized communities), urban planning, energy economics, and others. We invite academic institutions, public and private labs, government agencies, and non-profits mobilizing grassroots engagement, which would yield opportunities for panel discussions, workshops, and more.
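To make the optimization angle above concrete, here is a deliberately simple, hypothetical sketch (our illustration, not a project deliverable): a greedy dispatch policy for a shared community battery. The AI and operations-research work this challenge envisions would replace such heuristics with forecasting and optimization under real tariffs, reliability targets, and equity constraints; all numbers and names below are made up.

```python
# Toy community-microgrid simulation: store local solar surplus in a shared
# battery and draw it down later, importing from the main grid only as a
# last resort. Values are illustrative, not real measurements.

solar = [0.0, 0.0, 1.5, 4.0, 5.0, 4.5, 2.0, 0.0]  # kWh generated per interval
load = [1.0, 1.2, 1.0, 1.5, 2.0, 2.5, 3.0, 2.0]   # kWh consumed per interval
CAPACITY = 5.0                                     # shared battery size, kWh

charge = 0.0       # current battery state of charge, kWh
grid_import = 0.0  # total energy drawn over long-distance transmission, kWh

for gen, use in zip(solar, load):
    surplus = gen - use
    if surplus >= 0:
        # Store the surplus locally; anything beyond capacity is curtailed.
        charge = min(CAPACITY, charge + surplus)
    else:
        deficit = -surplus
        from_battery = min(charge, deficit)
        charge -= from_battery
        # Only the remaining deficit must be imported from the main grid.
        grid_import += deficit - from_battery

print(f"Grid import: {grid_import:.1f} kWh of {sum(load):.1f} kWh total load")
```

Even this naive policy cuts grid imports sharply on the toy data; the research questions above concern doing this fairly and reliably at community scale.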
Copyright Protection for User Data in the Era of LLMs
Principal Investigators: Fangzhao Wu, Xing Xie
Seeking collaborators from: Academia
Preferred geographies: Africa, Australia, Canada, Europe, Hong Kong, India, Japan, Korea, Latin America, New Zealand, Singapore, Taiwan, United States
Description:
Large language models like ChatGPT are trained on enormous amounts of data collected from the Internet, much of which is generated by Internet users and some of which may be protected by copyright licenses. However, these users currently get no credit from the companies that train and own these LLMs. The goal of this research challenge is to protect the copyright of user data against possible infringement in the age of large language models. This task is not easy, and it raises several key research problems: how to define and prove copyright infringement for a large language model, how to make user data more robust against copyright infringement, and how to make copyright infringement easier to detect when it happens. Through this project we want to collaborate with interdisciplinary researchers, such as legal scholars, who are passionate about this challenging but important topic. Possible outcomes of this collaboration include academic papers reporting our findings and methodologies on copyright protection of user data, as well as open-source tools or datasets to enhance research and facilitate application in this field.
The research problems for this project include but are not limited to:
- How can we define and prove copyright infringement for a large language model? Is it possible to verify whether a copyrighted text was used to train an LLM?
- How can user data made public on the Internet be made more robust against copyright infringement by large language models?
- Is it possible to design techniques, such as watermarking, that make copyright infringement easier to detect when it happens? (A toy illustration follows this list.)
- Is it necessary to protect the copyright of content generated by large language models under human supervision? If so, how can it be protected?
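To make the watermarking question above concrete (see the third bullet), here is a deliberately naive, hypothetical sketch, our illustration rather than a method proposed by this challenge: it embeds an invisible provenance mark in user text using zero-width Unicode characters. The function names are ours, and the scheme is trivially stripped by text normalization, which is exactly the robustness problem the challenge poses.

```python
# Toy provenance watermark: hide an owner ID in a text as zero-width
# characters, then recover it later. Real training pipelines normalize
# text, so a robust watermark is an open research problem.

ZERO = "\u200b"  # zero-width space encodes bit 0
ONE = "\u200c"   # zero-width non-joiner encodes bit 1

def embed_watermark(text: str, owner_id: str) -> str:
    """Append the owner's ID, encoded as invisible characters, to the text."""
    bits = "".join(f"{byte:08b}" for byte in owner_id.encode("utf-8"))
    return text + "".join(ONE if b == "1" else ZERO for b in bits)

def extract_watermark(text: str) -> str | None:
    """Recover an embedded owner ID, if any zero-width payload is present."""
    bits = "".join("1" if ch == ONE else "0" for ch in text if ch in (ZERO, ONE))
    if not bits:
        return None
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8", errors="replace")

marked = embed_watermark("My original blog post.", owner_id="alice")
print(extract_watermark(marked))  # prints: alice
```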
Ideal candidate
The ideal external collaborator for this research challenge is either of the following:
1) A legal scholar who has experience in copyright protection research and practice, and a strong interest in LLMs and responsible AI.
2) An AI researcher or data scientist who is working on user data copyright protection for LLMs and has an interest in analyzing its social impact.
*Note: This challenge is accepting collaborative proposals whereby up to two (maximum) candidates can submit one proposal jointly. To submit a collaborative proposal with another candidate, please submit a single proposal to this research challenge. In the submission, please indicate as part of your “Statement of Interest” (document uploaded during submission) that this is a “collaborative proposal” and include the additional collaborator name, organization/affiliation, and how this individual will contribute to this work. Please contact us if you have any questions.
Fellowship candidates are still being considered for this research challenge, and we may announce additional fellows during the month of February. If you have applied to this challenge, you can expect to receive your proposal status notification by the end of February 2024. For questions, please contact msfellow@microsoft.com.
Generative AI and Plural Governance: Mitigating Challenges and Surfacing Opportunities
Principal Investigator: Madeleine I. G. Daepp
Additional Microsoft researcher(s): E. Glen Weyl, Gonzalo Ramos
Seeking collaborators from: Non-academia
Preferred geographies: Africa, Australia, Canada, Europe, Hong Kong, India, Japan, Korea, Latin America, New Zealand, Singapore, Taiwan, United States
Description:
Recent advances in the development of generative artificial intelligence create both new possibilities and unprecedented challenges for democratic governance processes. Plural societies, those that aim to foster constructive and inclusive governance, require robust mechanisms for surfacing and integrating perspectives of diverse constituents. Fellows will explore one of two aspects of this moment: (1) harnessing large language models or other AI tools for improved facilitation and sensemaking of public engagement processes or (2) protecting established democratic and participatory systems from the misuse of generative AI.
Applicants could include nonprofit leaders, journalists, or researchers focused on robust democratic processes. We are seeking practitioners whose work is affected by challenges with respect to generative AI or who actively run deliberative processes that would benefit from AI-based tooling. We are particularly keen on collaborating with fellows with expertise (practical or academic) in political science, urban planning, law, social work, or related disciplines.
We expect the work to lead to tangible outcomes such as open-source code releases, new standards, or case studies in addition to papers or reports. We are particularly interested in identifying practitioners who conduct deliberative processes and who, as collaborators, could provide real-world testbeds to support the co-design of tooling with our team, or developers who would contribute new tools. Finally, we expect our collaborators to participate in a relevant workshop or MSR summit panel that helps to coalesce the nascent research space around generative AI and democratic governance.
Ideal candidate
We are seeking practitioners whose work is affected by challenges with respect to generative AI or who actively run deliberative processes that would benefit from AI tooling. Applicants could include nonprofit leaders, journalists, or researchers focused on robust democratic processes. We are particularly keen on collaborating with fellows with expertise (practical or academic) in political science, urban planning, law, social work, or related disciplines.
Additional details
Recent advances in the development of generative artificial intelligence create both new possibilities and unprecedented challenges for democratic governance processes. Plural societies, those that aim to foster constructive and inclusive governance, require robust mechanisms for surfacing and integrating perspectives of diverse constituents. Fellows will explore one of two aspects of this moment: (1) harnessing large language models or other AI tools for improved facilitation and sensemaking of public engagement processes or (2) protecting established democratic and participatory systems from the misuse of generative AI.
Potential application topics could include:
- The development of an open-source prompt library offering templates for the use of LLMs to facilitate sensemaking, generating accurate and representative reports from public comment processes that are robust to known biases in foundation models (a hypothetical entry is sketched after this list).
- Efforts supporting election integrity in the context of generative AI, such as evaluations of the efficacy of fact-checking or other mitigation strategies to minimize the effects of generative AI on opinion formation.
- The augmentation of existing digital deliberation tools with interactive and adaptive modules that use LLMs to better elicit rich responses from diverse stakeholders.
- The deployment of provenance tools to support media and fact-checking organizations in understanding and correctly portraying the origin of content, including the role large foundation models played in generating it and the origins and endorsements it has among relevant political actors.
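As a purely hypothetical illustration of the first topic above, an entry in such a prompt library might pair a reusable template with a small helper. The template wording and names (SENSEMAKING_TEMPLATE, build_prompt) are our own invention, not an existing tool; the bias-robust phrasing of such templates would be the actual research contribution.

```python
# Sketch of one prompt-library entry for summarizing public comments.

SENSEMAKING_TEMPLATE = """\
You are summarizing public comments for a {process_type} process.
Group the comments below into themes. For each theme, report:
- a neutral one-sentence description,
- roughly how many comments raised it,
- at least one verbatim quote.
Do not infer commenter demographics, and do not omit minority viewpoints.

Comments:
{comments}
"""

def build_prompt(process_type: str, comments: list[str]) -> str:
    """Fill the sensemaking template with a deliberation's comments."""
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(comments))
    return SENSEMAKING_TEMPLATE.format(process_type=process_type, comments=numbered)

print(build_prompt("zoning review", ["More bike lanes please.", "Parking is scarce."]))
```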
Multimodal Knowledge Understanding and Representation for Population-scale Copilots
Principal Investigators: Akshay Nambi, Tanuja Ganu
Seeking collaborators from: Academia, Non-academia
Preferred geographies: Africa, Australia, Canada, Europe, Hong Kong, India, Japan, Korea, Latin America, New Zealand, Singapore, Taiwan, United States
Description:
Large Language Models (LLMs) are rapidly transforming the technology landscape, enabling and empowering users, including developers and end users, to address real-world applications in various domains. Multimodal knowledge understanding and representation is critical for perceiving general modalities of data, including text, images, audio, and video. Accurate understanding and grounding based on such multimodal representation is essential for downstream copilot applications in domains like education (aligning geometry figures or flow charts with the corresponding text), healthcare (aligning x-ray images with reports), sustainability (representing large process diagrams together with user manuals, operations logs, compliance requirements, etc.), and many more. Additionally, in most such population-scale societal applications, the multimodal artifacts (text, images, audio, etc.) may be multilingual, spanning high-, medium-, and low-resource languages. Having a unified and accurate multimodal and multilingual representation of such artifacts is a challenge.
This research challenge aims to bring together interdisciplinary fellows from systems, AI/ML, linguistics, and human-computer interaction (HCI) backgrounds to work with researchers at Microsoft to:
- Investigate the applicability and current limitations of multimodal, multilingual LLMs for societal applications.
- Develop new models, datasets and architectures for multimodal understanding and representation.
- Build and evaluate multimodal, multilingual LLMs for a few societal copilots in the areas of education, healthcare, and sustainability.
We expect that this research challenge will produce, but will not be limited to, white papers, research papers, new models/architectures, new benchmarks and datasets, and broader collaboration with academia and industry on deployments of such models for real-world applications.
Ideal candidate
The ideal external collaborator would be a researcher (academia or industry) who has exposure to and experience with multimodal and multilingual knowledge representation and would be interested in applying it to a societal real-world application like education or healthcare. The collaborator could also be from an NGO or startup already working in the specific application domain, with domain expertise and an interest in and commitment to addressing this research problem.
Additional details
To learn more about this area of research, view our project page: Project VeLLM.
Reducing the Digital Divide of Generative AI in the Global South
Principal Investigator: Daniela Massiceti
Additional Microsoft researcher(s): Cecily Morrison, Jacki O’Neill, Maxamed Axmed
Seeking collaborators from: Academia, non-academia
Preferred geographies: Africa, Australia, Canada, Europe, Hong Kong, India, Japan, Korea, Latin America, New Zealand, Singapore, Taiwan, United States
Description:
Generative AI (Gen AI) models, such as ChatGPT and DALL-E, hold the potential to transform how we access information across applications including healthcare, agriculture, education, and the future of work. These models, however, are deeply rooted in the Global North – from the companies that develop them, to the data and metrics used to train and evaluate them. This leaves an open question as to how well they will serve the Global South, with the significant risk that they will widen the digital divide and existing inequalities.
This research challenge aims to move toward more geographically equitable Gen AI models and applications through a deeply socio-technical study of these models in Global South contexts. The selected Fellows will work alongside a multi-disciplinary team of Microsoft researchers toward the following three objectives:
- To systematically analyze the current robustness of Gen AI models for the Global South, based on the study of specific applications, communities, and people.
- To design and validate approaches to create more robust and inclusive experiences for diverse, global users of Generative AI.
- To create a multi-year research roadmap that identifies the key challenges and opportunities to deliver equitable AI-infused applications that serve the Global South.
We welcome proposals from multi-disciplinary social scientists and development economists with experience in studying AI-infused technologies.
Ideal candidate
Proposals are specifically sought from social scientists and development economists from the Global South with a PhD degree and at least 2 years of work experience in a relevant field. Candidates should have deep expertise in mixed methods research (quantitative and qualitative). They should also have practical experience working with communities and organizations in the Global South, and a deep understanding of the social and cultural contexts that shape the use and adoption of AI technologies. Experience in current-day AI technologies (e.g., generative models) is preferred.
Additional details
Generative AI (Gen AI) models such as ChatGPT and DALL-E hold enormous potential for transforming the way we interact with technology and access information, promising to accelerate development in the Global South in fields like healthcare, agriculture, education, and the future of work.
Current models, however, are deeply rooted in the Global North – from the companies that develop them, to the data that is used to train them, to the metrics we use to evaluate them. This poses a “contextual gap” in our understanding of how these Gen AI models will perform and be experienced by users in the Global South – a gap that limits our understanding of how well AI-infused services will serve these communities, with the risk that, in the worst case, they will fail and widen the digital divide and inequality between communities and countries.
This research challenge aims to move toward more geographically equitable Gen AI models through a deeply socio-technical study of these models in Global South contexts. The challenge will bring together a multi-disciplinary team, spanning computer scientists and social scientists, to achieve the following three objectives:
- A systematic and fundamentally multi-disciplinary analysis of how Gen AI-infused applications perform in Global South contexts, through the study of specific applications, communities and people. The analysis will specifically aim to surface and categorize the failings of current models, building on existing research from the Microsoft Africa Research Institute (MARI) and Microsoft Research Cambridge and India labs.
- The design and validation of approaches to address these failings and create more equitable experiences for diverse sets of users, focusing on those in the Global South. Approaches could address datasets (e.g., rethinking data collection paradigms), evaluation (e.g., the development of novel socio-technical metrics), and user experience (e.g., techniques that allow a user to steer a model’s performance), and might intervene at the level of the underlying model or the applications built on top of it.
- The development of a multi-year research roadmap that identifies the key challenges and opportunities to deliver equitable AI-infused applications that serve the Global South. This roadmap will be used by Microsoft, decision makers and technologists alike to inform future research and investment opportunities.
Fellows will work alongside a team of Microsoft machine learning and social science researchers toward these three objectives.
Regulating AI in Light of the Challenges of Doing Responsible AI in Practice
Principal Investigator: Solon Barocas
Additional Microsoft researcher(s): Jenn Wortman Vaughan, Hanna Wallach
Seeking collaborators from: Academia, non-academia
Preferred geographies: Africa, Australia, Canada, Europe, Hong Kong, India, Japan, Korea, Latin America, New Zealand, Singapore, Taiwan, United States
Description:
A small but growing body of social scientific research has explored the many challenges of doing responsible AI in practice. Using empirical methods like ethnography, site observations, interviews, and more, this work has found that there is often a significant gap between the aspirational goals of responsible AI principles and what is currently being achieved by the frontline workers charged with realizing these principles in practice.
These studies have identified a range of difficulties that offer important lessons for anyone seeking to develop an effective regulatory regime for AI.
First, practitioners often confront technical limitations when seeking to assess AI systems and address possible problems. The underlying science of AI evaluation remains immature in many cases. The difficulty of reliably assessing the capabilities of frontier models provides a clear example of this problem: current methods are far from systematic and provide rather spotty coverage. As a result, those doing the work to ensure responsible development and use of such models often butt up against the limits of the current scientific understanding of the technology.
Second, practitioners also face practical constraints that limit how well they can do their jobs. For example, developing an evaluation dataset to perform a rigorous assessment of the fairness of an AI system can be a significant and difficult undertaking, especially when it requires collecting sensitive information about demographic attributes like gender, race, etc. Similarly, adapting general purpose methods from the academic literature for practical use in evaluating specific products or services is rarely straightforward or easy. The same often holds for fairness toolkits, even those purported to be “general purpose.”
Third, organizational dynamics also complicate practitioners’ work on responsible AI. Such work often requires collaboration across teams with quite different skills and expertise, leading to cross-functional frictions. Practitioners also sometimes lack the necessary institutional support to effectively execute their duties. And incentive structures do not always align with the goals of responsible AI policies, creating difficult tensions for practitioners. That is to say nothing of the challenge of keeping pace with rapid technological developments and the push to ship products and services.
The coming wave of AI regulation needs to grapple with these challenges if it is to have its intended effects in practice. Many of the regulatory proposals currently under discussion, especially those that include requirements for evaluation (e.g., audits, impact assessments, etc.), often take for granted that actors will be able to figure out how to put in place appropriate processes, procedures, and tools to fulfill these requirements—and will have the necessary institutional support to do so. The research has so far suggested this is not often the case.
These difficulties are sometimes invoked as reasons to resist AI regulations. In this challenge, we instead call on the broader community, especially those working in law and policy and the social sciences, to collaborate with Microsoft Research to figure out how the law can be made more responsive to the known challenges of doing responsible AI in practice and those yet to be discovered. Legislation and regulation should reflect what is technically possible at the moment, what can be made possible with sufficient investment, and what is likely to remain infeasible. They should also explicitly target the practical impediments and organizational dynamics that impede efforts at responsible AI.
To that end, this challenge seeks to (1) promote original legal scholarship that is more deeply informed by social scientific insights on the challenges of doing responsible AI in practice, (2) engage in further social scientific research on responsible AI, with a particular focus on how regulated entities are seeking to comply with existing or forthcoming laws, and (3) bring these insights into the policymaking process via direct engagement with government and civil society stakeholders.
Ideal candidate
We are particularly interested in applications from (1) legal scholars who rely on methods from the social sciences or engage with findings from the social sciences to develop a better understanding of how law and policy function on the ground, ideally with prior experience studying the regulation of technology and crafting empirically informed policy recommendations; (2) social scientists who study the everyday practices of teams tasked with identifying and addressing the risks posed by technology, including complying with law and policy, ideally with a focus on responsible AI; and (3) human-computer interaction researchers who study the practical challenges that teams face when seeking to identify and address the risks posed by technology, with a focus on organizational dynamics, cross-functional collaboration, and design considerations, ideally with prior experience studying responsible AI specifically.
We welcome applications from researchers based in the academy, industry, government, or civil society.
Regulatory Innovation to Enable Use of Generative AI in Drug Development
Principal Investigator: Stephanie Simmons
Additional Microsoft researcher(s): Junaid Bajwa (Health Futures), Hannah Richardson (Health Futures Scientific & Regulatory Affairs team), Grace Huynh, Mandi Hall, Michelle Francis
Seeking collaborators from: Academia, non-academia
Preferred geographies: Canada, Europe, United States
Description:
This research challenge is to develop policy and practice recommendations for regulators and trade organizations in the biopharma industry, to enable trustworthy use of foundation models and generative AI in the drug development process. According to most estimates, it takes an average of 10-15 years and costs between $1B and $2.5B to bring a drug to the US market. AI/ML could significantly accelerate development timelines, reduce costs, and optimize current processes, while enabling a future of personalized medicine.
Generative AI has demonstrated remarkable capabilities in data structuring and causal reasoning, and further research involving multimodal and longitudinal datasets could transform healthcare at both the individual and population level. Despite the potential benefits, and the opportunity costs of not adopting advanced technologies, there is a lack of shared best practices and regulatory guidance on employing AI in the drug development process. There are also potential barriers to collaboration such as competitive pressures, ethical issues, and cultural dynamics. Establishing appropriate regulatory frameworks and standards will involve building trust and supporting transparency. It may also require policymakers and industry leaders to rethink traditional business models and approaches.
This challenge is timely, as promising use cases for generative AI across the drug development lifecycle are being explored and regulators such as the FDA, EMA and MHRA are actively seeking feedback. Thus, we propose exploring regulatory frameworks and other conditions necessary for the successful implementation of AI/ML in the biopharma industry. We expect to participate in industry working groups and convene multidisciplinary industry stakeholders to develop balanced, risk-based, and well-grounded regulatory policy and practice proposals.
Ideal candidate
The Scientific & Regulatory Affairs team within Health Futures is well placed to support a fellow for this research topic. We have a small but diverse team that combines clinical, legal, compliance and regulatory expertise in healthcare and life sciences. We translate legal and regulatory requirements into practices that can be operationalized within our dynamic business. We take an agile approach to developing programs that balance compliance (quality systems, safety, privacy, security, ethics, etc.) with support for innovation.
We would be seeking a regulatory affairs attorney with an additional degree or strong background experience in pharmaceutical development and/or healthcare policy (ideally both). A fellow with this expertise and skillset would complement our team by bringing experience developing new policy proposals and putting them into practice in real-world biopharma settings. This would provide our team with an outside-in perspective and ground our work in practical experience.
We are seeking a futurist with a general understanding of different types of AI systems who appreciates the potential that these systems have to transform the drug development process as we know it. The ideal candidate would have a track record of applying innovative approaches to law, regulation and policy. The fellow would develop policy positions in a highly complex and dynamic area. Although proposed standards may be without much precedent, they would be informed by a background in the drug development field and an understanding of implementation challenges.
Alongside this expertise, the fellow should also have demonstrated success in policy or academic writing, as well as leading policy discussions, to enable productive and high-quality generation of outputs based on our research questions.
Fellowship candidates are still being considered for this research challenge, and we may announce additional fellows during the month of February. If you have applied to this challenge, you can expect to receive your proposal status notification by the end of February 2024. For questions, please contact msfellow@microsoft.com.
Additional details
The use of AI, including generative AI, in drug development is becoming increasingly popular due to its ability to accelerate the discovery of new drugs, reduce R&D spending, and efficiently analyze real-world data. AI has the potential to generate $100B across the pharmaceutical and medical product industries, according to McKinsey. However, there are concerns about the trustworthiness of generative AI models and their ability to produce reliable results.
This research challenge is to develop policy recommendations for regulators and relevant trade organizations in the biopharma industry to enable use of generative AI in the drug development process. We would examine existing regulatory frameworks for the use of AI in medical devices and propose measures to address specific risks associated with the use of generative AI models. This work aligns with Microsoft’s work on Responsible AI generally, and with the AI innovations being developed within Health Futures specifically, including AI technologies that generate real-world evidence to support regulatory decision-making.
An appropriate regulatory framework must deal with questions about accountability, transparency, explainability, privacy, fairness, and reliability and safety. For example:
- AI for drug discovery and repurposing: AI can be used to identify new drug candidates, optimize existing drugs, or find new indications for approved drugs by analyzing large and complex datasets. However, the underlying algorithms and models may not be fully interpretable or explainable, which raises questions about how to validate their accuracy, reliability, and safety. The industry needs clear guidance on how to ensure data quality, model transparency, and algorithmic accountability for AI-based drug discovery and repurposing.
- AI for clinical trial design and execution: AI can be used to enhance various aspects of clinical trials, such as patient recruitment, stratification, randomization, monitoring, adherence, endpoint assessment, and analysis. However, the use of AI may introduce new sources of variability, uncertainty, or bias that may affect the validity and integrity of the trial results. The industry needs clear guidance on how to ensure scientific rigor, ethical conduct, and regulatory compliance for AI/ML-based clinical trials.
In addition to proposing an appropriate risk-based regulatory framework, our research will include a consideration of implementation science, to facilitate successful implementation of AI systems and processes within the biopharma industry. This work may involve conducting interviews to understand common workflows and developing a change management protocol, to enable stakeholders to adapt internal processes and culture to successfully implement these technologies.
Medical drug and device regulators, such as the US FDA and UK MHRA, have not yet provided guidance on how to ensure trustworthiness of AI in the drug development process, and this area is ripe for regulatory innovation proposals. We propose examining specific use cases to assess which AI systems are fit for which purposes and whether some systems should be classified for research use only, rather than for use in regulatory reviews. Regulators are seeking feedback from stakeholders to shape the future landscape, and this is an opportune time to consider policy proposals to advance the use of LLMs and other foundation and generative models in this critical industry.
Sociotechnical Approaches to Measuring Harms Caused by AI Systems
Principal Investigator: Hanna Wallach
Additional Microsoft researcher(s): Chad Atalla, Solon Barocas, Su Lin Blodgett, Alex Chouldechova, Miro Dudik, Alexandra Olteanu, Emily Sheng, Dan Vann
Seeking collaborators from: Academia and non-academia
Preferred geographies: Canada, Europe, United States
Description:
This research challenge focuses on sociotechnical approaches to measuring harms caused by AI systems, as well as measurement in the context of AI systems more generally. The challenge will bring together a small number of fellows to work with researchers and applied scientists at Microsoft on developing new measurement approaches and interrogating existing ones. The challenge will involve a) developing methods for establishing the validity and reliability of different measurement approaches; b) developing new measurement approaches that are valid, reliable, specific, extensible, interpretable, and actionable; c) developing ways to prioritize sociotechnical considerations when developing and using different measurement approaches. The outcomes of the challenge will be collaboratively authored research papers, as well as a shared language and a set of approaches that Microsoft and other organizations could use to measure harms caused by AI systems. For example, the resulting approaches might provide a way for organizations to operationalize parts of the NIST AI Risk Management Framework’s “measure” function. The shared language and set of approaches will also be documented in a public whitepaper, along with recommendations for their use and guidance on when they shouldn’t be used.
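As one small, concrete example of what establishing reliability can involve (our illustration, not the team’s method), a harm-measurement approach that relies on human annotation is often checked via inter-annotator agreement; Cohen’s kappa corrects raw agreement for agreement expected by chance.

```python
# Chance-corrected agreement between two annotators labeling the same
# model outputs for harm. Low kappa signals an unreliable measurement.

from collections import Counter

def cohens_kappa(labels_a: list[str], labels_b: list[str]) -> float:
    """Cohen's kappa for two annotators' labels over the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # Agreement expected if each annotator labeled independently according
    # to their own marginal label frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["harmful", "benign", "benign", "harmful", "benign", "benign", "harmful", "benign"]
b = ["harmful", "benign", "harmful", "harmful", "benign", "benign", "benign", "benign"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.75 raw agreement, kappa = 0.47
```

Validity, whether the labels capture the harm they claim to, is the harder, complementary question this challenge takes up.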
Ideal candidate
Applicants should have a strong demonstrated commitment to sociotechnical work. Their research should be in or span the following fields or fields related to them: information science, human-computer interaction, computational social science, statistics, political science, sociology, science and technology studies, public policy, and law. Interdisciplinary scholars are especially welcome to apply. We are interested in applicants from academia and civil society, and especially applicants from programs or organizations that have a deep commitment to measurement. We are also interested in applicants from other industry organizations who have expertise that is complementary to that found at Microsoft.
Storytelling and Futurism
Project Lead: Matt Corwine
Additional Microsoft researcher(s): to be announced after fellows are selected (will align with research explorations)
Seeking collaborators from: Academia, non-academia
Preferred geographies: Africa, Australia, Canada, Europe, Hong Kong, India, Japan, Korea, Latin America, New Zealand, Singapore, Taiwan, United States
Description:
Storytelling can be a powerful tool to inspire, guide and share excitement about research. The Microsoft Research “storyteller-in-residence” uses stories as a framing device to support interdisciplinary teams of researchers in developing and communicating the broader vision for their work. Through collaborations with researchers, fellows will create artistic works that explore and illustrate multiple envisioned futures for society and humanity.
Ideal candidate
This fellowship would be ideal for students who have completed or are pursuing a Master of Fine Arts (MFA) or a Ph.D. in creative writing, art, design, literature, journalism or a related field. Their skills could include creative writing, filmmaking, or the development of graphic novels. Writers working in any genre or style are welcome to apply. Letters of recommendation that demonstrate your curiosity, thoughtfulness and imagination are encouraged.
Supporting the Responsible AI Red-Teaming Human Infrastructure
Co-Principal Investigator(s): Jina Suh, Mary Gray
Seeking collaborators from: Academia, Non-academia
Preferred geographies: Africa, Australia, Canada, Europe, Hong Kong, India, Japan, Korea, Latin America, New Zealand, Singapore, Taiwan, United States
Description:
This research challenge aims to understand and support the emerging form of digital labor that enables the safe and responsible deployment of generative AI technologies. As AI becomes increasingly integral to our digital ecosystem, the potential risks and harms of unchecked AI capabilities grow. Red teaming, a novel form of what has been called “data enrichment work”, involves probing AI models for harmful outputs and developing mitigation strategies. However, this work exposes red teamers to harmful content, potentially leading to psychological distress, secondary trauma, and moral injury. Despite the critical nature of this work, the psychological impact and necessary support mechanisms remain largely unexplored. This proposal focuses on understanding the psychological impact of red teaming, developing practical recommendations for responsible AI practices, and exploring how generative AI technologies can optimize data enrichment work. The ultimate goal is to ensure that the humans enabling responsible AI are protected from undue harm.
In response to the rising popularity of RAI red-teaming as a solution for AI safety, it is crucial that we focus on the human infrastructure that enables the deployment of responsible AI technologies. This research challenge aims to advance our scientific knowledge around the human infrastructure necessary to ensure the safe deployment of foundation models along three key pillars:
- Understanding the Psychological Impact of Red-Teaming: We aim to explore how the psychological impact of red-teaming work differs across roles, activities, employment status, and involvement levels. We seek to understand personal psychological triggers and trauma from exposure, to quantify the psychological impact of red-teaming work, and to identify positive coping mechanisms and organizational support structures. In this aim, understanding individual differences in the context of standardized and generalized organizational practices is crucial.
- Developing Practical Recommendations for RAI Practices: We aim to provide guidance on recruiting, onboarding, training, and maintaining the human infrastructure for RAI. We seek to recommend short- and long-term psychological wellbeing support mechanisms for all data enrichment workers, from crowd workers and vendors to full-time employees. In this aim, balancing the needs of individuals with business and organizational needs is crucial.
- Developing Tools for Red-Teaming: We aim to explore how generative AI technologies can be utilized to optimize data enrichment work, to minimize exposure, to support intelligent and personalized exposure, or to help cope with exposure for individuals, teams, and organizations. We seek to explore the sociotechnical considerations for deploying such technologies. In this aim, understanding the impact of deploying such automation on data enrichment work, the workers, and their profession is crucial.
Ideal candidate
Accepting collaboration submissions from the following fields of expertise:
- Clinical psychologists with expertise in secondary trauma, occupational wellbeing, or data enrichment work could provide insights into the psychological impact of red-teaming work; evidence-based interventions to reduce exposure, monitor symptoms, and cope with exposure; the association between psychological impact and triggers such as harmful content exposure and behaviors; and ways to measure the impact of mitigating measures. Their expertise would be particularly valuable in understanding individual differences in the context of standardized and generalized organizational practices.
- Organizational psychologists with expertise in occupational wellbeing, data enrichment work, remote/hybrid work and organizational support structures could provide insights into how to recruit, onboard, train, and maintain the human infrastructure for RAI. They could help develop short- and long-term psychological wellbeing support mechanisms for all data enrichment workers, from crowd workers to full-time employees. Their expertise would be crucial in balancing the needs of individuals and the business and organizational needs.
- Social scientists with expertise in sociology, information science, and communications, with experience studying data enrichment workers or crowd workers, could help examine the experiences of red teamers, including their working conditions, the psychological impact of their work, and the broader societal implications of these forms of digital labor. They could investigate content moderation and red-teaming as new forms of information work that require special and timely attention from the RAI, sociology, occupational health, and related fields. They could help understand the impact of deploying generative AI technologies on data enrichment work, the workers, and their profession. Their expertise would be particularly valuable in exploring how generative AI technologies can be utilized to optimize data enrichment work, as well as how to develop policies and processes aimed at supporting workers.
We are also accepting submissions from scholars who have demonstrated experience in, or interest in, interdisciplinary research problems across social, behavioral, and clinical science and who bring interdisciplinary perspectives to working with the above fields of expertise.
Additional details
Artificial Intelligence (AI) has become an integral part of our digital ecosystem, offering unprecedented capabilities while also introducing potential risks and harms if misused. With the advent of generative AI technologies, we are at a new frontier where the potential risks of harm are even greater if these AI capabilities remain unchecked. As part of deployment safety and Responsible AI (RAI) processes, red-teaming has emerged as a critical process in developing responsible AI and achieving compliance before the deployment of generative AI technologies. Recent commitments by leading tech companies to bolster their red-teaming efforts underscore the importance of this process in ensuring AI safety.
RAI red-teaming, content moderation, and data curation and labeling are content-based activities required to minimize harmful content generation and to ensure the robustness and safety of generative AI systems. Unlike AI security red-teaming, which focuses on simulating potential adversaries to identify potential attack vectors and security vulnerabilities, RAI red-teaming involves probing AI models for harmful outputs (e.g., child endangerment, abuse, violence, self-harm, profanity, weapons) and then developing mitigation strategies and updating the models to avoid such outputs. While red-teaming work generally involves adversarial thinking (i.e., putting oneself in the malicious actor’s shoes), RAI red-teaming requires both generating potentially harmful content (content generation) and reviewing the nature of the harm in that content (content moderation).
Prior research has shown that people who conduct such content work are exposed to harmful content that may lead to psychological distress, secondary trauma, and poor mental health. In addition, people engaged in red-teaming may experience moral dissonance from having to balance their professional obligations (e.g., successfully simulating a bad actor) with their personal values and ethical standards, and may place negative judgment on themselves as a consequence. However, the nature and psychological impact of such critical content work, and the tools and processes necessary to support it, remain largely unexplored.
Towards Creative-Centered AI: Opportunities and Challenges at the Intersection of Creatives, AI, and Society
Co-Principal Investigators: Vera Liao, Gonzalo Ramos, Kate Crawford, Jenny Williams
Seeking collaborators from: Academia, non-academia
Preferred geographies: Canada, Europe, Hong Kong, India, Japan, Korea, Latin America, Singapore, Taiwan, United States
Description:
The emergence of generative technologies is on a course to become inextricably ingrained in the tools, processes, and mindset of creative practitioners. However, little is yet understood about the effects this technology adoption has and will have on people, practices, and society.
This research challenge aims to bring together interdisciplinary fellows from social sciences, humanities, arts, and sociotechnical systems backgrounds to work with researchers at Microsoft to: 1) investigate the impact of AI technologies, especially the recent advancement of large-scale generative models, on artists and creatives; 2) understand creatives’ values, needs, and concerns in working with AI technologies through participatory and community-driven approaches; and 3) shape the future development of AI-based solutions by informing technologies, services, infrastructures, and policies that empower creatives’ capabilities, their social contract, and new emerging media.
We expect that this research challenge will produce, but will not be limited to:
- White papers and academic publications that contribute to the public discourse and scientific understanding around AI’s impact on creative work, creative communities, and their place in society.
- A set of recommendations and approaches that will enable Microsoft and other organizations to develop creative-centered AI technologies.
- Communities of stakeholders (creatives, audiences, policymakers, academia, industry, and beyond) that will continue working together to bring the benefits of AI to human creativity while mitigating its risks.
Ideal candidate
Successful candidates for this challenge should possess interdisciplinary backgrounds across some of these dimensions: social sciences, humanities, arts, sociotechnical systems, creative technology, and so on. We will prioritize individuals with existing deep connections to creative communities (e.g., communities of creative writers, visual and mixed-media artists, makers, as well as design and creative learning institutions), and/or who bring unique perspectives and approaches for studying, engaging, and participating in activities with these communities.
These candidates’ affiliations can range from academic placements (such as professors, graduate students, or university researchers) to public or private organizations such as laboratories and non-profits. Candidates should bring, through their specific background and perspective, insight into the sociotechnical matters that lie at the center of the disruption AI introduces to the creative profession and practice.