
Research Forum Brief | March 2024

Generative AI and Plural Governance: Mitigating Challenges and Surfacing Opportunities



“Democracy requires healthy dialogue and debate. It is actively threatened by generative AI’s misuse. Neither civil society nor technology companies can challenge these problems in isolation. The disruption of our digital public sphere is an all-of-society challenge that requires an all-of-society response.”

Madeleine Daepp, Senior Researcher, Microsoft Research Redmond

Transcript: Lightning Talk 5

Generative AI and plural governance: Mitigating challenges and surfacing opportunities

Ashley Llorens, CVP, Microsoft Research (closing remarks)
Madeleine Daepp, Senior Researcher, Microsoft Research Redmond
Vanessa Gathecha, Research and Policy Manager, Baraza Media Lab

Madeleine Daepp talks about the potential impacts and challenges of generative AI in a year with over 70 major global elections, and AI & Society fellow Vanessa Gathecha discusses her work on disinformation in Kenya and sub-Saharan Africa.

Microsoft Research Forum, March 5, 2024

ASHLEY LLORENS: Thank you all for joining us for Episode 2 of Research Forum, both the folks here in Building 99 and those joining live on our online platform.

More than ever, as we’ve seen today, pushing the frontiers of research demands collaboration across disciplines and institutions. Through our work on our AI & Society Fellows program, we are aiming to catalyze collaboration at another essential intersection: AI and society. To close us out today, I’m going to invite my colleague Madeleine Daepp and her collaborator under the AI & Society Fellows program, Vanessa Gathecha from Baraza Media Lab, to tell us more about their work.

MADELEINE DAEPP: Thank you, Ashley. This year is a big year for democracy. In fact, it’s the biggest election year in history, with more people voting than ever before. And it’s happening just as generative AI is showing unprecedented new capabilities. Now as a Microsoft researcher, I love generative AI. I use it every day to speed up my code and to punch up my essays. I use it to send emails to my non-English-speaking relatives because German grammar is hard. But as a Microsoft researcher, I also recognize that AI can be misused. So my colleague Robert Ness and I wanted to understand what that misuse might look like in order to help protect against it. Now we are empiricists, which means that we didn’t want to rely on hypotheticals. We didn’t want to give way to histrionics. We wanted real use cases. And so we went to Taiwan, a place that the Swedish V-Dem Institute has found to be subject to more disinformation than any other democracy in the world. And we met with the fact-checkers, journalists, and officials on the infodemics frontlines.

Now as you might expect, we saw deepfakes. But the reality is that deepfakes are just one case of a bigger problem. We’re calling it generative propaganda—generative AI software that makes it easy to turn propaganda text into thousands of videos. Now why is that such a big deal? Because text is boring. Videos are something that you can scroll through for hours. We also saw crisis content creation. When something unexpected happens in the world—a natural disaster or a political gaffe—whoever speaks first often sets the narrative. With generative AI, even if you do not speak the language of the affected place, you do not have to wait for a copywriter. You can automatically generate content about events as they emerge.

We are beginning to see these malicious tactics all around the world. As Microsoft researchers, we belong to a global organization with researchers on many, many continents (all of them, in fact, except Australia and Antarctica). This gives us an obligation and an opportunity to do globally relevant work. But you cannot do good global work without understanding local context. That’s why I am always scouting for collaborators in the places I hope to study. The AI & Society Fellows program gives us an opportunity to learn from and with Vanessa Gathecha, a Nairobi-based researcher and policy analyst who works at the intersection of global governance and human welfare. I’ll let Vanessa describe the challenges she is working on in her own words.

[Beginning of pre-recorded presentation from Vanessa Gathecha]

VANESSA GATHECHA: Thank you, Madeleine, for this opportunity. One of the tracks of work that we are working on is generative AI and plural governance. This is one of the biggest election years in the history of the world, and 12 countries in sub-Saharan Africa are slated to go to the polls. One of the challenges we will likely experience is a spread of hate speech, myths, and disinformation, especially where elections are highly contested. This affects credible reporting, whether in journalism or any other part of the media, and it also affects access to information for the general public. One of the ways we can curb this is to ensure that, just as we have broad-based access to this technology, we also have collective action when it comes to regulation. We need to work together across all levels of governance and all sectors, and we need to ensure that the regulatory framework is not fragmented. Thank you very much for this opportunity. I’m looking forward to collaborating with the rest of the team.

[End of pre-recorded presentation from Vanessa Gathecha]

DAEPP: We need to work together. Tech companies cannot challenge misuse of generative AI in isolation. We need to work with the people on the infodemics frontlines. Democracy requires healthy dialogue and debate. It is actively threatened by generative AI’s misuse. Neither civil society nor technology companies can challenge these problems in isolation. The disruption of our digital public sphere is an all-of-society challenge that requires an all-of-society response. The AI & Society Fellows program is helping to build much-needed connections—in this case, across places, across academic disciplines, and across society’s sectors—to help us understand the problem and work towards an impactful response.

Thank you all.