Advancing Human-Centered AI

By the Chief Scientific Officer

We’re excited about the formal launch of the Stanford Institute for Human-Centered Artificial Intelligence (HAI) on Monday, March 18th. We share the conviction of Fei-Fei Li, John Etchemendy, and other leaders at Stanford about the promise of taking an interdisciplinary approach to AI, on a pathway guided by insights about human intelligence and one that carefully considers the goals and needs of people and society.

Microsoft Research is enthusiastic about partnering with Stanford University on a shared vision of advancing human-centered AI. Last summer, we moved forward with a collaborative program aimed at nurturing nascent efforts at Stanford on human-centered AI, in advance of the formal launch of HAI.

Our collaboration with colleagues at Stanford included co-organizing a joint workshop on Human-Centered AI last October. For that workshop, Fei-Fei Li and I teamed up to organize a day of presentations and conversations aimed at scoping out the research landscape for human-centered AI. We brought together faculty from multiple departments at Stanford and researchers from Microsoft Research. Participants included Stanford faculty from computer science, neuroscience, psychology, medicine, ethics, and law, and Microsoft researchers working in machine learning, natural language analysis, decision making, AI safety and robustness, intelligibility and explanation of AI, human-AI collaboration, and responsible AI.

For the workshop, we called out six areas for structuring presentations and discussions:

• Toward More General Artificial Intelligence
• Human-AI Collaboration and Coordination
• Cross-Disciplinary Opportunities for Advancing AI Technology
• AI, Ethics and Effects in Engineering and Research
• Cross-cutting Technical, Ethical, and Policy Challenges with AI Uses
• Harnessing AI to Address Important Societal Problems

I’d like to fill folks in on these discussions, as each session framed important directions for human-centered AI. With the help of notes taken by participants, here is a brief summary of discussions in each area:

The session on Toward More General Artificial Intelligence was co-chaired by Asli Celikyilmaz and Chris Manning. We started with a shared reflection on where AI is today: for all of the excitement, AI researchers agree that solutions to date have been quite brittle and narrow in scope and capabilities. Presentations and discussions in this session covered key directions, opportunities, and research investments aimed at overcoming long-term challenges on the path to more general AI capabilities, including learning about the world in the wild from unsupervised data, garnering and manipulating large amounts of commonsense knowledge, transferring what is learned on one or more tasks to new tasks and new domains, and reasoning about causes and effects.
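
To make the transfer-learning thread concrete, here is a minimal sketch of the idea in PyTorch: a small network is pretrained on a data-rich source task, and its learned representation is then reused, with a fresh output layer, on a data-poor target task. The model, tasks, and data below are toy placeholders of my own, not anything presented at the workshop.

```python
# A toy sketch of transfer learning: pretrain a small network on a
# "source" task, then reuse its representation for a "target" task.
# All data, shapes, and hyperparameters here are illustrative placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Shared representation, to be learned on the source task.
backbone = nn.Sequential(nn.Linear(20, 64), nn.ReLU())
source_head = nn.Linear(64, 2)   # source-task classifier
target_head = nn.Linear(64, 2)   # target-task classifier

def train(model, head, X, y, steps=200, lr=1e-2, freeze_backbone=False):
    # When freeze_backbone is True, only the head's parameters are updated.
    params = head.parameters() if freeze_backbone else (
        list(model.parameters()) + list(head.parameters()))
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(head(model(X)), y)
        loss.backward()
        opt.step()

# Synthetic source task: plenty of labeled data.
Xs, ys = torch.randn(1000, 20), torch.randint(0, 2, (1000,))
train(backbone, source_head, Xs, ys)

# Target task: only a handful of examples. Keep the backbone fixed and
# fit a new head, transferring what was learned on the source task.
Xt, yt = torch.randn(50, 20), torch.randint(0, 2, (50,))
train(backbone, target_head, Xt, yt, freeze_backbone=True)
```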

The session on Human-AI Collaboration and Coordination was co-chaired by Ece Kamar and James Landay. Presentations and discussion in this session centered on opportunities for research that could enable AI systems to be more effective collaborators for humans, and on how to measure the success of such teamwork. Participants looked at links to cognitive psychology, including prior and ongoing research efforts and results in communication and grounding. Conversations also included a discussion of forecasts about the future of human-AI collaboration within the broader discussion on the future of work. Design of human-AI interaction experiences was called out as an important area requiring further study, both in terms of how to improve communication between people and AI systems, and how best to leverage the strengths and overcome the weaknesses of humans and machines. For example, AI systems could help humans to compensate for biases, blind spots, or gaps in their thinking, though it was noted that biases in data could result in machines amplifying deeply embedded societal biases.
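
One point from this session lends itself to a quick demonstration: when training data is imbalanced and noisy, a learned model can end up even more skewed than the data it was trained on. The following sketch, on synthetic data of my own construction, shows a logistic-regression classifier predicting the majority class noticeably more often than its actual share of the training labels.

```python
# A minimal, synthetic illustration of bias amplification: a classifier
# trained on imbalanced, noisy data often predicts the majority class
# even more often than it appears in the data. Everything here is a toy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
y = (rng.random(n) < 0.7).astype(int)        # 70% majority class in the data
X = y[:, None] + rng.normal(0, 2.0, (n, 1))  # weak, noisy signal

clf = LogisticRegression().fit(X, y)
preds = clf.predict(X)

print(f"majority rate in data:        {y.mean():.2f}")     # ~0.70
print(f"majority rate in predictions: {preds.mean():.2f}") # well above 0.70
```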

I joined Stanford neurobiologists Surya Ganguli and Dan Yamins to lead the session on Cross-Disciplinary Opportunities for Advancing AI Technology. The presentations and discussions scoped out opportunities for leveraging insights and results from neuroscience, cognitive psychology, and the broader behavioral sciences in AI research, as well as how AI advances and results might guide research in those disciplines. I’ve been impressed with advances in neurobiology and cognitive psychology, including results that Surya and Dan presented during the formal sessions—and via memorable whiteboarding during breaks. Although it’s been slow going over the decades, I see sparks of potential convergence between AI and neurobiology. The possibility of moving into more productive two-way conversations between neurobiology and AI researchers is exciting to me; as an undergraduate majoring in biophysics, I spent a great deal of time and effort in a fabulous neuroscience lab listening in on single neurons. As I neared graduation, I initially had my eyes set on pursuing graduate work in neurobiology—before I decided that AI was the fastest path to understanding human cognition. As I dove into AI, I thought about the prospect that the fields might come together at some date in the future—and that I could then continue my earlier plan on the path to understanding human thought.

The session on Cross-cutting Technical, Ethical, and Policy Challenges with AI Uses was co-chaired by Percy Liang and Emre Kiciman. Discussions covered how AI applications are already having significant influences on people and on society more broadly. We focused on challenges and opportunities arising with biases and blind spots in algorithms. We touched on concerns with the fairness of inferences made by recommendation systems that provide advice in such consequential domains as criminal justice and policing. We also explored the safety and robustness of AI systems, especially those applied in such high-stakes areas as transportation and healthcare. We spent time reflecting on key questions about values and ethical challenges arising with sensitive uses of AI, and on legal issues coming with changing conceptions of agency and liability in automated systems. We also considered new kinds of threats coming to the fore with advances in AI, including new kinds of adversarial attacks on AI systems and uses of AI to target the human psyche for psychological manipulation and persuasion.
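
As a concrete example of the adversarial-attack concern raised in this session, here is a compact sketch of the fast gradient sign method applied to a toy logistic-regression model. The weights and input are made-up illustrations, not part of the workshop material; the point is simply that a small, targeted perturbation of the input can flip the model’s prediction.

```python
# A compact sketch of an adversarial attack (fast gradient sign method)
# against a toy logistic-regression model. Weights and inputs are made up.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.5, -2.0, 0.5])   # toy "trained" weights
b = 0.1
x = np.array([0.2, -0.4, 0.3])   # an input classified as positive

p = sigmoid(w @ x + b)
# Gradient of the negative log-likelihood of the true label (y = 1)
# with respect to the input is (p - 1) * w.
grad_x = (p - 1.0) * w

# FGSM: step the input in the direction that increases the loss.
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

print(f"original score:    {sigmoid(w @ x + b):.3f}")      # ~0.79 -> class 1
print(f"adversarial score: {sigmoid(w @ x_adv + b):.3f}")  # ~0.34 -> class 0
```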

The session on Harnessing AI to Address Important Societal Problems was co-chaired by Emma Brunskill and Susan Dumais. Discussions covered key opportunities and themes in applying advances in machine learning, planning, and decision making to education, healthcare, accessibility, transportation, energy, the economy, democracy, and climate and sustainability. This session also included a brief round-robin in which workshop members described key challenges and opportunities around the social and societal problems where they were most excited to see AI technologies applied.

Finally, I presented on Microsoft’s Aether committee and its working groups. This was a good chance to explain the history and activities of the Aether effort and to hear feedback from our Stanford colleagues. Aether, which stands for AI, Ethics and Effects in Engineering and Research, is an advisory board at Microsoft that deliberates about questions, issues, and challenges arising with developing and fielding applications of AI. The committee reports to Microsoft’s senior leadership team and is cross-Microsoft in scope, with members who represent multiple divisions of the company. The main committee and its working groups bring together computer scientists and engineers, social scientists, policy experts, lawyers, and ethicists. Aether recommendations have been sources of guidance and have contributed to practices, policies, and positions at Microsoft. Its efforts include deliberation and recommendations about bias and fairness, safety and robustness, intelligibility and explanation, human-AI collaboration, and sensitive, consequential uses of AI. On the latter, readers may find of interest Microsoft’s writings and leadership on limiting uses of facial recognition, a topic on which the Aether committee has focused and made recommendations to our company’s leaders. It was also good to have the chance to present several cases that have come to the Sensitive Uses working group of the Aether committee and to receive careful feedback from workshop participants.

Given shared goals on the future of AI, we’ve been inspired to collaborate with our Stanford colleagues on nurturing the new Human-Centered AI Institute at Stanford. We hope that our discussions, scoping, collaborations, and relationship-building have been helpful and will continue as HAI takes flight. We’ve certainly forged closer connections among colleagues around a set of important topics and research directions.

Congratulations to Stanford’s HAI leadership and to all of our colleagues engaged in HAI’s founding.

Onward!
