
Towards the Psychological Security of Agentic AI

This project addresses the critical challenge of auditing psychological safety in agentic AI systems designed to deliver evidence-based interventions for mental and cognitive well-being. While agentic AI chatbots can emulate psychotherapeutic interventions and enable personalized support, they also pose risks to both patients and clinicians, including hallucinated advice, relational harms, and privacy breaches. This work will develop clinician-informed “psychological safety red teaming” methods and scalable interfaces for auditing these AI technologies, evaluated through an AI chatbot designed to support individuals experiencing early cognitive decline. The project will result in principles and tools for the safe, responsible deployment of agentic AI in high-stakes, emotionally sensitive contexts.

This research is conducted through The Agentic AI Research and Innovation (AARI) Initiative, which focuses on the next frontier of agentic systems through Grand Challenges with the academic community and Microsoft Research.

People


Dan Adler

Postdoctoral Researcher

Cornell University


Tanzeem Choudhury

Professor

Cornell University


Thalia Viranda

PhD Student

Cornell University


Zilong Wang

Senior Researcher