background pattern

Towards the Psychological Security of Agentic AI

This project addresses the critical challenge of ensuring psychological safety in AI systems designed for mental health support. While agentic AI chatbots can deliver evidence-based psychotherapy and personalised interventions, they also pose risks such as hallucinated advice, privacy breaches, and unsafe responses. The research introduces a clinician-informed “psychological safety red teaming” methodology and scalable auditing interfaces to identify and mitigate cognitive and relational harms. Focusing on sensor-integrated AI for early cognitive decline, the project aims to develop principles and tools for safe, responsible deployment of agentic AI in high-stakes, emotionally sensitive contexts—advancing trust and safeguarding mental health.

People

Portrait of Dan Adler

Dan Adler

Post Doctoral Researcher

Cornell University

Portrait of Tanzeem Choudhury

Tanzeem Choudhury

Professor

Cornell University

Portrait of Thalia  Viranda

Thalia Viranda

PhD Student

Cornell University

Portrait of Zilong Wang

Zilong Wang

Senior Researcher