This project addresses the critical challenge of ensuring psychological safety in AI systems designed for mental health support. While agentic AI chatbots can deliver evidence-based psychotherapy and personalised interventions, they also pose risks such as hallucinated advice, privacy breaches, and unsafe responses. The research introduces a clinician-informed “psychological safety red teaming” methodology and scalable auditing interfaces to identify and mitigate cognitive and relational harms. Focusing on sensor-integrated AI for early detection of cognitive decline, the project aims to develop principles and tools for the safe, responsible deployment of agentic AI in high-stakes, emotionally sensitive contexts, advancing trust and safeguarding mental health.
People
Dan Adler
Postdoctoral Researcher
Cornell University
Tanzeem Choudhury
Professor
Cornell University
Thalia Viranda
PhD Student
Cornell University
Zilong Wang
Senior Researcher