
Artificial intelligence is increasingly present in our daily lives, and the boundary between a productivity tool and a personal confidant is blurring. Many users now turn to AI-powered chatbots during their most vulnerable moments, a trend that has prompted OpenAI to introduce a significant new safety layer: the Trusted Contact feature.
How ChatGPT’s Trusted Contact feature works
Rolling out globally this month, Trusted Contact allows adult ChatGPT users to nominate a friend or family member who can be notified if the AI detects signs of a mental health crisis. According to OpenAI’s safety team, the system uses automated monitoring to flag conversations involving self-harm or suicide.
However, unlike many fully automated systems, this feature includes a crucial human element. When a conversation is flagged, a team of trained human reviewers checks it. If the reviewers determine there is a genuine safety concern, they send a notification to the designated contact via email or text message.
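OpenAI has not published the technical details of this pipeline, but the flow the company describes, automated flagging followed by human confirmation and a minimal notification, can be pictured roughly as in the Python sketch below. Every name, threshold, and rule here is an illustrative assumption, not OpenAI's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the flag -> human review -> notify flow described
# above. All identifiers and thresholds are assumptions for illustration only.

@dataclass
class FlaggedConversation:
    user_id: str
    risk_score: float          # output of an assumed automated classifier
    reviewer_confirmed: bool = False

RISK_THRESHOLD = 0.9           # assumed cutoff for escalating to human review

def automated_monitor(user_id: str, risk_score: float) -> Optional[FlaggedConversation]:
    """Flag a conversation for human review when the classifier score is high."""
    if risk_score >= RISK_THRESHOLD:
        return FlaggedConversation(user_id=user_id, risk_score=risk_score)
    return None

def human_review(flag: FlaggedConversation, reviewer_decision: bool) -> FlaggedConversation:
    """A trained reviewer confirms or dismisses the automated flag."""
    flag.reviewer_confirmed = reviewer_decision
    return flag

def notify_trusted_contact(flag: FlaggedConversation, contact_address: str) -> str:
    """Compose a minimal alert: a general safety note, never the chat transcript."""
    if not flag.reviewer_confirmed:
        return ""  # no alert goes out without human confirmation
    return (f"To {contact_address}: someone who listed you as a trusted contact "
            "may need support right now. No conversation details are shared.")

if __name__ == "__main__":
    flag = automated_monitor(user_id="user-123", risk_score=0.95)
    if flag is not None:
        flag = human_review(flag, reviewer_decision=True)
        print(notify_trusted_contact(flag, "friend@example.com"))
```

The point this toy model tries to capture is the one the article emphasizes: no alert is sent on the classifier's say-so alone, and the message itself carries no conversation content.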
Privacy in a sensitive moment
One of the primary concerns with such a feature is privacy. OpenAI has clarified that these alerts are designed to be minimal. The trusted contact will receive a general note about a safety concern but will not receive chat transcripts, screenshots, or any specific details of the conversation.
To activate the feature, users must go to their settings and invite someone over the age of 18 (or 19 in South Korea). The contact must explicitly accept the invitation for the feature to become active. Plus, either party can opt out or change the arrangement at any time.
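Purely as an illustration, the opt-in arrangement described above, an invitation, an explicit acceptance, and the ability of either party to cancel, resembles a small state machine. The sketch below is a hypothetical model; the class, states, and age check are assumptions rather than OpenAI's code.

```python
from enum import Enum

# Hypothetical model of the consent flow described above. The feature only
# becomes active once the invited contact accepts, and either party can end
# the arrangement at any time.

class ContactStatus(Enum):
    INVITED = "invited"
    ACTIVE = "active"
    CANCELLED = "cancelled"

class TrustedContactArrangement:
    def __init__(self, contact_age: int):
        if contact_age < 18:   # 19 in South Korea, per the article
            raise ValueError("Trusted contact must be an adult")
        self.status = ContactStatus.INVITED

    def accept(self) -> None:
        """The contact explicitly accepts; only then is the feature active."""
        if self.status is ContactStatus.INVITED:
            self.status = ContactStatus.ACTIVE

    def opt_out(self) -> None:
        """Either the user or the contact can end the arrangement at any time."""
        self.status = ContactStatus.CANCELLED

arrangement = TrustedContactArrangement(contact_age=25)
arrangement.accept()    # feature becomes active
arrangement.opt_out()   # and can later be revoked by either party
```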
The current role of AI
AI companies are under increasing pressure to account for the emotional impact of their products. A series of lawsuits and studies has highlighted instances where chatbots failed to provide helpful guidance and, at times, heightened distress for vulnerable users. With Trusted Contact, OpenAI acknowledges that AI cannot replace real-world social connection.
The feature is part of a wider push to evolve ChatGPT from a basic assistant into more responsible “emotional infrastructure.” Some critics worry that this could shift liability onto personal contacts. However, OpenAI maintains that social connection is one of the most vital protective factors in reducing risk.
For users who rely on ChatGPT as a sounding board, this update provides a quiet safety net, one meant to bridge the gap between a digital chat and the real-world support that only another person can provide.