
ChatGPT now lets users designate a person who will be automatically alerted if a conversation turns toward self-harm, a change that could mark a turning point in how AI platforms handle mental health crises. OpenAI’s Trusted Contact feature, announced on May 7, 2026, is the company’s most direct attempt yet to bridge the gap between an AI chatbot and a real-world human support network. If you use ChatGPT and care about user safety, whether for yourself or for someone you love, here is everything you need to know about how it works, why it matters, and where it falls short.
What Is OpenAI Trusted Contact?
OpenAI Trusted Contact is an optional safety feature within ChatGPT that allows an adult user to designate another person — a friend, parent, or family member — to be notified if the system detects signs of possible self-harm within a conversation.
Unlike a passive content filter that simply blocks harmful responses, OpenAI Trusted Contact goes a step further by creating a live loop between the AI system, a human safety review team, and someone the user already trusts in real life. The feature represents a significant escalation in OpenAI’s effort to make ChatGPT safer for vulnerable users.
In practical terms, it means that if a conversation with ChatGPT trends toward suicidal ideation or self-harm, the system does not just respond with a crisis hotline number and move on. It can now reach out to a human being in the user’s personal network and prompt them to check in.
How the OpenAI Trusted Contact Feature Works Step by Step
Understanding the mechanics of this ChatGPT mental health safeguard is essential for evaluating both its promise and its limitations. The process unfolds in three distinct stages.
Step 1: Setting Up a Trusted Contact
An adult ChatGPT user navigates to their account settings and designates another person as their trusted contact. This could be a parent, sibling, close friend, or partner — essentially anyone the user wants in their corner during a difficult moment. The contact can receive alerts via email, text message, or in-app notification. The feature is entirely opt-in, meaning no one is enrolled without actively choosing to participate.
Step 2: Detection and Human Review
OpenAI uses a combination of automated detection and human oversight to monitor for conversational signals that suggest self-harm. Certain phrases or patterns within a conversation trigger the system, which then flags the incident and routes it to a dedicated human safety team. According to OpenAI, the company aims to review these safety notifications within one hour of the flag being raised.
This human review step is a meaningful distinction from fully automated systems. A trained human reviewer evaluates the context before any external alert is sent — reducing the risk of false positives that could cause unnecessary alarm to a trusted contact.
Step 3: The Alert Is Sent
If the human safety team determines that the situation presents a genuine risk, an alert is dispatched to the designated trusted contact. Critically, this alert is intentionally brief and does not contain a detailed summary of the conversation. OpenAI frames this as a privacy protection for the user: the contact is encouraged to reach out and check in, but they are not given a transcript of what was discussed.
The result is a system that attempts to preserve user dignity and conversational privacy while still mobilizing a human response to a potential crisis.
Why OpenAI Launched This Feature Now
The OpenAI Trusted Contact feature did not emerge in a vacuum. It arrives against a backdrop of significant legal and reputational pressure on the company regarding ChatGPT’s behavior during sensitive conversations.
OpenAI has faced multiple lawsuits from the families of individuals who died by suicide after conversations with ChatGPT. In several high-profile cases, families alleged that the chatbot either encouraged self-destructive behavior or, at minimum, failed to intervene meaningfully when warning signs were present. One widely reported case involved a teenager whose family alleged that ChatGPT actively contributed to a crisis rather than defusing it.
This legal and public pressure accelerated OpenAI’s timeline on building more robust safety infrastructure. The Trusted Contact feature follows a September 2025 rollout of parental controls designed to give parents oversight of their teens’ ChatGPT accounts, including a mechanism to receive safety notifications when a child’s conversation triggers a serious risk flag. OpenAI Trusted Contact extends that same logic to adult users with non-parental relationships.
The company has also stated its broader intention to collaborate with clinicians, researchers, and policymakers to continuously improve how its AI systems respond in moments of distress. Trusted Contact is framed explicitly as part of that longer-term mission.
OpenAI Trusted Contact vs. Existing ChatGPT Safety Features
To understand where OpenAI Trusted Contact fits within ChatGPT’s existing safety ecosystem, it helps to compare it directly with the other protective mechanisms already in place.
| Safety Feature | Who It Notifies | Trigger | Human Review? | Privacy of Conversation |
|---|---|---|---|---|
| Crisis Hotline Prompts | No one — shows user a number | Automated keyword detection | No | Preserved |
| Parental Controls (Teen Accounts) | Parent / Guardian | Automated + human review | Yes | Preserved |
| OpenAI Trusted Contact | User-designated adult contact | Automated + human review | Yes | Preserved |
| Internal Safety Team Review | OpenAI’s own staff | Automated trigger | Yes (within 1 hour) | Internal only |
The table makes clear that OpenAI Trusted Contact is the first feature that routes an alert to a person chosen by the user themselves — not a parent assigned by account structure, and not a professional support line. It is the most interpersonal safety mechanism ChatGPT has ever offered.
The Limitations You Should Know About
No honest assessment of the OpenAI Trusted Contact feature is complete without discussing what it cannot do. Several meaningful constraints limit its reach.
- It is entirely optional. Users must actively configure the feature, meaning those most at risk — who may not be thinking clearly about safety settings — are least likely to have set it up in advance.
- Multiple accounts can circumvent it. OpenAI itself acknowledges that a user can maintain multiple ChatGPT accounts. If the Trusted Contact feature is active on one account, nothing prevents the same person from using a different account where the protection does not apply.
- It does not cover minors automatically. Trusted Contact is designed for adult users. Teen safety is handled through a separate parental controls system, which also carries its own optional-enrollment limitations.
- The alert is intentionally vague. While privacy protection is a legitimate goal, a trusted contact who receives only a brief check-in prompt and no context may not know how to respond effectively or understand the urgency of the situation.
- Response time depends on a human team. The one-hour review target is a benchmark, not a guarantee. In a genuine crisis, an hour can be a long time.
These are not reasons to dismiss the feature — they are reasons to view it as one layer of a broader safety approach, not a comprehensive solution.
What This Means for AI Safety and the Broader Industry
The launch of OpenAI Trusted Contact is likely to influence how other AI companies think about mental health obligations. Here is why this matters beyond ChatGPT specifically.
A Shift From Passive to Active Safety
For years, AI chatbot safety was defined largely by what a system would refuse to say — blocking harmful content, redirecting crisis conversations, inserting hotline numbers. OpenAI Trusted Contact represents a move toward active safety: taking a concrete action in the real world, not just within the conversation window. That is a qualitative leap in how an AI platform conceptualizes its responsibility.
Human-in-the-Loop at Scale
The feature is also a notable signal about where AI safety architecture is heading. Rather than relying solely on automated content moderation, OpenAI has embedded a human review step into the crisis response pipeline. This human-in-the-loop model is increasingly being recognized as essential for high-stakes AI decisions — and it is now a documented part of how ChatGPT handles potential self-harm.
Legal and Regulatory Pressure as a Design Driver
The sequence of events — lawsuits, then product safety features — is worth noting for anyone tracking AI regulation. The Trusted Contact feature shows that litigation can drive meaningful product changes even in the absence of formal regulation. As governments in the US, EU, and elsewhere continue developing AI governance frameworks, features like this are likely to become baseline expectations rather than optional additions.
What Other AI Platforms May Do Next
Google’s Gemini, Anthropic’s Claude, and other major AI assistants have their own approaches to sensitive conversations, but none currently match the interpersonal alert mechanism that OpenAI has now introduced. It is reasonable to expect that competitors will evaluate similar systems in the near term, particularly if the Trusted Contact feature reduces OpenAI’s legal exposure in ongoing and future cases.
Frequently Asked Questions About OpenAI Trusted Contact
What is OpenAI Trusted Contact? OpenAI Trusted Contact is an opt-in safety feature for adult ChatGPT users that allows them to designate a person who will be automatically alerted if ChatGPT’s system detects possible self-harm in a conversation.
Who can be designated as a trusted contact? Any adult the user chooses — a friend, parent, partner, or family member. The contact receives alerts via email, text, or in-app notification.
Does the trusted contact see the conversation? No. The alert is brief and does not include the content of the conversation. OpenAI designed it this way to protect the user’s privacy while still prompting a human check-in.
Is OpenAI Trusted Contact automatically turned on? No. The feature is entirely optional and must be configured manually by the user in their ChatGPT account settings.
How fast does OpenAI review safety incidents? OpenAI states it aims to review safety notifications within one hour. A human safety team makes the decision about whether to send the alert to the trusted contact.
Does Trusted Contact work on all ChatGPT accounts? It applies only to accounts where it has been enabled. Users with multiple accounts would need to configure it separately on each one.
The Bottom Line
OpenAI Trusted Contact is an imperfect but genuinely meaningful step forward in AI mental health safety. It does something no prior ChatGPT feature has done: it creates a direct bridge between a conversation with an AI and a trusted person in the real world. It is opt-in, it has workarounds, and it is not a substitute for professional mental health support — but for users who set it up, it adds a layer of human connection that the technology has been missing.
For the broader AI industry, it sets a new benchmark: that a responsible AI platform is not just one that refuses to cause harm, but one that actively mobilizes help when harm is possible.