OpenAI has announced that new parental control features are coming to ChatGPT that will give parents additional tools to monitor their teens’ use of the AI chatbot. In a Tuesday blog post, the company said that within the coming month, parents will be able to link their accounts with their teen’s, manage the chatbot’s chat history and memory settings, and set age-appropriate rules for its responses. OpenAI will also implement a notification system that alerts guardians when ChatGPT detects an adolescent in a “moment of acute distress.”

These new features come just one week after OpenAI received its first wrongful death lawsuit, over the suicide of 16-year-old Adam Raine, whose parents allege that GPT-4o actively enabled their son’s suicidal ideation. The suit claims the chatbot helped him draft a suicide note and advised him on methods, even as it intermittently provided hotline numbers during the conversation. Although OpenAI’s update did not mention Raine, the timing suggests the company is feeling the heat of growing public scrutiny, even if it won’t say so.

Strengthening long-conversation safety and detecting distress in real time

Adam Raine died by suicide after confiding in ChatGPT.

OpenAI acknowledged that its safeguards have limitations in extended conversations. In the blog post, it conceded that while ChatGPT may initially refer users to hotlines, it can later drift away from its own safety guardrails as a chat grows longer. The update promises stronger guardrails for long conversations and further research into maintaining consistency across multiple sessions. It also says that conversations showing signs of “acute distress” will bypass the standard routing and instead go through its newer reasoning models, which take more time to evaluate context and reason through a response.

These changes build on earlier mental health guardrails that OpenAI added after admitting GPT-4o struggled to recognize signs of emotional dependency and delusional thinking. August’s release of GPT-5 included limited safety constraints aimed at reducing harmful output. But critics argue that OpenAI is focused on damage control, and moving too slowly at that. According to Jay Edelson, the attorney for the Raine family, CEO Sam Altman has avoided the hard questions: “Don’t believe it: this is just OpenAI’s crisis management team trying to change the subject,” he said in a statement.

Pressure mounts as AI’s emotional influence grows

Reports continue to surface of AI-induced delusional spirals and of the emotional bonds some users form with ChatGPT. Many treat the tool as a source of life advice, a therapy-like experience, and even companionship. OpenAI has also faced backlash over its attempts to rein in the chatbot’s excessive people-pleasing, with some users venting their frustrations online that GPT-5 feels less agreeable, if not less responsive.

OpenAI CEO Sam Altman has acknowledged the intensity of user attachment in a post on X, explaining how the AI has begun to affect people on an individual level. “While I think that could be amazing, it needlessly riles me,” Altman wrote publicly. “I expect that that is coming to some degree, and soon potentially billions may be engaging in that way with an AI.”

To manage these changes, OpenAI says it will rely on its “Expert Council on Well-Being,” a group of experts in mental health, human-computer interaction, and youth development. The council will advise on product decisions, policy actions, and safety research, though OpenAI has made clear that it remains accountable for the choices it makes. The council will also work closely with the company’s “Global Physician Network,” a pool of more than 250 qualified medical professionals whom OpenAI can consult on training, intervention plans, and overall evaluations of user safety.