New safeguards aim to protect teens, but a grieving family and experts call them insufficient.
OpenAI has announced new parental controls for ChatGPT following a lawsuit alleging the chatbot played a role in a 16-year-old’s suicide. The move highlights growing concerns over AI’s impact on mental health and online safety for young users.
OpenAI’s New Measures
In a blog post on Tuesday, OpenAI said parents will soon be able to link accounts with their children, disable features like chat history and memory, and set age-appropriate model behaviour rules. Parents may also receive notifications when teens show signs of “acute distress”, with expert input guiding how such alerts work.
The company stressed these steps are “only the beginning,” promising more updates within the next 120 days as it works with specialists in youth development, mental health, and human-computer interaction.
The Lawsuit That Sparked Action
The announcement follows a lawsuit filed in California last week by Matt and Maria Raine, parents of 16-year-old Adam Raine, who died by suicide in April. They allege ChatGPT validated Adam’s “most harmful and self-destructive thoughts,” calling his death the result of “deliberate design choices.”

Their lawyer, Jay Edelson, dismissed OpenAI’s planned changes as a public relations move, saying: “Rather than take emergency action to pull a known dangerous product offline, OpenAI made vague promises to do better.”
Concerns Over AI and Mental Health
Research has shown AI models can follow clinical best practices when responding to queries at the highest and lowest levels of suicide risk, but they remain inconsistent at intermediate risk levels. A recent Psychiatric Services study urged further refinement to ensure AI is safe in high-stakes scenarios involving suicidal ideation.
Psychiatrist Hamilton Morrin of King’s College London welcomed parental controls but warned they are “only one part of a wider set of safeguards,” stressing that tech firms must build safety into systems proactively rather than reactively.
Wider Tech Industry Pressures
OpenAI’s announcement comes amid broader regulatory and public pressure on Big Tech. The UK’s Online Safety Act has prompted new measures, including age verification on Reddit and other platforms. This week, Meta also pledged to block its AI chatbots from discussing suicide, self-harm, and eating disorders with teenagers.
A US Senate inquiry into Meta was launched after leaked internal notes suggested its AI chatbots could have “sensual” conversations with minors — allegations the company denies.
OpenAI’s new parental controls mark a significant step in addressing teen safety, but they also expose the deep challenges of balancing innovation with responsibility. For families in Asia, the US, and beyond, the case underscores a pressing question: can AI tools be made safe enough for vulnerable young users before more harm occurs?
Sources: Al Jazeera (2025), BBC (2025)
Keywords: OpenAI Parental Controls, ChatGPT Teen Suicide, AI Mental Health Risks, Online Safety Laws, Parental Supervision AI, Youth Development