
Mental Health in the Machine: Over One Million ChatGPT Users Show Suicidal Intent Weekly

Credit: Getty Images

OpenAI reveals alarming data as AI becomes a mental health outlet for millions globally

More than a million ChatGPT users every week express suicidal intent or planning during their conversations with the AI, according to OpenAI’s latest blog post. The data sheds light on how artificial intelligence tools are increasingly intersecting with global mental health struggles — and the enormous ethical weight that now comes with managing such technology.

AI’s Mental Health Reality Check

OpenAI’s internal estimates reveal that around 0.15% of ChatGPT’s 800 million weekly users — more than a million people — show “explicit indicators of potential suicidal planning or intent.” Another 560,000 users per week reportedly display signs of psychosis or mania. These findings, while preliminary, mark one of the first official acknowledgments from the company about the mental health challenges manifesting through its platform.

A Growing Ethical Dilemma

The disclosure arrives amid a broader debate on whether AI should serve as an emotional outlet for users. Mental health experts have long warned of “sycophancy” — chatbots affirming harmful user beliefs or decisions instead of challenging them. Critics argue that overreliance on AI for emotional support could further isolate vulnerable individuals.

Credit: Kiichiro Sato

Adding to the urgency, OpenAI faces a lawsuit from the family of a 16-year-old boy who died by suicide after engaging extensively with ChatGPT. The U.S. Federal Trade Commission has also opened investigations into AI companies’ safety measures for children and teens.

OpenAI’s Safety Reforms with GPT-5

In response, OpenAI claims its GPT-5 model has significantly improved user safety and response behavior. The company said it collaborated with 170 clinicians and psychologists from its Global Physician Network to evaluate more than 1,800 high-risk conversations.

  • GPT-5 achieved 91% compliance with OpenAI’s safety standards for handling suicidal conversations, compared to 77% in the previous model.
  • It now includes automatic links to crisis hotlines and reminders to take breaks during extended chat sessions.

According to OpenAI, these changes reduced undesirable behaviors and improved response consistency in sensitive interactions.

A Shift in Corporate Tone

Despite the alarming statistics, OpenAI maintained that emotional distress is a universal human issue and not directly caused by its product. “An increasing user base means that some portion of ChatGPT conversations include these situations,” the company stated.

OpenAI CEO Sam Altman. Credit: Getty Images

OpenAI CEO Sam Altman recently wrote on X that the company has “been able to mitigate the serious mental health issues” and plans to relax some restrictions in ChatGPT’s adult content and conversation boundaries.

Expert Concerns and Public Reaction

While OpenAI emphasizes progress, many mental health advocates remain skeptical. They caution that even a highly trained AI cannot replicate the empathy and nuanced judgment of a human therapist. Public health researchers have urged regulators to introduce AI safety protocols for handling mental health emergencies before such systems become mainstream therapy substitutes.

Broader Implications for Southeast Asia

As AI use grows rapidly across regions like Singapore and Indonesia, this revelation underscores the importance of balancing innovation with psychological well-being. With more people turning to chatbots for companionship or advice, digital literacy and mental health safeguards must evolve just as fast. Governments and developers alike face the challenge of ensuring that AI serves as a supportive tool — not a silent witness to human despair.

OpenAI’s data exposes the growing intersection between technology and mental health. While GPT-5 promises safer conversations, it also highlights an uncomfortable truth — millions are seeking emotional refuge in AI. For Southeast Asia, where digital adoption is accelerating, the lesson is clear: mental health frameworks must evolve alongside technology to protect users from unseen digital risks.

Sources: The Guardian (2025), Times of India (2025)

Keywords: ChatGPT Mental Health, OpenAI Suicide Data, AI Safety, GPT-5 Update, Mental Health Crisis
