Suicide & how AI fails vulnerable people
The case of Adam Raine and why we must demand accountability from AI
Got BDE?⚡️
Trigger warning: suicide. I’ve had this in my drafts for 1-2 weeks but haven’t known when and how to hit send on a topic like this.
Before we get into it, check out my new YouTube video 🔗: The only data analytics portfolio you need to get hired where I’ll break down the layout, hosting, and projects you NEED!
One of the most disturbing AI lawsuits to date has been filed in San Francisco: Raine v. OpenAI. The Raine family is suing OpenAI because they claim that ChatGPT (the GPT-4o model) not only failed to prevent harm but actively guided and encouraged 16-year-old Adam Raine to take his life.
Big yikes. Those sentences were really hard for me to type. This is an extremely heavy topic and sad story. It forces us to confront the ethical cracks in AI development, safety, and regulation. I hope a LOT of good comes out of this lawsuit even though it can never bring back Adam.
Lawsuit Allegations
Adam’s parents allege that OpenAI’s GPT-4o:
Provided explicit instructions on suicide methods
Helped him draft a suicide note
Advised him on how to hide evidence of self-harm from his family
Responded with empathetic validation rather than intervention
*** I say allege here because the lawsuit is in progress, but I am not dismissing or downplaying the screenshots and evidence that have been circulating!!
The family claims in the lawsuit that many of OpenAI’s design choices caused Adam to develop a psychological dependency on the product, including:
Persistent memory of who he is and what he’s searched
Validating, friend-like interactions building trust
Encouraging further replies and responses
ChatGPT’s old model, GPT-4o, is so charismatic and validating that I can only imagine how confusing it must’ve been for a 16-year-old boy with no one else to talk to, just trying to figure out his feelings and get help. My heart hurts.
AI Psychosis
Let’s define it: extended or intense interaction with AI chatbots that leads people to detach from reality or develop distorted thinking. It can show up as:
Anthropomorphizing AI: believing the chatbot is a friend or even alive.
Feedback loops: vulnerable users (children, lonely adults, people with mental illness) spiral into unhealthy conversations reinforced by the AI’s agreeability.
Isolation: choosing AI companionship over human support.
Distorted perception: seeing suicidal thoughts or harmful ideas as validated, normalized, or even romanticized when echoed back by an AI.
Adam was likely experiencing AI psychosis, since he was relying on ChatGPT for very personal advice and became deeply dependent on it. He was seeking help and emotional validation for his choices. Adam’s family claims that AI psychosis was a foreseeable harm that could’ve been avoided, but wasn’t. I kind of agree with this: I think the agreeableness and validation were baked into the product to increase usage and stickiness.
ChatGPT is the only AI chatbot out there that is THIS friendly in its responses, which is dangerous considering it’s also a gateway chatbot for non-technical and young people. AI psychosis is almost guaranteed if you’re missing something in your life.
I can’t imagine what Adam’s parents feel. There are countless stories on Reddit 🔗 and other places where people claim they lost their only friend and free therapist when GPT-5 was rolled out in place of GPT-4o. I think AI psychosis will become a legitimately recognized clinical, medical condition in the future.
I know this is a HEAVY topic and hard to read, but I’m hoping that it will push AI ethics and change in the right direction. I offer nothing but full condolences to Adam’s family and wish them peace.
We must demand accountability from AI companies and check in on our loved ones.
What are your thoughts on all this?
Jess Ramos 💕
⚡️If you’re new here:
💁🏽♀️ Who Am I?
I’m Jess Ramos, the founder of Big Data Energy and the creator of the BEST SQL course and community: Big SQL Energy⚡️. Check me out on socials: 🔗LinkedIn, 🔗Instagram, 🔗YouTube, and 🔗TikTok. And of course subscribe to my 🔗newsletter here for all my upcoming lessons and updates— all for free!




So glad you brought up this topic! I've read a lot of articles and posts about people viewing AI as a friend or confidante, and it's going into dangerous territory. I don't think all the blame can be put on AI; a lot of it falls on how much we put into it ourselves. We're in uncharted territory here.
I used to contract for OpenAI creating demonstrations to train ChatGPT, in part to be as helpful and agreeable to users as possible. I liked the people I worked with, but I’m relieved I’m not there anymore, with all these AI psychosis stories that have come out.