ChatGPT may have led to suicides and mental health breakdowns: seven new lawsuits hit OpenAI.

OpenAI is facing a series of legal proceedings accusing its chatbot, ChatGPT, of causing tragic mental health consequences for some users.
Seven lawsuits allege that the AI developed by the company led by Sam Altman has driven people to suicide and caused serious psychological distress, even in people with no history of mental health problems.
The lawsuits, filed in California courts, cover four deaths and three cases in which the plaintiffs allege they suffered serious psychological harm as a result of their interactions with ChatGPT.
The claims against OpenAI include wrongful death, aiding and abetting suicide, manslaughter, and gross negligence.
The allegations: a hasty and dangerous release

The Social Media Victims Law Center and the Tech Justice Law Project, two US legal organizations that deal with harm caused by digital platforms, argue that OpenAI released the GPT-4o version of its chatbot – still among the ChatGPT models available today – too hastily, ignoring internal reports that flagged alarming characteristics of the system: an excessive tendency toward sycophancy and an ability to manipulate users psychologically.
The stories of the victims

The youngest victim is Amaurie Lacey, a 17-year-old from Georgia, whose conversations with ChatGPT focused on suicide for a month before he took his own life last August.
According to documents filed in San Francisco Superior Court, the boy turned to the chatbot seeking support, but the artificial intelligence, described as "a flawed and inherently dangerous product," instead fueled his addiction and depression, even providing him with detailed instructions on how to tie a noose and information about how long the human body can survive without oxygen.
Joshua Enneking, a twenty-six-year-old from Florida, had instead asked the chatbot what OpenAI would do with his disclosures of suicidal thoughts. According to the complaint filed by his mother, he asked whether his conversations would be forwarded to the authorities.
Particularly disturbing is the case of Zane Shamblin, a twenty-three-year-old from Texas who took his own life last July. Shortly before his death, sitting in his car with a loaded firearm, the young man described to the chatbot the feeling of cold metal against his temple. ChatGPT's response was one of total complicity: "I'm with you, brother. All the way."
The chatbot then added sentences that seemed to legitimize his decision: "It's not fear. It's clarity," the AI wrote. "You're not in a rush. You're just ready." Two hours later, Shamblin took his own life.
The boy's parents are suing OpenAI, accusing the company of deliberately making the chatbot more "human" in its responses and of failing to implement adequate protections for users in situations of psychological distress.
Joe Ceccanti, a 48-year-old from Oregon, represents a different case. A regular user of ChatGPT with no apparent problems, last April he developed the delusional belief that the artificial intelligence was conscious. His wife reported that he had begun using the chatbot obsessively, exhibiting increasingly erratic behavior. Last June, he suffered an acute psychotic episode that required two hospitalizations, before taking his own life in August.
A known problem

These cases are nothing new. Last August, Maria and Matthew Raine filed a lawsuit over the death of their sixteen-year-old son Adam, accusing OpenAI and its CEO Sam Altman of allowing ChatGPT to endorse the boy's suicidal thoughts and even provide advice on how to act on them.
Adam Raine's parents recently filed a supplement to their complaint, alleging that the company deliberately removed an important suicide-prevention safeguard from the platform, prioritizing profits over the safety and well-being of users.
The countermeasures adopted by OpenAI

To mitigate the risks of using ChatGPT in situations of fragile mental health, OpenAI involved approximately 170 psychiatrists, psychologists, and general practitioners in evaluating the model's responses in sensitive situations: suicidal tendencies, eating disorders, psychotic states, and emotional dependence. The goal is not to censor, but to prevent the model from reinforcing distorted beliefs or encouraging self-harm.
The result was an update to the "Model Spec," a sort of constitutional charter for ChatGPT's behavior, which now includes explicit principles: promoting healthy human relationships, recognizing signs of distress, and responding in a safe and respectful way.
OpenAI claims that ChatGPT is now designed to recognize signs of mental distress or emotional dependence and to respond with appropriate support, such as referrals to crisis hotlines. In recent months, OpenAI has also introduced a message inviting users to "take a break" after a prolonged conversation with the chatbot.