
Notable change in ChatGPT after suicide case


The family accuses OpenAI of "killing our son."

Artificial intelligence company OpenAI has developed new safety measures for its chatbot ChatGPT, aimed specifically at protecting younger users. The company announced features such as parental control panels and emergency contact options.

This step follows a lawsuit filed by the family of 16-year-old Adam Raine, who died by suicide earlier this year. The family alleges that Raine, during a period of mental crisis, received advice on suicide methods from ChatGPT and that the AI validated his intentions. Raine also used the AI's help with a note he wrote five days before his death.

Following the family's lawsuit, the company stated that it has a responsibility to help protect young users.

Among the innovations announced by OpenAI, the following stand out:

Parental control panels: Families will be able to monitor their children's ChatGPT usage.

Emergency contact: a trusted person, designated with parental approval, who can be reached in a crisis to give the user a human point of contact.

The company emphasized that these tools will provide more meaningful insights to families and aim to increase user safety, especially in sensitive situations such as mental distress.

AI platforms like ChatGPT have previously faced similar lawsuits. In the US, a 14-year-old boy died by suicide after interacting with fictional characters on Character.AI, and in Belgium, a chatbot named "Eliza" was accused of contributing to a man's suicide.

Experts say that these examples raise serious ethical and legal debates regarding the use of artificial intelligence in the field of mental health.

#OpenAI #ChatGPT #artificial intelligence

yenisafak
