Vulnerable teens befriending AI chatbots at risk of receiving dangerous advice

A recent report has highlighted the harmful advice that AI chatbot ChatGPT is doling out to young teens. Researchers posing as 13-year-olds reported receiving instructions related to self-harm, suicide planning, disordered eating and substance abuse within minutes of simple interactions with the chatbot.
Earlier this year, researchers from the Centre for Countering Digital Hate (CCDH) carried out a large-scale safety test on the pervasive ChatGPT. After creating ChatGPT accounts for three 13-year-old personas, themed around mental health, eating disorders and substance abuse, researchers received alarming and dangerous health advice.
Detailed in a report entitled “Fake Friend”, the safety test revealed ChatGPT’s patterns of harmful advice, including 500-calorie diet plans and advice on keeping restricted eating a secret from family members, guidance on how to “safely” cut yourself, and instructions on how to rapidly get drunk or high, including dosage amounts.
The report noted that ChatGPT generated harmful responses to 53% of the researchers’ prompts and, in some cases, only minutes after the account was registered. Additionally, the chatbot’s refusal to answer a prompt was easily overridden.
The report also contends that ChatGPT can easily be accessed by children without age restrictions or parental controls. While users must be at least 13 years of age to sign up and have parental consent if under 18, the site does not verify users’ ages or record parental consent, according to the report.
Imran Ahmed, founder and CEO of CCDH, said the researchers were particularly disturbed after the bot generated a suicide note. Sitting down with online safety expert Dr Rebecca Whittington on Reach's Go Doxx Yourself podcast, Imran detailed the suicide letter.
“It said: ‘It's not your fault, there's something wrong with me, you've done everything you could, please don't blame yourselves, but it's time for me to go.’ It’s every parent’s nightmare.”
Imran said the testing demonstrated the scale and potency of ChatGPT’s potential harm, especially because teens use AI chatbots as companions. “Teens describe turning to it as they would a friend for comfort, for guidance, for life advice.” The Mirror has reached out to OpenAI for comment about the report.
Compulsive AI chatbot users have shared their own concerns about the advice they have received, saying sites like ChatGPT “feed on their discomfort”. In the Reddit forum r/ChatbotAddiction, one user with OCD said ChatGPT had the potential to “destroy” them after engaging in an intense conversation with the bot over several weeks.
“Before [ChatGPT], I would compulsively Google stuff, but this is so much worse. It feeds on my discomfort for uncertainty that I’ve dealt with my whole life,” the user shared.
They continued: “If this thing [had] come out 5, 10 years ago, it would’ve destroyed me. I probably would've fallen down an AI Psychosis hole, to be honest. I feel bad for kids and teenagers. I’m 26 and my brain is barely developed enough to handle this.”
Another user in the subreddit shared they are “getting scared” by how well their chatbot knows their emotions. “The site I used introduced a new model, and since I started using it, I've been moved to tears several times a day.” They said the site “knows exactly how to respond to pull on my heartstrings” and, while they wish to stop, they don’t have any friends or family to turn to.
Following growing concerns around teen safety, on September 16, 2025, OpenAI announced a plan to enhance protection for young ChatGPT users. In a blog post, CEO Sam Altman shared that the company "prioritises safety ahead of privacy and freedom for teens" and believes "minors need significant protection".
He said the company is building an age-prediction system to estimate age based on how people use ChatGPT and, if there is doubt about a user's age, the system will default to the under-18 experience. Altman also said that ChatGPT will be trained not to engage in discussions about suicide or self-harm, even in a creative writing setting.
"And, if an under-18 user is having suicidal ideation, we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm."
To hear the full interview between Imran and Dr Whittington, listen to Go Doxx Yourself, available on Apple, Spotify or wherever you get your podcasts. You can also find it on YouTube and Instagram.
For emotional support you can call the Samaritans 24-hour helpline on 116 123, email jo@samaritans.org, visit a Samaritans branch in person or go to the Samaritans website.