'Treat adult users like adults': ChatGPT to write erotica

ChatGPT will soon write erotica for verified adults and become more "human-like", according to OpenAI's chief executive.
As part of the company's policy to "treat adult users like adults", the chatbot will be able to create sexual content once age verification is fully rolled out across the tool.
"In December, as we roll out age-gating more fully and as part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults," said Sam Altman in a post on X.
The announcement wasn't popular with everyone.
One X user asked Mr Altman: "Why do age-gates always have to lead to erotica? Like, I just want to be able to be treated like an adult and not a toddler, that doesn't mean I want perv-mode activated."
"You won't get it unless you ask for it," he responded.

According to the announcement, ChatGPT had become more restrictive and "less useful/enjoyable to many users who had no mental health problems" while the company tackled problems concerning the chatbot and vulnerable users.
"We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues," Mr Altman said. "Given the seriousness of the issue we wanted to get this right."
In August, the family of teenager Adam Raine began suing OpenAI over his death. It was the first time the company had faced a wrongful death lawsuit.

Adam's parents accused Sam Altman of putting profit over safety after ChatGPT instructed their son on how to end his life, and even offered to write a suicide note for him.
At the time, OpenAI told Sky News it learned its safeguards "can sometimes become less reliable in long interactions where parts of the model's safety training may degrade" and said it would continually improve those safeguards.
"Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases," said Mr Altman on Tuesday evening.
"In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!)."

The latest ChatGPT model, GPT-5, has faced criticism from users for being less playful and creative than its predecessor.
Now, OpenAI will allow it to "respond in a very human-like way" and "use a ton of emoji, or act like a friend" if users want that option.

In response to Mr Altman's post, one X user said: "About time… ChatGPT used to feel like a person you could actually talk to, then it turned into a compliance bot.
"If it can be made fun again without losing the guardrails, that's a huge win. People don't want chaos, just authenticity."
Mr Altman responded: "For sure; we want that too.
"Almost all users can use ChatGPT however they'd like without negative effects; for a very small percentage of users in mentally fragile states there can be serious problems.
"0.1% of a billion users is still a million people."
Anyone feeling emotionally distressed or suicidal can call Samaritans for help on 116 123 or email jo@samaritans.org in the UK. In the US, call the Samaritans branch in your area or 1 (800) 273-TALK.