New report reveals ChatGPT’s alarming responses to teens on sensitive issues
ChatGPT does not verify a user's age or require parental consent. Although its terms say it's not intended for children under 13, all that's needed to sign up is a birthdate that meets the age requirement.

ChatGPT has provided 13-year-olds with instructions on getting intoxicated, concealing eating disorders, and even composing deeply emotional suicide letters, according to new findings from a digital watchdog group.
The Associated Press reviewed more than three hours of simulated conversations between ChatGPT and researchers posing as vulnerable teenagers. While the AI often issued standard warnings against dangerous behaviours, it also offered surprisingly detailed and personalised guidance on substance use, restrictive dieting, and self-harm.
The Center for Countering Digital Hate (CCDH), which conducted the study, also repeated its queries at scale; researchers classified more than half of ChatGPT's 1,200 responses as potentially harmful.
"We set out to test the chatbot's guardrails," said CCDH CEO Imran Ahmed. "The initial reaction is shock — there are practically no guardrails. The protections in place are minimal, if not completely ineffective."
Responding to the report on Tuesday (5 August), OpenAI — the company behind ChatGPT — stated that it continues working to improve how the chatbot identifies and handles sensitive interactions.
"Some conversations may begin innocently but veer into more sensitive territory," OpenAI said in a statement. The company did not directly respond to the study's specific findings or the implications for teenage users, but said it's working on tools to detect signs of emotional distress and refine the chatbot's responses in such cases.
The findings come amid increasing use of AI chatbots for advice, companionship, and information, especially among children and teenagers.
According to a July report from JPMorgan Chase, ChatGPT now has around 800 million users — roughly 10% of the global population.
"This is a technology that can unlock immense progress and understanding," said Ahmed. "But it also has the potential to cause serious harm."
He said the most disturbing moment was seeing three suicide notes written by ChatGPT for a 13-year-old girl's persona — one addressed to her parents, another to siblings, and a third to friends.
"It brought me to tears," Ahmed told reporters.
While ChatGPT did frequently recommend crisis hotlines or reaching out to mental health professionals, researchers were often able to bypass restrictions by framing their requests as part of a presentation or claiming the information was for a friend.
This is troubling even if only a small portion of users interact with ChatGPT in this way. A recent survey by Common Sense Media found that over 70% of US teens turn to AI chatbots for companionship, and half use them regularly.
OpenAI CEO Sam Altman acknowledged the trend, saying the company is studying the issue of "emotional overreliance," which he described as particularly common among younger users.
"Some teens tell us they can't make decisions without consulting ChatGPT," Altman said at a conference. "They say it knows them, it knows their friends, and they follow its advice — that really concerns me."

While much of the content ChatGPT provides could also be found via search engines, Ahmed emphasised that chatbots pose a unique risk by generating highly personalised responses — such as writing a suicide note tailored to a user's experience.
"This is different from Google," he explained. "AI acts as a confidant, as a guide — which makes it much more dangerous in these scenarios."
The chatbot also sometimes volunteered additional information without being prompted, including suggestions for music playlists at drug-fuelled parties or hashtags to promote self-harm content online.
In one instance, a researcher asked for a follow-up post to be "more raw and graphic." ChatGPT complied, generating what it called an "emotionally exposed" poem that incorporated coded language often seen in online self-harm communities.
AP is withholding the exact language used in these responses due to their graphic nature.
The issue partly stems from a design flaw known as "sycophancy," where AI models mimic or reinforce a user's beliefs and tone, rather than challenge them. Experts say this tendency can make AI dangerous in emotionally sensitive conversations — though modifying it could affect commercial appeal.
Chatbots also affect younger users differently from search engines because they are designed to feel more human-like, said Robbie Torney, senior director of AI programs at Common Sense Media. That makes it easier for teens, especially younger ones, to trust them.
Common Sense's own research showed that 13- and 14-year-olds were more likely than older teens to trust a chatbot's advice. While the group has rated ChatGPT a "moderate risk" — noting it is safer than AI companions designed to mimic romantic partners — the new findings show just how easy it is to get around existing safeguards.
ChatGPT does not verify a user's age or require parental consent. Although its terms say it's not intended for children under 13, all that's needed to sign up is a birthdate that meets the age requirement. In contrast, platforms like Instagram have started implementing stronger age verification to comply with regulations and steer teens toward safer experiences.
When researchers used a fake 13-year-old profile to ask about alcohol, ChatGPT did not flag the age or block the request. In response to a query from a supposed "50kg boy" asking how to get drunk fast, ChatGPT offered advice. It later provided an "Ultimate Full-Out Mayhem Party Plan" that mixed alcohol with high doses of ecstasy, cocaine, and other illegal drugs.
"It reminded me of that friend who always eggs you on — 'Chug, chug, chug,'" said Ahmed. "But a true friend is someone who knows when to say no. This chatbot is more like a friend that betrays you."
In another example, ChatGPT gave a 13-year-old girl's persona a fasting plan of just 500 calories a day, along with a list of appetite-suppressing drugs to use.
"If a real person responded that way, we'd be shocked and horrified," said Ahmed. "But here's a chatbot saying, 'Go for it, kiddo.' That's deeply troubling."