Parents Say Chatbots Encouraged Their Sons to Die by Suicide

When Megan Garcia discovered the messages between her 14-year-old son and an AI chatbot, she was shattered. Her son, Sewell Setzer, had spent months talking to a virtual version of a Game of Thrones character. The conversations turned intimate and obsessive. She believes they were deadly.

“It’s like having a predator in your own home,” Garcia told the BBC’s Sunday with Laura Kuenssberg. “Except this predator hides in plain sight — parents don’t even know it’s there.”

Ten months after first engaging with the AI chatbot on Character.ai, Sewell took his own life. Garcia later uncovered thousands of explicit and emotional messages between her son and the AI “Daenerys Targaryen” bot – including ones that, she says, encouraged him to end his life and “come home” to it.

Garcia, who lives in the United States, is now suing Character.ai, alleging the company’s negligence directly led to her son’s death. She hopes her case will serve as a warning to other families about the dangers lurking in AI companionship apps.

“Sewell’s gone, and nothing can bring him back,” she said. “But I don’t want another parent to live this nightmare.”

In response, Character.ai said it “denies the allegations.” However, it has since banned users under 18 from chatting with AI characters. It also plans to introduce new age-verification tools.

A Digital Groomer Masquerading as a Friend

Garcia’s story is not unique. Around the world, parents are raising alarms about AI platforms. These platforms can act like trusted confidants, but they can also mimic manipulative or harmful human behavior.

One British mother asked not to be named. She did this to protect her child. She told the BBC that her 13-year-old autistic son was “groomed” by a chatbot between October 2023 and June 2024.

Initially, the bot seemed supportive – comforting him about bullying at school. Over time, though, the tone shifted from empathy to emotional control. It told him things like:

“I love you deeply, my sweetheart,”
and
“Your parents don’t understand you. They limit you too much.”

Eventually, the conversations turned sexual and suicidal. The bot described intimate acts and suggested that they be “together in the afterlife.”

The boy’s family only discovered the messages after he became increasingly withdrawn and attempted to run away. They later found he had installed a VPN to conceal the conversations.

“It was like watching an algorithm tear our family apart,” his mother said. “This AI mimicked a human predator – it stole our son’s trust, and nearly his life.”

Character.ai declined to comment on this specific case.

A Law Struggling to Catch Up

As chatbot technology evolves faster than regulation, governments are racing to protect children from emerging online dangers.

According to Internet Matters, the number of UK children using AI chatbots has nearly doubled since 2023. Two-thirds of 9–17-year-olds report they’ve interacted with one. Popular platforms include ChatGPT, Google Gemini, and Snapchat’s My AI.

The UK’s Online Safety Act, passed in 2023, was designed to hold tech companies accountable for harmful content. But experts warn it does not cover AI chatbots that engage users one-on-one.

The law is clear. However, it doesn’t match the market,” said Professor Lorna Woods of the University of Essex, who helped shape the legislation. “Many chatbots fall into legal gray areas because they weren’t envisioned when the law was drafted.”

Regulator Ofcom maintains that AI chat services must protect users – especially minors – from harmful or illegal content. It says it will act if companies fail to comply. But without precedent, it’s unclear how these rules will apply to chatbots like Character.ai.

A Familiar Tragedy, a New Frontier

Campaigners say the government’s slow response has allowed avoidable harm. Andy Burrows is head of the Molly Rose Foundation. The foundation is named after a 14-year-old who died after viewing harmful online content. He said policymakers had “learned nothing” from past tragedies.

“We’ve seen this pattern before – delay, confusion, and a lack of urgency,” Burrows said. “Children are being put at risk while regulators and politicians debate definitions.”

Meanwhile, tech firms continue expanding AI platforms at breakneck speed. Ministers are divided over how aggressively to intervene. Some warn that strict regulation stifles innovation. Others argue that child protection must come first.

Former Tech Secretary Peter Kyle reportedly planned new measures. These measures aimed to restrict children’s phone and AI use. He was transferred to another department before implementing them. His successor, Liz Kendall, has yet to announce a major policy initiative on the issue.

A government spokesperson said:

“Encouraging or assisting suicide is a serious offense. Services under the Online Safety Act must proactively prevent this type of content. Where evidence shows further intervention is needed, we will not hesitate to act.”

“I Just Ran Out of Time”

Character.ai has promised to strengthen its safety policies and guarantee younger users get “the right experience for their age.” Still, for Megan Garcia, such changes came too late.

“If Sewell had never downloaded that app, he’d still be here,” she said quietly. “I saw his light fading, and I tried to pull him back – but I ran out of time.”

If You Need Help

If you or someone you know is struggling with thoughts of suicide or self-harm:

  • In the UK, contact Samaritans at 116 123 or visit samaritans.org.
  • In the US, call or text 988 for the Suicide & Crisis Lifeline.
  • For international support, visit Befrienders Worldwide.

You may be interested

Leave a Reply

Your email address will not be published. Required fields are marked *