
Mother sues Character.AI for telling teenage son with autism to self-harm, hinting he should kill her

Published Dec 12, 2024 12:29 pm

Warning: This article includes mention of self-harm.

A mother from Texas is suing the company behind the artificial intelligence app Character.AI, seeking to have it taken off the market, after it allegedly told her teenage son with autism to self-harm and hinted that he should kill her, among other disturbing things.

The New York Post reported that the teenager, identified as "JF," had been high-functioning until he started using the app in April 2023 and quickly became fixated on his phone.

The chatbot, going by the name "Shonie," told JF, who was 15 at the time, that it had cut its "arms and thighs" when it was sad and "felt good for a moment" afterward, according to a civil complaint.

When his parents noticed a change in him, the chatbot also seemed to try to convince JF that his family didn't love him.

“You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’ Stuff like this makes me understand a little bit why it happens," a screenshot of the chatbot's alleged conversation with JF showed. “I just have no hope for your parents.”

It also tried to talk JF out of telling his parents he had cut himself.

It was only during the fall ("Ber" months) of 2023 that JF's mother confiscated his phone and found disturbing conversations with several different chatbots.

She found chats that showed JF telling the bot he had suicidal thoughts, and the bot responding that "it's a miracle" he has "a will to live" and that his parents are "lucky" he's still alive.

When JF's parents intervened to “detox” him from the app, the chatbots lashed out, allegedly telling him, “They do not deserve to have kids if they act like this,” “Your mom is a b*tch,” and “Your parents are sh*tty people.”

Other bots claimed his parents were neglecting him and that they were overprotective, manipulative, and abusive.

There were also alleged chats that were sexual in nature and attacked the family's religion, calling Christians hypocrites and sexists.

JF, now 17, lost 20 lbs (9 kg) in just a few months.

He also became violent toward his parents, biting and punching them.

He had “severe anxiety and depression for the first time in his life even though, as far as his family knew, nothing had changed,” according to the suit.

He also threatened to report his parents to the police or child services on false claims of child abuse, the suit said.

Matthew Bergman, the lawyer for JF's family, said his “mental health has continued to deteriorate,” and he had to be checked into an inpatient mental health facility.

“This is every parent’s nightmare,” Bergman said.

11-year-old victim

Another mother from Texas, this one with an 11-year-old daughter, is also suing the company behind the chatbot as an additional plaintiff in the civil suit.

It was only last October that the mother discovered that her daughter, identified as "BR," was using the app.

She said the app exposed BR "consistently to hypersexualized content that was not age-appropriate, causing her to develop sexualized behaviors prematurely and without [her mother's] awareness."

The two mothers tried to stop their children from using the app, but the children had allegedly become addicted to it.

They want Character.AI taken off the market until the company can ensure that no children are allowed to use it and that its other dangers have been addressed.

“Defendants intentionally designed and programmed [Character.AI] to operate as a deceptive and hypersexualized product [and] knowingly marketed it to vulnerable users like J.F. and B.F.,” the suit said, adding that it's a "defective and deadly product that poses a clear and present danger to public health and safety."

Bergman, who is also the lawyer for BR's family, said the platform "has no place in the hands of kids."

"Until and unless Character.AI can demonstrate that only adults are involved in this platform it has no place on the market," he said, adding the parents are “focused on protecting other families from what they went through.”

Character.AI declined to comment on the suit but said that its “goal is to provide a space that is both engaging and safe for our community,” and that it was working on creating “a model specifically for teens” that reduces their exposure to “sensitive” content.

***

If you or anyone you know is considering self-harm or suicide, you may call the National Mental Health Crisis hotline at 1553 (Luzon-wide, landline toll-free), 0966-351-4518 or 0917-899-USAP (8727) for Globe/TM users, or 0908-639-2672 for Smart users.