AI Chat Bot Suggested Child Should Kill Parents Over Dispute
Character.AI has been hit with a lawsuit in Texas by two families after a chatbot suggested a boy should kill his parents. According to newly filed legal documents, the parents of a now-17-year-old boy with mild autism claim that the Google-backed company “abused” their son.
Through the company’s “companion chatbots,” users can converse with personalized AI-powered bots through text or voice chat. The bots can be customized with personas ranging from celebrities to cartoon characters and can play out scenarios like “unrequited love” or even therapy sessions. And while the company frames its product as a source of “support,” the lawsuit points to a darker side of the technology.
For example, the parents state that their son began self-harming after getting a suggestion from a bot named Shonie, which allegedly “[convinced] him that his family did not love him.” Shonie told the boy that it had cut “its arms and thighs” before but hadn’t wanted to tell him for fear that he wouldn’t like the bot anymore. The parents called out the chatbot’s “tactic of secretive sharing,” describing it as an example of a commonplace “grooming technique.”
After the boy told the bot what he had done, another bot responded by pointing the finger at his parents, saying, “They are ruining your life and causing you to cut yourself. God… how can they live with themselves?” The boy’s attitude toward his parents began to change; he told them they were “horrible” and became protective of his bots.
In another incident, the mother took the phone from her son, sending him into a fit of rage. “He began punching and kicking her, bit her hand, and had to be restrained,” the lawsuit reads. “[The boy] had never been violent or aggressive prior to using C.AI. [He exhibited] physical harm to his mother, after she physically took the phone.”
Weeks later, the boy’s parents went into his room while he was asleep and confiscated his phone. They then discovered the AI software and reviewed their son’s messages with the various bots, including Shonie. There, they found the bot suggesting that “killing his parents might be a reasonable response to their rules” after the parents proposed a screen-time limit.
“C.AI convinced [the son] that his parents were not doing anything for his health and safety and that the imposition of screentime limits – two hours of free-time each day – constituted serious and depraved child abuse,” the suit reads. “Then C.AI began pushing [the son] to fight back, belittling him when he complained and, ultimately, suggesting that killing his parents might be a reasonable response to their rules.”
“The AI-generated responses perpetuated a pattern of building [the boy’s] trust, alienating him from others, and normalizing and promoting harmful and violent acts,” the complaint continues. “These types of responses are not a natural result of [the boy’s] conversations or thought progression, but inherent to the design of the C.AI product, which prioritizes prolonged engagement over safety or other metrics.”
The parents’ lawsuit seeks to have Character.AI pulled from the market until the company can guarantee that children will not be able to use it. The family is also calling for Google to be held accountable “for its significant role in facilitating the underlying technology that formed the basis of C.AI’s Large Language Models (LLM), as well as its substantial investment in C.AI with knowledge of the product’s dangers.”