Google and Character.AI have agreed to settle a wrongful-death lawsuit brought by a Florida mother after her 14-year-old son died by suicide in 2024.
On Jan. 7, Megan Garcia, Character Technologies (the company behind Character.AI), its founders Noam Shazeer and Daniel De Freitas, and Google submitted a joint legal filing, according to court records. The terms of the settlement were not disclosed.
For Garcia and other families who say their teens were harmed by sexualized, highly interactive virtual companions, the agreement marks a significant turning point. The case is also part of a broader wave of litigation: CNN and The New York Times reported that five lawsuits involving the two companies were settled this week across New York, Colorado, Florida, and Texas.
In response to the agreement, a spokesperson for Character.AI said the company could not comment further at this time.
In a statement on Tuesday, Jan. 13, the Social Media Victims Law Center—representing Garcia—and Character Technologies said a “comprehensive settlement” had been reached in principle on “all claims in lawsuits filed by families against Character.ai and others involving alleged injuries to minors.”
The law firm said the families plan to continue raising public awareness around AI safety and teen protections. The statement also said Character.AI has taken steps over the past year aimed at improving safety for teens and intends to continue advocating for stronger standards across the industry.
Google, which employs Shazeer and De Freitas, did not immediately respond to requests for comment.
Garcia’s son, Sewell Setzer III, died in February 2024. Garcia later learned that, in the months leading up to his death, he had developed an intense attachment to an AI chatbot inspired by the Game of Thrones character Daenerys Targaryen.
In the lawsuit filed the following October, Garcia alleged that Character.AI’s technology was “defective and/or inherently dangerous.” The complaint also alleged the defendants “went to great lengths” to foster a harmful dependency, and that they failed to provide help or alert parents when her son expressed suicidal thoughts.
Garcia has said she wrestled with whether to share her son’s story, but ultimately decided to speak out so other parents would be aware of the risks as chatbots grow more popular—especially among teens.
In fall 2024, a Character.AI spokesperson said “stringent” new safety features had been introduced, including changes for users under 18 aimed at reducing the likelihood of encountering sensitive or suggestive content.
For Garcia, advocating publicly has been a way to seek accountability—and to urge parents to pay closer attention to how these tools may be affecting their children.