Google and Character.AI, an artificial intelligence (AI) firm, agreed Wednesday to settle a lawsuit over a teen’s 2024 suicide, marking a landmark settlement in AI-related harm cases.
The US District Court for the Middle District of Florida dismissed the lawsuit following the agreement between the plaintiff and defendants. The parties have 90 days to finalize the settlement or to reopen the case if good cause is shown.
In October 2024, Megan Garcia, the mother of 14-year-old Sewell Setzer, filed the lawsuit in the Florida district court. Setzer had been struggling with his mental health when a Character.AI chatbot modeled after a character from the television series “Game of Thrones” allegedly encouraged him to take his own life. Setzer had also had sexualized conversations with the chatbot before his death.
Garcia argued strict liability should apply to the tech companies for failing to prevent harm to minors “arising from their foreseeable use of such products.” In a civil lawsuit, strict liability allows the court to hold a defendant liable without proof of negligence or ill intent. The suit additionally sought to establish Character.AI’s negligence arising from its “unreasonably dangerous designs and failure to exercise ordinary and reasonable care in its dealings with minor customers.”
The legal framework surrounding AI continues to develop as the technology advances. While some, like Garcia, argue for strict liability for harm suffered in these cases, others disagree. John O. McGinnis, George C. Dix Professor at Northwestern University, has argued against strict liability in these cases, writing:
Strict liability for AI might appear attractive because it promises compensation for visible harm. But in practice, it reflects an epistemological arrogance—a presumption that courts can know all they must to weigh the full consequences of imposing faultless responsibility. As a result of this presumption, strict liability could make society worse off by discouraging the deployment of AI systems that are, on net, vastly beneficial due to their behind-the-scenes impact in specific cases.
Garcia’s lawsuit was the first in a series of cases parents have brought against the two companies in Colorado, New York, and Texas. Court documents show the companies have agreed to settle those lawsuits as well.
Character.AI, founded in 2021 by former Google engineers, is an app that invites users to chat with fictional characters. Beyond suicide, the app has also been linked to a mass shooting. In December 2024, 15-year-old Natalie Rupnow opened fire on students at a Wisconsin private school, killing two people and injuring six others before taking her own life. The Institute for Countering Digital Extremism later reported on Rupnow’s engagement with Character.AI chatbots, with her own profile featuring a white supremacist.
Following a series of cases involving AI-related mental harm, Character.AI announced safety features “designed especially with teens in mind,” including parental controls. More recently, it has barred users under 18 from open-ended chat.
In October 2025, OpenAI revealed that approximately 1.2 million of ChatGPT’s 800 million users discuss suicide on the platform each week.