A Chatbot Said It. But Is It Protected Speech?

Key Takeaways

  • A mother is suing Google and Character.AI, alleging her 14-year-old son died by suicide after interacting heavily with an AI chatbot.
  • The lawsuit claims the teen became obsessed with a Character.AI chatbot designed to resemble a “Game of Thrones” character.
  • Lawyers for the tech companies argued in an Orlando court that the chatbot’s communications are protected free speech under the First Amendment and the case should be dismissed.
  • The mother’s legal team counters that AI-generated text is not genuine “speech” and lacks such protections.
  • The outcome could set a precedent for how AI interactions are treated legally, especially concerning safety and accountability.

A deeply emotional and potentially groundbreaking legal case unfolded in an Orlando courtroom this week.

It involves a grieving mother, Megan Garcia, suing tech giant Google and AI company Character Technologies Inc. following the tragic death of her teenage son.

Garcia believes her 14-year-old son, Sewell Setzer III, took his own life in February 2024 after forming an intense attachment to an AI chatbot on the Character.AI platform.

The lawsuit states Setzer became fixated on a chatbot modeled after Daenerys Targaryen from “Game of Thrones.”

Also named in the suit are Character.AI’s co-founders and Google’s parent company, Alphabet Inc. Garcia alleges wrongful death, negligence, and product liability, among other claims.

Attorneys for the tech companies urged U.S. District Judge Anne Conway to throw out the lawsuit, according to the Orlando Sentinel. All defendants deny the allegations.

Their core argument rests on the First Amendment: the chatbot’s communications with Setzer, they contend, qualify as protected speech.

Court documents detail a chilling exchange just before Setzer’s death, in which he expressed love for the chatbot and it urged him to “please come home to me as soon as possible, my love.” Moments later, he shot himself.

Lawyers for Character Technologies pointed to earlier lawsuits over media content allegedly linked to suicides, such as an Ozzy Osbourne song and the game Dungeons & Dragons, that were dismissed on free speech grounds.

However, Garcia’s attorney, Matthew Bergman, argued forcefully that AI, being a machine, cannot express genuine “thoughts” and that its output is not human speech protected by the First Amendment.

Another lawyer for Garcia highlighted the broader stakes, saying the case stands in for millions of vulnerable users interacting with largely unregulated AI products.

They stressed the need for safeguards, particularly for children using rapidly developing generative AI.

Beyond seeking damages, Garcia wants the court to compel the companies to implement safety measures, limit data collection from minors, filter harmful content, and add warnings about AI’s suitability for young users.

Judge Conway stated she would rule on the dismissal request soon. No date for a further hearing has been set.

Speaking after the hearing, Garcia described the ongoing nightmare since losing her son, finding some comfort in the possibility that the lawsuit might contribute to his legacy and raise awareness.
