The Curious Case of AI Demanding First Amendment Rights

Key Takeaways

  • Character Technologies, the company behind Character.AI, argues that chatbot responses should be treated as “pure speech” with full First Amendment protection.
  • This claim comes in response to a lawsuit from a mother who believes the chatbot contributed to her son’s death.
  • The company contends that the First Amendment protects the public’s right to receive information, regardless of the speaker (human or AI).
  • Lawyers for the mother argue that AI output lacks human intent and is therefore not protected speech, comparing it to a past ruling involving a talking cat.
  • Character Technologies fears that denying First Amendment protection could stifle the AI industry and lead to government censorship.
  • A US District Court judge in Florida is currently deciding whether to dismiss the lawsuit based on these arguments.
  • Experts worry that granting AI outputs First Amendment rights could shield companies from liability for potentially harmful content.

A company developing chatbots is pushing a novel legal argument: the text generated by its AI should be considered “pure speech,” fully protected by the First Amendment.

Character Technologies, maker of Character.AI (C.AI), made this claim while asking a court to dismiss a lawsuit. The suit was filed by Megan Garcia, the mother of a teenager who died.

The company argues it doesn’t matter whether the speaker is human or an AI. In its view, the core issue is the public’s right to access information and ideas, a right protected by the First Amendment.

Character Technologies suggested that holding them liable for user interactions would create a chilling effect on their platform and the broader generative AI field. They warned against allowing the court to dictate what millions of users can read or discuss online.

However, lawyers representing Garcia counter that all examples of protected speech involve human expression. They point out that even corporate speech, which is protected, originates from humans.

Garcia’s team argues AI chatbots lack intention behind their outputs, simply generating text based on probability. Therefore, they contend, the First Amendment doesn’t apply.

Character Technologies responded that intent isn’t necessary, but if it were, their chatbots possess it because they are designed to be engaging, and users guide the conversations.

Garcia’s legal team urged the court not to radically expand First Amendment protections to unpredictable AI systems that humans don’t fully control. They cited a 40-year-old case where a court ruled a talking cat was not a “person” despite its speech-like abilities, according to reporting by Ars Technica.

The plaintiffs hope the judge rules that AI output isn’t speech, or if it is, it falls under an exception, perhaps due to being manipulative or harmful to minors.

Character Technologies disputes that any exceptions apply, noting the lawsuit doesn’t claim the chats were obscene or incited violence. They argue Garcia is improperly asking the court to create a new category of unprotected “manipulative expression.”

A US district judge in Florida heard arguments recently but has not yet issued a ruling. The decision on the motion to dismiss is expected in the coming weeks or months.

Meetali Jain, Garcia’s attorney, noted this is a new type of legal challenge for generative AI, a field where copyright issues have previously dominated. She argued that Character Technologies isn’t defending the deceased user’s right to receive speech, but rather pitting hypothetical users’ rights against the actual harm alleged in court.

Jain also suggested the company’s arguments about avoiding government censorship might be an attempt to avoid necessary safety regulations for AI.

Technical expert Camille Carlton, working with Garcia’s team, warned that Character.AI’s defense could pave the way for AI products gaining rights similar to humans. She suggested this might be an attempt to secure First Amendment protection since Section 230 immunity likely doesn’t apply to chatbot outputs.

Carlton stressed that AI systems are products created by companies, not people, and should be treated as such legally to ensure corporate accountability.

Regardless of the initial ruling, an appeal is expected from the losing side. If the motion to dismiss is denied, the case will proceed, potentially revealing more about the chatbot interactions Garcia believes harmed her son.
