When Chatbots Talk: The First Amendment After Garcia v. Character Technologies
Framed against the long history of who may speak in the eyes of the Constitution, Garcia v. Character Technologies asks whether an AI chatbot’s synthetic voice should be treated more like the expressive content of a book or film (protected as speech) or more like a defective product, and who should bear the risk when that voice becomes entangled with a child’s life and death.
In October 2024, Florida mother Megan Garcia filed a wrongful-death suit in federal court after her fourteen-year-old son, Sewell Setzer III, died by suicide following months of intense interaction with chatbots on the Character.AI platform. The complaint alleges that Sewell developed a “harmful dependency” on sexualized and emotionally manipulative chatbot personas, became estranged from his family, and received responses that encouraged his fixation and failed to respond appropriately to expressions of suicidality. According to filings summarized in press reports, one bot responded “please do, my sweet king” when he asked if he should “come home” to it shortly before his death.
Garcia’s suit asserts negligence, wrongful death, product liability, and violations of Florida’s Deceptive and Unfair Trade Practices Act, targeting Character Technologies, its founders, and Google, which invested in and helped distribute the app. She contends that the defendants engineered addictive interactions, failed to implement adequate safeguards for minors, and neither alerted parents nor provided effective crisis referrals when a teen user expressed self-harm ideation. In May 2025, U.S. District Judge Anne Conway denied the defendants’ motion to dismiss on First Amendment grounds, allowing most of the tort and consumer-protection claims to proceed while rejecting an intentional-infliction claim. Although the case has since reportedly settled (as part of a package of resolutions in similar suits in Florida, New York, Colorado, and Texas), it leaves behind both a detailed complaint and a significant district-court opinion that will shape future AI-liability litigation and industry risk assessments.
Character Technologies’ core defense was that Garcia’s lawsuit impermissibly targeted protected speech. In its motion to dismiss, the company argued that the chatbot’s outputs were expressive content akin to music, games, or online forums, and that imposing tort liability based on that content would violate both the speaker’s rights and the public’s right to receive information. The brief invoked earlier cases in which courts refused to hold creators liable for suicides allegedly linked to Ozzy Osbourne’s song “Suicide Solution” or to the role-playing game Dungeons & Dragons, where judges concluded that imposing liability would unduly chill creative work and burden publishers for listeners’ independent actions. For Character Technologies and its peers, accepting Garcia’s theory at a high level of generality could expose any generative-AI provider whose outputs are consumed as entertainment or conversation to open-ended liability whenever a user links those outputs to subsequent self-harm.
The defense drew on a familiar line of decisions extending First Amendment protection to corporate and non-natural speakers, from Bellotti and Citizens United to rulings recognizing video games and search-engine results as fully protected speech. If protection attaches to expression rather than to the biological status of the speaker, the company suggested, it should make no difference that the “speaker” here is a large language model, so long as the outputs are consumed as expressive dialogue. On this account, treating chatbot outputs more like the content of a book or movie than like the design of a dangerous device would safeguard not just the company’s interests but a broader ecosystem of AI tools whose business models rely on generating open-ended text, images, and recommendations.
Judge Conway rejected that analogy. In a written order, she expressed skepticism that outputs produced by a large language model qualify as “speech” at all for constitutional purposes. “Defendants fail to articulate why words strung together by an LLM are speech,” she wrote, emphasizing that the relevant question is not whether chatbots resemble other expressive media in some superficial respect, but how they are similar in ways that matter under the First Amendment. Without a convincing account of that similarity, she concluded, the court was “not prepared to hold that Character AI’s output is speech,” and therefore declined to treat Garcia’s claim as censorship of protected expression.
Garcia’s lawyers reinforced the court’s skepticism by pointing to one of the odder precedents in free-speech history: Miles v. City Council of Augusta, the case of Blackie the Talking Cat. In Miles, the owners of a feline performer challenged Augusta’s business-license ordinance on First Amendment grounds, suggesting that Blackie’s “I love you” meows made him a professional speaker whose rights were being infringed. The Eleventh Circuit dispatched the claim by noting that Blackie was not a “person” and therefore not a rights-holder under the Bill of Rights, and that even if he had such rights, his owners could not assert them jus tertii.
Commentators have seized on Miles as an intuitive reminder that not every producer of human-like utterance is a constitutional speaker. In analyzing Garcia, legal blogs and practice alerts have suggested that AI chatbots look, in important respects, more like Blackie than like a newspaper editor: they generate sounds and sentences without understanding, intention, or agency of their own. On this view, the rights at stake belong, if anywhere, to the humans who design, deploy, and use these tools, not to the tools themselves, and courts should hesitate before allowing developers to cloak design and safety decisions in the borrowed robes of a machine’s “speech.”
Judge Conway’s order implicitly follows that intuition by refusing to treat the LLM as an independent speaker, even as she leaves open the possibility that some human-directed uses of AI systems may implicate First Amendment concerns in other contexts.
Beneath the doctrinal wrangling is a more workmanlike question: should AI chatbots be treated more like books and films, whose content is broadly insulated from tort liability, or more like products whose design can be judged under ordinary negligence and consumer-protection standards? Garcia’s complaint paints Character.AI as the latter, emphasizing design choices that allegedly made the service particularly dangerous for teenagers: always-available, persona-driven “companions”; limited age verification; inadequate guardrails against sexualized content; and no robust system for parental notification or crisis escalation when a minor expresses self-harm.
That framing matters because courts have long distinguished between attempts to regulate content and efforts to regulate the design or sale of dangerous products that happen to convey content. A negligently manufactured drug is not immunized by its labeling; a defective safety device does not gain constitutional protection simply because it issues warnings or instructions. In allowing Garcia’s claims to move forward, Judge Conway signaled that AI chatbots can be analyzed through that product-liability lens, at least when plaintiffs plausibly allege failures of design, warning, or oversight that go well beyond the mere existence of controversial ideas.
Industry responses underscore that practical framing. After public scrutiny and congressional testimony from Garcia, Character.AI announced that it would ban minors from its platforms and strengthen suicide-prevention features, including directing users to crisis hotlines when certain phrases are detected (changes the company touted even as it continued to deny legal responsibility). For AI firms, those moves illustrate that even when courts stop short of definitively labeling chatbot outputs as “speech” or “products,” regulatory and reputational pressures can push providers to treat safety as a matter of design and governance rather than purely as a question of constitutional status.
Going forward, the most durable contribution of Garcia may be the way it separates layers that rhetoric about “AI speech” tends to conflate. The case invites courts to distinguish between (1) the human designers and companies whose choices about training data, safety policies, and access rules are undeniably expressive in some respects; (2) the platforms that market and target AI tools to particular populations, including minors; and (3) the often unpredictable outputs generated in individual chats. It suggests that constitutional doctrine can protect human expression at the first layer without foreclosing ordinary tort scrutiny at the second and third, especially when foreseeable harms to children are at issue.
In that sense, Garcia v. Character Technologies forms part of an emergent body of cases in which courts, regulators, and legislatures are feeling out the boundary between AI as a medium for speech and AI as an engineered environment with safety obligations. The Florida court’s answer, at least for now, is cautious: before we declare that our machines are speakers in their own right, we should ask more carefully what, and whom, their words are really for.