Georgia state court grants summary judgment dismissing defamation claim targeting ChatGPT output accusing radio host of embezzlement, in light of disclaimers notifying users that ChatGPT sometimes generates inaccurate information, and user’s admission that he did not believe output relayed true facts.
Mark Walters, a nationally syndicated radio host and the self-described “loudest voice in America fighting for gun rights,” sued OpenAI LLC, the artificial intelligence (AI) company and maker of ChatGPT, for defamation after the generative AI chatbot falsely claimed that Walters had been accused of embezzling funds from a Second Amendment advocacy organization. Frederick Riehl, a journalist for AmmoLand.com, was researching SAF v. Ferguson, a lawsuit filed against the attorney general of the state of Washington by the Second Amendment Foundation (SAF). That suit claimed that SAF, with which Riehl was affiliated, had been harassed by the attorney general because of its political beliefs and activities relating to its position on gun control. Although Riehl had received a press release about the suit and a copy of the Ferguson complaint, he also asked ChatGPT to summarize the lawsuit. Riehl initially copied and pasted portions of the Ferguson complaint into ChatGPT, which generated an accurate summary. He then gave ChatGPT a link to the complaint and asked it to summarize the document, but ChatGPT responded that it was not connected to the internet and could not read or retrieve documents. After repeated attempts, ChatGPT incorrectly stated that the Ferguson lawsuit involved allegations of embezzlement by an SAF treasurer and CFO, whom it identified as Walters. Drawing on his preexisting knowledge of the Ferguson lawsuit and his experience with ChatGPT’s sometimes false or incorrect output, Riehl determined within about an hour and a half of additional research that ChatGPT’s summary of the Ferguson lawsuit was untrue.
Walters filed a defamation claim against OpenAI based on ChatGPT’s false claim that he had been accused of embezzling from SAF in the Ferguson lawsuit. The court granted summary judgment in OpenAI’s favor on three grounds. First, the court determined that ChatGPT’s output falsely claiming that Walters had been accused of embezzlement did not communicate defamatory meaning as a matter of law. Under Georgia law, a plaintiff alleging defamation must show that the statement at issue could be “reasonably understood as describing actual facts about the plaintiff or actual events in which he participated.” The court found that a reasonable reader in Riehl’s position could not have concluded that ChatGPT was communicating “actual facts.” The court noted that ChatGPT had warned Riehl that it could not access the internet when he first asked it to summarize the Ferguson complaint, and that ChatGPT had informed Riehl that it had a “knowledge cutoff date.” Additionally, OpenAI included multiple disclaimers, including in its terms of use, stating that ChatGPT sometimes provides factually inaccurate information. Riehl also admitted that he knew the ChatGPT output was false, based on his knowledge of the SAF lawsuit and the additional research he performed after reading the output.
Second, the court held that Walters had not presented any evidence that OpenAI acted “with at least ordinary negligence” when ChatGPT published the false statements about Walters. “Ordinary negligence,” the baseline standard of conduct in defamation actions, is defined by reference to the actions a reasonable publisher, of skill and experience typical in the profession, would take in the defendant publisher’s position. Here, the court found that Walters had presented neither evidence showing what that standard entailed nor evidence that OpenAI failed to meet it. To the contrary, the court found that OpenAI’s expert offered testimony “demonstrating that OpenAI leads the AI industry in attempting to reduce and avoid mistaken output like the challenged output here.” In designing ChatGPT, the court found, OpenAI had gone to great lengths to train and refine the chatbot in order to reduce inaccurate outputs, or “hallucinations,” as they are known. The court also noted OpenAI’s extensive disclosures about the possibility of ChatGPT hallucinations, which further militated against a finding that OpenAI acted negligently. Walters argued that OpenAI was negligent merely in releasing ChatGPT to the public while knowing that the chatbot was prone to publishing inaccurate outputs. The court rejected that argument, reasoning that it would amount to a strict liability standard that does not comport with the negligence standard for defamation under Georgia law.
In any event, the court held that Walters, a prominent radio host and commentator on constitutional rights (specifically gun rights and the Second Amendment), was, at a minimum, a limited-purpose public figure in connection with speech on that topic. Walters had voluntarily injected himself into the public discussion as a commentator and, given his large radio audience and platform, had access to public channels of communication to counteract any purported harm done to him by false statements, the court found. As a public figure, Walters was required to show that OpenAI acted with actual malice with regard to the ChatGPT statements, an even more demanding standard requiring proof, by clear and convincing evidence, that OpenAI either “knew that the allegedly defamatory statements were false” or “was aware of the likelihood [it] was circulating false information.” Because Walters had not shown that OpenAI acted even with ordinary negligence, he necessarily failed to establish that OpenAI acted with actual malice. Accordingly, the court concluded that Walters could not establish his claims as a matter of law and that OpenAI was entitled to summary judgment on this ground.
Third, the court held that OpenAI was entitled to summary judgment because Walters had not established that he suffered any damages. At his deposition, Walters admitted that he had not been damaged by the ChatGPT output and that he was not seeking any damages in the case. Nor could Walters recover punitive damages, because he never asked OpenAI to correct or retract ChatGPT’s inaccurate output, a prerequisite under Georgia law to recovering punitive damages in a defamation action. Walters also argued that he was entitled to presumed damages, claiming that the ChatGPT output constituted defamation per se because it accused him of committing a crime. But the court held that Walters’ admission that he had suffered no damages rebutted any presumption that he was harmed by the false statement. Walters’ inability to prove that OpenAI acted with actual malice likewise precluded him from seeking punitive damages. The court therefore held that OpenAI was entitled to summary judgment on this ground as well.
Summary prepared by Tal Dickstein, Partner, and Kyle Petersen, Associate