In copyright suit brought by music publishers against AI developer Anthropic, district court denies Anthropic’s motion to dismiss amended claims of secondary copyright infringement and removal of copyright management information, finding publishers plausibly alleged that Anthropic knowingly allowed its AI large language model Claude to output copyrighted song lyrics without authorization, benefited financially from those outputs, and selected training algorithms for Claude that removed copyright management information.
A group of eight music publishers sued Anthropic PBC, a generative AI company, for its unauthorized use of the publishers’ copyrighted song lyrics to train its AI large language model known as Claude. The music publishers alleged that this training allowed Claude’s users to obtain the copyrighted lyrics without authorization. Plaintiffs asserted claims for direct and secondary copyright infringement, as well as for removal of copyright management information (CMI) in violation of the Digital Millennium Copyright Act. The court previously dismissed the music publishers’ secondary infringement and CMI claims but allowed the publishers to file amended claims. After the publishers filed an amended complaint, Anthropic again moved to dismiss the claims for contributory and vicarious copyright infringement and for removal of CMI.
Regarding the music publishers’ claim for contributory infringement, Anthropic argued that plaintiffs failed to allege that Anthropic knew of, or was willfully blind to, any specific instances in which plaintiffs’ copyrighted lyrics were provided to Claude users. The district court found that the publishers met their pleading burden based on allegations concerning the “guardrails” Anthropic implemented to try to curb infringement. Those guardrails allowed Anthropic to analyze user prompts and Claude’s outputs, alerting Anthropic when Claude output copyrighted materials, including copyrighted lyrics. The district court concluded that these allegations plausibly supported the inference that Anthropic had actual knowledge of specific instances in which Claude users infringed the music publishers’ copyrighted works by prompting Claude for song lyrics.
As to the vicarious liability claim, Anthropic argued that the music publishers failed to allege that it profited from Claude users’ direct infringement while declining to stop that infringement. The district court found the publishers satisfied their pleading burden by alleging that Anthropic “is paid every time … end users submit[] a request for Publishers’ song lyrics, and it is paid again every time its Claude API generates output copying and relying on those lyrics.” These allegations, the court explained, supported a plausible inference that Anthropic benefited from the infringement because users would choose Claude in order to obtain the copyrighted materials.
Finally, regarding the CMI removal claim, Anthropic argued that the music publishers failed to sufficiently allege the required scienter under 17 U.S.C. §§ 1202(b)(1) and (b)(3)—specifically, that Anthropic knew removing the CMI would help conceal infringement. The district court pointed to the publishers’ allegations that Anthropic deliberately selected a particular “Newspaper” algorithm—which removed more CMI from training datasets, compared with other available methods—in order to prevent Claude from outputting CMI when prompted for copyrighted material by end users. The court found these allegations sufficient to support the scienter requirement and concluded that the publishers met their pleading burden on the CMI claim.
Summary prepared by Tal Dickstein (Partner) and Sarah Levitan Perry (Senior Counsel)