Artists suing Stability AI, DeviantArt, and Midjourney hit a roadblock this week in their quest to prove allegations that AI image generators illegally use copyrighted works to imitate unique artistic styles without compensation or consent.
On Monday, U.S. District Judge William H. Orrick dismissed many of the artists’ claims after finding that the proposed class action complaint was “defective in many respects.” Perhaps most notably, two of the three plaintiffs—independent artist Kelly McKernan and concept artist/professional illustrator Karla Ortiz—apparently never registered any of their disputed works with the Copyright Office. Orrick dismissed their copyright claims with prejudice, removing the two artists from the case.
But even though McKernan and Ortiz can no longer pursue their claims, the case is far from over. The lead plaintiff, cartoonist and illustrator Sarah Andersen, will have the next 30 days to amend her complaint and keep the copyright dispute alive.
Attorneys representing the suing artists, Matthew Butterick and Joseph Saveri, confirmed in a statement to Ars that the artists will file an amended complaint next month, noting that discovery in the case is ongoing in the meantime. They also told Ars that nothing in Monday’s order was surprising because it was “consistent with the views” Orrick expressed during an earlier hearing.
“Judge Orrick upheld the plaintiffs’ primary claim alleging direct copyright infringement by Stability AI, so this claim is now headed for trial,” the lawyers said in a statement. “As is customary in a complex case, Judge Orrick granted plaintiffs leave to amend most of their other claims. We are confident that we can address the court’s concerns.”
Stability AI, DeviantArt, and Midjourney did not immediately respond to Ars’ request for comment.
The judge “largely” granted the motion to dismiss
The suing artists allege that the companies behind the popular AI image generators are guilty of direct and indirect copyright infringement, as well as violations of the Digital Millennium Copyright Act, California’s unfair competition laws, and their rights of publicity. They argued that because text prompts can generate images “in the style of” their works, any generated image should be considered a “derivative work”—based on the artists’ copyrighted works—that could potentially be wrongly “interpreted as forgeries.”
Orrick found the complaint flawed, agreeing with the defendants that the artists seem somewhat confused about how the image generators actually work. Their complaint alleges that Stable Diffusion contains “compressed copies” of images, which the defendants say “contradicts” how the plaintiffs describe the diffusion process: as an “alternative way to store a copy of those images” that uses “statistical and mathematical methods to store these images in an even more efficient and compressed manner.” In his order, Orrick asked for clarity on this point, writing:
Plaintiffs will be required to amend to clarify their theory regarding the compressed copies of the Training Images and to plead facts in support of how Stable Diffusion—a program that is open source, at least in part—works with respect to the Training Images. If plaintiffs allege that Stable Diffusion contains “compressed copies” of the Training Images, they must define “compressed copies” and explain plausible supporting facts. And if plaintiffs’ compressed copies theory is based on the claim that Stable Diffusion contains mathematical or statistical methods that can be carried out through algorithms or instructions to reconstruct all or part of the Training Images to create the new output images, they need to clarify this and provide plausible supporting facts.
Andersen’s primary claim for direct copyright infringement proceeds against Stability AI, as the maker of the open-source image synthesis model Stable Diffusion, but not against DeviantArt and Midjourney, which created tools using that model but had nothing to do with training it. (DeviantArt and Midjourney remain on the hook for other claims, however, which are subject to change.)
Stability AI tried and failed to argue that Andersen could not proceed with her copyright infringement claims unless she specifically identified each of her registered works that she claims were used as training images for Stable Diffusion. Orrick wrote that Andersen had sufficiently pleaded her case at this point and suggested that if Andersen could plausibly plead that the defendants’ AI products enable users to create new works by expressly referencing Andersen’s works by name, “inferences about how and how much of Andersen’s protected content remains in Stable Diffusion or is used by the AI end-products may be stronger.”
But Orrick seemed less convinced by the artists’ “indirect infringement theory,” under which the artists argued that the defendants’ products could be used by “fraudsters” to produce “fakes.” He said that because the artists acknowledged that none of the output images is likely to be a close match for any particular image in the training data, Andersen’s complaint was devoid of any claim that any of her specific works (or those of any other member of the class) had been used to create “fake works.” She will have to fix that in order to sustain the claim.
To succeed in her main copyright claim against Stability AI, Andersen will likely have to prove that a specific copyrighted work was copied without permission, rather than pointing to haveibeentrained.com, a site that lets artists search for their works in AI training data. Those search results provided a sufficient basis to allow her copyright claims to proceed, Orrick wrote, ultimately denying Stability AI’s motion to dismiss—but they would be insufficient on their own to prove direct copyright infringement, since the site’s search result pages “show many hundreds of works” by artists other than Andersen, not just her images.
To support the DMCA claim, Andersen will also need to be more specific, Orrick wrote, identifying exactly what copyright management information was removed or altered by each defendant.
All other claims will need to be amended, Orrick wrote, “to address the many issues identified.”
But the artists succeeded on one major front: Midjourney’s motion to dismiss the class-action allegations was denied. Orrick wrote that it was “premature” to rule out the “possibility” of class certification in this case. With Andersen as the sole remaining artist in the proposed class, the fate of any artist hoping to keep copyrighted works from being fed into AI image generators may depend on how she adjusts her complaint.
It’s the broad scope of the class action that Orrick seems to be scrutinizing most at this point. He wrote that one of the artists’ main claims—that every image generated by the Stable Diffusion model is a derivative work—is “simply not plausible,” because there is no way to show that every image in the training set is a copyright-protected work. It probably doesn’t help the artists that this point appears to be borne out by the two named plaintiffs dropped from the suit this week, whose unregistered works were also allegedly included in Stable Diffusion’s training data.
“Plaintiffs are granted leave to amend to provide clarity regarding their theories of how each defendant separately violated their copyrights, removed or altered their copyright management information, or violated their rights of publicity, and plausible facts in support,” Orrick wrote.