Artists Lose First-Round Copyright Infringement Lawsuit Against AI Art Generator
In a groundbreaking lawsuit, artists who sued an AI art generator faced a setback as a federal judge dismissed most of their claims.
The plaintiffs alleged that the companies behind these generators used billions of images downloaded from the internet without authorization to train their AI systems, and provided no compensation.
U.S. District Judge William Orrick ruled on Monday that the copyright infringement claims against Midjourney and DeviantArt could not proceed as pleaded, citing “deficiencies in many aspects” of the allegations.
The issues include whether the AI systems the companies operate actually contain copies of the copyrighted images used to create the allegedly infringing works, and whether the artists can prove infringement when the AI tools did not produce identical material.
Claims of infringement, right of publicity, unfair competition, and breach of contract against both companies were dismissed, but the plaintiffs may refile them.

Notably, a direct infringement claim against Stability AI was allowed to proceed because the company allegedly used copyrighted images without permission to create Stable Diffusion. Stability denies storing or incorporating the images in its AI system, asserting that training the model instead developed parameters, such as lines, colors, tones, and other attributes associated with subjects and concepts, that collectively define how things look. This disputed question may prove crucial to the case’s outcome.
The lawsuit centers on Stability AI’s Stable Diffusion technology, which has been incorporated into the company’s AI image generator, DreamStudio. To prevail, the artists must demonstrate that their works were used to train the AI systems. The complaint alleges that DeviantArt’s DreamUp and Midjourney’s generator are built on Stable Diffusion. One major hurdle for the artists is that the training dataset is largely a black box.
In dismissing the infringement claims, Orrick wrote, “The plaintiffs’ theory is unclear about whether copies of training images stored in Stable Diffusion were used by DeviantArt and Midjourney.” He pointed out that the defendants believe it’s implausible to “compress billions of images into a working program” like Stable Diffusion. “The plaintiffs must amend to clarify their theory about compressed copies of training images and state facts supporting how Stable Diffusion (at least partially open source) operates in relation to training images.”
Orrick questioned whether, if the AI systems “only contain algorithms and instructions that can be used to create images containing elements of a few copyrighted works,” Midjourney and DeviantArt can be held directly liable for using Stable Diffusion in their applications and websites.
The judge emphasized that the plaintiffs didn’t allege these companies played an active role in the alleged infringement, so they need to clarify their theory against Midjourney – whether it’s based on Midjourney’s use of Stable Diffusion or Midjourney’s independent use of Training Images to train Midjourney products, or a combination of both.
Under the order, the artists may also need to show that the allegedly infringing works the AI tools produce are substantially similar to their copyrighted materials. That could be a significant challenge, as they conceded that “no image in the Stable Diffusion outputs based on specific text prompts may be substantially similar to any specific image in the training data.” The ruling states, “Derivative copyright claims cannot stand without allegations of ‘substantial similarity’ in the absence of ‘direct copying.’”
While the defendants presented “strong reasons” to dismiss these claims without leave to amend, Orrick noted the artists’ argument that the AI tools can produce material similar enough to their works to be mistaken for counterfeits.
Claims of vicarious infringement, removal of copyright management information under the Digital Millennium Copyright Act (DMCA), right of publicity, breach of contract, and unfair competition were also dismissed.
“Plaintiffs are allowed to amend to clarify their theory and provide reasonable facts about ‘compressed copies’ in Stable Diffusion and how these copies appear in or are provided to third-party DreamStudio, DreamUp, and Midjourney products, which must be equally clear and plausible to hold Stability liable for third-party use of its DreamStudio product,” Orrick wrote.
Concerning the right of publicity claim, which alleged that the defendants capitalized on the plaintiffs’ names by letting users request artwork in their styles, the judge found there was not enough information to support the argument that the companies had used the artists’ identities to advertise their products.
Of the three artists who filed the lawsuit, two dropped their infringement claims because they had not registered their works with the Copyright Office before suing. The copyright claims are now limited to the works of artist Sarah Andersen, who did register hers. Andersen’s basis for the claim was a search for her name on haveibeentrained.com, which indicated that her works had been used to train Stable Diffusion. The site lets artists discover whether their works have been used in AI model training and offers an opt-out to prevent further unauthorized use.
While the defendants objected that the haveibeentrained.com search results Andersen cited were insufficient because the results page displayed hundreds of works without artist attributions, they can test Andersen’s claims during discovery.
Stability, DeviantArt, and Midjourney have not responded to requests for comments.
On Monday, President Joe Biden issued an executive order that outlines some safeguards for artificial intelligence. Although the executive order primarily focuses on reporting requirements for the national security risks posed by certain companies’ systems, it suggests adding watermarks to photos, videos, and audio created with AI tools to counter deepfakes. Biden emphasized that the technology has the potential to “sully reputations, spread false information, and commit fraud.”
The Human Artistry Campaign stated in a release, “Incorporating copyright and intellectual property protection into the AI executive order reflects the importance of the creative community and knowledge-driven industries to the economic and cultural leadership of the United States.”
In a July meeting, leading AI companies voluntarily agreed to establish guardrails to manage the risks posed by emerging technologies. The White House hopes to promote self-regulation in the AI industry without legislative restrictions on new tool development. Like Biden’s executive order, these voluntary commitments include no reporting system or timetable that legally binds companies to their pledges.