Litigation against Generative AI Companies: the US and EU Perspective
- The US Judicial System: the First Significant Precedents
- The Fate of Other Major Cases
- The European Model: Regulation Instead of Litigation
- Getty Images v. Stability AI: Mixed Results in the UK
- Interim Conclusions for AI Developers
In the United States and EU countries, numerous lawsuits are being brought against companies developing generative artificial intelligence (AI) systems. These disputes are gradually narrowing the regulatory “freedom” that previously accompanied the development of AI technologies. As a result, the legal system is seeking to define the rules of the game for an industry that uses others’ creative works as training datasets.
The core of the claims is straightforward. AI does not create content “out of nothing” — it is trained on fragments of works created by real authors: texts, photographs, images and videos. Authors argue that AI developers monetise their creative works without paying them anything in return.
The creative community is not limiting itself to litigation and is also responding with technical solutions. Tools such as Glaze and Nightshade have emerged: Glaze masks an artist’s individual style from scraping models, while Nightshade adds near-imperceptible perturbations that “poison” AI systems trained on the protected images. A poisoned model begins to distort image interpretation, which subsequently affects content generation for similar prompts: a cow in a green field may be interpreted by the model as a wallet on grass, and later, in response to the same prompt, the AI produces something entirely different.
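The underlying idea can be sketched in a few lines of Python. The snippet below is a toy illustration of targeted data poisoning, not the actual Glaze or Nightshade algorithm: the linear “feature extractor” is a stand-in for a real image encoder, and real attacks optimise the perturbation against the encoder actually used by the target model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "feature extractor": a fixed random linear map from pixels to
# features. Real poisoning attacks differentiate through an actual image
# encoder used by the text-to-image model being targeted.
D_PIX, D_FEAT = 3072, 128            # e.g. a flattened 32x32 RGB image
W = rng.normal(size=(D_FEAT, D_PIX)) / np.sqrt(D_PIX)

def features(x: np.ndarray) -> np.ndarray:
    return W @ x

def poison(image: np.ndarray, target_feat: np.ndarray,
           eps: float = 0.03, steps: int = 200, lr: float = 0.01) -> np.ndarray:
    """Nudge `image` so its features move towards `target_feat`, while
    keeping each pixel change within +/- eps (visually near-imperceptible)."""
    delta = np.zeros_like(image)
    for _ in range(steps):
        # Gradient of ||features(image + delta) - target_feat||^2 w.r.t. delta
        grad = 2.0 * W.T @ (features(image + delta) - target_feat)
        delta = np.clip(delta - lr * grad, -eps, eps)   # keep perturbation small
    return np.clip(image + delta, 0.0, 1.0)

# A "cow in a field" image, poisoned towards the features of a "wallet" image,
# so a model trained on it links cow-like pixels to wallet-like features.
cow = rng.uniform(0.0, 1.0, D_PIX)
wallet_feat = features(rng.uniform(0.0, 1.0, D_PIX))

poisoned = poison(cow, wallet_feat)
print("max pixel change:", np.abs(poisoned - cow).max())   # bounded by eps
print("feature distance before:", float(np.linalg.norm(features(cow) - wallet_feat)))
print("feature distance after: ", float(np.linalg.norm(features(poisoned) - wallet_feat)))
```

A model trained on many such images learns a corrupted association between what the image shows and the features it extracts, which is why generations for related prompts begin to drift.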
The US Judicial System: the First Significant Precedents
The first major case to reach a settlement marked a turning point. Bartz v. Anthropic, brought against the developer of the Claude AI system, concluded in September 2025 with a settlement of USD 1.5 billion. Notably, Anthropic had maintained in its filings that Claude was trained exclusively on lawfully acquired literary works.
A key moment in the case was Judge William Alsup’s ruling in June 2025. The court held that the use of lawfully acquired books for AI training may fall under the doctrine of fair use. At the same time, the creation and storage of a “central library” containing pirated content from LibGen and PiLiMi was found to constitute copyright infringement. The dichotomy is fundamental: it is not the training process itself that infringes rights, but the origin of the data.
Given that statutory damages in the United States may reach up to USD 150,000 per work, Anthropic’s potential liability was estimated at hundreds of billions of dollars. The company chose to settle, paying USD 1.5 billion — approximately USD 3,000 for each of roughly 500,000 books. The settlement requires the destruction of unlawfully obtained files but does not grant Anthropic the right to future use of protected content.
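The arithmetic behind these figures is simple (using the roughly 500,000 works in the settlement class; the statutory-ceiling comparison is our illustration, not a figure from the court):

$$
\frac{\text{USD } 1{,}500{,}000{,}000}{\approx 500{,}000\ \text{books}} \approx \text{USD } 3{,}000\ \text{per book},
\qquad
500{,}000 \times \text{USD } 150{,}000 = \text{USD } 75\ \text{billion}.
$$

Even for the settled class alone, the statutory ceiling dwarfs the settlement amount; the “hundreds of billions” estimates reflect the far larger universe of works alleged to have been copied.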
The Fate of Other Major Cases
Getty Images v. Stability AI
The case of Getty Images v. Stability AI illustrates the complexity of international jurisdiction. Getty initially sought compensation for the use of 12 million photographs, a claim theoretically worth up to USD 1.8 trillion at the statutory maximum of USD 150,000 per work. The scope of the claim was later reduced to 11,383 images, with total claims of approximately USD 1.7 billion.
However, the development of the case proved unexpected. In August 2025, Getty voluntarily withdrew its US lawsuit. In the United Kingdom, in November 2025, Judge Joanna Smith dismissed Getty’s principal copyright claims, recognising only minimal trademark infringements in early versions of Stable Diffusion. Despite public statements by both parties claiming “victory”, the key copyright issues were resolved in favour of Stability AI.
Disney and Universal v. Midjourney
In June 2025, Disney and Universal filed a lawsuit against Midjourney, for the first time bringing major Hollywood studios into AI-related litigation. This case differs fundamentally from previous ones, as it focuses not on the training process but on the generated outputs.
The studios demonstrated that Midjourney generates recognisable images of Darth Vader, Shrek, the Minions and other characters, often in response to a single-word prompt.
The 110-page complaint contains detailed visual comparisons and accuses Midjourney of systematic, ongoing and wilful copyright infringement. The studios seek either recovery of actual damages and Midjourney’s profits (approximately USD 300 million in revenue in 2024), or statutory damages of up to USD 150,000 per work. In August 2025, Midjourney filed its defence, arguing that the claimants themselves use generative AI tools (including Midjourney) for commercial purposes.
A Wave of Litigation and Legal Uncertainty
In addition to the cases mentioned above, dozens of other proceedings are actively developing. Authors have filed class actions against Meta (Kadrey, Chabon, Huckabee), OpenAI (multiple cases, including claims by the Authors Guild and The New York Times), and NVIDIA (Nazemian, Dubus). The music industry has intensified its efforts through disputes such as Concord Music v. Anthropic and UMG Recordings v. Suno. Visual artists are pursuing claims against Stability AI, DeviantArt, Midjourney and Runway AI.
Thomson Reuters is litigating against Ross Intelligence; Ziff Davis against OpenAI; and a number of media companies, from Daily News to The Intercept, have brought claims regarding the use of their materials. Warner Bros. has also joined lawsuits against Midjourney.
Creative industries are actively bringing claims, with amounts ranging from tens of millions to billions of dollars. Outcomes vary: some cases are dismissed, others proceed to trial, and some end in settlement.
Legal uncertainty is compounded by the precedent set in Naruto v. Slater — the famous “monkey selfie” case — which established that an author must be a human being in order to obtain copyright protection. Courts apply this logic to AI: works created by AI are not protected by copyright absent sufficient human involvement, and developers do not automatically hold rights to AI-generated content.
The European Model: Regulation Instead of Litigation
While the United States addresses these issues primarily through litigation, Europe has chosen a path of preventive regulation. The European Union’s Artificial Intelligence Act (EU AI Act), which entered into force on 1 August 2024, became the world’s first comprehensive AI regulatory framework. Implementation is phased.
From 2 February 2025, a ban applies to AI systems presenting unacceptable risk (social scoring, manipulation of vulnerable groups, and real-time biometric identification in public spaces). From 2 August 2025, requirements for general-purpose AI models (GPAI), including Claude, GPT and Gemini, apply.
At the same time, the AI Act largely avoids copyright issues, focusing instead on safety, transparency and systemic risks. Copyright matters in the EU, as in the US, continue to be resolved through case law.
GEMA v. OpenAI: the First European Precedent
In November 2025, the Munich Regional Court issued a landmark ruling in GEMA v. OpenAI (case no. 42 O 14139/24). This is the first European court decision to directly recognise copyright infringement in the training of an AI model.
GEMA, the German collecting society representing authors, filed the claim in November 2024. The allegation was that ChatGPT uses German song lyrics without a licence — both during model training and in its outputs. GEMA demonstrated that ChatGPT reproduces protected lyrics of well-known songs (“Atemlos” by Kristina Bach, “Männer” by Herbert Grönemeyer, “Über den Wolken” by Reinhard Mey) in response to simple prompts such as “What are the lyrics of the song [title]?”
The court found that both memorisation of lyrics in the model’s parameters and their reproduction in ChatGPT’s outputs infringe copyright. The court rejected OpenAI’s argument that responsibility lies with users rather than the platform.
The court recognised that LLM training may, in principle, fall within the text and data mining (TDM) exception, but not where the model permanently memorises and can reproduce protected works. TDM is intended to extract abstract information (syntactic rules, general terminology, semantic relationships), not to store and reproduce specific works. The shortest identified excerpt consisted of 15 words, which the court deemed sufficient to establish infringement. From a statistical perspective, even such a short fragment could not have been generated “from scratch” — there is a causal link to the use of training data.
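A back-of-envelope calculation illustrates the court’s statistical logic (the numbers are ours, for illustration, not the court’s): even granting a generous 1-in-10 chance of independently guessing each successive word of a fluent sentence, the probability of reproducing a specific 15-word excerpt verbatim is

$$
\left(\frac{1}{10}\right)^{15} = 10^{-15},
$$

about one in a quadrillion. Verbatim overlap of that length is therefore, for all practical purposes, impossible without memorisation of the source text.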
OpenAI has stated its disagreement with the ruling and is considering further steps.
GEMA v. Suno: Music Under Scrutiny
In January 2025, GEMA filed a second claim — this time against the AI music company Suno. GEMA documented that Suno’s system generates content that “largely corresponds to globally well-known works”, including “Forever Young” (Alphaville), “Mambo No. 5” (Lou Bega), and “Daddy Cool” (Frank Farian).
Unlike the OpenAI case (which concerned song lyrics), this dispute raises questions regarding both musical compositions and sound recordings. In parallel, the three major record labels (Universal, Sony and Warner) are suing Suno in the United States. The European case may determine whether AI companies require licences to train on musical content.
A hearing has not yet been scheduled, but GEMA positions itself as a leader in the fight for authors’ rights in the AI era. In September 2024, GEMA became the first collecting society to introduce a specialised licensing model for generative AI, intended to balance innovation with authors’ participation in the economic value created by AI.
France: the First Major Lawsuit
In March 2025, three French associations — Société des Gens de Lettres (SGDL), Syndicat National des Auteurs et des Compositeurs (SNAC), and Syndicat National de l’Édition (SNE) — filed a lawsuit against Meta in the Paris Judicial Court.
The claim alleges that Meta “on a massive scale” uses protected works of their members to train generative AI models without authorisation. SNAC accuses the company of exploiting “cultural heritage”, while SNE asserts that works of its publishers are “extensively present” in Meta’s datasets.
This is the first major AI-related copyright dispute publicly filed in France. The claimants seek the complete removal of unauthorised datasets used to train Meta’s models, as well as fair compensation for creators.
The case is at an early stage but marks the mobilisation of the French creative community.
Kneschke v. LAION: the Exception That Confirms the Rule
Not all European cases end in favour of rights holders. In September 2024, the Hamburg Regional Court sided with the German non-profit organisation LAION in a case brought by photographer Kneschke.
The court held that LAION’s reproduction of protected images when compiling its training dataset fell within the TDM exception, as the use served non-commercial scientific research. LAION produces open datasets and open-source AI models without commercial exploitation.
This decision contrasts with GEMA v. OpenAI, where the TDM exception did not apply to a commercial company whose models permanently memorise and reproduce content. The distinction lies in commercial versus non-commercial use, and in the nature of the use (analysis versus memorisation).
Getty Images v. Stability AI: Mixed Results in the UK
Outside EU jurisdiction but within the same context, it is worth noting the UK case of Getty Images v. Stability AI, which concluded in November 2025 with a ruling in favour of Stability AI.
The court dismissed Getty’s main claims of secondary copyright infringement, recognising only minimal trademark infringements in early versions of Stable Diffusion. Getty argued that the development of Stable Diffusion constituted a “flagrant infringement” of its image library “on a staggering scale”. The court disagreed.
Getty lost on copyright but succeeded on certain trademark-related claims.
Interim Conclusions for AI Developers
Both US and European case law on AI and copyright is beginning to take shape.
Key trends:
- The source of training data is critically important, with a shift from accepting scraped data as lawful to requiring the use of lawfully acquired content.
- Training versus reproduction: courts increasingly distinguish between using data for training and reproducing it in outputs.
- Memorisation is a key concept in both regions. If a model can reproduce protected content on which it was trained, this falls outside applicable exceptions.
- The status of AI as an information intermediary and the applicability of the safe harbour model remain ambiguous.
Platforms often attempt to shift liability for copyright infringement onto users; however, courts in both the EU and the US reject this argument and hold platforms liable. At the same time, practice is emerging in China recognising that platforms should not be liable for copyright infringements committed by users.
The Arbitration & IT Disputes team at REVERA provides comprehensive legal support to developers and users of AI technologies: dataset audits, risk assessments, licensing strategy development, preparation of defences against potential claims, negotiations with rights holders and collective management organisations.
In an environment of rapidly evolving case law and tightening regulation, a preventive legal strategy is becoming a key element of sustainable AI business development.
We would be pleased to discuss how to adapt your AI model or project to the new legal realities.
Authors: Kamal Terekhov and Alexey Molchanov.