Meta AI Training Dispute Heads to EU Supreme Court
Legal Battle Over “Llama” Model Training Intensifies GDPR Privacy Concerns
The European Union is embroiled in a high-stakes legal battle that could reshape the future of artificial intelligence development, as Meta Platforms faces mounting opposition over its use of European user data to train its “Llama” AI models.
What began as an emergency injunction request has evolved into a complex dispute that data protection officials say will ultimately reach the European Court of Justice (ECJ).

The Initial Legal Showdown
Meta’s AI training initiative commenced on May 27, 2025, following a pivotal court decision that allowed the tech giant to proceed despite fierce opposition from privacy advocates. The company had initially announced this start date in April 2025, prompting immediate legal challenges from German consumer protection groups who sought an emergency injunction to halt the data processing.
A German court dismissed the injunction request brought by consumer protection groups, clearing the path for Meta to begin training its open-source large language model using public posts from Facebook and Instagram across the EU. The Cologne Higher Regional Court’s rejection of the emergency measures came just four days before Meta’s planned launch date, allowing the company to proceed as scheduled.
The training is crucial for Meta’s AI-powered products, including the “Ray-Ban Meta” smart glasses, which require cultural and linguistic knowledge derived from European data to function effectively in local markets. Some EU AI vendors welcomed the court’s decision, viewing it as relief from what they consider overly restrictive regulations that could hamper industry development and potentially stifle business under the EU’s emerging AI Act.
Privacy Advocates Push Back
Despite the court ruling, opposition remains strong. Thomas Fuchs, Hamburg’s data protection commissioner, maintains that Meta’s AI training should be stopped. Fuchs had initially considered exercising emergency powers under the General Data Protection Regulation (GDPR), but he withdrew his immediate intervention after Ireland’s Data Protection Commission, Meta’s lead EU regulator, ruled that the company had demonstrated a “legitimate interest” for AI model training, contingent on implementing strict privacy safeguards.
However, Fuchs emphasizes that the recent ruling only addressed emergency measures, not the substantive legal questions at the heart of the dispute. He has indicated his intention to continue the legal fight, suggesting the case will eventually reach the European Court of Justice for final resolution.
Meta’s AI program still faces GDPR compliance challenges. Privacy advocates argue that any processing of personal data must rest on one of the six legal bases listed in Article 6(1) GDPR, such as opt-in consent, which must be “freely given, specific, informed and unambiguous.” The privacy advocacy group noyb has been particularly active: in June 2024 it secured Meta’s agreement to pause EU/EEA AI training after filing 11 complaints with the Irish Data Protection Commission.
Broader Implications for EU AI Development
The dispute highlights the tension between innovation and privacy protection in the EU’s digital economy. Fuchs warns that allowing Meta to proceed could set a concerning precedent, effectively giving other companies a “green light” to use public posts for AI training. He argues that if even a company as heavily scrutinized as Meta can invoke legitimate interest to justify processing personal data for AI training, other companies will likely be permitted to follow suit.
Meta’s public policy director for German-speaking regions, Semyon Lenz, counters that blocking the company’s AI training would weaken Germany’s AI industry. German companies would be unable to access Llama models trained on EU data that incorporate German cultural, historical, and linguistic nuances, potentially hampering their ability to develop competitive AI applications. Lenz also warns that divergent national AI regulations could undermine the EU’s single market principles.
Recent developments have added complexity to the case, with the Schleswig-Holstein Higher Regional Court confirming on August 12, 2025, that Meta’s AI training program processes personal data of children and adolescents, despite company claims of implementing protective measures.
Looking Ahead
While Meta is temporarily permitted to use the personal data of non-objecting users for AI training, a main proceeding could reach a different legal conclusion: the current permissions rest on summary assessments rather than a comprehensive legal review.
The case represents a critical test of how EU privacy law will adapt to the AI era. With the EU’s AI Act taking effect and GDPR enforcement evolving to address new technological challenges, the eventual ECJ ruling could establish precedents affecting not only Meta but the entire European AI ecosystem.
As the legal battle continues, stakeholders across the technology industry, regulatory bodies, and privacy advocacy groups are closely watching what could become a defining moment for AI development in Europe. The ultimate resolution will likely influence how personal data can be used for AI training across the continent, potentially affecting everything from product development timelines to competitive positioning in the global AI race.
The dispute underscores the ongoing challenge of balancing technological innovation with fundamental privacy rights, a tension that will likely define much of the regulatory landscape as AI technology continues to evolve.