With the entry into force of the European Union’s transparency rules for general-purpose artificial intelligence (AI) on August 2, 2025, the global AI landscape is undergoing a significant transformation. These new requirements, part of the pioneering EU AI Act, place substantial demands on providers developing and distributing advanced AI models for use throughout the European market. The initiative aims to ensure that technological innovation proceeds within clear boundaries of safety, accountability, and respect for fundamental rights.
A New Era in AI Oversight
The rules, introduced under the AI Act and detailed in the recently finalized General-Purpose AI Code of Practice, reflect a growing consensus that the most powerful AI models should not be left unchecked. As these models increasingly underpin critical systems and influence millions of lives, the EU’s approach positions transparency, safety, and copyright compliance at the core of responsible AI development and deployment.
According to documentation from the European Commission, “The Code is designed to help industry comply with the AI Act’s rules on general-purpose AI, which will enter into application on 2 August 2025.” These rules apply to all general-purpose AI models offered in the European single market and aim to foster responsible innovation, with special attention to the most advanced, high-impact systems.
Scope and Substance: What the New Rules Require
The regulation specifically applies to general-purpose AI models—systems capable of powering a diverse range of applications, from language generation to data analysis and beyond. Providers of these models must now adhere to strict guidelines in three key areas: transparency, copyright compliance, and safety.
Transparency is the most immediate requirement. The rules demand comprehensive technical documentation for each AI model. This documentation must detail how the model was trained, the datasets involved (including their categories and provenance), its intended use cases and technical properties, computational requirements, and the model’s energy consumption footprint. Clear, accessible model documentation forms must be submitted, allowing regulators and downstream providers to understand both risks and capabilities. Providers are also required to publish summary statements regarding the training data used—an unprecedented move in AI regulation.
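To illustrate the breadth of these documentation duties, the sketch below shows how a provider’s internal tooling might represent such a record. The field names are illustrative assumptions of this example; the Commission’s official Model Documentation Form, not this sketch, defines the authoritative structure.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelDocumentation:
    """Illustrative documentation record for a general-purpose AI model.

    Field names are hypothetical; the Commission's official Model
    Documentation Form defines the authoritative structure.
    """
    model_name: str
    provider: str
    intended_uses: list[str]
    technical_properties: dict[str, str]
    training_data_categories: list[str]   # e.g. web text, licensed corpora
    training_data_summary: str            # basis for the public summary
    training_compute_flop: float          # total training compute, in FLOP
    energy_consumption_kwh: float         # energy footprint of training
    known_limitations: list[str] = field(default_factory=list)

record = ModelDocumentation(
    model_name="example-model-7b",
    provider="Example AI GmbH",
    intended_uses=["text generation", "summarisation"],
    technical_properties={"architecture": "dense transformer",
                          "parameters": "7e9"},
    training_data_categories=["publicly available web text",
                              "licensed datasets"],
    training_data_summary="Web crawl 2023-2024 plus licensed corpora; "
                          "see the published training-data summary.",
    training_compute_flop=8.4e22,
    energy_consumption_kwh=1.2e6,
    known_limitations=["may generate inaccurate output"],
)

# Serialise the record for regulators or downstream providers.
print(json.dumps(asdict(record), indent=2))
```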
Additionally, there are strong copyright stipulations. The EU now obliges AI providers to ensure their models and the data used to train them comply with European copyright law. This encompasses both correct rights reservation and clearer protocols for managing content that may be protected intellectual property.
Safety and systemic risk management obligations fall most heavily on the largest and most powerful models. Providers of these high-impact systems, which can, for instance, generate code, produce realistic human voices, or process sensitive health data, must evaluate and document potential risks. This includes identifying the possibility that a model could facilitate dangerous activities or escape effective human control, and outlining mitigation steps such as regular risk assessments and robust cybersecurity protections. The safety and security chapter of the rules is aimed specifically at models with systemic risk, a category defined by thresholds of model scale and training compute.
The Code of Practice: Voluntary but Influential
A central feature of the EU’s approach is the General-Purpose AI Code of Practice. Published after collaboration among nearly 1,000 stakeholders, including model developers, AI safety experts, academics, and civil society organizations, the Code provides a standardized way for AI providers to demonstrate regulatory compliance. “The publication of the final version of the Code of Practice for general-purpose AI marks an important step in making the most advanced AI models available in Europe not only innovative but also safe and transparent,” said Henna Virkkunen, the European Commission’s Executive Vice-President for Tech Sovereignty, Security and Democracy.
While adoption of the Code is voluntary, it carries substantial incentives. Providers adhering to the Code can expect reduced administrative burdens and clearer regulatory guidance, thereby gaining more legal certainty as they navigate the European market. Signing onto the Code does not offer “presumption of conformity” in legal disputes, but it does mark a good-faith effort at compliance—a position viewed favorably by EU regulators.
Who and What Is Covered
The new obligations apply to all providers placing general-purpose AI models on the EU market, regardless of where these companies are headquartered. These models are broadly defined as systems trained with more than 10²³ floating point operations (FLOP) and capable of generating general-purpose content such as text, code, images, or audio. For exceptionally powerful models exceeding 10²⁵ FLOP, additional layers of scrutiny, including notification to the European Commission and more rigorous safety assurances, are now in effect.
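Training compute can be estimated long before a model ships. As a rough illustration, the sketch below applies the common 6 × parameters × training-tokens heuristic for dense transformer models, which is an assumption of this example rather than a method prescribed by the Act, and checks the result against the two thresholds.

```python
# Back-of-the-envelope check against the AI Act's compute thresholds.
# The 6 * N * D estimate of dense transformer training compute
# (N = parameters, D = training tokens) is a common heuristic and an
# assumption here, not a method prescribed by the Act itself.

GPAI_THRESHOLD_FLOP = 1e23    # indicative general-purpose AI threshold
SYSTEMIC_RISK_FLOP = 1e25     # presumption of systemic risk

def estimate_training_flop(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6.0 * n_params * n_tokens

def classify(flop: float) -> str:
    if flop > SYSTEMIC_RISK_FLOP:
        return "general-purpose AI model with systemic risk"
    if flop > GPAI_THRESHOLD_FLOP:
        return "general-purpose AI model"
    return "below the indicative GPAI threshold"

# A 7B-parameter model on 2T tokens, and a 70B model on 15T tokens.
for n_params, n_tokens in [(7e9, 2e12), (7e10, 1.5e13)]:
    flop = estimate_training_flop(n_params, n_tokens)
    print(f"{n_params:.0e} params, {n_tokens:.0e} tokens -> "
          f"{flop:.1e} FLOP: {classify(flop)}")
```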
For models already on the market before August 2, 2025, a transition period grants until August 2, 2027 for full compliance. New entrants, by contrast, must comply immediately.
Practical Implications for AI Providers
The practical impact of the new regime is already evident. AI providers face the task of revising model development pipelines, retraining compliance staff, and updating risk management processes. Technical documentation, once often overlooked, must be complete and up-to-date, supported by a template issued by the Commission that clearly enumerates every relevant property, training process, data source, and intended use.
Downstream users and businesses that rely on general-purpose AI models in their services or products will benefit from new transparency and safety credentials.
“Providers must now supply model documentation that is clear, accessible, and allows stakeholders to assess both the usefulness and the potential risks attached to their products,”
commented a spokesperson for the European Commission. This is expected to foster greater trust among consumers and heighten scrutiny on business practices within Europe’s flourishing AI ecosystem.
Stricter Oversight for High-Risk Models
For those AI models determined to pose systemic risks, a higher bar is set. Providers must perform regular model evaluations and continuous risk assessments, report serious incidents, and adopt advanced cybersecurity protocols. These requirements align with the AI Act’s broader vision of ensuring that AI serves society in ways that uphold human dignity and fundamental rights, without enabling uncontrolled, potentially harmful behaviors.
Copyright and Training Data: A New Disclosure Standard
Copyright has been a contentious issue in the development of large AI models, especially where web scraping and mass data harvesting methods are employed. The new EU rules demand that AI providers are clear about which data sources are used and what rights reservations are applied. This applies not only to public content but also to material that may be protected by copyright or other intellectual property rights. The copyright chapter of the Code of Practice guides model providers through these complex legal waters, giving concrete advice on rights management and protocols for downstream content usage.
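As a concrete illustration of one such protocol, the sketch below checks a site’s robots.txt, one widely used machine-readable signal of rights reservation, before collecting a page as training data. The crawler name is hypothetical, and whether robots.txt alone satisfies a given rights reservation is a legal question the Code treats in more depth than this example.

```python
# Minimal sketch of honouring a machine-readable rights reservation before
# collecting a page as training data. robots.txt is one widely used opt-out
# signal; "ExampleTrainingBot" is a hypothetical crawler user-agent.
from urllib.parse import urlsplit
from urllib.robotparser import RobotFileParser

def may_collect(url: str, user_agent: str = "ExampleTrainingBot") -> bool:
    """Return True only if the site's robots.txt permits this agent."""
    parts = urlsplit(url)
    parser = RobotFileParser()
    parser.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    parser.read()  # fetch and parse the site's robots.txt
    return parser.can_fetch(user_agent, url)

url = "https://example.com/articles/sample-page"
if may_collect(url):
    print("Allowed: fetch the page and record its provenance.")
else:
    print("Rights reservation detected: skip this source.")
```

In practice a provider would pair such a check with provenance logging, so that the documentation and training-data summaries described above can trace each source back to the rights status it had when collected.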
Stakeholder Responses
Industry reaction has generally been constructive, with many major players expressing support for clear, harmonized rules in the AI sector. Civil society organizations, however, stress the importance of effective enforcement, warning that voluntary measures must translate into meaningful action for end-users across Europe. The EU’s newly established AI Office will monitor compliance and can enforce the requirements for new models from August 2026 and, for existing models already on the market before the rules took effect, from August 2027.
“This framework puts Europe at the forefront of safe and trustworthy AI regulation, providing much-needed legal clarity,”
stated a representative from the European AI Office at the publication of the new regulations.
Looking Ahead: Blueprint for Global AI Governance?
As these transparency obligations take hold, the international community is watching closely. The EU’s proactive stance on AI safety, transparency, and ethical practices may inspire similar measures in other jurisdictions. Policymakers highlight these moves as a model for effective public oversight of rapidly advancing technology sectors.
The European Commission will continue reviewing and updating the rules to keep pace with technological innovation and emerging risks. Providers and developers are encouraged to remain engaged in ongoing consultations to ensure that compliance not only meets the letter of the law but also the evolving social and ethical expectations surrounding AI.