Regulatory HUB
The world's first comprehensive legal framework for AI. This page interprets the regulation through the lens of Practical Open Weights (POW).
The Risk Pyramid
Prohibited (unacceptable risk)
High-risk (regulated)
Limited risk (transparency duties)
Minimal risk (permitted)
Who Has Obligations
One of the most practical compliance steps is to identify which role you occupy in each use case.
Providers are the entities that develop or place an AI system or GPAI model on the market. They usually carry the heaviest burden for documentation, conformity, and risk management.
Deployers are organizations that use the system in their own operational setting. For them, correct use, human oversight, disclosures, and operational safeguards matter most.
Importers and distributors are responsible for not circulating non-compliant systems and for checking that essential documentation and marking obligations have been met.
Providers of general-purpose AI models face dedicated duties around documentation, copyright policy, training-data summaries, and, in some cases, additional systemic-risk controls.
GPAI Obligations
The GPAI layer matters most for model providers, open-weight ecosystems, and teams building on foundational models.
Tier 01
Base Compliance: Standard multi-purpose models (e.g., Llama 3 8B, Mistral Small).
Technical documentation for the AI Office and national authorities.
Instructions for use for downstream providers.
A policy to comply with Union copyright law.
A sufficiently detailed summary of the training content.
Open Source Note
Models released under free and open-source licenses, with their parameters made publicly available, are exempt from some of these duties, provided they do not pose systemic risk (Recital 102).
Tier 02
High Rigor: Models whose cumulative training compute exceeds 10^25 FLOPs, or that the AI Office designates as posing systemic risk (e.g., Llama 3 400B+, GPT-4).
Rigorous adversarial testing (Red-Teaming).
Assessment and mitigation of systemic risks (e.g., cybersecurity, bias).
Reporting of serious incidents to the AI Office.
Adequate cybersecurity protection for model weights and infrastructure.
Regulatory Note
Open-source release does NOT exempt these models from systemic risk obligations.
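The tier boundary above is a compute threshold, so it can at least be sanity-checked numerically. The sketch below uses the common 6 x parameters x training-tokens approximation for dense transformer training compute; the model sizes and token counts are illustrative assumptions, not figures from the Act or from any vendor.

```python
# Rough check against the AI Act's systemic-risk compute threshold (10^25 FLOPs).
# Uses the common 6 * N * D heuristic for dense transformer training compute;
# parameter and token counts below are illustrative assumptions.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate cumulative training compute as 6 * params * tokens."""
    return 6 * n_params * n_tokens

def is_tier_2(n_params: float, n_tokens: float) -> bool:
    """True if estimated compute exceeds the Tier 02 threshold."""
    return estimated_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# An 8B-parameter model trained on 15T tokens stays under the threshold:
print(is_tier_2(8e9, 15e12))    # False (~7.2e23 FLOPs)
# A 400B-parameter model trained on 15T tokens crosses it:
print(is_tier_2(400e9, 15e12))  # True (~3.6e25 FLOPs)
```

Note that the Act counts *cumulative* compute, and the AI Office can designate a model as systemic-risk regardless of this estimate, so a heuristic like this is a screening tool, not a compliance determination.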
Transparency Duties
Many of the most practical AI Act obligations show up directly in product design, UX, and content labeling.
When users interact with an AI system, they must be clearly informed that they are not dealing with a human, unless that is already obvious from the context.
AI-generated or manipulated content should be labeled appropriately, especially where authenticity could otherwise be misunderstood.
Certain use cases involving emotion recognition or biometric categorization require extra caution, legal review, and clearer user-facing communication.
Operations
Practical steps for turning the AI Act from a policy topic into an operating model.
Appoint an AI Compliance Officer
Establish clear internal accountability for AI systems.
Inventory AI Systems
Identify all AI systems currently in use and classify their risk tier.
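An inventory step like this can be backed by a simple record type. The sketch below is a minimal, hypothetical schema: the field names and example systems are assumptions, the four tiers mirror the risk pyramid above, and the tier assigned to each system is a legal judgment call, not something an algorithm decides.

```python
from dataclasses import dataclass

# Minimal sketch of an AI-system inventory entry. Fields and example
# systems are illustrative assumptions; assigning a risk tier requires
# legal review, not just code.

RISK_TIERS = ("prohibited", "high-risk", "transparency", "minimal")

@dataclass
class AISystemRecord:
    name: str          # internal system name
    role: str          # "provider", "deployer", "importer", or "distributor"
    use_case: str      # short description of the operational setting
    risk_tier: str     # one of RISK_TIERS, assigned after review

    def __post_init__(self) -> None:
        if self.risk_tier not in RISK_TIERS:
            raise ValueError(f"unknown risk tier: {self.risk_tier!r}")

inventory = [
    AISystemRecord("support-chatbot", "deployer", "customer Q&A", "transparency"),
    AISystemRecord("cv-screening", "provider", "hiring pipeline", "high-risk"),
]

# Systems that need conformity assessment and human oversight:
high_risk = [r.name for r in inventory if r.risk_tier == "high-risk"]
print(high_risk)  # ['cv-screening']
```

Keeping the role field per record matters because, as noted above, the same organization can be a provider in one use case and a deployer in another.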
Human Oversight Framework
Design HITL (Human-in-the-loop) systems for High-Risk categories.
Record Lifecycle Logs
Implement automated logging for system performance and decision-making.
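A lifecycle log can be as simple as an append-only JSON-lines file. The record schema below (system id, model version, input digest, decision) is an illustrative assumption; the Act requires logging appropriate to the system's risk tier, not this exact shape.

```python
import json
import time
from typing import Any

# Hedged sketch of an append-only decision log. The field names are
# illustrative assumptions, not a schema mandated by the AI Act.

def log_decision(path: str, system_id: str, model_version: str,
                 input_digest: str, decision: Any) -> dict:
    """Append one JSON-lines record capturing a single automated decision."""
    record = {
        "ts": time.time(),            # epoch timestamp of the decision
        "system_id": system_id,       # which inventoried system acted
        "model_version": model_version,
        "input_digest": input_digest, # hash of the input, not the raw data (GDPR)
        "decision": decision,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

rec = log_decision("decisions.jsonl", "cv-screening", "v1.2",
                   "sha256:ab12...", {"outcome": "shortlist", "score": 0.87})
print(rec["decision"]["outcome"])  # shortlist
```

Storing a digest instead of the raw input keeps the log auditable without turning it into a second copy of personal data, which ties this step to the dataset privacy audit below.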
Dataset Privacy Audit
Verify that training/fine-tuning data complies with the GDPR and copyright law.
Conformity Assessment
Conduct internal or third-party audits for High-Risk applications.
AI Disclosure Labels
Implement UI indicators informing users they are interacting with AI.
Deepfake Labeling
Ensure AI-generated audio/video is digitally watermarked or labeled.
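One lightweight way to attach a machine-readable label is a sidecar file next to the generated media. Production systems would use a standard such as C2PA Content Credentials or an invisible watermark; the JSON sidecar below is only a sketch of the information a label should carry, and the file name and generator string are illustrative assumptions.

```python
import json

# Minimal sketch of a machine-readable provenance label stored alongside
# a generated media file. Real deployments should prefer a standard such
# as C2PA Content Credentials or robust watermarking; this sidecar JSON
# only illustrates what a label needs to say.

def make_ai_content_label(media_file: str, generator: str) -> str:
    label = {
        "media_file": media_file,
        "ai_generated": True,          # the core AI Act disclosure
        "generator": generator,        # model or tool that produced the content
        "disclosure": "This content was generated or manipulated by AI.",
    }
    sidecar = media_file + ".ai-label.json"
    with open(sidecar, "w", encoding="utf-8") as f:
        json.dump(label, f, indent=2)
    return sidecar

path = make_ai_content_label("clip.mp4", "example-video-model")
print(path)  # clip.mp4.ai-label.json
```

A sidecar is easy to strip, which is exactly why embedded provenance metadata or watermarking is the more robust choice for deepfake-style content.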
Downstream Manuals
Provide clear documentation for users implementing your model via API or local weights.
Strategic View
The EU AI Act provides significant exemptions for models released under free and open-source licenses, provided they do not pose systemic risks. This encourages the development of transparent, adaptable, and sovereign AI systems that help organizations avoid vendor lock-in while meeting baseline compliance.
August 1, 2024
The AI Act enters into force.
February 2, 2025
Prohibited practices and AI literacy obligations start applying.
August 2, 2025
GPAI obligations and governance provisions start applying.
August 2, 2026
Most of the rules apply, including transparency duties and the main compliance framework.
August 2, 2027
Certain high-risk obligations for regulated product categories apply in full.
Glossary
High-risk AI system
An AI system subject to stricter governance, documentation, oversight, and risk-control obligations.
Prohibited practice
An AI practice that may not be placed on the market, put into service, or used under the Act.
General-purpose AI model
An AI model trained on broad data that displays significant generality and can be adapted or deployed across many downstream use cases.
Provider
The entity that develops or places an AI system or model on the market.
Deployer
The organization that uses the system in a real operational or product setting.
Transparency duty
An obligation to inform users when they are interacting with AI or consuming AI-generated content.
The EU AI Act is expected to become a global blueprint for AI regulation, much like GDPR redefined data privacy. Organizations that align early may gain stability, trust, and a cleaner multi-model operating posture.
Sources
Official implementation timeline for the main milestones under the AI Act.
Official summary of the obligations for general-purpose AI models.
Guidance on prohibited practices and how they are interpreted in practice.
Supporting material around the code of practice for GPAI compliance.