
Navigating the EU AI Act

The world's first comprehensive legal framework for AI. This page interprets the regulation through the lens of Practical Open Weights (POW).

The Risk Pyramid

Classification by Severity

đŸšĢ Prohibited: Unacceptable Risk

âš ī¸ Regulated: High Risk

â„šī¸ Transparency: Limited Risk

✅ Permitted: Minimal Risk
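
In code terms, the pyramid is an ordered classification. A minimal Python sketch, where the example systems and their tier assignments are illustrative assumptions, not legal determinations:

```python
from enum import IntEnum

class RiskTier(IntEnum):
    """The AI Act's four-tier risk pyramid, ordered by severity."""
    MINIMAL = 0       # Permitted: e.g., spam filters, game AI
    LIMITED = 1       # Transparency duties: e.g., chatbots, generated media
    HIGH = 2          # Regulated: e.g., hiring, credit scoring
    UNACCEPTABLE = 3  # Prohibited: e.g., social scoring

# Hypothetical inventory entries; real classification needs legal review.
systems = {
    "customer-support-chatbot": RiskTier.LIMITED,
    "cv-screening-model": RiskTier.HIGH,
    "warehouse-demand-forecast": RiskTier.MINIMAL,
}

for name, tier in sorted(systems.items(), key=lambda kv: -kv[1]):
    print(f"{tier.name:>12}  {name}")
```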

Who Has Obligations

Separating provider, deployer, and GPAI responsibilities

One of the most practical compliance steps is to identify which role you occupy in each use case; a simple role-to-focus mapping is sketched at the end of this section.

Providers

Providers are the entities that develop or place an AI system or GPAI model on the market. They usually carry the heaviest burden for documentation, conformity, and risk management.

Deployers

Deployers are organizations that use the system in their own operational setting. For them, correct use, human oversight, disclosures, and operational safeguards matter most.

Importers & Distributors

Importers and distributors are responsible for not circulating non-compliant systems and for checking that essential documentation and marking obligations have been met.

GPAI Model Providers

Providers of general-purpose AI models face dedicated duties around documentation, copyright policy, training-data summaries, and, in some cases, additional systemic-risk controls.
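
As a starting point for that per-use-case review, the four roles can be mapped to the obligation areas described above. A rough sketch; the groupings paraphrase this section, not the Act's full text, and one organization can hold several roles at once:

```python
# Hypothetical role-to-focus mapping paraphrasing the section above;
# a provider that also deploys its own system holds both roles.
ROLE_FOCUS = {
    "provider": ["technical documentation", "conformity", "risk management"],
    "deployer": ["correct use", "human oversight", "disclosures"],
    "importer_distributor": ["compliance checks", "documentation",
                             "marking obligations"],
    "gpai_provider": ["model documentation", "copyright policy",
                      "training-data summary", "systemic-risk controls"],
}

def obligations_for(roles: list[str]) -> set[str]:
    """Union of focus areas across every role held in a given use case."""
    return {item for role in roles for item in ROLE_FOCUS[role]}

print(obligations_for(["provider", "deployer"]))
```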

GPAI Obligations

What changes for general-purpose AI models

The GPAI layer matters most for model providers, open-weight ecosystems, and teams building on foundational models.

Tier 01

Base Compliance

Standard GPAI Models

Standard multi-purpose models (e.g., Llama 3 8B, Mistral Small).

Technical documentation for the AI Office and national authorities.

Instructions for use for downstream providers.

A policy to comply with Union copyright law.

A sufficiently detailed summary of the training content.

Open Source Note

Exemptions apply to models released under free and open-source licenses, provided they do not pose systemic risks (Recital 102).
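
One way to operationalize the four base duties is as a release-gating checklist. A minimal sketch; the field names are assumptions, not a format prescribed by the Act:

```python
from dataclasses import dataclass

@dataclass
class GPAIReleaseChecklist:
    """Hypothetical release gate mirroring the four base obligations."""
    technical_documentation: str | None   # docs for the AI Office / authorities
    downstream_instructions: str | None   # instructions for use
    copyright_policy: str | None          # Union copyright compliance policy
    training_data_summary: str | None     # sufficiently detailed summary

    def missing(self) -> list[str]:
        return [name for name, value in self.__dict__.items() if not value]

gate = GPAIReleaseChecklist(
    technical_documentation="docs/technical.pdf",
    downstream_instructions="docs/usage.md",
    copyright_policy=None,          # not yet drafted -> release blocked
    training_data_summary=None,
)
if gate.missing():
    print("Release blocked; missing:", gate.missing())
```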

Tier 02

High Rigor

GPAI with Systemic Risk

Models whose cumulative training compute exceeds 10^25 FLOPs, or that the AI Office designates as systemic (e.g., Llama 3 400B+, GPT-4). A back-of-the-envelope threshold check is sketched at the end of this tier.

Rigorous adversarial testing (Red-Teaming).

Assessment and mitigation of systemic risks (e.g., cybersecurity, bias).

Reporting of serious incidents to the AI Office.

Adequate cybersecurity protection for model weights and infrastructure.

Regulatory Note

Open-source release does NOT exempt these models from systemic risk obligations.
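
To estimate whether a model falls under this tier, a common rule of thumb for dense transformers is training compute ≈ 6 × parameters × training tokens. A back-of-the-envelope sketch; the parameter and token counts are illustrative assumptions, and the Act's definition of cumulative compute may differ from this heuristic:

```python
# Rule-of-thumb estimate: training FLOPs ≈ 6 * N parameters * D tokens
# (a standard heuristic for dense transformers; the Act's own measure of
# cumulative compute may differ).
SYSTEMIC_RISK_THRESHOLD = 1e25  # Article 51 presumption: 10^25 FLOPs

def estimated_training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

# Illustrative parameter/token counts, not official figures.
models = {
    "8B model, 15T tokens": (8e9, 15e12),
    "400B model, 15T tokens": (4e11, 15e12),
}

for name, (n, d) in models.items():
    flops = estimated_training_flops(n, d)
    verdict = ("systemic-risk tier" if flops > SYSTEMIC_RISK_THRESHOLD
               else "below threshold")
    print(f"{name}: ~{flops:.1e} FLOPs -> {verdict}")
```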

Transparency Obligations

What the user must be told

Many of the most practical AI Act obligations show up directly in product design, UX, and content labeling.

AI interaction disclosure

When users interact with an AI system, they must be clearly informed that they are not dealing with a human, unless that is already obvious from the context. A minimal disclosure sketch appears at the end of this section.

Synthetic and deepfake content

AI-generated or manipulated content should be labeled appropriately, especially where authenticity could otherwise be misunderstood.

Emotion recognition and biometric context

Certain use cases involving emotion recognition or biometric categorisation require extra caution, legal review, and clearer user-facing communication.
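
For the interaction disclosure, a minimal pattern is to attach both a machine-readable flag and a user-visible label to every assistant response. A sketch; the field names are illustrative, not mandated:

```python
# Minimal sketch: attach a machine-readable and user-visible AI disclosure
# to every assistant response. Field names are illustrative, not mandated.
def wrap_response(text: str) -> dict:
    return {
        "role": "assistant",
        "content": text,
        "disclosure": {
            "is_ai_generated": True,
            "label": "You are chatting with an AI assistant, not a human.",
        },
    }

msg = wrap_response("Here is a summary of your contract...")
print(msg["disclosure"]["label"])
```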

Technical Compliance Checklist

Practical steps for turning the AI Act from a policy topic into an operating model.

Governance

Appoint an AI Compliance Officer

Establish clear internal accountability for AI systems.

Inventory AI Systems

Identify all AI systems currently in use and classify their risk tier.

Human Oversight Framework

Design HITL (human-in-the-loop) review for high-risk categories; a minimal routing sketch follows this group.
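
A minimal human-in-the-loop routing sketch: outputs from systems classified as high-risk are queued for human approval instead of being applied automatically. All names are hypothetical:

```python
# Minimal HITL sketch: high-risk outputs are queued for human review
# instead of being applied automatically. All names are illustrative.
from queue import Queue

review_queue: Queue[dict] = Queue()

def dispatch(decision: dict, risk_tier: str) -> str:
    if risk_tier == "high":
        review_queue.put(decision)  # a human must approve or override
        return "pending_human_review"
    return "auto_applied"

status = dispatch({"applicant": "A-1042", "score": 0.31}, risk_tier="high")
print(status, "| queued:", review_queue.qsize())
```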

Technical Documentation

Record Lifecycle Logs

Implement automated, structured logging of system performance and individual decisions; a logging sketch follows this group.

Dataset Privacy Audit

Verify that training and fine-tuning data complies with the GDPR and copyright law.

Conformity Assessment

Conduct internal or third-party audits for High-Risk applications.
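
A sketch of structured decision logging using only the standard library; the schema fields are assumptions, since the Act requires logging for high-risk systems but does not prescribe a format:

```python
import json
import logging
import time

logger = logging.getLogger("ai_lifecycle")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_decision(system_id: str, inputs_hash: str,
                 output: str, model_version: str) -> None:
    """Append-only, structured record of each automated decision."""
    logger.info(json.dumps({
        "ts": time.time(),
        "system": system_id,
        "model_version": model_version,
        "inputs_sha256": inputs_hash,  # hash, not raw data, for GDPR hygiene
        "output": output,
    }))

log_decision("cv-screening", "ab12...", "shortlisted", "v2.3.1")
```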

Transparency

AI Disclosure Labels

Implement UI indicators informing users they are interacting with AI.

Deepfake Labeling

Ensure AI-generated audio, video, and images are digitally watermarked or labeled; a metadata sketch follows this group.

Downstream Manuals

Provide clear documentation for users implementing your model via API or local weights.
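
For generated images, labeling can start with embedded metadata. A sketch using Pillow's PNG metadata support; the key names are assumptions, and production labeling would typically layer a standard such as C2PA content credentials on top of visible labels:

```python
# Sketch: embed an "AI-generated" marker in PNG metadata with Pillow.
# The metadata keys are assumptions; production systems would also use
# visible labels and a standard such as C2PA content credentials.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_labeled(img: Image.Image, path: str) -> None:
    info = PngInfo()
    info.add_text("ai_generated", "true")
    info.add_text("generator", "example-model-v1")  # illustrative value
    img.save(path, pnginfo=info)

save_labeled(Image.new("RGB", (64, 64)), "synthetic.png")
```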

Strategic View

The Open Source Advantage

The EU AI Act provides significant exemptions for models released under free and open-source licenses, provided they do not pose systemic risks. This encourages the development of transparent, adaptable, and sovereign AI systems that help organizations avoid vendor lock-in while meeting baseline compliance.

Implementation Timeline

August 1, 2024

The AI Act enters into force.

February 2, 2025

Prohibited practices and AI literacy obligations start applying.

August 2, 2025

GPAI obligations and governance provisions start applying.

August 2, 2026

Most of the rules apply, including transparency duties and the main compliance framework.

August 2, 2027

Certain high-risk obligations for regulated product categories apply in full.

Useful terms for product and compliance teams

High-risk AI system

An AI system subject to stricter governance, documentation, oversight, and risk-control obligations.

Prohibited practice

An AI practice that may not be placed on the market, put into service, or used under the Act.

General-purpose AI model

An AI model trained on broad data that displays significant generality and can be adapted or deployed across many downstream use cases.

Provider

The entity that develops or places an AI system or model on the market.

Deployer

The organization that uses the system in a real operational or product setting.

Transparency duty

An obligation to inform users when they are interacting with AI or consuming AI-generated content.

The "Brussels Effect"

The EU AI Act is expected to become a global blueprint for AI regulation, much like GDPR redefined data privacy. Organizations that align early may gain stability, trust, and a cleaner multi-model operating posture.

Official Sources and Guidance
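
Regulation (EU) 2024/1689 (the EU AI Act), full text on EUR-Lex: https://eur-lex.europa.eu/eli/reg/2024/1689/oj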
