EU enforces AI Act for general-purpose models

The AI Act's obligations for providers of general-purpose AI (GPAI) models entered into force across the EU on 2 August 2025.

The Commission’s GPAI guidelines and the endorsed Code of Practice offer a compliance path, but scrutiny will be high for non-signatories. The much-debated FLOP threshold (the amount of computational power used to train a model, measured in floating-point operations) still needs data-driven calibration to keep systemic-risk models fully in scope (a rough illustration of the compute scale involved follows the quoted post below). Meanwhile, effective enforcement depends on whether the AI Office can build sufficient capacity and resources, as Kai Zenner of the European Parliament pointed out on his LinkedIn profile:


1️⃣ Scope of rules: With their #GPAI #guidelines, the European Commission has brought some much-needed clarity, choosing thereby a rather targeted approach. This is good! However, a key issue remains: the #FLOP #threshold in the AI Act. If there’s really an intention to raise it (“currently under review”), we expect this to be based on solid evidence (i.e. expert consultation and real-world data). Any recalibration must ensure that the largest and most capable GPAI providers remain fully in scope. Adjustments should benefit only those innovators whose GPAI models clearly pose no systemic risk.

2️⃣ Endorsed Code: The GPAI Code of Practice now serves as the Commission’s and Member States’ #endorsed (today!) blueprint for demonstrating compliance; just in time for the rules to kick in. This gives AI firms a reliable foundation to work with. Those who have already #signed the Code sent a strong signal: they want to play by the rules. The expectations are high: having endorsed the full text means there is no room for selective compliance or backroom renegotiation. Meanwhile, #nonsignatories should expect extra scrutiny. They have chosen not to follow a transparency and safety framework that was developed through months of expert input. That choice speaks volumes and TBH, I am unsure how those firms want to prove their compliance.

3️⃣ Template: For those who missed it, the #AIOffice has recently also published the #template for the Public Summary of Training Content for GPAI models. My first reaction? Improvable … (https://digital-strategy.ec.europa.eu/en/library/explanatory-notice-and-template-public-summary-training-content-general-purpose-ai-models)

4️⃣ Scientific Panel: Here, only one thing matters! We need to get the best minds on GPAI and at least some of them with – additionally – a sound understanding of institutional procedures in Brussels. In my opinion, this means that at least some #Code #Chairs should become members of the #ScientificPanel.

5️⃣ Capacities & enforcement: Enforcing the GPAI rules will require serious #capacity at the AI Office. Talent for assessing the most advanced models is scarce, and the European Parliament has long called for an AIO of at least 200 people, including top-tier technical, legal, and policy experts. While a few engineers have reportedly been hired, Axel Voss and I are still waiting for updates on the legal and policy recruitments launched months ago. One thing is clear: even if formal powers only apply from August 2026, the AI Office must already now #scrutinize #compliance by the largest providers. We will be watching closely.
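To put the FLOP threshold into perspective, the sketch below compares rough training-compute estimates with the 10^25 FLOP mark at which the AI Act presumes systemic risk (Article 51). This is an illustrative back-of-the-envelope calculation only: the 6 × parameters × training-tokens rule of thumb is a common community heuristic for dense models, not a method prescribed by the Act, and the model sizes used are hypothetical.

```python
# Illustrative comparison of estimated training compute with the AI Act's
# 10^25 FLOP threshold, above which a GPAI model is presumed to pose
# systemic risk. The 6 * N * D approximation is a common heuristic for
# dense transformer training, not a method defined by the Act; the
# parameter and token counts below are hypothetical examples.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # cumulative training compute threshold in the AI Act


def estimated_training_flop(parameters: float, training_tokens: float) -> float:
    """Estimate training compute as ~6 FLOPs per parameter per training token."""
    return 6 * parameters * training_tokens


# Hypothetical models, for illustration only.
examples = {
    "mid-size model (7B params, 2T tokens)": estimated_training_flop(7e9, 2e12),
    "frontier-scale model (400B params, 15T tokens)": estimated_training_flop(400e9, 15e12),
}

for name, flop in examples.items():
    presumed_systemic = flop >= SYSTEMIC_RISK_THRESHOLD_FLOP
    print(f"{name}: ~{flop:.1e} FLOP -> presumed systemic risk: {presumed_systemic}")
```

Under this heuristic, only the largest frontier-scale training runs cross the threshold, which is why the calibration debate centres on making sure that any upward adjustment does not let the most capable models fall out of scope.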


Further details are available in the European Commission's press release of 1 August 2025.

