The Spanish presidency of the EU Council shared a revised mandate to negotiate with the European Parliament on the thorny issue of regulating foundation models under the upcoming AI law.
The AI Act is a flagship bill to regulate Artificial Intelligence based on its capacity to cause harm. The file is at the last phase of the legislative process, so-called trilogues, whereby the EU Commission, Council and Parliament negotiate the regulatory provisions.
EU policymakers aim to finalise an agreement at the next political trilogue on 6 December. Ahead of this crucial meeting, Spain, negotiating on behalf of EU countries, needs a revised mandate.
Euractiv reported the first part of the mandate on the law enforcement chapter, which will be discussed at the Committee of Permanent Representatives on Wednesday (29 November).
The second part of the mandate was circulated on Monday, but the part on foundation models was shared separately on Tuesday evening. It will be discussed at the ambassador level on Friday.
Foundation models
The rules for foundation models and powerful types of Artificial Intelligence like OpenAI’s GPT-4, which powers ChatGPT, have become a sticking point in the negotiations. At the last trilogue, there seemed to be a consensus on a tiered approach to foundation models, with tighter obligations for the most powerful ones.
However, three weeks ago, the negotiations hit the brakes due to mounting opposition from France, supported by Germany and Italy, to any binding rules for these models other than codes of conduct.
Last week, the Commission attempted to find a middle ground, proposing horizontal rules for all General Purpose AI (GPAI) models and codes of practice for the top-tier ones. The presidency shared the same Commission text with national delegates, with only a minor tweak.
However, the EU Parliament united in asking for tighter rules for these models in a working paper revealed by Euractiv. The presidency's text, circulated on Sunday (19 November), mostly preserved the Commission's compromise but introduced some elements from the Parliament's working paper.
The process for designating GPAI models as having systemic risks has been largely maintained. Still, there are two quantitative thresholds: training compute greater than 10^26, and more than 10,000 business users in the EU.
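Purely as an illustrative sketch, the two reported thresholds could be expressed as a simple check. The assumptions here are not in the text: that compute is counted in floating-point operations and that meeting either criterion is enough to trigger designation.

```python
# Illustrative sketch of the two quantitative thresholds reported above.
# Assumptions (not stated in the article): compute is measured in
# floating-point operations, and meeting EITHER threshold suffices.

TRAINING_COMPUTE_THRESHOLD = 10**26   # training compute
EU_BUSINESS_USERS_THRESHOLD = 10_000  # business users in the EU

def presumed_systemic_risk(training_compute: float, eu_business_users: int) -> bool:
    """Return True if a GPAI model meets either reported threshold."""
    return (training_compute > TRAINING_COMPUTE_THRESHOLD
            or eu_business_users > EU_BUSINESS_USERS_THRESHOLD)

# Hypothetical examples:
print(presumed_systemic_risk(5e26, 2_000))   # compute above threshold -> True
print(presumed_systemic_risk(1e24, 50_000))  # users above threshold -> True
print(presumed_systemic_risk(1e24, 2_000))   # neither threshold met -> False
```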
Horizontal obligations for all GPAI models were also kept, including ensuring that AI-generated content would be detectable as artificially generated or manipulated. A requirement was added mandating model evaluation following standardised protocols.
Regarding copyright, the wording still requires model providers to put ‘adequate measures’ in place to comply with legislative requirements and publish a sufficiently detailed summary of training data and copyright policies.
For GPAI models with systemic risks, the presidency kept the Commission's text mandating internal measures and a regulatory dialogue with the Commission to identify and mitigate potential systemic risks, with the additional requirement of ensuring adequate cybersecurity levels.
Model providers will be able to demonstrate compliance with the horizontal and systemic-specific obligations by adhering to codes of practice.
Additionally, the presidency proposes requiring providers of GPAI systems like ChatGPT to give downstream economic operators all the information they need to comply with the AI Act's obligations. If GPAI system providers allow their use in any high-risk scenario, they must indicate so and comply with the relevant requirements.
The national delegates are asked whether the proposed text would be acceptable, whether they would be flexible on including references to energy efficiency in the codes of conduct for models with systemic risks, and whether they would accept the exemption of open-source models.
Governance
On governance, the presidency asked for a mandate to negotiate that leaves last week's Commission proposal largely untouched.
This new approach is made necessary by the new regime for powerful AI models and is centred around an AI Office ‘hosted’ within the Commission and with a ‘strong link’ with the scientific community.
The European Artificial Intelligence Board, gathering national authorities, would remain a coordination platform and an advisory body to the Commission. The Council wants to maintain flexibility in appointing more than one competent authority at the national level.
Access to source code
The presidency also requested a revised mandate on less controversial parts of the bill.
In the initial proposal, market surveillance authorities and conformity assessment bodies were empowered to request access to the source code when assessing the compliance of high-risk AI systems with the AI Act’s requirements.
While the EU Parliament removed this possibility, the presidency considers “it is important to maintain this possibility at least for market surveillance authorities to be able to have access to source under limited conditions.”
Penalties
Concerning the sanction regime, the presidency is proposing to meet MEPs midway. The fines are set as a percentage of the company’s global annual turnover or a predetermined amount, whichever is higher.
The presidency suggested 6.5% for violations of the prohibited AI applications, 3% for violations of the AI Act's obligations, and 1.5% for the supply of incorrect information.
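The "whichever is higher" mechanism can be sketched as follows. The percentages come from the proposal as reported; the fixed amounts are hypothetical placeholders, since the article does not state them.

```python
# Sketch of the reported fine mechanism: a percentage of global annual
# turnover or a predetermined amount, whichever is higher. The percentages
# are from the article; any fixed amounts passed in are HYPOTHETICAL,
# as the article does not specify them.

PENALTY_TIERS = {
    "prohibited_practices": 0.065,   # 6.5% for prohibited AI applications
    "other_obligations": 0.03,       # 3% for violations of the Act's obligations
    "incorrect_information": 0.015,  # 1.5% for supplying incorrect information
}

def fine(violation: str, global_turnover: float, fixed_amount: float) -> float:
    """Return the higher of the turnover-based and predetermined fines."""
    return max(PENALTY_TIERS[violation] * global_turnover, fixed_amount)

# Hypothetical example: 3% of a 1bn turnover (30m) vs a 10m fixed amount.
print(fine("other_obligations", 1_000_000_000, 10_000_000))  # -> 30000000.0
```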
Entry into force
Concerning when the AI rulebook will start to bite, the presidency proposed that the regulation should apply two years after it enters into force, except for the provisions on conformity assessment bodies, governance and penalties, which would apply 12 months earlier.
AI literacy
The agreed wording states that AI providers and deployers should take measures to ensure a sufficient level of AI literacy among their staff. Meanwhile, the reference to measures being taken at the national and EU levels has been removed.