The Spanish presidency needs to conclude the technical preparation for the EU’s AI Act this week to request a revised mandate ahead of what is meant to be the last high-level meeting to reach a political agreement on the file.
The AI Act is landmark legislation to regulate Artificial Intelligence based on its capacity to cause harm. The bill is currently in the last phase of the legislative process, with representatives of the EU Commission, Council and Parliament meeting in ‘trilogues’.
As Spain is holding the rotating presidency of the EU Council, Madrid will need a revised mandate before entering the next trilogue on 6 December, which might be the last opportunity to reach a political agreement on the AI law under the Spanish presidency – and also before the European elections’ recess.
The revised mandate will be divided into two packages to be discussed at the ambassador level on Wednesday and Friday next week. Therefore, the presidency is trying to tie up all the loose ends at the technical level by the end of this week.
Ahead of a meeting on Tuesday (21 November) of the Telecom Working Party, a Council technical body, the presidency shared an internal note, seen by Euractiv, that touches on open source, foundation models and governance.
Powerful AI models
On the thorny issue of foundation models, the document notes that “the Presidency has done the best efforts to align the different comments within the Council in a way where there is a possible opening for reaching an agreement with the European Parliament.”
The text shared is the same Commission proposal Euractiv revealed on Monday, with one significant difference: the deletion of the paragraph stating that the Commission could invite not only model providers but also national authorities, and that civil society and other stakeholders could support the process.
The Spaniards stressed that the main element of giving codes of conduct a predominant role, as requested by France, Germany and Italy, made it into the text in the form of codes of practice that would ensure a presumption of conformity with the regulation.
“Please note that any metric to classify FM [foundation models] will be updated in less than one year,” the note adds.
An internal technical discussion on this topic is scheduled for Thursday. On Wednesday, the Commission’s top digital bureaucrat, Roberto Viola, gave a one-and-a-half-hour presentation to EU ambassadors on the Commission’s compromise.
Euractiv understands that France remained firm in its position not to regulate foundation models during the meeting. Germany showed more flexibility, while a group of liberal countries expressed some reservations but showed openness to the compromise in order to reach an agreement with the European Parliament.
In an internal meeting on Tuesday, several MEPs involved in the AI law expressed critical views towards the Commission’s text on foundation models and governance. However, Euractiv understands the political pressure to finalise the legislation might force the more sceptical lawmakers to accept the compromise.
A technical trilogue is scheduled for Friday. Although no agenda had been shared by the time of publication, Euractiv understands foundation models, governance and law enforcement are likely to be on the menu.
Governance
The text on governance is the same as the one the EU Commission shared with the European Parliament’s co-rapporteurs on Sunday and that Euractiv reported on Tuesday. In the note, the presidency remarks that the Council’s idea of a pool of experts will be present in the final text, implicitly referring to the scientific panel.
Open source
On open source, the presidency argues that open-source development is a critical innovation driver in the technology field, but that the risks linked to the use or characteristics of AI products do not depend on the type of license under which they were delivered.
To balance innovation and protection, “the Presidency proposes an exemption from the AIA obligations in the case of systems and components provided under open source licenses.”
At the same time, the note specifies that high-risk AI systems and high-impact foundation models – or General Purpose AI models with systemic risks, as they are now called – would still be covered under the regulation. Member states are asked whether they can accept this approach.
As no legislative text has been proposed, it is unclear whether the presidency wants to put forward new wording or accept the EU Parliament’s mandate, which similarly excluded AI components provided under free and open source licenses from the regulation’s scope, except for high-risk uses and foundation models.
Other loose ends
At a Telecom Working Party meeting on Tuesday, attachés discussed a series of less controversial parts of the text. Euractiv understands there were no red lines, although the presidency’s room for manoeuvre on the law enforcement chapter was less clear, as only a handful of countries expressed flexibility.
National representatives were rather flexible on the obligations of users, the provisions on responsibilities along the value chain, and the link between law enforcement and real-world testing for regulatory sandboxes.
There was general agreement to keep the definition of AI in line with the OECD’s. There was more resistance to making concessions to the Parliament on the fundamental rights impact assessment.
[Edited by Nathalie Weatherald]