The Spanish presidency of the EU Council asked for feedback on a series of less controversial points after the negotiations on the AI Act with the European Parliament hit a wall on foundation models.
The AI Act is a legislative proposal to regulate Artificial Intelligence based on its capacity to cause harm. The file is currently at the last stage of the legislative process, with so-called trilogue negotiations between the EU Council, Parliament and Commission.
On Friday (10 November), Euractiv reported that representatives of the European Parliament walked out of a technical meeting after the Spanish presidency, under pressure from France and Germany, tried to backtrack on the approach to regulating foundation models.
EU countries had until Monday to provide written comments before discussing the matter at a meeting of the Telecom Working Party, a technical body of the Council, on Friday (17 November). Some options are expected to be circulated before then.
Euractiv understands the presidency is mediating directly with the concerned countries on a possible solution acceptable to the European Parliament. Meanwhile, this stalemate is disrupting the already tight agenda, as the chapter on foundation models was supposed to be agreed on at the technical meeting on Thursday.
In parallel, the Spaniards also circulated a consultation paper with some of the European Parliament’s less politically charged proposals to gather member states’ feedback and assess their flexibility.
The deadline for submitting written comments on these topics was Tuesday (14 November).
Responsibilities along the AI value chain
The most significant aspect of the consultation paper relates to the responsibilities along the AI value chain.
Remarkably, the paper was shared before France and Germany came out vehemently against any obligation on foundation models. Still, the presidency’s approach seems to be to keep this part separate from the foundation model provisions.
The original Commission proposal detailed the obligations of distributors, importers and users in a specific article, which the Council deleted in favour of conditions under which other actors would be subject to the obligations of a provider.
The parliamentarians kept the original article and further expanded it to include obligations meant to ensure that downstream economic operators adapting a General Purpose AI system like ChatGPT can meet the AI Act’s requirements.
The presidency noted that this approach goes beyond the Council’s version but could be important to ensure that high-risk AI system providers can comply with the legislative requirements.
Spain proposed several in-between options. One would be to accept the Parliament’s version but introduce the references to the interaction with the relevant EU harmonisation legislation from the Council’s mandate.
Another option entails deleting this obligation for foundation models, as it would apply anyway under the new foundation model approach.
Finally, the presidency suggested deleting the obligation for the Commission to produce model contractual terms or the reference to trade secrets from the MEPs’ text.
Unfair contractual terms
Also concerning the relationship between General Purpose AI providers and downstream economic operators are provisions suggested by the Parliament to prevent the former from imposing unfair contractual terms on the latter.
“While the intention is to avoid abuses from large companies to smaller ones, it seems that the article is out of the scope of the Regulation. This statement is also based on the initial feedback from the delegations,” reads the paper.
Here, the options are only to accept or reject the proposal.
Fundamental rights impact assessment
Left-wing MEPs proposed the obligation for users of high-risk AI systems to conduct a fundamental rights impact assessment. Spain agreed to a toned-down version of this proposal, but only for public bodies.
However, whether private companies should also be covered remains open, with some EU countries admittedly preferring this broader scope and the European Parliament seemingly willing, in exchange, to drop the requirement for users to conduct a public consultation of potentially affected groups.
General principles
The European Parliament’s mandate introduces a series of general principles that all AI operators should make their best efforts to follow when developing and using AI systems. These principles would also be embedded in the requests for technical standards.
“The Presidency believes that Member States could have concerns regarding this article as its provisions could compromise the risk-based approach and put an unnecessary burden in the standardization process,” reads the paper.
Madrid also expressed scepticism about the measure, arguing that some of these principles are already covered in existing legislation and that it is unclear why they should apply to every AI system.
The options include accepting or rejecting these principles in full, agreeing to include them only in the law’s preamble, or treating them as guidance for developing codes of conduct. Separately, EU countries are asked whether it is acceptable for the principles to be included in the standardisation requests.
AI literacy
MEPs introduced wording requiring EU and national institutions to promote measures for developing sufficient AI literacy, while also obliging providers and deployers of AI applications to ensure that their staff members have sufficient knowledge of these systems.
Again, besides accepting or rejecting this proposal, the presidency proposed moving it to the text’s preamble, which is not legally binding. Moreover, the paper asks whether it would be acceptable to move these AI literacy provisions to other parts of the text, such as those on transparency, human oversight or codes of conduct.
[Edited by Nathalie Weatherald]