
AI Act: MEPs close ranks in asking for tighter rules for powerful AI models


The MEPs involved in the negotiations on the EU’s AI rulebook circulated a working paper detailing their proposed approach to regulating the most powerful Artificial Intelligence models on Friday (23 November).

The AI Act, a landmark bill to regulate AI based on its potential to cause harm, is at the last phase of the legislative process, with the EU Commission, Parliament, and Council gathered in ‘trilogues’ to hash out the final provisions.

In this late phase of the negotiations, EU policymakers have been butting heads on the approach for foundation models, powerful types of AI like OpenAI’s GPT-4, which powers the world’s most famous chatbot, ChatGPT.

At a political trilogue in mid-October, there seemed to be a consensus toward a tiered approach, with horizontal obligations for all foundation models and additional requirements for the models considered to present a systemic risk for society.

However, two weeks ago, Euractiv reported how the world’s first comprehensive AI law was in jeopardy after France, Germany and Italy pushed back against any obligation on foundation models, prompting Parliament to leave the negotiating table.

Europe’s three largest economies subsequently shared their views in a non-paper that departed from the AI Act’s technology-neutral, risk-based approach in favour of codes of conduct.

On Monday, Euractiv revealed a compromise text from the European Commission that tried to revive the tiered approach with toned-down transparency obligations for all foundation models and codes of practice for those with systemic risk.


The revised tiered approach was discussed at internal Council meetings on Wednesday and Thursday, where France remained sceptical, while Germany and Italy took a more flexible stance.

Most member states considered the Commission’s text a step in the right direction, although some reservations remained on the definitions and broad wording related to secondary legislation.

However, the EU executive’s mediation attempt is proving harder to sell in the European Parliament, which already saw the tiered approach as a watered-down compromise but eventually accepted the principle of focusing more on the most consequential models.

“In Parliament, there is a clear majority position in wanting obligations, perhaps limited but clear, for the developers of the most powerful models,” Brando Benifei, one of the MEPs spearheading the file, told ANSA, warning that otherwise no political agreement could be found.

The issue was meant to be discussed at a technical trilogue on Friday. However, the discussion was postponed to Monday as the Spanish presidency considered it did not have a negotiating mandate yet.

Meanwhile, the MEPs leading on the file have shared with their colleagues a working paper, seen by Euractiv, that sets out a series of binding obligations for providers of foundation models that pose a systemic risk.

The obligations include internal evaluation and testing, including red-team assessment, cybersecurity measures, technical documentation and energy-efficiency standards.

“It is key to the Parliament to underline that these obligations would only apply to the original developer of the designated models of the systemic risk category (i.e. OpenAI, Anthropic, StabilityAI, Meta) but not those downstream developers that revise or refine the model,” reads the document.

The AI Office would then be able to review the technical documentation and model evaluation and impose sanctions in case the regulation’s requirements are breached.

Both sides of the aisle support these mandatory requirements. Conservative MEP Axel Voss also dubbed an approach based solely on voluntary commitments unacceptable, stating that minimum standards should cover transparency, cybersecurity and information obligations.

“We cannot close our eyes to the risks,” he said on X.

The Parliament wants to maintain the horizontal transparency requirements, which include providing model cards detailing the training process and all the relevant information that downstream economic operators building an AI system on top of the model need to comply with the AI law’s obligations.

Green lawmaker Kim van Sparrentak told Contexte the Franco-German-Italian approach was ‘preposterous’, noting that the argumentation was the same as that of Big Tech companies and that non-binding initiatives have yielded scarce results in the past.

In the working paper, the parliamentarians accept the idea of EU codes of practice, but only to complement the horizontal transparency requirements for all foundation models; these codes might be used, for instance, to establish the industry’s best practices on risk assessment.

Moreover, the MEPs want to extend the drafting process of these codes of practice to SMEs, civil society and academia, a principle that was removed from the Council’s version of the Commission compromise.

Regarding where to draw the line for models deemed to pose a systemic risk, the working paper outlines that EU lawmakers are not satisfied with the single quantitative threshold suggested by the Commission, based on the amount of computing power used to train a model.

Following an assessment from researchers at Stanford University, the Parliament wants the designation to be based on several criteria like the model’s capabilities, number of users, financial investment, modalities and release strategies.

The idea is to give the AI Office discretion in assessing whether a model poses a systemic risk for society based on this pre-set list of criteria, which can be revised to keep up with market and technological developments.

“We need safeguards for these models because of the impact they have, because of the versatility they have and the fact that we are going to soon find them in a lot of the products and services that are around us,” said Parliament’s co-rapporteur Dragoș Tudorache.

[Edited by Alice Taylor]

