AI Act: Leading MEPs propose initial criteria for classifying foundation models as ‘high-impact’

The EU lawmakers spearheading the work on the AI rulebook suggested a first set of criteria to identify the most powerful foundation models that must follow a specific regime.

The AI Act is a landmark piece of EU legislation regulating Artificial Intelligence based on its potential to cause harm. The file is currently at the last stage of the legislative process, the so-called trilogues, in which the EU Commission, Parliament, and Council hash out the final provisions.

One of the sticking points in this late stage of the negotiations has been dealing with foundation models like GPT-4, on which several AI applications can be built, like the world’s most famous chatbot, ChatGPT.

On Tuesday (7 November), Euractiv exclusively revealed that the Spanish EU Council presidency, which leads the negotiations on behalf of the Council, circulated a first draft of obligations for foundation models, including the most powerful ones, dubbed ‘high-impact’.

On Wednesday, the offices of the European Parliament’s co-rapporteurs Dragoș Tudorache and Brando Benifei shared a reaction to the presidency’s draft with MEPs. The co-rapporteurs’ text will be discussed at a political meeting on Thursday.

Top-tier classification

A critical aspect of this tiered approach is how ‘high-impact’ foundation models are separated from the rest. In the Council’s proposal, the Commission was tasked with developing secondary legislation to specify these thresholds within 18 months after the law enters into force.

For the leading MEPs, these criteria are too important to be left to the Commission alone. They have therefore proposed four initial criteria against which concrete thresholds would be set, if the approach is confirmed.

The criteria are the size of the data samples used for training, the size of the parameters representing the model, the amount of compute used for training, measured in floating point operations (FLOPs), and a set of performance benchmarks that are still to be fully elaborated.

In this scenario, the Commission would have 18 months to come up with a methodology to assess these thresholds. In addition, the EU executive would be empowered to adjust them, for instance, if technological developments reduced the quantity of data samples required to train a powerful foundation model.

What remains up for discussion is whether all the thresholds would have to be met or whether a couple would suffice. Following the Digital Services Act model, the parliamentarians want providers to supply information relating to the pre-set thresholds.

Still, the Commission could also base its designation decision on other information received via the disclosure obligations. According to the MEPs’ text, the designation would be dropped if the model remains below the relevant thresholds for at least one year.
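To make the mechanics concrete, here is a minimal sketch in Python of how such a designation check could work. All threshold values and names (HYPOTHETICAL_THRESHOLDS, FoundationModelProfile, meets_thresholds) are illustrative assumptions, not figures from the proposal: as noted above, the concrete numbers are still to be set and later adjusted by the Commission.

```python
from dataclasses import dataclass

# Hypothetical placeholder values: the actual thresholds remain to be set.
HYPOTHETICAL_THRESHOLDS = {
    "training_samples": 1e9,   # size of data samples used for training
    "parameters": 1e10,        # size of the parameters representing the model
    "training_flops": 1e25,    # compute used for training, in FLOPs
    "benchmark_score": 80.0,   # performance benchmarks, still to be elaborated
}

@dataclass
class FoundationModelProfile:
    training_samples: float
    parameters: float
    training_flops: float
    benchmark_score: float

def meets_thresholds(profile: FoundationModelProfile, require_all: bool = True) -> bool:
    """Check the four proposed criteria.

    `require_all` mirrors the open question of whether every threshold
    must be met or whether a couple would suffice.
    """
    checks = [
        profile.training_samples >= HYPOTHETICAL_THRESHOLDS["training_samples"],
        profile.parameters >= HYPOTHETICAL_THRESHOLDS["parameters"],
        profile.training_flops >= HYPOTHETICAL_THRESHOLDS["training_flops"],
        profile.benchmark_score >= HYPOTHETICAL_THRESHOLDS["benchmark_score"],
    ]
    return all(checks) if require_all else sum(checks) >= 2

def designation_dropped(months_below_thresholds: int) -> bool:
    """Per the MEPs' text, designation is dropped after at least one year below thresholds."""
    return months_below_thresholds >= 12
```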

For the leading MEPs, a definition of high-impact foundation models is not necessary since their designation will be based on the thresholds.

Obligations for high-impact foundation model providers

The co-rapporteurs also proposed some modifications to the obligations for high-impact model providers. A specification has been added that the obligations apply regardless of whether the model is made available under free and open-source licenses.

The leading MEPs want high-impact foundation models to be registered in the EU public database, which was initially conceived for users of AI systems at significant risk of causing harm.

An additional obligation has been proposed requiring these powerful models to comply with relevant resource-use and waste standards.

The presidency’s proposal entails that these providers must assess potential systemic risks. The lawmakers proposed adding to these systemic risks any foreseeable negative effects on the exercise of fundamental rights, gender-based violence, and the protection of public health, the environment and minors.

Regarding risk mitigation, the MEPs want to task the AI Office with publishing a yearly report identifying the most prominent recurring risks, indicating the best practices of risk mitigation, and breaking down systemic risks per member state.

Foundation models

The approach at hand involves establishing some horizontal obligations for all foundation models. However, the co-rapporteurs consider the Council’s definition of foundation models unclear, as some elements could be confused with generative AI.

The Parliament’s text clarifies that these obligations apply before the model is placed on the market. The cross-references to EU copyright law that the Spanish presidency introduced were moved to a specific article on generative AI.

Generative AI

The leading MEPs proposed a new article with ex-ante obligations for generative foundation models and AI systems capable of generating synthetic audio, image, video or text content based on correlations and patterns learnt from data.

For the parliamentarians, the outputs of these systems must be marked as artificially generated or manipulated in a machine-readable format.
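As an illustration of what such a machine-readable marking could look like in practice, here is a minimal Python sketch that embeds a provenance tag in a generated PNG image’s metadata using the Pillow library. The key names and values are hypothetical assumptions; the text does not prescribe a specific format, and industry efforts such as C2PA define richer provenance schemes.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_disclosure(image: Image.Image, path: str) -> None:
    """Save a generated image with a machine-readable 'artificially generated' tag."""
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")           # hypothetical disclosure key
    metadata.add_text("generator", "example-model-v1")  # hypothetical model identifier
    image.save(path, pnginfo=metadata)

# Usage: read the tag back to confirm it is machine-readable.
# img = Image.open("output.png")
# print(img.text.get("ai_generated"))
```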

Moreover, providers would have to “train, and where applicable, design and develop the foundation model or AI system in such a way as to ensure adequate safeguards against the generation of content in breach of Union law, without prejudice to fundamental rights, including freedom of expression”.

The Spanish presidency’s provisions – requiring these providers to publish a detailed summary of the content used to train the model and to demonstrate that they have put in place adequate measures to respect copyright law – were left untouched.

Still, the paragraph on establishing capabilities that respect the opt-out decisions of content creators was removed.

Finally, the text mandates providers to “ensure their technical solutions are effective, interoperable, robust and reliable as far as this is technically feasible, taking into account specificities and limitations of different types of content”.

General Purpose AI

The co-rapporteurs proposed defining general-purpose AI as “an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed”.

Regarding obligations, the co-rapporteurs insist on maintaining the Parliament’s text on the responsibilities of providers along the entire AI value chain.

[Edited by Zoran Radosavljevic]
