Spanish presidency pitches obligations for foundation models in EU’s AI law


The Spanish presidency of the EU Council of Ministers has drafted a series of obligations for foundation models and General Purpose AI as part of the negotiations on the AI Act.

Spain is currently leading the negotiations on behalf of EU governments on the Artificial Intelligence Act, a landmark legislative proposal to regulate AI based on its potential to cause harm. The file is currently in the last phase of the legislative process, so-called trilogues between the EU Council, Parliament, and Commission.

Since the meteoric rise of ChatGPT, a world-famous chatbot based on OpenAI’s GPT-4 model, EU policymakers have struggled to define how this type of Artificial Intelligence should be covered under the EU’s AI regulation.

In mid-October, Euractiv revealed that the policymakers’ thinking was heading toward a tiered approach, with a stricter regime for the most powerful foundation models like GPT-4. On Sunday, the Spanish presidency shared the first developed version of the legal text, seen by Euractiv, with the other EU countries for feedback.

Foundation models

A foundation model is defined as “a large AI model that is trained on a large amount of data, which is capable to competently perform a wide range of distinctive tasks, including, for example, generating video, text, images, conversing in natural language, computing or generating computer code”.

These models must comply with transparency obligations, namely providing downstream AI system providers with up-to-date technical documentation explaining the capabilities and limitations of the foundation model, together with a set of elements to be detailed in an annexe.

Additionally, foundation model providers will have to demonstrate that they have taken adequate measures to ensure the model was trained in compliance with EU copyright law, in particular the provision under which text and data mining of content made publicly available online requires the consent of rightsholders where they have opted out of the copyright exception for text and data mining, including by machine-readable means.

Thus, foundation model developers will have to put in place a system to respect the opt-out decisions of content creators.
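The draft does not prescribe any particular technical mechanism for honouring such opt-outs. As a purely illustrative sketch in Python, one widely used machine-readable signal is the robots.txt exclusion protocol, which a training-data crawler could consult before mining a page; the crawler name ExampleAIBot below is a hypothetical placeholder, not anything named in the text.

# Minimal, illustrative sketch of honouring a machine-readable opt-out
# before text and data mining. robots.txt is one common opt-out signal;
# the draft AI Act does not mandate any particular protocol, and
# "ExampleAIBot" is a hypothetical crawler name used only for this example.
from urllib import robotparser
from urllib.parse import urlsplit

def may_mine(url: str, crawler_name: str = "ExampleAIBot") -> bool:
    """Return True only if the site's robots.txt does not opt out of crawling."""
    parts = urlsplit(url)
    robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"
    parser = robotparser.RobotFileParser()
    parser.set_url(robots_url)
    try:
        parser.read()  # fetch and parse the site's robots.txt
    except OSError:
        return False  # opt-out status unknown: err on the side of not mining
    return parser.can_fetch(crawler_name, url)

if __name__ == "__main__":
    print(may_mine("https://example.com/article.html"))

A crawler built along these lines would simply skip any URL for which may_mine returns False.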

Another obligation entails publishing a sufficiently detailed summary of the content used to train the foundation model, and of how the provider manages copyright-related aspects, based on a template to be developed by the European Commission.

High-impact foundation models

At the last negotiation session, the EU policymakers agreed to introduce a stricter regime for ‘high-impact’ foundation models.

A ‘high-impact’ foundation model is defined as “any foundation model trained with a large amount of data and with advanced complexity, capabilities and performance well above the average for foundation models, which can disseminate systemic risks along the value chain, regardless of whether it is integrated or not in a high-risk system”.

Within 18 months from the AI law’s entry into force, the Commission will have to adopt implementing or delegated acts to specify the threshold for classifying a foundation model as ‘high-impact’ in line with market and technological developments.

The EU executive will designate the foundation models that meet these thresholds in consultation with the AI Office.

The law’s chapeau provisions, which clarify how the articles are to be interpreted, will explain what an AI model is and how it is built, and will include references to the scientific community, the interaction between datasets and foundation models, and how AI applications can be built on top of them.

The obligations for these systemic models include adversarial vetting, a process known as red-teaming. What remains to be discussed is how this vetting obligation should apply to high-impact foundation models commercialised as a system that integrates components such as traditional software, as is the case of GPT-4.

Importantly, EU countries are to discuss whether red-teaming needs to be done by external experts. The presidency considers that the provider itself can carry out this vetting, since these models will also be subject to audits.

The AI Office might request documentation proving that the model complies with the AI Act and, upon reasoned request, mandate an independent audit to assess the model’s compliance with the AI law and with any commitments made under the codes of conduct that the Commission is meant to encourage providers to draw up.

An obligation marked as ‘possibly additional’ would require high-impact foundation model providers to establish a system for keeping track of serious incidents and the related corrective measures.
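The draft does not spell out what such a tracking system would look like in practice. Below is a minimal Python sketch of the kind of record-keeping the obligation points at; every class and field name here is an assumption for illustration, not something taken from the text.

# Minimal, illustrative sketch of an incident register of the kind the
# 'possibly additional' obligation describes. All names and fields are
# assumptions for illustration; the draft text specifies no schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SeriousIncident:
    description: str         # what happened
    corrective_measure: str  # what the provider did about it
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

class IncidentRegister:
    def __init__(self) -> None:
        self._incidents: list[SeriousIncident] = []

    def record(self, description: str, corrective_measure: str) -> SeriousIncident:
        """Log a serious incident together with its corrective measure."""
        incident = SeriousIncident(description, corrective_measure)
        self._incidents.append(incident)
        return incident

    def report(self) -> list[SeriousIncident]:
        """Return all recorded incidents, e.g. for the AI Office upon request."""
        return list(self._incidents)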

Moreover, providers of high-impact foundation models will need to assess systemic risks in the EU, including the risks stemming from integrating the model into an AI system, at least once a year after the market launch and for any new version released.

The risk assessment should cover the dissemination of illegal or harmful content and any reasonably foreseeable negative effects relating to major accidents or to democratic processes.

The Commission is empowered to adjust the provisions on foundation models and high-impact foundation models based on market and technological developments.

General Purpose AI

The final layer consists of General Purpose AI systems like ChatGPT, understood as systems “that may be based on an AI model, can include additional components such as traditional software and through a user interface has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems.”

The Spanish presidency proposed obligations for General Purpose AI system providers when they enter into licensing agreements with downstream economic operators that might employ the system for one or more high-risk use cases.

These obligations include stating in the instructions for use the high-risk uses for which the system may be employed, and providing the technical documentation and all the information relevant for the downstream AI provider to comply with the high-risk requirements.

The providers of General Purpose AI systems can also prohibit certain high-risk uses. In this case, they have to take all necessary and proportionate measures to detect possible misuses and enforce the prohibition.

[Edited by Nathalie Weatherald]
