The European Commission circulated on Sunday (19 November) a possible compromise on the AI law to break the deadlock on foundation models, applying the tiered approach to General Purpose AI and introducing codes of practice for models with systemic risks.
The AI Act is a landmark bill to regulate Artificial Intelligence based on its potential risks. The legislative proposal is currently in the last phase of the legislative process, the so-called trilogues between the EU Commission, Council and Parliament.
In the past weeks, the EU policymakers involved have been butting heads over how to regulate powerful foundation models like GPT-4, the model behind ChatGPT, the world’s most famous chatbot. Such versatile systems are known as General Purpose AI.
On 10 November, Euractiv reported that the whole legislation risked derailing after the clash, with Europe’s three largest economies speaking out against the tiered approach initially envisaged for foundation models and pushing back against any regulation other than codes of conduct.
However, leaving foundation models without any obligations is not an option for the European Parliament. The MEPs involved in the file are meeting on Tuesday (21 November) to discuss foundation models, governance, and law enforcement.
On Sunday, the EU executive shared a compromise with the European Parliament’s co-rapporteurs, who shared it with their colleagues on Monday. The text maintains the tiered approach but focuses on General Purpose AI, tones down the obligations and introduces codes of practice.
GPAI models and systems
The text is a significant rework of the version the Spanish presidency circulated earlier this month, on which the leading MEPs provided feedback. At its core, there is now a distinction between General Purpose AI (GPAI) models and systems.
“‘General-purpose AI model’ means an AI model, including when trained with a large amount of data using self-supervision at scale, that is capable to [competently] perform a wide range of distinctive tasks regardless of the way the model is released on the market,” reads the new definition.
By contrast, a GPAI system would be “based on an AI model that has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems.”
The idea is that GPAI models can entail systemic risks related to ‘frontier capabilities’, to be assessed based on ‘appropriate’ technical tools and methodologies. In their notes, the co-rapporteurs question the terminology and the vagueness of the text.
Besides this qualitative criterion, the draft initially classifies GPAI models as having systemic risk using a quantitative threshold: the amount of compute used for their training, measured in floating-point operations (FLOPs), being greater than 10^26.
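For illustration only, a minimal sketch of how such a compute-based classification could work, assuming a model’s total training compute in FLOPs is already known; the constant and function names are hypothetical and not part of the draft text:

```python
# Illustrative sketch of the draft's quantitative criterion.
# The threshold value comes from the compromise text; everything
# else here is an assumption made for demonstration purposes.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e26  # training-compute threshold in the draft

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if a model's training compute exceeds the draft threshold."""
    return training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Example: a model trained with roughly 2 x 10^25 FLOPs falls below the bar.
print(presumed_systemic_risk(2e25))  # False
```

Under the draft, crossing this numeric bar would only create a presumption, which providers could then seek to rebut, as described below.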
The Commission would be empowered to adopt secondary legislation to specify technical elements of GPAI models further and keep benchmarks up to date with market and technological development.
In their comments, the co-rapporteurs ask whether the Commission could be tasked with updating the definition of GPAI models, and they propose additional wording requiring the EU executive to develop a methodology for assessing training compute within 18 months of the regulation’s entry into force.
As under the Digital Services Act, GPAI model providers that meet this threshold would have to notify the Commission. However, the text allows providers to request an exemption by arguing that their model does not have frontier capabilities – a provision the co-rapporteurs seem to consider unnecessary.
The Commission could also designate GPAI models as presenting systemic risks on its own initiative.
Obligations for GPAI models
The text includes some horizontal obligations for all GPAI models, which would require up-to-date technical documentation through model cards – a proposal also present in the Franco-German-Italian non-paper.
The model cards would include information on the training process, evaluation strategies, and sufficient information for downstream economic operators that want to build a new AI system on top of the model to comply with the AI Act. A minimum set of elements is detailed in an annexe.
Regarding copyright, the text merely states that the model providers must implement a policy to respect the Copyright Directive, especially the reservation of rights. A ‘sufficiently detailed summary’ of the content fed to train the model would need to be published.
Synthetically generated content like text and images would have to be marked in a machine-readable format and be detectable as artificially generated or manipulated.
Obligations for GPAI models with systemic risks
Additional requirements for models with systemic risks include establishing internal measures and engaging with the Commission to identify potential systemic risks and develop possible mitigation measures, including through a code of practice in line with international approaches.
These providers would also have to keep track of any serious incidents and relevant corrective measures, and report them without delay to the Commission or national authorities as relevant.
Codes of practice
A new article is dedicated to codes of practice, which the EU executive should facilitate drawing up – in what seems to be a reply to the France-led non-paper that called for codes of conduct in line with the principles developed under the G7’s Hiroshima Process.
The codes of practice should cover at least the transparency obligations for all GPAI models, like model cards and summary templates, the identification of systemic risks at the EU level, and risk assessment and mitigation measures – including possible propagation along the value chain.
The drafting of the codes of practice would include the relevant GPAI model providers and national authorities, with support from civil society and other stakeholders. The codes should include key performance indicators and regular reporting on the implementation of the commitments.
The Commission might approve a code of practice deemed to contribute to the application of the AI Act, which would give adhering members a presumption of conformity with the regulation’s obligations. The text does not mention possible sanctions.
[Edited by Nathalie Weatherald]