Foundation models, governance, and market concentration rank high on the list of stakeholders’ concerns as EU policymakers prepare to finalise the world’s first AI law.
The AI Act is a flagship legislative initiative to regulate artificial intelligence based on its potential to cause harm. All eyes are currently on the so-called ‘trilogue’ negotiations between the EU Commission, Council and Parliament that are set to close the file.
Meanwhile, stakeholders have raised several concerns about the AI law, ranging from how future-proof its governance structure is to the trend of market concentration and how to regulate the most powerful AI models.
Governance
“I think the focus needs to be on building some agile governance that can keep up with the speed of innovation,” Paula Gürtler, research assistant at the think tank CEPS, said at a Euractiv-hosted event last week.
For Gürtler, global AI governance is a crowded space: she cited the 2019 OECD AI principles, the G7 Hiroshima process, the G20 Osaka leaders’ declaration and the UNESCO recommendations on AI.
In Gürtler’s view, this crowding creates “greater potential for international cooperation on AI”, mainly along three streams: sharing knowledge, managing the extraterritorial effects of AI regulation and ensuring benefits are shared across communities.
Market concentration
“There is a very strong possibility and very strong risk of extreme oligopolies” in the AI market, warned Marco Bianchini, economist and coordinator of the Digital for SME Global Initiative at the OECD, on the same panel.
This is a reason for concern, Gürtler explained: “The algorithmic divide is something to look out for.” Eventually, she said, AI should be considered not only from a risk perspective but also in terms of its enablers, so that the technology can benefit everyone.
Foundation models
Looking at the AI sector from this perspective, some speakers expressed scepticism towards the self-regulatory approach that France, Germany and Italy have pushed for regulating the most powerful AI models.
“I think I am generally rather sceptical of codes of conduct and self-regulation,” said Gürtler.
In her view, companies cannot be trusted to self-regulate because their main objective is maximising profits. She called for a body to provide oversight of developer companies.
Bianchini said the EU debate was polarising over the regulation of foundation models, with the European Parliament pushing for a much tighter regime than the codes of conduct requested by Europe’s three largest economies.
However, the discussion on whether and how to regulate foundation models is also happening in other parts of the world, Bianchini said, citing Canada and the United States.
MEP Ibán García del Blanco added: “I am sceptical too about self-regulation, but I think we can create the conditions in which we could encourage self-regulation in a way that would be more profitable than not.”
García del Blanco pushed for introducing into the AI Act general principles for all AI models, meant to guide the ethical use of AI.
Gürtler reacted by saying she was not convinced, explaining that general concepts like equality or fairness leave much room for interpretation, which can be detrimental to end-users. She suggested certification schemes instead, which leave much less room for interpretation.
Bianchini added that innovation in AI was happening in “both directions”, explaining that one might build foundation models to monitor the regulatory compliance of other foundation models.
[Edited by Luca Bertuzzi/Nathalie Weatherald]