It would be irresponsible for the EU to cast aside regulation of European foundation model developers. To support its SMEs and ensure AI works for people and society, the EU must create rules for these companies in the AI Act, writes Connor Dunlop.
Connor Dunlop is the EU Public Policy Lead at the Ada Lovelace Institute.
The European Union has a long history of regulating technologies that pose serious risks to public safety and health. Whether it’s automobiles, planes, food safety, medical devices or drugs, the EU has established product safety laws that create clear rules for companies to follow.
These rules keep people safe, protect their fundamental rights, and ensure the public trusts these technologies enough to use them. Without regulation, essential public and commercial services are more likely to malfunction or be misused, potentially causing considerable harm to people and society.
AI technologies, which are becoming increasingly integrated into our daily lives, are no exception to this.
This is the lens through which to view the current debate in the EU over the AI Act, which seeks to establish harmonised product safety rules for AI. This includes foundation models, which pose significant risks given their potential to form the AI infrastructure on which downstream SMEs build.
That is why EU legislators have proposed guardrails for foundation model providers, including independent auditing, safety and cybersecurity testing, risk assessments and mitigation.
Given the range and severity of risks that foundation models raise, these proposals are reasonable steps for ensuring public safety and trust – and for ensuring that the SMEs using these products can be confident they are safe.
But last week, France, Germany and Italy rejected these requirements and proposed that foundation models should be exempt from any regulatory obligations.
This position has now raised the prospect of indefinitely delaying the entire EU AI Act – which covers all kinds of AI systems, from biometrics technologies to systems that impact our electoral processes.
France and Germany claim these regulatory obligations will be too burdensome for a handful of companies that have raised hundreds of millions in funding to build open-source foundation models.
But it would be irresponsible for the EU to cast aside regulation of large-scale foundation model providers to protect a couple of ‘national champions’. Doing so would ultimately stifle innovation in the EU’s AI ecosystem, in which downstream SMEs and startups make up the vast majority. SMEs wishing to integrate or build on foundation models will not have the expertise, capacity or – importantly – access to the models to make their AI applications compliant with the AI Act.
Model providers are significantly better placed to conduct robust safety testing, and only they are aware of the full extent of models’ capabilities and shortcomings. It makes sense that obligations to conduct safety testing live with them, as these will benefit the thousands of downstream users of these systems.
This is why the DIGITAL SME Alliance, which represents 45,000 ICT SMEs in Europe, has called for the fair allocation of responsibility in the value chain, including at the model layer.
Without this, it will be extremely difficult for SMEs to comply with the AI Act’s requirements, which could lead them to stop using foundation models altogether, or leave them disproportionately exposed to the burden of compliance and liability.
France and Germany’s positions on foundation models are based on an unevidenced yet prevalent myth that regulation is at odds with innovation. However, research on the impacts of regulation across different sectors shows many examples of regulation enabling greater innovation, market competition and uptake of certain technologies within society. Our own public attitudes research shows that people expect AI products to be safe and want them to be regulated.
Recent international agreements like the Bletchley Declaration and G7 commitments recognise the risks of advanced foundation models and have set out voluntary best practices for industry.
These initiatives explicitly recognise that most risks must be addressed by developers of foundation models. While these proposals are welcome, meaningful protection requires hard regulation to reinforce these practices. As tech companies have shown in the past with voluntary commitments, non-binding policies fail as a meaningful form of accountability. Companies will choose to prioritise corporate incentives over safety in the absence of a strong regulatory framework.
A ‘tiered’ approach, like the one being proposed by the Spanish Presidency, offers a fair compromise – ensuring compliance and assurance from large-scale foundation models, while giving EU businesses building smaller models a lighter burden until their models become as impactful.
The stakes could not be higher this week. Europe has a rare opportunity to establish harmonised rules, institutions and processes to protect the interests of the tens of thousands of businesses that will use foundation models, and to protect the millions of people who will be impacted by their potential harms.
Europe has done this in other sectors in the past without sacrificing its economic advantage, such as civil aviation, where safety-based regulation helped reduce fatality risk by 83% between 1998 and 2008, even as passenger kilometres flown grew by 5% annually.
This opportunity will not come again for a long time. The last few years have demonstrated how unregulated AI can cause wide-scale societal harms and economic disparity. To protect its people and its SMEs, Europe must pass this law.