The European Commission’s artificial intelligence (AI) office should ensure a multi-stakeholder process when drafting rules for general-purpose AI, writes Kris Shrishak.
Kris Shrishak is an Enforce Senior Fellow at the non-profit Irish Council for Civil Liberties, where he studies technology regulation and advises stakeholders on it.
The European Union claims to be the first to establish comprehensive AI regulation. But being first at rule-making is worthless if the rules are not effectively enforced.
The newly set up EU AI Office, a renamed unit within the European Commission, has the mandate to implement and enforce critical elements of the law. It aims to be the EU’s “centre of AI expertise”.
One of its first tasks involves kick-starting the process to draft codes of practice for general-purpose AI systems. These codes will translate the legal requirements of the AI Act into detailed instructions. They are expected to be ready nine months after the law enters into force, around April 2025.
Despite being called “codes”, they are less optional than they may seem.
These codes are not legally binding, but they are very attractive to large AI companies: in the short term, companies that follow the codes will, by default, be presumed compliant with the law. And non-compliance can attract fines after July 2025. The codes are intended to protect people from AI harms.
But companies such as OpenAI and Google will be keen to make these codes as weak as possible and as close to their internal policies as they can. The EU AI Office should prevent this.
Thus, the pen holders of these codes play an important role. The EU AI Office has made no public statement on this, and until recently it was not clear who the drafters would be. But reports reveal that the EU AI Office does not plan to run the process itself, preferring instead to watch from the sidelines, along with you and me.
The drafting will be outsourced to consulting firms. We could see Big Four consulting firms writing rules with and for their partners OpenAI and Microsoft. The EU's message to the companies seems to be: you and your friends write the rules you wish to follow.
This scenario can be prevented. Now.
There is no responsible AI without a responsible regulator. The EU has the tools at its disposal; it should act responsibly. The "centre of AI expertise" should not outsource important tasks to scandal-ridden consultancy firms. Responsible AI regulators do not allow OpenAI to write EU rules.
OpenAI helped make AI harms scalable. With able assistance from Microsoft and Google, it will prevent the EU from setting and enforcing strong rules.
The EU AI Office should draft the codes through a multi-stakeholder process. That process should include industry, especially EU companies developing, fine-tuning or using general-purpose AI systems, as well as civil society organisations and independent scientists. These stakeholders should be involved at every step of the drafting process. Not just at the end.
The process of inviting participants to the drafting process should be transparent to the public. Opaque processes and decisions will not increase public trust in the EU AI Office.
Lack of time is no excuse for excluding diverse and essential stakeholders. The short deadline of nine months to develop the codes necessitates fast drafting. But speed should not come at the cost of weak rules written by companies with vested interests.
Even in fields where fast and best go together, a false start is not an option. Usain Bolt was disqualified for a false start in the 100 metres final at the 2011 World Championships. He came back to win gold at the London Olympics in 2012.
If it makes a false start, the EU AI Office will not get a second chance.