The European Commission plans to allow the providers of ChatGPT-like artificial intelligence (AI) models to write codes of practice that will determine their compliance in the short to medium term, with civil society in a consultation role.
Civil society organisations have wondered for the past few months whether they will be involved in Codes of Practice for general-purpose AI models (GPAI), sources with knowledge of the matter told Euractiv.
In the meantime, Euractiv has been made aware that the European Commission has been looking for consulting firms to draft these codes.
The Codes of Practice are a crucial part of the AI Act, at least in the short to medium term.
GPAI providers, like OpenAI or Microsoft, can use the codes to demonstrate compliance with their obligations until harmonised standards are created. The Commission may give the codes general validity within the EU through an implementing act.
“If the General Purpose AI Codes of Practice drafting process is not multi-stakeholder, including civil society, academics and independent experts, this could mean an industry-led process; essentially Big Tech writing their own rules,” one person from civil society, who declined to be named commenting on an evolving situation, told Euractiv.
Within the AI Act itself, the participation of providers, civil society, and academia in drafting codes for general-purpose AI models is ambiguous: the text says only that they “may” participate.
It turns out that the providers of the models will be the ones primarily drafting the codes, with other stakeholders participating through consultations, according to information made available to Euractiv.
How this consultation will take place is still unknown. It could mean one or several calls for opinions, or an invitation into the room under observer status.
Multiple sources with knowledge of the matter told Euractiv that some organisations took a presentation in a Commission workshop in June as suggesting that the situation is evolving and that the Commission may now be more open to including civil society.
A Commission spokesperson told Euractiv that “a call for expressions of interest, to be published soon, will outline the specific ways in which these stakeholders will be involved in the development of the codes”.
“The process of preparation of the code of practice will be supported by a diverse range of stakeholders, including civil society,” the Commission spokesperson said but declined to give details on how this will be achieved.
If providers do not sign the otherwise voluntary codes, they will have to demonstrate on their own that they comply with the almost 500-page Act.
The drafting process
The Commission’s Directorate-General for Communications Networks, Content and Technology (DG CNECT) ran a mini-competition under an existing framework contract that ended in June, according to information verified by Euractiv.
The process and drafting of the codes will be outsourced to an external firm, which will have to figure out who will be part of the process, devise and run a working programme, set up working groups with weekly meetings, and draft the codes within nine months.
As a result, the Commission has set an aggressive timeline for the consulting firm’s work.
All consultation with stakeholders, agendas, and methodology must be approved by the Commission’s newly established AI Office.
The AI Office will monitor the drafting but will not get deeply involved beyond approving the final codes, according to information confirmed by Euractiv. The AI Board, composed of experts from member states, will play a similarly hands-off role in the drafting process.
The awarded firm will also map out the best way for the Commission to evaluate GPAI risks, including systemic risks.
Implementation of the AI Act has been pushed back by several weeks, with the Act now expected to enter into force in early August. It will apply in full two years later, but the codes of practice must be ready nine months after entry into force.
The hiring of the consulting firm for drafting the codes is done under a framework contract. These are “multi-year agreements that set the basic terms” for a contract to be awarded later, Professor Albert Sanchez-Graells, who studies EU procurement processes at the University of Bristol, told Euractiv.
Companies compete in a public tender and are pre-selected for general tasks such as “assistance to Commission services with execution of audits.” When a specific job arises under the framework contract, the Commission can run a mini-tender among the pre-selected companies.
Participation conundrum
Such an industry-led process “risks an unfaithful implementation of the AI Act, which would not be sufficiently protective of the safety and fundamental rights of EU citizens,” said one stakeholder, who requested anonymity.
In late April, these stakeholders expressed their concerns to the Commission in another workshop, people present at the meeting told Euractiv, and former Euractiv tech editor Luca Bertuzzi confirmed on LinkedIn.
The Commission clarified to them that they would not be a driving force in the drafting. At that point, a few of these organisations sent letters to express their dissatisfaction but most have yet to receive a full response, people with knowledge of the matter told Euractiv.
The EU executive has already faced criticism over its handling of the AI Act’s implementation. The appointment of the file’s rapporteur to the AI Office without any public job posting or explanation, first reported by Contexte and then Euractiv, has caused a stir.
A group of over 30 civil society organisations, including leading consumer association BEUC, has cast doubt on the independence of national authorities tasked with enforcing the Act. In an open letter on 26 June, they called on the Commission to clarify these roles.
Three MEPs, who were re-elected in the June EU vote, sent questions to the Commission in April about the process of staffing the office. They have not yet received an official response.
[Edited by Alice Taylor/Zoran Radosavljevic]