AI Act: EU policymakers nail down rules on AI models, butt heads on law enforcement

After 22 hours of intense negotiations, EU policymakers found a provisional agreement on the rules for the most powerful AI models, but strong disagreement in the law enforcement chapter forced the exhausted officials to call for a recess.

The AI Act is a landmark bill to regulate Artificial Intelligence based on its capacity to cause harm. The file is at the last stage of the legislative process as the EU Commission, Council, and Parliament meet in so-called trilogues to hash out the final provisions.

The final trilogue started on Wednesday (6 December) and ran almost uninterrupted for an entire day until a recess was called for Friday morning. In this first part of the negotiation, an agreement was found on regulating powerful AI models.

Scope

The regulation’s definition of AI takes all the main elements of the OECD’s definition, although it does not repeat it word for word.

As part of the provisional agreement, free and open-source software will be excluded from the regulation’s scope unless it qualifies as a high-risk system, a prohibited application, or an AI solution at risk of causing manipulation.

On the negotiators’ table after the recess will be the issue of the national security exemption, since EU countries, led by France, asked for a broad exemption for any AI system used for military or defence purposes, including for external contractors.

Another point to discuss is whether the regulation will apply to AI systems that were on the market before it started to apply if they undergo a significant change.

Foundation models

According to a compromise document seen by Euractiv, the tiered approach was maintained, with an automatic categorisation as ‘systemic’ for models trained with computing power above 10^25 floating-point operations.

A new annexe will provide criteria for the AI Office to make qualitative designation decisions, either ex officio or based on a qualified alert from the scientific panel. The criteria include the number of business users and the model’s parameters, and they can be updated based on technological developments.

Transparency obligations will apply to all models, including publishing a sufficiently detailed summary of the training data “without prejudice of trade secrets”. AI-generated content will have to be immediately recognisable.

Importantly, non-systemic, pre-trained models can avoid the horizontal obligations if they “are made accessible to the public under a licence that allows for the access, usage, modification, and distribution of the model, and whose parameters […] are made publicly available”.

For the top-tier models, the obligations include model evaluation, assessing and keeping track of systemic risks, cybersecurity protection, and reporting on the model’s energy consumption.

The codes of practice are only meant to complement the binding obligations until harmonised technical standards are put in place, and the Commission will be able to intervene via delegated acts if the process is taking too long.

Governance

An AI Office will be established within the Commission to enforce the foundation model provisions. The EU institutions are to make a joint declaration that the AI Office will have a dedicated budget line.

AI systems will be supervised by national competent authorities, which will be gathered in the European Artificial Intelligence Board to ensure consistent application of the law.

An advisory forum will gather stakeholder feedback, including from civil society. A scientific panel of independent experts was introduced to advise on the regulation’s enforcement, flag potential systemic risks and inform the classification of AI models with systemic risks.

Prohibited practices

The AI Act includes a list of banned applications because they are deemed to pose an unacceptable risk. The bans confirmed so far are on manipulative techniques, systems exploiting vulnerabilities, social scoring, and indiscriminate scraping of facial images.

However, the European Parliament has proposed a much longer list of banned applications and is facing a strong pushback from the Council. According to several sources familiar with the matter, MEPs were being pressured to accept a package deal, seen by Euractiv, that is extremely close to the Council position.

The parliamentarians were split on this matter, with the centre-right European People’s Party, co-rapporteur Dragoș Tudorache, and the president of the Social Democrat parliamentary group, Iratxe García, pushing for accepting the deal.

The Council’s text wants to ban biometric categorisation systems based on sensitive personal traits like race, political opinions and religious beliefs “unless those characteristics have a direct link with a specific crime or threat”.

The examples given were of religiously or politically motivated crimes. Still, the presidency also insisted on keeping racial profiling.

While left-of-centre lawmakers want to ban predictive policing, the Council’s proposal limits the ban to investigations based solely on the system’s prediction, excluding cases where there is reasonable suspicion of involvement in criminal activity.

The Parliament also introduced a prohibition for emotion recognition software in the workplace, education, law enforcement, and migration control. The Council is only willing to accept it in the first two areas, except for medical or safety reasons.

Another controversial topic is the use of Remote Biometric Identification (RBI). MEPs have agreed to drop a complete ban in favour of narrow exceptions related to serious crime. The Council is pushing to give law enforcement agencies more room to manoeuvre and make the ex-post usage a high-risk application.

An additional open issue relates to whether these bans should apply only to systems used within the Union or also prevent EU-based companies from selling these prohibited applications abroad.

[Edited by Zoran Radosavljevic]
