The Spanish presidency of the EU Council asked member states for flexibility in the sensitive area of law enforcement ahead of a crucial political meeting for the AI law.
The AI Act is a flagship bill to regulate Artificial Intelligence based on its capacity to cause harm. It is currently in the last phase of the legislative process, with the EU Commission, Council and Parliament negotiating the final provisions in so-called trilogues.
EU policymakers are intent on reaching a final agreement at the trilogue on 6 December. Ahead of this crucial meeting, the Spanish presidency, which negotiates on behalf of European governments, will need a revised negotiating mandate.
On Friday (24 November), the presidency circulated the first half of the negotiating mandate, asking for flexibility and pointing to possible landing zones in the law enforcement area. On Wednesday, the mandate is set to land on the desk of the Committee of Permanent Representatives (COREPER).
The second half of the mandate will touch upon foundation models, governance, access to source code, the sanction regime, the regulation’s entry into force and secondary legislation. It will be discussed at the COREPER level on Friday (1 December).
Prohibitions
The MEPs have significantly extended the list of prohibited practices – AI applications deemed to entail an unacceptable level of risk.
The presidency suggests accepting the bans on untargeted scraping of facial images, emotion recognition in the workplace and educational institutions, biometric categorisation to infer sensitive data like sexual orientation and religious beliefs, and predictive policing for individuals.
Moreover, ‘in the spirit of compromise’, the presidency proposes moving the EU Parliament’s prohibitions that were not accepted – namely, all other biometric categorisation and emotion recognition applications – into the list of high-risk use cases.
Regarding remote biometric identification, the parliamentarians agreed to drop the total ban on real-time use in exchange for limiting its exceptional use and including more safeguards. The presidency considers this technology’s ex-post use high-risk.
Law enforcement exceptions
The Council’s mandate includes several carve-outs for law enforcement to use AI tools. The presidency notes that it managed to ‘keep almost all of them’.
These include giving police forces more flexibility concerning the human oversight obligation, the reporting of risky systems, post-market monitoring, and confidentiality measures to avoid disclosing sensitive operational data.
The presidency also wants law enforcement to be able to use emotion recognition and biometric categorisation software without informing the subjects.
The European Parliament secured a requirement that police forces register their high-risk systems in the EU database, albeit in a non-public section. The deadline for large-scale IT systems to comply with the AI Act’s obligations was set for 2030.
National security exception
France has been pushing for a broad national security exemption in the AI law. At the same time, the presidency noted that the EU Parliament has shown ‘no flexibility’ in accepting the wording of the Council’s mandate.
Spain proposes splitting this provision into two paragraphs. The first one states that the regulation does not apply to areas that fall outside of EU law and should not, in any event, affect member states’ competencies in the area of national security or any entity entrusted with tasks in this area.
Secondly, the text says that the AI Act would not apply to systems placed on the market or put into service for activities related to defence and the military.
Fundamental rights impact assessment
Centre-left MEPs introduced the fundamental rights impact assessment as a new obligation that users of high-risk systems would have to conduct before deployment. For the presidency, including it is ‘absolutely necessary’ to reach an agreement with the Parliament.
A sticking point on this topic has been the scope, with parliamentarians asking to cover all users and EU countries pushing to limit the provision to public bodies. The compromise was to cover public bodies and only those private actors providing services of general interest.
Additionally, the fundamental rights impact assessment would have to cover aspects not already covered under other legal obligations to avoid overlaps.
Regarding the obligations on risk management, data governance and transparency, users would only have to verify that the high-risk system’s provider has complied with them.
For the presidency, the obligation to run a six-week consultation should be removed even for public bodies and replaced with a simple notification to the relevant national authority.
Testing in real-world conditions
A point of contention in negotiations has been the possibility introduced by the Council to test high-risk AI systems outside regulatory sandboxes. According to the presidency’s note, some safeguards have been included to make the measure acceptable to the Parliament.
The text indicates that the people subject to the test should give their informed consent and that, where consent cannot be sought in the case of law enforcement activities, the testing and its outcome must not negatively affect the people involved.
Derogation from conformity assessment
The Council introduced an emergency procedure that allows law enforcement agencies, in cases of urgency, to deploy a high-risk AI tool that has not yet passed the conformity assessment procedure.
MEPs want this process to be subject to judicial authorisation, a point the presidency finds unacceptable for EU countries. As a compromise, the Spaniards proposed re-introducing the mechanism but allowing the Commission to review the decision.