
The European Commission’s assessment of how to define high-risk products relative to sectoral rules


Artificial intelligence (AI)-based cybersecurity and emergency services components in internet-connected devices are expected to be classified as high-risk in the context of the AI Act, according to a European Commission document seen by Euractiv.

The document on the interplay between the 2014 Radio Equipment Directive (RED) and the AI Act is the first known interpretation of how the Act will treat AI-based safety components, laying down the logic that could be used to classify other types of products as high-risk.

The RED covers more than traditional radios: it applies to wireless devices using, for example, Wi-Fi or Bluetooth.

On top of any applicable sectoral legislation, high-risk AI systems require extensive testing, risk management, security measures and documentation under the AI Act.

The AI Act includes a list of use cases where, if AI is deployed, it is automatically categorised as high-risk. These include areas like critical infrastructure and law enforcement.

The Act also sets a key boundary for categorising other products as high-risk: third-party conformity assessments required under previously enacted sector-specific regulations.

Such AI systems need to meet two criteria to be classified as high-risk:

The first is that the AI system is a safety component of a product covered by pre-existing legislation, or is itself such a product.

The second is that this type of component or product is required to go through a third-party assessment to demonstrate compliance under previously enacted rules.

According to the Commission’s document, components related to cybersecurity and access to emergency services satisfy both criteria under the RED, making them high-risk systems.

However, the Commission’s preliminary view is that where the RED foresees an opt-out from the third-party assessment, allowing a company to demonstrate compliance through self-assessment against harmonised standards, this opt-out is merely a procedural mechanism.

As such, even where such opt-outs exist, the AI-based components, in this case related to cybersecurity, are still deemed high-risk.
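Read as a decision rule, the classification logic above is compact enough to sketch in a few lines of code. The snippet below is a purely hypothetical illustration in Python, not drawn from the Act or the Commission document; all class and field names are invented for the example.

```python
# A minimal, hypothetical sketch (not from the AI Act or the Commission
# document) of the two-step high-risk test described above.

from dataclasses import dataclass

@dataclass
class AIComponent:
    # Criterion 1: a safety component of a product covered by pre-existing
    # sectoral legislation (e.g. the RED), or itself such a product.
    safety_component_or_product: bool
    covered_by_sectoral_law: bool
    # Criterion 2: this type of component or product must undergo a
    # third-party conformity assessment under the sectoral rules.
    third_party_assessment_required: bool
    # Some rules, like the RED, allow self-assessment against harmonised
    # standards instead of the third-party route.
    self_assessment_opt_out: bool

def is_high_risk(c: AIComponent) -> bool:
    if not (c.safety_component_or_product and c.covered_by_sectoral_law):
        return False
    # c.self_assessment_opt_out is deliberately ignored: per the Commission's
    # preliminary view, the opt-out is merely procedural and does not remove
    # the high-risk tag.
    return c.third_party_assessment_required

# An AI-based cybersecurity component in a RED-covered device stays
# high-risk even where the self-assessment opt-out applies:
print(is_high_risk(AIComponent(True, True, True, True)))  # True
```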

From heavy machinery to personal watercraft

The AI Act lists a barrage of previously enacted sectoral regulations that might be used to classify AI products as high-risk. More documents like the one on the RED can be expected.

In addition to electronics, products such as medical devices, aircraft, heavy machinery, even “personal watercraft” and lifts are covered by harmonised legislation relevant to the AI Act, so they might undergo a similar analysis to the RED’s.

The preliminary interpretation shows that similar self-assessment standards likely cannot be used to remove the high-risk tag from AI products in these industries.

The AI Act places considerable requirements on high-risk AI systems, while AI systems outside this category face only minor transparency obligations.

The question is therefore which systems fall into the high-risk category.

While the Commission estimated in 2021 that 5-15% of AI systems would be classified as high-risk, a 2022 survey of 113 EU-based startups found that 33-50% of respondents considered their own product high-risk.

The Commission document is only a preliminary interpretation; it remains to be seen exactly how the AI Act will interplay with both the RED and other regulations. Although the AI Act runs to over 500 pages, considerable interpretive work remains to determine how it will apply to a fast-moving, cross-sectoral technology.

[Edited by Eliza Gkritsi/Zoran Radosavljevic]
