This week, EU lawmakers will put the finishing touches on the world’s first attempt to regulate artificial intelligence. But whether they are willing to put the needs of people over Big Tech profits remains to be seen, write Kateřina Konečná and Cornelia Ernst.
Kateřina Konečná is a Czech politician and MEP; Cornelia Ernst is a German politician and MEP. Both are members of The Left group in the European Parliament.
Would you trust Elon Musk with your mortgage? Or Big Tech with your benefits?
Us neither.
That’s what’s at stake as the EU’s Artificial Intelligence Act reaches the final stage of negotiations. For all its big talk, the EU seems to be buckling under pressure from Big Tech.
EU lawmakers have been tasked with developing the world’s first comprehensive law to regulate AI products. With AI systems already in use across public life, they are rushing to catch up.
If an ingredient used in cosmetics or medicines had unknown side effects, would you want it to be allowed on the market? Most likely not. So why should we gamble with the deployment of artificial intelligence without adequate safeguards?
The precautionary principle urges us to exercise care and responsibility in the face of potential risks. It is crucial not only to foster innovation but also to prevent the unchecked expansion of AI from jeopardising justice and fundamental rights.
At the Left in the European Parliament, we called for this principle to be applied to the AI Act. Unfortunately, other political groups disagreed, prioritising the interests of Big Tech over those of the people. They settled on a three-tiered approach to risk whereby products are categorised into those that do not pose a significant risk, those that are high risk and those that are banned.
However, this approach contains a major loophole that risks undermining the entire legislation.
Like asking a tobacco company whether smoking is risky
In its original proposal, the Commission outlined a list of ‘high-risk uses’ of AI, including systems used to select students, assess consumers’ creditworthiness, evaluate job-seekers, and determine who can access welfare benefits.
Using AI in these assessments has significant real-life consequences. It can mean the difference between being accepted to or rejected by a university, being able to take out a loan, or being able to access the welfare needed to pay bills and rent or put food on the table.
Under the three-tiered approach, AI developers are allowed to decide for themselves whether their product is high-risk. This self-assessment loophole is akin to a tobacco company deciding that cigarettes are safe for our health, or a fossil fuel company claiming its emissions don’t harm the environment.
With great power comes great responsibility
Where AI has great power, its developers must take great responsibility. Under the original proposal, developers had to ensure their systems were safe and free from discriminatory bias, and to make information about how those systems work publicly available. All of this risks being undermined by the self-assessment loophole.
A petition signed by over 100 civil society organisations highlights the risks: weakened legal certainty, enforcement challenges, and unfairness between developers who carry out the self-assessment honestly and those who flout the rules.
We cannot leave it in the hands of Big Tech to determine what is or is not risky for our society. Experience shows that when corporations have this kind of freedom, they prioritise their profits over the interests of people and the planet. If the development of AI is to be accountable and transparent, negotiators must eliminate the provisions on self-assessment.
AI gives us the opportunity to change our lives for the better. But as long as we let big corporations make the rules, we will continue to replicate inequalities that are already ravaging our societies.