07-12-2023 (updated: 07-12-2023)
"This decision of the CJEU clarifies that the GDPR contains a prohibition to subject people to automated decision-making with significant impact on them", explains to Euractiv Gabriela Zanfir-Fortuna, Vice President for Global Privacy at the Future of Privacy Forum. [Akarat Phasura / Shutterstock]
The Court of Justice of the EU (CJEU) ruled on Thursday (7 December) that automated decision-making by scoring systems using personal data is unlawful when it significantly affects people’s lives, a judgement with potential spillover effects for social security and credit agencies.
Years after the EU’s General Data Protection Regulation (GDPR) took effect, the CJEU has issued its first ruling on the regulation’s article on automated individual decision-making.
“This decision of the CJEU clarifies that the GDPR contains a prohibition to subject people to automated decision-making with significant impact on them,” Gabriela Zanfir-Fortuna, Vice President for Global Privacy at the Future of Privacy Forum, explained to Euractiv.
Between 2018 and 2021, a scandal took hold in the Netherlands – eventually leading to the resignation of Mark Rutte’s third government – after a flawed risk-scoring algorithm led tax authorities to wrongly accuse thousands of families of defrauding a childcare benefit scheme.
On Thursday, the Court ruled that any type of automated scoring is prohibited if it significantly impacts people’s lives. The verdict relates to SCHUFA, Germany’s largest private credit agency, which rates people according to their creditworthiness with a score.
According to the judgment, SCHUFA’s scoring violates the GDPR if SCHUFA’s customers – such as banks – attribute a “decisive” role to it in their contractual decisions.
This decision might have far-reaching consequences. In France, the National Family Allowance Fund (CNAF) has used an automated risk-scoring algorithm since 2010 to trigger home inspections based on suspicions of potential fraud.
Le Monde and Lighthouse Reports found that CNAF’s data mining algorithm analyses and scores 13.8 million households monthly to prioritise controls.
The algorithm uses some 40 criteria based on personal data, each assigned a risk coefficient, to score every beneficiary between 0 and 1 each month. The closer a beneficiary’s score is to 1, the more likely they are to receive a home inspection.
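In broad terms, such a system amounts to a weighted checklist. The Python sketch below is purely illustrative: CNAF’s actual criteria and weights are not public, so every name and number in it is a hypothetical stand-in.

```python
# Illustrative sketch of a weighted-criteria risk score of the kind
# described above. All criteria names and weights are hypothetical
# stand-ins; CNAF's real model is not public.

HYPOTHETICAL_WEIGHTS = {
    "months_on_benefits": 0.01,
    "recent_address_changes": 0.15,
    "income_declaration_irregularities": 0.30,
}

def risk_score(beneficiary: dict) -> float:
    """Combine weighted criteria into a score clamped to [0, 1]."""
    raw = sum(weight * beneficiary.get(criterion, 0)
              for criterion, weight in HYPOTHETICAL_WEIGHTS.items())
    return max(0.0, min(1.0, raw))

# Households with scores closest to 1 would be prioritised for inspection.
households = [
    {"id": 1, "months_on_benefits": 6, "recent_address_changes": 0,
     "income_declaration_irregularities": 0},
    {"id": 2, "months_on_benefits": 24, "recent_address_changes": 2,
     "income_declaration_irregularities": 1},
]
ranked = sorted(households, key=risk_score, reverse=True)
print([h["id"] for h in ranked])  # highest-risk household first
```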
Bastien Le Querrec, a legal expert at the advocacy group La Quadrature du Net, told Euractiv: “Given that the National Family Allowance Fund uses an automatic scoring system for all its beneficiaries, and considering the crucial role this score plays in the subsequent process, the score, in the opinion of La Quadrature du Net, has significant implications for people’s lives and should therefore fall within the scope of the CJEU decision.”
In other words, the scoring system would be illegal unless specifically authorised by French law and in strict compliance with the EU data protection rules.
Philippe Latombe, a French centrist MP and member of the French privacy regulator CNIL, told Euractiv that he considers CNAF’s algorithm to be a mere risk-evaluation system that filters people based on their data, and which happens to process personal data because of the organisation’s purpose: delivering allowances to people in need.
“While each criterion taken separately may seem logical for the purpose of tackling fraud, the sum of the criteria could be discriminatory if they are correlated,” Latombe continued.
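To see why, consider a minimal, entirely hypothetical simulation (in no way based on CNAF’s actual criteria or data): three criteria that each fire only slightly more often for one group can, once combined, flag that group far more disproportionately.

```python
# Hypothetical illustration of correlated criteria compounding into a
# discriminatory outcome. Groups, probabilities, and criteria are
# invented for the example; none of this reflects CNAF's system.
import random

random.seed(0)

def person(group: str) -> dict:
    # Each of three criteria fires slightly more often for group "B",
    # e.g. because all three correlate with the same life situation.
    p = 0.6 if group == "B" else 0.4
    return {"group": group, "criteria": [random.random() < p for _ in range(3)]}

population = [person("A") for _ in range(10_000)] + \
             [person("B") for _ in range(10_000)]

for threshold in (1, 3):
    hits = [x for x in population if sum(x["criteria"]) >= threshold]
    share_b = sum(x["group"] == "B" for x in hits) / len(hits)
    print(f"threshold={threshold}: {share_b:.0%} of flagged people are in group B")

# With threshold=1 the flagged pool is roughly balanced (~54% group B),
# but requiring all three criteria (threshold=3) flags group B more than
# three times as often: 0.6**3 = 0.216 vs 0.4**3 = 0.064.
```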