EU top court’s ruling spells trouble for scoring algorithms

Brussels: The Court of Justice of the EU (CJEU) has ruled that automated decision-making by scoring systems that use personal data is in principle unlawful, a judgment that could have significant spillover effects for social security and credit agencies.

Years after the EU’s General Data Protection Regulation (GDPR) took effect, the CJEU has issued its first ruling on the regulation’s article on automated individual decision-making.

“This decision of the CJEU clarifies that the GDPR contains a prohibition to subject people to automated decision-making with significant impact on them,” Gabriela Zanfir-Fortuna, Vice President for Global Privacy at the Future of Privacy Forum, explained to Euractiv.

Between 2018 and 2021, a scandal took hold in the Netherlands that eventually led to the resignation of Mark Rutte’s third government: a flawed risk-scoring algorithm led tax authorities to wrongly accuse thousands of people of defrauding a childcare benefit scheme.

On Thursday, the Court ruled that any type of automated scoring is prohibited if it significantly impacts people’s lives. The verdict relates to SCHUFA, Germany’s largest private credit agency, which assigns people a score rating their creditworthiness.

According to the judgment, SCHUFA’s scoring violates the GDPR if SCHUFA’s customers – such as banks – attribute a “decisive” role to it in their contractual decisions.

This decision might have far-reaching consequences. In France, the National Family Allowance Fund (CNAF) has used an automated risk-scoring algorithm since 2010 to decide which households to target for home inspections over suspected fraud.

Le Monde and Lighthouse Reports reported that CNAF’s data-mining algorithm analyses and scores 13.8 million households every month to prioritise controls.

The algorithm draws on some 40 criteria based on personal data, each assigned a risk coefficient, and scores every beneficiary between 0 and 1 each month. The closer a beneficiary’s score is to 1, the more likely they are to receive a home inspection.
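How such a system works can be sketched in a few lines of code. CNAF’s actual criteria, weights, and aggregation method are not public, so everything below (the criterion names, the weights, the logistic curve, and the functions themselves) is hypothetical, illustrating only how roughly 40 weighted personal-data criteria could be collapsed into a monthly score between 0 and 1 used to rank beneficiaries for inspection.

```python
import math

# Hypothetical criteria and weights: CNAF's real ~40 criteria and their
# risk coefficients are not public. Positive weights push the score up.
WEIGHTS = {
    "months_of_low_income": 0.8,
    "recent_address_changes": 0.5,
    "irregular_declarations": 1.2,
    "single_parent_household": 0.6,
}

def risk_score(beneficiary: dict) -> float:
    """Collapse the weighted criteria into a score in (0, 1) via a logistic curve."""
    z = sum(WEIGHTS[k] * beneficiary.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-(z - 2.0)))  # the 2.0 offset is an arbitrary calibration

def rank_for_inspection(beneficiaries: list[dict], top_n: int) -> list[dict]:
    """The closer a score is to 1, the earlier the beneficiary is inspected."""
    return sorted(beneficiaries, key=risk_score, reverse=True)[:top_n]

if __name__ == "__main__":
    households = [
        {"id": 1, "months_of_low_income": 6, "irregular_declarations": 1},
        {"id": 2, "recent_address_changes": 1},
        {"id": 3, "months_of_low_income": 2, "single_parent_household": 1},
    ]
    for h in rank_for_inspection(households, top_n=2):
        print(h["id"], round(risk_score(h), 3))
```

A real system would be statistically fitted rather than hand-weighted, but the ranking step, inspecting the highest scorers first, is precisely the kind of decisive role in decisions that the Court’s reasoning targets.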

Bastien Le Querrec, a legal expert at the advocacy group La Quadrature du Net, told Euractiv: “Given that the National Family Allowance Fund uses an automatic scoring system for all its beneficiaries, and considering the crucial significance of this score in the subsequent process, this score, in the opinion of La Quadrature du Net, has significant implications for people’s lives and should therefore fall within the scope of the CJEU decision.”

In other words, the scoring system would be illegal unless specifically authorised by French law and in strict compliance with the EU data protection rules.

Philippe Latombe, a French centrist MP and member of the French privacy regulator CNIL, told Euractiv that he considers CNAF’s algorithm a mere risk-evaluation system that filters people based on their data, and which happens to process personal data because of the organisation’s purpose: delivering allowances to people in need.

“If each criterion taken separately may seem logical for the purpose of tackling fraud, the sum of the criteria could be discriminatory if they are correlated,” continued Latombe.
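Latombe’s point about correlation can be made concrete with a small, entirely synthetic simulation: two criteria that each look like plausible fraud signals on their own, but that are both more common in one group (here, single-parent households, a purely illustrative choice), so that a score summing them flags that group far more often. All numbers are invented for illustration.

```python
import random

random.seed(0)

def household(single_parent: bool) -> dict:
    # Each criterion looks fraud-related on its own, but both are more
    # common among single-parent households in this synthetic population.
    p = 0.6 if single_parent else 0.2
    return {
        "single_parent": single_parent,
        "variable_income": int(random.random() < p),
        "housing_benefit": int(random.random() < p),
    }

population = [household(i % 2 == 0) for i in range(10_000)]

def score(h: dict) -> float:
    # Equal weights; neither criterion mentions family status directly.
    return 0.5 * h["variable_income"] + 0.5 * h["housing_benefit"]

def flag_rate(group: list[dict]) -> float:
    flagged = [h for h in group if score(h) >= 1.0]  # both criteria present
    return len(flagged) / len(group)

single = [h for h in population if h["single_parent"]]
other = [h for h in population if not h["single_parent"]]
print(f"flag rate, single-parent households: {flag_rate(single):.2%}")
print(f"flag rate, other households:         {flag_rate(other):.2%}")
```

In expectation, the combined flag fires for roughly 36% of the illustrative single-parent group against 4% of the rest, even though neither criterion references family status directly: the correlation alone produces the disparity.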

French green MP Aurélien Taché commented: “As usual, [the government] fights the poor rather than poverty itself, and with social scoring, it does not even uphold the most fundamental principles regarding the defence of freedoms and the right to privacy.”

The GDPR permits such automated decision-making by public and private organisations in only three cases: the individual’s explicit consent, necessity for a contract, or authorisation by law.

Zanfir-Fortuna explained that the EU court’s decision rules out organisations’ “legitimate interest”, such as companies’ business interests, as a lawful basis for conducting scoring with personal data.

In addition, should a government want to give enforcement authorities a legal basis for using scoring algorithms, national laws will have to ground their legitimacy in EU law and the EU Charter of Fundamental Rights.

These algorithms should be “necessary in a democratic society and meet the proportionality criterion,” said Zanfir-Fortuna. As a result, feeding personal data into scoring algorithms is now considerably more limited in the EU.

Latombe said that the CNAF situation “raised the question of algorithmic transparency by Parcoursup”, a French government web portal that allocates undergraduate places at French universities.

La Quadrature du Net’s website, moreover, lists the French health insurance and old-age insurance agencies, as well as the agricultural social fund and the employment agency, as using similar risk-scoring algorithms, whose lawfulness could now likewise be called into question in light of the ruling.

Under the AI Act, the EU’s upcoming flagship law regulating artificial intelligence, AI systems meant to determine access to public services will be deemed ‘high-risk’ and subject to a strict regime of risk management and data governance.