Jesus, a convicted man who has been assessed multiple times by the RisCanvi algorithm. Illustration by El Confidencial.

European governments strongly believe in data-driven solutions for law enforcement and crime prevention. This is also visible in the new Artificial Intelligence Act, an attempt to rein in the harms of the fledgling technology that was nonetheless watered down to ensure leniency for the use of AI by law enforcement agencies.

This investigation in the Netherlands, Italy and Spain shows that while governments believe that AI systems are by definition efficient, objective and fair, they can be deeply flawed. At the same time, these systems have a profound impact on the lives of European citizens, often without their knowledge.

Algorithmic risk assessments often process large amounts of personal data. This can include nationality, gender and age, but also postcodes, job competencies and even sensitive medical data. The risk of bias and discrimination is high.

In the Netherlands, the investigation revealed that authorities deploy an algorithm that attempts to predict youth recidivism on a massive scale. The system, based on 10 variables that include whether a minor has been a witness to or victim of a crime, influences the chances of a criminal record for underage suspects. Yet statistical analysis shows the algorithm has low predictive value, resulting in many false positives. Despite its impact, the government did not inform minors, their parents or their lawyers that the scoring device was being used.
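To illustrate why a risk score with modest accuracy produces many false positives, the sketch below applies Bayes' rule to a screening scenario. The base rate, sensitivity and specificity are illustrative assumptions chosen for the example, not figures from the Dutch system.

```python
def positive_predictive_value(base_rate: float, sensitivity: float, specificity: float) -> float:
    """Share of 'high risk' flags that are true positives (Bayes' rule)."""
    true_pos = base_rate * sensitivity            # reoffenders correctly flagged
    false_pos = (1 - base_rate) * (1 - specificity)  # non-reoffenders wrongly flagged
    return true_pos / (true_pos + false_pos)

# Hypothetical values for illustration only: 10% of minors reoffend,
# and the tool catches 70% of reoffenders while wrongly flagging 30%
# of non-reoffenders.
ppv = positive_predictive_value(base_rate=0.10, sensitivity=0.70, specificity=0.70)
print(f"Share of flagged minors who would actually reoffend: {ppv:.0%}")
# -> roughly 21%: about four out of five flagged minors are false positives.
```

Under these assumed numbers, the low base rate of reoffending means false positives swamp true positives even when the tool is right well over half the time for any individual case.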

Reporting in Italy revealed the extent to which Italian police forces use predictive policing systems, delving into the story of KeyCrime. This company developed software called DELIA that has been used by Milan Police Headquarters since 2008. Although KeyCrime claimed that DELIA complied with the AI Act's standards, delays in obtaining EU approval hindered the company's ability to operate and generate revenue, leading to its liquidation. Despite this failure, the experience has sparked interest from the Italian government in using new predictive tools in this field.

Meanwhile in Spain, this project revealed the inner workings of the RisCanvi algorithm, software used in prisons in the Catalonia region to predict which inmates will reoffend. The evaluations produced by this automated system have a direct impact on inmates' rights and on decisions made by officials, especially in the final phase of a sentence, a key moment in a prisoner's social rehabilitation. Despite this, opacity continues to surround this and other predictive tools used by the Spanish public sector.

This investigation is the result of the work of three journalists over more than a year, during which they had to contend with the profound lack of transparency that continues to surround the automation of public decisions in Europe. The revelations underline the need to keep investigating and explaining the worrying implications of this transformation of European governments.

