Civil society calls on the EU to prohibit predictive and profiling AI systems in law enforcement and criminal justice

45 organisations, including Statewatch, are calling on EU decision-makers to prohibit the use of predictive and profiling "artificial intelligence" (AI) systems in law enforcement and criminal justice, a move that they say would "ensure full fundamental rights protection for people affected by AI systems, and in particular... prevent the use of AI to exacerbate structural power imbalances."

The statement was coordinated by Fair Trials and EDRi. The full text is below.


The European Union institutions are taking the significant step of regulating artificial intelligence (AI) systems, including in the area of law enforcement and criminal justice, within the proposed Artificial Intelligence Act (AIA).

This is a unique opportunity to ensure full fundamental rights protection for people affected by AI systems, and in particular, to prevent the use of AI to exacerbate structural power imbalances. AI systems in law enforcement, particularly the use of predictive and profiling AI systems, disproportionately target the most marginalised in society, infringe on liberty and fair trial rights, and reinforce structural discrimination.

We, the undersigned organisations, call on the Council of the European Union, the European Parliament, and all EU member state governments to prohibit predictive and profiling AI systems in law enforcement and criminal justice in the Artificial Intelligence Act (AIA).

This statement details the harmful impact of predictive, profiling and ‘risk’ assessment systems in law enforcement and criminal justice and makes the case for amendments to the EU’s AIA.

What are predictive and profiling AI systems in law enforcement and criminal justice?

Artificial intelligence (AI) systems are increasingly used by European law enforcement and criminal justice authorities to profile people and areas, predict supposed future criminal behaviour or the occurrence of crime, and assess the alleged ‘risk’ of future offending or criminality.

These predictions, profiles and risk assessments, conducted against individuals, groups, and areas or locations, can influence, inform, or result in policing and criminal justice outcomes, including surveillance, stop and search, fines, questioning, and other forms of police control. They can lead to arrest, detention and prosecution, and are used in sentencing and probation decisions. They can also lead to civil punishments, such as the denial of welfare or other essential services, and to increased surveillance by state agencies. Policing and criminal justice authorities across Europe are already using these AI systems to influence, inform, or assist in criminal justice decisions and outcomes.

The fundamental rights harms of predictive, profiling and risk assessment AI systems in criminal justice

Discrimination, surveillance and over-policing

These AI systems reproduce and reinforce discrimination on grounds including, but not limited to, racial and ethnic origin, socio-economic status, disability, migration status and nationality. They also engage and infringe fundamental rights, including the right to a fair trial and the presumption of innocence, the right to private and family life, and data protection rights.

The law enforcement and criminal justice data used to create, train and operate AI systems is often reflective of historical, systemic, institutional and societal discrimination, which results in racialised people, communities and geographic areas being over-policed and disproportionately surveilled, questioned, detained and imprisoned across Europe.

These discriminatory practices are so fundamental and so deeply ingrained that all such systems will reinforce these outcomes. This is an unacceptable risk.

The right to liberty, the right to a fair trial, and the presumption of innocence

Predictive, profiling and risk assessment AI systems target individuals, groups and locations, and profile them as criminal, resulting in serious criminal justice and civil outcomes and punishments, including deprivations of liberty, before they have carried out the alleged act for which they are being profiled.

By their nature, these systems undermine the fundamental right to be presumed innocent, shifting criminal justice attention away from actual criminal behaviour and towards vague and discriminatory notions of risk and suspicion. Their outputs are therefore not reliable evidence of actual or prospective criminal activity and should never be used to justify any law enforcement action, such as an arrest, let alone be submitted in criminal proceedings.

Further, such systems facilitate the transfer of substantive decisions affecting people’s lives (criminal justice, child protection) from the judicial to the administrative realm, with serious consequences for fair trial, liberty and other procedural rights.

Transparency, accountability and the right to an effective remedy

AI systems used to influence, inform and assist law enforcement and criminal justice decisions through predictions, profiles and risk assessments often involve technological barriers (‘black box’ systems, neural networks) or commercial barriers (intellectual property, proprietary technology) that prevent effective and meaningful scrutiny, transparency and accountability. It is crucial that individuals affected by these systems’ decisions are aware of their use.

To ensure that the prohibition is meaningfully enforced, and in relation to other uses of AI systems that do not fall within its scope, affected individuals must also have clear and effective routes to challenge the use of these systems through criminal procedure, enabling those whose liberty or right to a fair trial is at stake to seek immediate and effective redress.

Prohibit Predictive and Profiling AI Systems

The undersigned organisations urge the Council of the European Union, the European Parliament, and all EU member state governments to prohibit predictive and profiling AI systems in law enforcement and criminal justice in the Artificial Intelligence Act (AIA).

Such systems pose an unacceptable risk and must therefore be included as a ‘prohibited AI practice’ in Article 5 of the AIA. Ongoing negotiations on the AIA must be informed by a full consideration of the fundamental rights and societal harms associated with predictive systems in policing and criminal justice, and the fundamental rights of individuals and groups, as well as the consequences for democratic society, must be prioritised.

For further information, including examples, case studies and further analysis, please see:

SIGNED

  1. Fair Trials (International)
  2. European Digital Rights (EDRi) (Europe)
  3. Access Now (International)
  4. AlgoRace (Spain)
  5. AlgoRights (Spain)
  6. AlgorithmWatch (Europe)
  7. Antigone (Italy)
  8. Big Brother Watch (UK)
  9. Bits of Freedom (Netherlands)
  10. Bulgarian Helsinki Committee
  11. Centre for European Constitutional Law – Themistokles and Dimitris Tsatsos
  12. Citizen D / Državljan D (Slovenia)
  13. Civil Rights Defenders (Sweden)
  14. Controle Alt Delete (Netherlands)
  15. Council of Bars and Law Societies of Europe (CCBE)
  16. De Moeder is de Sleutel (Netherlands)
  17. Digital Fems (Spain)
  18. Electronic Frontier Norway
  19. European Centre for Not-for-Profit Law (ECNL)
  20. European Criminal Bar Association (ECBA)
  21. European Disability Forum (EDF)
  22. European Network Against Racism (ENAR)
  23. European Sex Workers Alliance (ESWA)
  24. Equinox Initiative for Racial Justice (Europe)
  25. Equipo de Implementación España Decenio Internacional Personas Afrodescendientes
  26. Eticas Foundation (Europe)
  27. Fundación Secretariado Gitano (Europe)
  28. Ghett’Up (France)
  29. Greek Helsinki Monitor
  30. Helsinki Foundation for Human Rights (Poland)
  31. Homo Digitalis (Greece)
  32. Human Rights Watch (International)
  33. International Commission of Jurists
  34. Irish Council for Civil Liberties
  35. Iuridicum Remedium (IuRe) (Czech Republic)
  36. Ligue des Droits Humains (Belgium)
  37. Novact (Spain)
  38. Observatorio de Derechos Humanos y Empresas en la Mediterránea (ODHE)
  39. Open Society European Policy Institute
  40. Panoptykon Foundation (Poland)
  41. PICUM (Europe)
  42. Refugee Law Lab (Canada)
  43. Rights International Spain
  44. Statewatch (Europe)
  45. ZARA – Zivilcourage und Anti-Rassismus-Arbeit (Austria)
