New Technology, Old Injustice: Data-driven discrimination and profiling in police and prisons in Europe


Full report (pdfs)

Press release

Executive summary

Police and criminal legal system authorities across Europe are increasingly using data-based systems and tools to ‘predict’ where crime will occur, to profile people as criminals and to assess the ‘risk’ of crime or criminality in the future.

These so-called ‘predictions’, profiles and risk assessments influence police decisions, actions and interventions. These include surveillance and monitoring, questioning, stop and search, identity checks, being barred from employment, home raids, fines, use of force, detention, arrest, and deportation.

These data-based systems and their outputs also influence decisions throughout the criminal legal system: from detention and pre-trial detention to prosecution, sentencing and probation.

Outside of the criminal legal system, automated decisions can also influence or lead to other forms of punishment. For example, they may underpin denials of or restrictions on access to essential public services such as welfare or housing.

This report synthesises original research about automated decision-making systems and databases used in the policing and criminal legal systems in four countries: Belgium, France, Germany and Spain. It is based on in-depth research conducted by partner organisations in those countries.[i] It looks at:

  • how these data-based crime prediction systems are developed;
  • how they are used by law enforcement and criminal legal system authorities;
  • the outputs produced by the systems;
  • how these outputs are used and influence decisions, and the impact these have on people, groups and communities.

It also considers how marginalised groups and communities are disproportionately targeted and impacted by these systems, including Black and racialised people and communities, victims of gender-based violence, migrants and people from working-class and socio-economically deprived backgrounds and areas, and people with mental health issues.

The majority of these systems use historical data, for example from police or criminal legal system records. That data reflects historic and existing biases within these institutions and within wider society, leading to the over-policing and criminalisation of marginalised communities, particularly racialised groups, migrants, and people from low-income neighbourhoods.

The use of these systems in policing and the criminal legal system has significant consequences for individuals' rights, including the right to a fair trial, privacy, and freedom from discrimination.

Location-focused ‘predictive’ policing systems

Across Europe, police forces are developing and implementing location-focused methods of ‘predictive’ policing. These algorithmic systems are designed to ‘predict’ where and when crime will occur, allowing police to allocate resources to those locations. Geographic crime ‘prediction’ systems are or have been used in all four countries examined: Belgium, France, Germany and Spain, as well as in other European countries such as Italy, the Netherlands, Switzerland and the UK.

The research identified two main types of location-focused systems:

  • crime ‘hotspot’ prediction, which draws on historical policing data to forecast future crime locations; and
  • environmental ‘risk’ prediction algorithms, which are based on the assumption that environmental factors determine where crimes take place, and can therefore predict ‘risky’ locations.

Crime ‘hotspot’ prediction methods use historical crime statistics on where and when a crime took place to ‘predict’ future crime locations or ‘hotspots’. These predictions are based on the analysis of statistical insights and trends from large amounts of crime data, often from police databases. Generally, ‘hotspot’ prediction systems provide police with a ‘heat map’ to identify areas or locations where there is allegedly a high risk of crime taking place.
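
To make this mechanism concrete, the sketch below shows, in simplified form, how such a system might flag ‘hotspots’. The incident data, grid cells and threshold are all invented for illustration; real deployments are proprietary and far more elaborate. The essential point holds regardless: the output is purely a function of past recorded policing.

```python
# A minimal, hypothetical sketch of crime 'hotspot' prediction.
# Incident data, grid cells and the threshold are all invented;
# real deployments are proprietary and far more elaborate.
from collections import Counter

# Hypothetical historical incidents, as (x, y) grid cells recorded
# in a police database.
historical_incidents = [(2, 3), (2, 3), (2, 4), (7, 1), (2, 3), (5, 5)]

def hotspot_heat_map(incidents, threshold=2):
    """Count recorded incidents per grid cell and flag cells at or
    above the threshold as 'hotspots' for extra patrols."""
    counts = Counter(incidents)
    return {cell: n for cell, n in counts.items() if n >= threshold}

# The 'prediction' simply reproduces the historical record: cells
# that were heavily policed before are flagged for policing again.
print(hotspot_heat_map(historical_incidents))  # {(2, 3): 3}
```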

Crime ‘hotspot’ prediction systems are used to allocate police resources and determine where and when officers should patrol. Outcomes in ‘hotspot’ areas may include: surveillance, information-gathering, identity checks, questioning, searches, restraining orders, home raids, and arrests. The research raises concerns that locations labelled as crime ‘hotspots’ are disproportionately areas or neighbourhoods where low-income and racialised communities live and work.

Environmental ‘risk’ prediction systems, by contrast, draw on environmental or contextual data to identify areas or locations that are allegedly more prone to criminality. An algorithm assigns a vulnerability value to each location based on spatial factors (a minimal sketch of this kind of scoring follows the list), including:

  • whether the location is well-lit;
  • metro or bus stations;
  • outdoor seating areas of cafés;
  • fast-food outlets;
  • public toilets;
  • pharmacies, bars, certain types of shops;
  • trees and benches; and
  • schools and post offices.
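
The sketch below illustrates one plausible form of such scoring: a weighted sum of spatial features. The feature names mirror the factors listed above; the weights are invented and carry no empirical meaning.

```python
# A minimal sketch of an environmental 'risk' score as a weighted sum
# of spatial features. The feature names mirror the factors listed
# above; the weights are invented and carry no empirical meaning.

FEATURE_WEIGHTS = {
    "poorly_lit": 0.9,
    "metro_or_bus_station": 0.6,
    "fast_food_outlet": 0.4,
    "public_toilet": 0.3,
    "school_or_post_office": 0.2,
}

def vulnerability_score(location_features):
    """Sum the weights of the spatial features present at a location."""
    return sum(FEATURE_WEIGHTS[f] for f in location_features
               if f in FEATURE_WEIGHTS)

# The score measures the presence of everyday infrastructure, not
# crime: busy, poorly-lit places score as 'risky' by construction.
print(vulnerability_score({"poorly_lit", "metro_or_bus_station"}))  # 1.5
print(vulnerability_score({"school_or_post_office"}))               # 0.2
```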

This method raises similar issues of bias and discrimination to those arising from systems based on crime data.

Person-focused ‘predictive’ policing and crime ‘prediction’ systems

Person-focused crime ‘prediction’ tools are designed to predict a person’s likelihood or ‘risk’ of committing a criminal offence. Similar systems are used to assess people’s likelihood of being a victim of crime, such as gender-based violence, or to detect allegedly false crime reports.

People targeted by these systems are subjected to a constant analysis of data that characterises them, their past and present lives and their relationships. The objective is to determine and ‘predict’ their behaviour, ‘risk’, or supposed ‘criminality’. This can have serious consequences. The outputs generated by these systems may lead to people being put under surveillance or monitoring. They may result in increased police stops, questioning, searches, home or workplace visits, being barred from employment, detention, deportation, or arrest. 
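
To illustrate how such analysis can become self-reinforcing, here is a final hypothetical sketch. The attributes, weights and threshold are invented, but the structure shows how a police stop triggered by a high score becomes new data that raises the next score.

```python
# A hypothetical person-focused 'risk' score, illustrating the
# feedback loop: each police contact is itself recorded as data,
# raising the person's next score. Attributes, weights and the
# threshold are all invented for illustration.

def person_risk_score(record):
    """Weighted sum over a person's recorded history."""
    return (2.0 * record.get("prior_police_stops", 0)
            + 1.5 * record.get("known_associates_flagged", 0)
            + 1.0 * record.get("lives_in_hotspot_area", 0))

record = {"prior_police_stops": 1, "lives_in_hotspot_area": 1}
for step in range(3):
    score = person_risk_score(record)
    print(f"step {step}: score={score:.1f}")
    if score > 2.5:  # arbitrary intervention threshold
        # An intervention (e.g. a stop) is recorded regardless of
        # whether any offence occurred, inflating future scores.
        record["prior_police_stops"] += 1
```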

These tools are also used in the criminal legal system, and can influence judges’ decision-making, including sentencing. They can influence the length of a person’s imprisonment and when they will be released, as well as the conditions in which they are detained.

Conclusion

The use of these systems leads to racial and socio-economic profiling, discrimination and criminalisation. This is directed particularly against marginalised people and communities, specifically Black and minoritised ethnic people, and people from deprived backgrounds.

Their use leads to unjust and discriminatory consequences: from surveillance, identity checks and searches, to police harassment, home raids, being barred from employment, arrest, detention and deportation.

These systems are used secretively, meaning that people are not aware of their use. As a result, those targeted by the systems, and by the actions that result from their outputs, are unable to challenge them. Even if they could, there is no clear framework for accountability.

The conclusion of all of the partner reports, and of this report, is the same: that these systems must be prohibited.

