29 April 2025
The EU is secretively paving the way for police, border and criminal justice agencies to develop and use experimental “artificial intelligence” (AI) technologies, posing risks for human rights, civil liberties, transparency and accountability, says a report published today by Statewatch.
The report, Automating Authority: Artificial intelligence in European police and border regimes, [1] warns that the growing use of AI technologies by the EU and its member states will reinforce the existing discrimination, violence and harm caused by policing, border and criminal justice policies.
Hundreds of pages of documents obtained from EU institutions, agencies and working groups [2] detail attempts by politicians and officials to put into use what the report refers to as “security AI” – artificial intelligence tools and technologies for police, immigration, border and criminal justice agencies.
Legal loopholes
The EU’s Artificial Intelligence Act, a landmark piece of legislation adopted in 2024, is designed to ensure “a high level of protection of health, safety, fundamental rights… including democracy, the rule of law and environmental protection.” [3]
However, the Act contains a multitude of exemptions, exceptions and loopholes for security AI.
This includes a total exemption from the Act, until at least 2031, for high-risk AI systems used by EU and national authorities. [4]
The Act does not apply to individuals located outside the EU. This means that it will provide no protection to the millions of people obliged to apply for visas and travel authorisations, which will be processed using AI techniques. [5]
Despite supposed bans on practices such as individual profiling, biometric categorization, and mass biometric surveillance (such as public facial recognition systems), exemptions grant law enforcement and migration authorities many options for using them. [6]
State secrecy
The report argues that the law draws a “silicon curtain of secrecy” over the use of AI by law enforcement agencies.
Police forces have claimed credit for having many of these exemptions included in the law. A secretive body called the European Clearing Board, made up of EU member state police officials, worked extensively to weaken safeguards in the Act.
Ultimately, the Act “will make meaningful supervision of and control over the use of AI systems for policing and migration authorities extremely difficult,” says the report.
New institutions
The secrecy permitted by the AI Act is compounded by the secrecy surrounding new institutions the EU is developing to create and deploy security AI, the report argues.
The European Clearing Board is part of what the report refers to as new “institutional infrastructure.”
It is attached to the EU Innovation Hub for Internal Security, which brings together the EU’s justice and home affairs agencies and institutions. Last year, the Hub launched a dedicated working group on AI.
The EU’s agency for managing police and immigration databases, eu-LISA, has pursued a plan to transform itself into a “centre of excellence” for security AI.
According to documents obtained for the Statewatch report, this idea was first proposed by multinational consultancy company Deloitte, which was contracted to set out the centre’s “strategy, purpose, requirements and operating model.”
The plan was later dropped. Nevertheless, the report highlights the significance of such a far-reaching plan being undertaken with no democratic scrutiny or oversight.
Technical infrastructure
The report also details two initiatives to set up the technical infrastructure needed to develop and use security AI.
The first is a “Security Data Space for Innovation” (SDSI) that will interconnect datasets held by different agencies and institutions across the EU. The data will be used to train AI systems for law enforcement, with topics of interest including automated image recognition and video analysis.
A €1m EU-funded project is currently mapping the types of data that could be shared through the SDSI, including photos, videos, voice samples, and text scraped from the web.
EU police agency Europol is working on a separate, but similar, plan. Amongst the agency’s priority technologies are voice print analysis, age and gender detection from audio recordings of voices, and the use of augmented and virtual reality for data analytics.
To help develop these technologies, it is building a “sandbox” – an isolated technical environment in which software can be developed and tested with no external effects.
Europol documents describe the sandbox as an “infrastructural foundation” for “numerous depending initiatives” that has “paramount strategic significance.” [7]
One aim is to use the vast quantities of data held by Europol to train new AI tools. However, this runs the risk of reinforcing the bias and discrimination inherent in police data, the report warns.
Quotes
Chris Jones, Statewatch director and co-author of the report, said:
While the EU claims the AI Act will protect people’s rights, our analysis suggests completely the opposite when it comes to law enforcement and immigration agencies. It is riddled with loopholes, exceptions and exemptions that will severely limit people’s ability to know when AI is being used against them, and to challenge the use of these experimental technologies.
The creeping introduction of AI tools and techniques into EU policing, border and criminal justice agencies gives further cause for concern. This should be high on the public and political agenda at any time, let alone a time of growing far-right influence and power over EU governments and institutions. Instead, we see secretive working groups set up and plans being introduced with zero public or democratic debate. This has to change.
Romain Lanneau, Statewatch researcher and co-author of the report, said:
European law enforcement agencies and private companies have convinced European legislators to include wide exceptions in the Artificial Intelligence Act. Police forces will self-assess the legality of their experiments with highly intrusive technologies on migrants and people they deem to be a risk. Our report shows that this secretive deployment is a major threat to the rule of law, in particular for free speech, non-discrimination and the right to asylum. The next decade will show if and how litigation and journalistic investigations against the worst impacts of AI in law enforcement can bring back fundamental rights protection for all.
Notes
[1] Full report available at: https://statewatch.org/securityai
[2] The full set of documents will be made available at: https://statewatch.org/securityai
[3] Article 1(1), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#d1e1915-1-1
[4] Article 111, AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_111
[5] Article 2(1)(g), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_2
[6] Article 5, AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_5
[7] Europol Innovation Lab, ‘Progress Report and Strategic Priorities 2024-2026’, 22 September 2023, EDOC #1321956v13, p.5, https://www.statewatch.org/media/4775/europol-innovation-lab-progress-report-and-plan-2023-25.pdf
Artificial intelligence (AI) technologies are being embedded into everyday life by powerful actors, primarily motivated by profit. Police, border and criminal justice agencies are also looking to take advantage of the new powers AI offers for “security” policies, at both national and EU level. The EU is creating new infrastructure, away from the public eye, to allow the swift development and deployment of “security AI.” This will also reinforce the existing discrimination, violence and harm caused by policing, border and criminal justice policies. Exposing and understanding this emerging security AI complex is the first step to challenging it.