The EU has long sought to develop and deploy advanced technologies for policing, border and immigration control, and criminal justice. These efforts, representative of trends around the globe, are increasingly coming to fruition.
Biometric border controls will soon be imposed on all travellers to the EU, with fingerprints, photos and other personal data stored in vast databases. High-tech surveillance drones patrol the EU’s borders, supporting illegal pushbacks and violence. Police forces have increasing access to sensitive personal data through supranational information-sharing systems. Police officers are being equipped with mobile fingerprint and face scanners for use in the street.
The concrete effects of these technologies vary. For officials, they may well provide more efficient means of carrying out their tasks. However, it is not clear whether this will be of significant social benefit. The EU’s border control model requires the routine use of violence and abuse against people seeking safety. Police forces embody and enforce systemic forms of racism and discrimination. AI is often heralded as introducing a technological ‘revolution’, but there is no sign of it producing the systemic political change needed to overcome this situation.
For individuals, the effects of these technologies range from privacy invasions (through increased data collection on migrants and refugees), to death (for some of those ‘pulled back’ to Libya with the aid of EU surveillance drones). One overall effect, particularly for migrants and refugees travelling to or present in the EU, is ever-more detailed inscription in digital state databases. This provides for new means of regulation and control by state authorities.
The latest part of this push for technological “solutions” to so-called security problems is the development and deployment of artificial intelligence (AI). The story begins in 2019, when security officials launched a number of initiatives to help develop and use AI for security purposes – that is, for policing, immigration and border control, and criminal justice.
Officials have nurtured new infrastructure, both institutional and technical, whilst overseeing other pre-existing AI projects. Lengthy studies have delved into the potential uses of AI for policing, immigration and criminal justice, leading to new projects and initiatives. These developments have received little, if any, public or democratic scrutiny.
During this period, the EU introduced a new law to regulate AI: the AI Act.[1] This aims to stimulate the development and use of AI technologies in the EU, through a complex regulatory regime. The Act classifies AI systems according to different risk levels. A whole host of systems and techniques are automatically considered high-risk.
Many of these high-risk systems concern law enforcement, immigration and criminal justice agencies. However, the law grants a number of exemptions and exceptions to those agencies, in particular to police and border agencies. It also does nothing to increase transparency over the development and use of security AI. Instead, it reinforces a long-standing logic of secrecy.[2]
Taken together, these institutional, technical and legal developments point to the development of a security AI complex: a confluence of political and economic interests that aim to make the development and use of security AI a structural feature of state power and practice in the EU. This, in turn, is intended to reinforce and extend the repressive powers of the state: to control people’s movement, to monitor their activities and habits, and to arrest or imprison people.
Some will see these developments as largely positive; others will see them as largely negative. The authors of this report sit in the latter camp. Whatever one’s views, these developments should not go unreported, unexamined or unscrutinised. Understanding them and formulating responses designed to protect human rights, civil liberties and democratic standards is particularly urgent at a time of rising authoritarianism, xenophobia, and the resurgence of fascism.
This report follows previous work by Statewatch on the development of new security technologies and techniques. It is primarily based on information obtained from access to documents requests, open data, and material obtained through non-official routes. The story it tells is incomplete, but it illuminates a number of ongoing developments that merit close, critical scrutiny.
The report contains four main sections. The summary analysis brings together the findings and arguments, and is designed to provide a brief overview of the key points. Its content is drawn from the three sections that follow this introduction, which examine those issues in detail.
The first of those sections examines the security exemptions and exceptions in the EU’s AI Act. The second looks at current and forthcoming uses of security AI by EU agencies, in particular the EU’s justice and home affairs agencies, responsible for policing, immigration and border control, and criminal justice. The third examines the institutional and technical infrastructure officials are developing to accelerate the development and use of security AI. A list of acronyms and abbreviations is also included.
There are three annexes to the report. Key terms used throughout it are defined below.
AI and AI system: The EU AI Act, approved in June 2024, defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy.” It must be able to infer, for “explicit or implicit objectives” and from the input it receives, “how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” An AI system may also “exhibit adaptiveness after deployment.”[3] The report uses this definition, though we are aware there are many ways to define AI and AI systems.[4]
Security and security AI: As with the term AI, “security” can also be defined in many ways. It is used in this report as shorthand for the policy areas addressed by policing, border, immigration, asylum and criminal justice agencies. “Security AI” is used to collectively describe the AI technologies, techniques and tools being developed for, and used by, those agencies.
The term “security” should be read critically. Other interpretations are available that contrast with the dominant, state-centric idea. As the Ammerdown Group argued in a 2016 paper: “The proper goal of security should be grounded in the wellbeing of people in their social and ecological context, rather than the interests of a nation state as determined by its elite.”[5] This should be borne in mind when reading the word in this report: what kind of security does “security AI” offer?
Security AI complex: This is shorthand for the confluence of political and economic interests converging around security AI. It is an imperfect term, but is useful for giving conceptual form to the institutional, technical and legal initiatives launched in the EU in recent years.
[1] Further legislation on AI is in the works, dealing with non-contractual civil liability for damage caused by AI systems; it is currently under negotiation. See: European Commission, ‘Liability Rules for Artificial Intelligence’, 28 September 2022, https://commission.europa.eu/business-economy-euro/doing-business-eu/contract-rules/digital-contracts/liability-rules-artificial-intelligence_en
[2] ‘Patrick Breyer v European Research Executive Agency’, 7 September 2023, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:62022CJ0135; Madalina Busuioc, Deirdre Curtin and Marco Almada, ‘Reclaiming Transparency: Contesting the Logics of Secrecy within the AI Act’, European Law Open, 23 December 2022, https://www.cambridge.org/core/journals/european-law-open/article/reclaiming-transparency-contesting-the-logics-of-secrecy-within-the-ai-act/01B90DB4D042204EED7C4EEF6EEBE7EA
[3] Article 3(1), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_3
[4] To give just one example, the definition in the Act is substantially different from the one included in the initial proposal. See: ‘Proposal for a Regulation laying down harmonised rules on artificial intelligence’, 21 April 2021, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206
[5] The Ammerdown Group, ‘Rethinking security: A discussion paper’, May 2016, https://www.statewatch.org/media/documents/news/2016/may/ammerdown-group-rethinking-security-5-16.pdf