Artificial intelligence (AI) technologies are being embedded into everyday life by powerful actors, primarily motivated by profit. Police, border and criminal justice agencies are also looking to take advantage of the new powers AI offers for “security” policies, at both national and EU level. The EU is creating new infrastructure, away from the public eye, to allow the swift development and deployment of “security AI.” This will also reinforce the existing discrimination, violence and harm caused by policing, border and criminal justice policies. Exposing and understanding this emerging security AI complex is the first step to challenging it.
The EU has long sought to develop and deploy advanced technologies for policing, border and immigration control, and criminal justice: biometric border controls, border surveillance drones, and machine learning systems for analysing vast quantities of data, amongst other things. The latest part of this push for technological “solutions” to so-called security problems is the development and deployment of artificial intelligence (AI).
The regulatory regime introduced by the Artificial Intelligence Act will frame the use of artificial intelligence in the EU, and perhaps elsewhere in the world, for many years to come. In the field of security, it achieves two key things. First, it establishes conditions for the increased development and use of security AI systems. Second, it ensures that those systems are subject to extremely limited accountability, oversight and transparency measures.
EU agencies are already developing and using various types of AI technology. This section looks in particular at projects and activities launched by eu-LISA and Europol, as well as Frontex, Eurojust and the EU Asylum Agency. A wide variety of AI technologies - from facial recognition to machine learning and 'predictive' systems - have been examined or are actively deployed.
This report considers two types of "infrastructure" required for the development of the EU security AI complex: institutional and technical. The former is made up of various processes, working groups and other 'spaces' (whether formal or informal) that have been brought into use in recent years, principally since 2019. The latter consists of the hardware and software needed for the development of new security AI tools and techniques.