2. Cop out: security exemptions in the Artificial Intelligence Act

The regulatory regime introduced by the Artificial Intelligence Act will frame the use of artificial intelligence in the EU, and perhaps elsewhere in the world, for many years to come. In the field of security, it achieves two key things. First, it establishes conditions for increased development and use of security AI systems. Second, it ensures that those systems are subject to extremely limited accountability, oversight and transparency measures.


In this section

2.1 “A historic achievement”

2.2 Summary: exceptions and loopholes

2.3 In detail: the AI Act’s security exemptions

2.3.1 Scope and application of the law

2.3.2 (Un)prohibited practices

2.3.3 Risk and impact assessments

2.3.4 A “silicon curtain” of secrecy

2.3.5 Conformity assessment

2.3.6 Data protection

2.3.7 Oversight

2.4 Implementing the Act


2.1 "A historic achievement"

In December 2023, after two years of negotiations and debate, the EU’s Artificial Intelligence Act was approved by the Council and the Parliament. Carme Artigas, the Spanish state secretary for digitalisation and artificial intelligence, lauded it as “a historic achievement.” She described the Act as keeping “an extremely delicate balance: boosting innovation and uptake of artificial intelligence across Europe whilst fully respecting the fundamental rights of our citizens.”[1] On the latter point, an examination of the text reveals quite the opposite where law enforcement and migration authorities are concerned.

The overarching aim of the AI Act is to “improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence.” At the same time, it is supposed to ensure “a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection,” while supporting “innovation.”[2]

It seeks to achieve these ends by setting out:

  • rules governing when and how AI systems can be placed on the market or put into use;
  • prohibitions on certain uses of AI systems;
  • requirements for AI systems classified as high risk, and obligations for those operating them;
  • transparency rules for certain types of lower-risk AI systems;
  • rules on monitoring the market for AI systems, including governance of that market and enforcement of the rules; and
  • “measures to support innovation”.[3]

One key aspect of the law is its “risk-based approach.” More stringent rules apply to AI systems deemed to pose higher risks to health, safety, fundamental rights, ethical standards or the public interest.[4] Those that pose the highest risks are prohibited.

This includes AI systems that deploy “subliminal techniques” to impair people’s ability to make informed decisions, and systems for social scoring. Remote biometric identification systems – for example, public deployments of facial recognition – also fall under this heading. However, as explained below, there are multiple exceptions to this supposed prohibition.

High-risk systems are permitted, as long as they meet certain requirements. The providers and/or deployers of such a system must establish a risk management system, ensure appropriate data governance, produce various technical documents, and ensure a certain level of transparency. There are multiple exemptions to these requirements for security authorities.

The Act includes a specific list of systems that must be treated as high-risk. Amongst them are a number relevant to security policies: remote biometric identification systems; systems for biometric categorisation; and emotion recognition systems. The Act also lists specific types of systems for law enforcement; migration, asylum and border control management; and the administration of justice and democratic processes. The full list is provided in Annex I to this report. It can be amended by the European Commission in certain circumstances.[5]

AI systems that are not prohibited or considered high-risk have to comply with certain transparency measures. For example, people must be informed that they are interacting with an AI system.[6] The Act also contains rules, drafted part-way through the negotiations, on “general-purpose AI models with systemic risk.”[7]

The Act raises multiple questions on how it relates to other EU laws, including the Charter of Fundamental Rights and jurisprudence from the Court of Justice of the EU. A number of exemptions appear to clash with existing legislation and case law, for example those relating to:

  • national security;
  • the right to explanations on law enforcement use of AI;
  • restrictions on supervisory authorities’ inspection powers; and
  • the exclusion of people outside the EU from its scope.

There will likely be a lot of expensive, lengthy and complex litigation in the next few years as attempts are made to resolve these problems.

It can also be observed that the Act contributes to a particular ‘imaginary’ of AI. Amongst the many exemptions in the Act are a number premised on the idea that it may be necessary to deploy AI systems in urgent situations. In the words of the law, deployment of an AI system may be required in “situation(s) of urgency for exceptional reasons of public security or in the case of specific, substantial and imminent threat to the life or physical safety of natural persons.”[8]

This rests on an assumption that there are, or will be, AI systems that can deal with such situations of urgency. The Act does not provide any examples of what such a situation might be, or what a system deployed to deal with it might do. It may well be that no such system exists. The inclusion of the exception almost guarantees that it will be used. At the same time, the exception also serves to further propel the hype that surrounds the development and deployment of AI technology.

In the field of security[9] more specifically, the Act achieves two key things. First, it establishes conditions for increased development and use of security AI systems. Second, it ensures that the development and use of those AI systems is subject to extremely limited accountability, oversight and transparency measures.

Governments played a key role in placing these exemptions, exceptions and loopholes into the law. A policing body, the European Clearing Board (EuCB), has also taken credit for a number of changes resulting from its “advocacy activities regarding law enforcement use of AI”[10] (section 4.1.3). According to a Europol document, lobbying of EU member state governments by the EuCB’s Strategic Group on AI:

… triggered important changes in the Council position on the AI Act, including on the definition, classification of systems, remote biometrics, use of dactyloscopy and exceptions for law enforcement (mandatory publishing of AI-systems in use or that are developed by law enforcement agencies).[11]

This can be seen by looking at the wide array of exemptions embedded in the Act, which can be divided into seven different categories:

  • scope and application of the law;
  • (un)prohibited practices;
  • risk and impact assessments;
  • transparency;
  • conformity assessment;
  • data protection; and
  • oversight.


2.2 Summary: exceptions and loopholes

Despite promising to protect fundamental rights and ensure that AI systems deployed in the EU are “trustworthy,” the AI Act contains a multitude of exemptions, exceptions and shortcomings in relation to the use of AI systems for policing and migration purposes. These can be summarised as follows:

Scope and application of the law

  • law enforcement agencies in the EU can make use of AI systems operated by international organisations or non-EU state authorities, to which the AI Act does not apply;
  • a new concept of “sensitive operational data,” with an unclear definition, is used in the Act, introducing new possibilities for security agencies to avoid oversight and scrutiny;
  • the use of high-risk AI systems by EU and national authorities is excluded from the scope of the Act until at least 2030, and high-risk systems operated by private companies may be excluded permanently;
  • the Act does not apply to individuals located outside the EU, meaning that, for example, people applying for visas or authorisations to travel to the EU enjoy none of the protections the Act offers to individuals within the EU (though some redress should be possible through the EU Charter of Fundamental Rights or EU data protection law);

(Un)prohibited practices

  • despite supposed bans on practices such as profiling, biometric categorisation, and mass biometric surveillance, law enforcement and migration authorities enjoy numerous exemptions that may enable widespread deployment of these techniques;

Risk and impact assessments

  • providers of high-risk AI systems are given the possibility to classify those systems as not, in fact, falling into the high-risk category;

A “silicon curtain” of secrecy

  • a swathe of transparency measures do not apply when AI systems are used for law enforcement purposes, such as:
    • the right for individuals to be informed when they are subject to decisions informed by an AI system (however, individuals will have rights under EU data protection law that mean they should be informed of any such decisions[12]);
    • the right for individuals to know they are interacting with an AI system or being subjected to an emotion recognition or biometric categorisation system;
    • the right for individuals to know that text, audio, video or images have been generated by an AI system;
    • the right for individuals to have explanations about decisions made using AI systems;
    • the right for individuals to know that they are being subjected to the testing of AI systems;
  • an EU-wide database will be created to store information on high-risk AI systems, with information accessible to the public – unless it relates to high-risk systems used by policing and migration agencies, which will be registered in a non-public section of the database, creating opacity over what types of systems are being used by which institutions and agencies;

Conformity assessment

  • policing and migration authorities are exempt from certain assessment and oversight procedures in cases of “urgency”;
  • in many cases, security agencies are exempt from the requirement for external oversight, designed to assess compliance with the standards and specifications included in the Act;

Data protection

  • the fundamental data protection principle of purpose limitation does not apply when personal data is used for testing AI systems in regulatory “sandboxes”;

Oversight

  • there are serious shortcomings to the procedure for reporting serious incidents caused by AI systems;
  • the use of AI systems by policing and migration authorities will be overseen by data protection authorities, which are under-funded and severely lacking in resources;
  • policing and migration authorities appear to be granted the ability to restrict or prevent supervisory authorities from exchanging information about their use of AI systems, and are granted control over supervisory authorities’ access to technical documentation.

In short, the Act will make meaningful supervision of, and control over, the use of AI systems by policing and migration authorities extremely difficult. In some cases, the rules are designed to prevent the public having any knowledge whatsoever about the use of intrusive and invasive AI systems.

The Act states that AI technologies should “serve as a tool for people, with the ultimate aim of increasing human well-being.”[13] When it comes to the use of AI by police and migration authorities, the aim is clearly to reduce oversight and scrutiny, whilst increasing their powers – something that all too often achieves quite the opposite of “increasing human well-being.”

However, the Act is also clear that it is “without prejudice” to a host of EU and national legal requirements that may help to protect people’s rights. The Act itself says it should not affect “existing Union law, in particular on data protection, consumer protection, fundamental rights, employment, and protection of workers, and product safety.”[14] On the face of it, there are some clear clashes with existing law and jurisprudence, particularly in relation to data protection.

What remains to be seen is how parts of the Act that appear to be in tension with other elements of EU law will be implemented in practice and, in particular, how they will be interpreted by the courts. There will likely be a substantial amount of litigation in the years to come as authorities, companies and individuals seek to have aspects of the law clarified.


2.3 In detail: the AI Act’s security exemptions

2.3.1 Scope and application of the law

Military and national security

The text is clear that the Act does not apply to any use of AI systems for “military, defence or national security purposes.” It emphasises that it is the purpose of a particular use of an AI system that matters, and not the agency or institution using it. For example, if the police were using an AI system for “national security” purposes, the AI Act would not apply.

The Act also emphasises that this exemption applies whether or not an AI system is available on the market commercially. For example, a system could be:

  • developed by a private company and made publicly available for purchase;
  • developed “in-house” by a state agency; or
  • developed via some form of public-private partnership, and never made available commercially.

In none of these cases would the Act apply, if the system were used for “military, defence or national security” purposes.

As one group of journalists has put it: “Climate demonstrations or political protests, for instance, could now be freely targeted with AI-powered surveillance if police have national security concerns.”[15] While this may be the hope of some EU governments, the practicalities may be more complicated.

There is a longstanding conflict between EU institutions and EU member states over whether or not matters of “national security” can be in any way regulated by EU law. The academic Plixavra Vogiatzoglou argues that the AI Act’s national security exemption runs counter to Court of Justice of the EU (CJEU) case law.[16] How the exemption works in practice is likely one of the many questions that will, at some point, be raised before the CJEU.

Data-laundering overseas

The Act applies to providers and deployers of AI systems within the EU and in non-EU states, “where the output produced by the AI system is used in the Union,”[17] but there is a major exemption for law enforcement and judicial cooperation purposes.

If EU or member state bodies make use of AI systems “in the framework of international cooperation or agreements for law enforcement and judicial cooperation” with the authorities of non-EU states or international organisations, the Act does not apply. The third country or international organisation must provide “adequate safeguards with respect to the protection of fundamental rights and freedoms of individuals.”[18]

However, there is no explanation of what those “adequate safeguards” are. Nor does the text explain who is to judge the adequacy of any safeguards. EU data protection law applies when law enforcement agencies transfer personal data to international organisations. However, in certain circumstances, law enforcement agencies can themselves determine whether or not “adequate safeguards” exist.[19] There are thus many questions that arise as to how transfers of data for use in non-EU AI systems will be assessed and supervised.

One international organisation developing AI technology is the International Criminal Police Organisation, better known as Interpol. At Interpol’s September 2023 annual conference, the then-Secretary General, Jürgen Stock, told delegates about plans to “harness the power of artificial intelligence.” This is being done through an “analytical platform” called INSIGHT, which is ultimately supposed to provide “visual, video, audio recognition, facial and bio-data matching” for “advanced and predictive analytics.” European states, amongst others, are funding the development of the platform.[20]

The INSIGHT platform does not have to comply with the AI Act: although Interpol has its headquarters in Lyon, France, it is an international organisation. Data shared with Interpol by EU member states, or by agencies such as Europol, could therefore be processed in the INSIGHT platform in ways that would breach the AI Act’s rules if it applied. The outputs could then be used within the EU, without breaching the law. Another potential avenue for this kind of data-laundering is through data exchanges with the USA, where the Department of Homeland Security is building a vast database and analytical system known as the Homeland Advanced Recognition Technology System (HART).[21]

Geographical discrimination

The Act “does not apply to affected persons outside the Union.” The protection it offers to individuals stops at the EU’s borders. Given that the EU is developing AI systems specifically designed to be used to profile and analyse vast numbers of people located outside its territory – primarily applicants for visas and travel authorisations – this exemption should be of major concern.

For example, the Act gives individuals the right to obtain “clear and meaningful explanations” of an AI system’s role in any decision which has “legal effects or similarly significantly affects that person in a way that they consider to have an adverse impact on their health, safety or fundamental rights.” If an individual is located outside the EU and affected by an AI system, they will have no right to any explanation. However, there is a clash here with other laws: the right to explanations may be available under data protection law[22] or via the right to an effective remedy in the EU Charter of Fundamental Rights.[23]

There are also geographical exemptions within the EU. The Irish government has decided to opt out of certain provisions of the Act related to police and judicial cooperation. This includes prohibitions on AI-powered risk assessments of individuals, the use of mass biometric surveillance, and biometric categorisation systems. However, any measures taken by Ireland in these areas will still have to comply with, for example, EU data protection law and the Charter of Fundamental Rights.[24]

Research and testing

The Act “does not apply to any research, testing or development activity regarding AI systems or AI models prior to their being placed on the market or put into service,” although this does not apply to “testing in real world conditions” (examined further below). Furthermore: “Such activities shall be conducted in accordance with applicable Union law,”[25] for example, on data protection.

It may be questioned whether the provision excluding “research, testing and development activity” from the scope of the Act could be used to deliberately bypass other parts of the law. Authorities could claim that they are deploying high-risk AI technologies for research and development purposes. This would negate the need to meet the obligations the Act places on providers and deployers.

As the Border Violence Monitoring Network has highlighted, a number of EU-funded security research projects have been tested in Greece. These have primarily focused on border surveillance and the integration and sharing of data generated by border surveillance.[26] Deployments have included drone flights in the Orestiada region, the testing of surveillance technologies designed to see through foliage in Evros, and the construction of pylons for the gathering and transmission of surveillance data, also in Evros.

As at other land borders in Europe, more border surveillance in Greece is likely to support the widespread (if not systematic) and flagrantly illegal practice of violently “pushing back” refugees. The question here, however, is whether such deployments would be covered by the AI Act’s provisions on “real world” testing, or would escape its provisions by being classified as “research, testing and development activity.”

Temporary general exemptions from the Act

The Act includes two temporary, general exemptions. The first concerns AI systems that are used as part of various large-scale EU information systems. The second concerns high-risk AI systems that are “placed on the market or put into service” before 2 August 2026, the date from which most of the Act’s provisions apply. The exemptions provide carte blanche for all manner of practices until the end of the decade, at both EU and national level.

Large-scale EU information systems put into use before 2 August 2027 do not have to comply with the Act until 31 December 2030.[27] The information systems in question are:

  • the Schengen Information System (SIS);
  • the Visa Information System (VIS);
  • Eurodac;
  • the Entry/Exit System (EES);
  • the European Travel Information and Authorisation System (ETIAS);
  • the European Criminal Records Information System on third-country nationals and stateless persons (ECRIS-TCN);
  • the information systems that establish the “interoperability” architecture, namely:
    • the Common Identity Repository (CIR);
    • the shared Biometric Matching System (sBMS);
    • the Multiple Identity Detector (MID); and
    • the European Search Portal (ESP).[28]

All these systems are intended to make use of some form of technology covered by the Act: for example, biometric identification. Perhaps the most invasive use of AI announced so far, however, will be in the ETIAS and the VIS. These systems will use algorithmic profiling to determine whether individuals are believed to pose a “security, illegal immigration or high epidemic risk.”[29]

A similar temporal exemption applies to operators of high-risk AI systems placed on the market or put into use before 2 August 2026.[30] If there are “significant changes” to the design of those systems after that date, they will be subject to the Act. Furthermore, all high-risk systems used, or intended to be used, by public authorities, must comply with the Act by 2 August 2030. However, private operators of high-risk AI systems are not mentioned here, and thus seem to be excluded from the requirement to comply with the Act by 2 August 2030.

“Sensitive operational data”

The Act introduces a new term into EU law: “sensitive operational data.” This is defined as: “operational data related to activities of prevention, detection, investigation or prosecution of criminal offences, the disclosure of which could jeopardise the integrity of criminal proceedings.”[31]

The phrase appears nowhere else in EU law, apart from a statement accompanying a proposed law on greenhouse gas emissions, where it has a different meaning.[32] The definition itself relies upon another term, “operational data,” that is not defined in the Act, but can evidently be taken to mean data relating to operations: for example, criminal investigations.[33]

It is unclear if “sensitive operational data” is somehow related to “sensitive data,” defined in EU data protection law as:

…data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership… genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation.[34]

The lack of definition of “sensitive operational data” causes a problem: who gets to decide what is sensitive operational data, and what is not? Does it include “sensitive data” as defined in EU data protection law, or is it something separate? This is not merely a question of semantics: the AI Act includes various provisions that prevent supervisory authorities having access to sensitive operational data, examined further below.

It should be noted that the term “sensitive operational data” was not included in the original proposal for the Act.[35] It is unclear when or by whom it was inserted in the text, but it is certainly beneficial for law enforcement and other security agencies.


2.3.2 (Un)prohibited practices

The EU has loudly promoted the fact that certain uses of AI are banned under the Act. However, there are multiple exemptions to those bans. These exemptions cover the use of AI systems for profiling, biometric categorisation, and mass biometric surveillance, known formally as “remote biometric identification.” The best-known example of such a technique, but far from the only one, is the use of facial recognition systems in public spaces. Furthermore, while there is a ban on certain uses of emotion recognition systems, it has important limits.

Profiling

The Act bans systems that try to calculate the likelihood that someone will commit a criminal offence, where that calculation is based solely on “the profiling of a natural person” or “assessing their personality traits and characteristics.” However, the ban is not absolute: these systems are legal if “used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity.”[36]

Reversing this statement makes it somewhat clearer. If there are “objective and verifiable facts directly linked to a criminal activity” that show a person is involved in criminal activity, a system can be used to profile them, or to assess that person’s “personality traits and characteristics.” That assessment can be used to support human decisions about the risk the person will commit a criminal offence. In short: these AI techniques cannot be applied to large datasets to single out individuals, but can be applied to individuals who have already been singled out.

It should be noted that the prohibition only extends to the use of profiling and personality assessment in relation to criminal activity – it does not apply to immigration, asylum or border control. There is thus no ban on using AI systems for profiling or personality assessments in those policy fields. The Protect Not Surveil coalition[37] sought to extend the ban to cover those areas, but was unsuccessful. The European Data Protection Supervisor (EDPS) has argued that the exclusion of administrative offences from the ban is unconvincing, in light of EU case law.[38] EU guidelines designed to provide definitive explanations on this point are forthcoming; it remains to be seen what they will say.[39]

Biometric categorisation

Biometric categorisation is defined by the Act as a process for automatically assigning people “to specific categories on the basis of their biometric data.” This might mean, for example, using a system to classify people by gender, eye colour, height, or some other measurable characteristic. Under the Act, it is illegal to use biometric categorisation “to deduce or infer… race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.”[40]

However, the ban “does not cover any labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data or categorizing of biometric data in the area of law enforcement.”[41] The wording here is unclear, meaning the statement could be read in two ways:

  • it is legal to label or filter, for law enforcement purposes, lawfully acquired datasets that are based on biometric data or on the categorisation of biometric data; or
  • the labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data, is legal, as is the categorising of biometric data in the area of law enforcement.

Like much of the labyrinthine wording of the Act, this lack of clarity may be due to the “3-day ‘marathon’ talks”[42] that led to its adoption. This might be better described as a 72-hour negotiation binge. The Spanish version of the text, which uses the word ni (“nor”) where the English version uses “or”, indicates that the second interpretation is correct.[43]

The upshot is that law enforcement agencies are excluded from complying with a ban on a practice that is otherwise deemed to be inherently dangerous for “health, safety and fundamental rights.”[44] However, the EDPS has warned that the provision “should not be interpreted as a general derogation,” but instead as clarification that “there are other types of biometrics categorisation for different legitimate purposes.”[45]

Nevertheless, the Act may provide EU legal backing for controversial practices that have been going on for some time, such as in the Czech Republic.[46] Europol makes extensive use of facial recognition software (examined in section 3.3.3). Governments have sought to introduce new powers so the police can cross-reference videos and photos with databases, as in Denmark[47] and Sweden.[48]

In Germany, the government introduced plans to legalise police searches of publicly-available data on the internet using facial recognition software.[49] The European Commission reportedly warned the German government that this would be illegal.[50] With elections forthcoming in Germany, the future of the proposals is unclear.

Mass biometric surveillance

Mass biometric surveillance – “remote biometric identification,” in the words of the law – is one of the most controversial topics covered by the AI Act. The Reclaim Your Face coalition sought a comprehensive ban on the practice.[51] The European Commission itself recognises that the use of systems to automatically identify individuals in public, at a distance and without their knowledge, undermines “personal data, privacy, autonomy and dignity,” as well as “freedom of expression, association and assembly… resulting in a chilling effect on democracy.”[52]

The Act prohibits the use of “remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement.”[53] This of course allows the use of mass biometric surveillance for purposes other than law enforcement – for example, for border control or in the workplace. The law also contains a series of specific exemptions to the supposed prohibition:

  • targeted searches “for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings”;
  • searches for missing persons;
  • preventing “a specific, substantial and imminent threat to the life or physical safety of natural persons”;
  • preventing “a genuine and present or genuine and foreseeable threat of a terrorist attack”; and
  • “the localisation or identification of a person suspected of having committed a criminal offence,” with regard to a specific set of offences.[54]

If a member state wishes to make use of these exemptions, it must pass national legislation authorising that use. The legislation must include detailed rules on “the request, issuance and exercise of, as well as supervision and reporting related to the authorizations,” and specify for which of the offences in the Act mass biometric surveillance can be deployed. The Act also states that member states can, if they wish, introduce “more restrictive laws on the use of remote biometric identification systems.”[55]

Two EU member states planning new legislation to introduce mass biometric surveillance are Sweden[56] and the Czech Republic, where a proposal to allow facial recognition in airports seeks to sidestep even the limited protections of the Act.[57]

Member states must notify the Commission of any national rules on this topic, though there is no obligation for the Commission to publish that information. This means individuals travelling from one member state to another may not be able to find out if and how they may be subjected to mass biometric surveillance.

The Act also sets out a number of other requirements that must be considered by the deploying authorities. Usage of a mass biometric surveillance system must take into account the seriousness, probability and scale of the harm that would be caused if the system were not used, as well as the consequences that using the system will have for the rights and freedoms of the individuals concerned.[58]

There must also be “necessary and proportionate safeguards and conditions” in place. As well as national legislation, there must be limitations on the length of time the system can be used for, the locations in which it can be used, and the persons to whom it may be applied.[59]

According to the Act, the deploying authority must carry out a fundamental rights impact assessment. European Court of Human Rights jurisprudence on public facial recognition systems requires a high level of justification for mass biometric surveillance to be considered “necessary in a democratic society.”[60]

The EDPS has also remarked that member states will need to assess whether “the purpose pursued cannot be achieved without using biometric data,” and that the “fundamental values of democratic systems” need to be taken into account.[61] It is doubtful whether mass biometric surveillance in any form is compatible with democratic norms, though this view does not seem to be shared by most politicians and officials.

Deploying authorities should also register the system in the EU-wide database for high-risk AI systems. This database is governed by the AI Act and, as noted below, there are numerous exemptions for law enforcement authorities. One exemption allows them to deploy mass biometric surveillance systems without registering them in the database in “duly justified cases of urgency,” provided that registration is completed “without delay.”[62]

A similar law enforcement exemption applies to judicial or administrative authorisation for deploying a mass biometric surveillance system. Deployments are supposed to have prior authorisation from such an authority, but this requirement can be ignored in a “duly justified situation of urgency.” Authorisation must, however, be requested within 24 hours of deployment. If it is denied, use of the system must be halted, and any outputs resulting from that use deleted.[63]

“Each use” must also be notified to the relevant supervisory authority, although it is not clear how a single “use” is to be defined. The national supervisory authority for law enforcement, migration, asylum, border control and judicial authorities will be the national data protection authority.[64]

That supervisory authority must send an annual report to the European Commission detailing any authorisations or refusals for the use of mass biometric surveillance, and the Commission has to publish an annual report based on that information. This is likely to mean that the public will get a sanitised, partial version of the information compiled by national supervisory authorities. Indeed, the Act states explicitly that the Commission’s annual reports cannot include “sensitive operational data” from national authorities.

Emotion recognition

The Act bans AI systems used to “infer emotions of a natural person in the areas of workplace and education institutions.” Outside of these two areas, therefore, emotion recognition systems can be deployed – for example, for policing or border control purposes. They are classified as high-risk when used for any other purpose, though law enforcement use of emotion recognition systems may be kept secret, thanks to an exemption in the law (section 3.3.4).

The EDPS has highlighted that the reason for banning emotion recognition in workplaces and educational institutions is “the imbalance of power.” This, the authority noted, “is even more applicable and relevant in the law enforcement context. The same consideration applies also in the field of border control, migration and asylum.”[65] This view was evidently not shared by EU legislators.


2.3.3 Risk and impact assessments

Self-assessment

As noted above, the AI Act is premised on a system of risk assessment, with differing levels of control and oversight applying depending on the level of risk posed by a particular AI system. As this analysis makes clear, that level of control and oversight can be substantially reduced when the authority in control is responsible for law enforcement, immigration, border control or asylum. The Act also allows providers of nominally high-risk systems to assess them as not, in fact, being high-risk. That means, in turn, that certain safeguards do not apply.

As the Act puts it, systems that are included in the high-risk list[66] are not to be considered high-risk if they do not pose “a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making.” This is the case when any one of four conditions is met:

  • the AI system is for performing a narrow procedural task;
  • the AI system is for improving the result of a previously completed human activity;
  • the AI system is for detecting decision-making patterns, or deviations from prior decision-making patterns, and is not meant to replace or influence a previously completed human assessment without “proper human review”; or
  • the AI system is for performing a preparatory task in an assessment that is “relevant” for any of the purposes of a system defined as high-risk.

The Act then goes on to introduce an exemption to these exemptions: any AI system that performs profiling of natural persons must be considered high-risk, regardless of whether or not it meets one of the four conditions listed above.

If a provider of a high-risk system determines that their system is not in fact high-risk, they must document that assessment before they put the system on the market or put it into use. The assessment must be made available to national supervisory authorities upon request, and the information must also be registered in the EU database for high-risk systems. Guidelines for the “practical implementation” of these provisions must be published by the European Commission by 2 February 2026.

Impact assessments

Deployers of high-risk AI systems are obliged to carry out a fundamental rights impact assessment (FRIA) prior to deployment. That assessment has to include:

  • “a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose”;
  • the time period for use of the system, and the frequency with which it will be used;
  • “the categories of natural persons and groups likely to be affected by its use in the specific context”;
  • the “specific risks of harm” likely to affect those categories of people and groups;
  • how human oversight measures specified in the instructions for the system will be implemented;
  • a description of action to be taken in case any of the identified risks materialise, including arrangements for internal governance and complaint mechanisms.[67]

A FRIA has to be conducted for the first deployment of a high-risk AI system. The deployer can use the same FRIA for “similar cases,” if they wish. The FRIA must be updated by the deployer if any of the requirements listed above “has changed or is no longer up to date.”[68]

The completed FRIA must then be submitted to the relevant oversight authority – presumably for review and assessment, though this is not specified in the Act. It is worth noting that one analysis of 10 different FRIAs for AI systems found sharp divergences in their “length and completeness,” raising serious questions about their utility and meaningfulness.[69]

Such an assessment should also, presumably, include an assessment of the necessity and proportionality of using the AI system. These are general principles of EU law,[70] and are mentioned multiple times in the preamble to the AI Act, particularly in connection with law enforcement use of AI systems.

If personal data is processed in an AI system, deployers will also have to conduct a data protection impact assessment, in accordance with EU data protection law. Under the AI Act, the FRIA “shall complement the data protection impact assessment,” if the latter assessment already meets any of the obligations set out in the Act.[71]

The deploying authority may be exempt from the requirement to submit the FRIA to the oversight authority. If the deployer has invoked the derogation from the conformity assessment procedure,[72] they are also exempt from the obligation to send the FRIA to that authority. The derogation from the conformity assessment procedure is intended to be temporary, “for exceptional reasons of public security or the protection of life and health of persons, environmental protection or the protection of key industrial and infrastructural assets.” Presumably the FRIA must be submitted once the derogation ends, though the Act does not specify if this is the case or not.


2.3.4 A “silicon curtain” of secrecy

A swathe of the Act’s exemptions and exceptions for law enforcement, immigration, border control and asylum authorities relate to transparency. While most deployers of high-risk AI systems are required to provide certain types of information to the people subjected to those systems, security authorities are not.

During the Cold War, the phrase “iron curtain” was used to describe the physical and political barriers between eastern and western Europe. The AI Act could be described as introducing a “silicon curtain” of secrecy, designed to prevent the public and elected officials from knowing when, where or how security AI is being used.

EU database of high-risk AI systems

The Act mandates the establishment of an EU-wide database for high-risk AI systems. The majority of information in this database is to be made public, to “increase the transparency towards the public… allowing the general public to find relevant information [on] the registration of high-risk AI systems and on the use case of high-risk AI systems.”[73]

However, biometric systems, emotion recognition systems, and other high-risk AI systems “in the areas of law enforcement, migration, asylum and border control management” are exempt from a number of the requirements related to this database.

Providers of two categories of AI systems, or their authorised representatives, must register information on themselves and their high-risk systems in this database. They must do so when:

  • an AI system is classified as high-risk; or
  • the provider has determined the AI system is not high-risk according to Article 6(3).[74]

When a national authority, an EU institution, office, body or agency, or persons acting on behalf of national or EU authorities are planning to use a high-risk AI system, they must take two steps. First, they have to register themselves in the database as users of a high-risk system. Then they have to select the system they plan to use. In this way, a record of the providers, deployers, and the systems they have created or used is put in place.

However, different rules apply to providers of such systems for law enforcement, migration, asylum and border control purposes. They are entitled to register the systems in a “secure non-public” section of the EU database, and only have to provide a limited amount of information compared to other registrants (see Annex II for more detail).

Notably, they do not need to provide information on “the information used by the system (data, inputs) and its operating logic,” amongst other things. The “secure non-public” section of the EU database will only be accessible by the European Commission, the European Data Protection Supervisor, and national supervisory authorities.

There will thus be no increase in “transparency towards the public” regarding high-risk AI systems used for law enforcement and migration purposes – in fact, quite the opposite. There are certainly arguments for keeping details of particular police investigations secret. The means and methods available to the police for detecting and investigating crime, however, should be publicly available information, so that they can be the subject of democratic scrutiny and public debate.

As noted above, this secrecy appears to be the result of lobbying by police officials, through the European Clearing Board. The body has claimed credit for changing the Council of the EU’s negotiating position on “mandatory publishing of AI-systems in use or that are developed by law enforcement agencies.”[75]

It is noteworthy that in the USA, police are deliberately obscuring their use of facial recognition technology, “which means defendants are being deprived of their constitutional right to challenge the veracity of the evidence being used against them.”[76] In the EU, the AI Act offers various new means for police and immigration authorities to keep their use of AI technology secret, including by exempting them from public registration in the database.

The right to information

Where high-risk AI systems are used to make decisions that affect individuals, those individuals must be informed by the system’s deployer.[77] However, national law enforcement agencies using high-risk systems are subject to a different legal framework, dealing with data protection in national law enforcement agencies.[78] This allows them to delay, restrict or omit providing information to affected individuals, for example to “avoid obstructing official or legal inquiries, investigations or procedures.”[79]

People may also directly interact with AI systems. The most obvious example of this would be chatbots deployed for customer service on websites. Whatever the situation, providers must ensure that people are made aware of this, unless it is “obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use.”[80] Quite how the concept of a “well-informed, observant and circumspect” person might be interpreted is open to question. There is an extensive literature on the interpretation of the similar legal fiction of a “reasonable person.”[81]

However, the obligation to inform people they are interacting directly with an AI system does not apply to “AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties.”[82] The only exception to this is when an AI system is available to members of the public to report crime. The Act includes an equivalent exemption for emotion recognition and biometric categorisation systems.[83]

As with many similar provisions examined below, it can be inferred from this that the police wish to use various types of AI systems without the public’s knowledge. At the very least, it leaves the possibility open for them to do so. There is no further detail in the Act on what “appropriate safeguards” might be.

Watermarking of AI-generated media

Providers of AI systems used to generate “synthetic audio, image, video or text content” must ensure the outputs are “marked in a machine-readable format and detectable as artificially generated or manipulated.”[84] However, this does not apply when those systems are “authorised by law to detect, prevent, investigate or prosecute criminal offences.” There is no requirement set out for “appropriate safeguards.” Like the previous exemption, the article indicates that the police intend to make operational use of AI-generated media.

Notification that text has been generated by AI

Deployers of AI systems that generate or manipulate text used for “informing the public on matters of public interest” – presumably, news and other media articles – have to disclose if the text has been artificially generated and/or manipulated. However, this is not required if the text has been subject to “human review or editorial control and where a natural or legal person holds editorial responsibility for the publication of the content.” Nor does the rule apply “where the use is authorised by law to detect, prevent, investigate or prosecute criminal offences.”[85]

The article indicates that police intend to make operational use of AI-generated text, and there is no explicit requirement for appropriate safeguards or for use by law enforcement to be in accordance with EU law – though, again, this does not mean that EU law does not apply.

Right to explanations

The Act gives individuals the right to obtain from the deployer of a high-risk AI system “clear and meaningful explanations of the role of the AI system” in any decisions that have produced “legal effects or similarly significantly affects that person.” Importantly, this is based on the individual’s opinion, not any external assessment: there is a right to an explanation when someone considers an AI system has had “an adverse impact on their health, safety or fundamental rights.” This, of course, only applies to people located within the EU, as with all the other such safeguards in the Act.[86]

However, if EU or national law contains “exceptions from, or restrictions to” the provision of the AI Act setting out the right to explanations, it does not apply. It is unclear whether any such restrictions or exemptions already exist, or whether this provision is laying the foundation for future exceptions. In either case, the AI Act makes it possible for governments to exempt law enforcement, asylum, immigration, border and judicial authorities from the right to provide explanations to individuals about the use of AI systems.

The CJEU has already warned of the potential of AI to deprive individuals of their right to an effective judicial remedy. In a judgment on the EU’s travel surveillance law, it found that the “opacity” of AI technology “might make it impossible to understand the reasons why a given program led to… a result.”[87] People affected by technological procedures must be able to understand “how those criteria and those programs work” to allow them to decide “with full knowledge of the relevant facts” whether or not to challenge the unlawful and discriminatory nature of those criteria.[88]

The Act also appears to clash with article 41 of the Charter of Fundamental Rights. This provides for a right to good administration, and places an obligation on the administration “to give reasons for its decisions.” How this will work in the context of technically-complex AI systems remains to be seen.[89]

Information on and consent to being subjected to “real-world testing”

Most people would probably not want to be the subject of experiments by state authorities. Even fewer would want to be the subject of experiments that take place without their knowledge. Yet the AI Act permits precisely that: it gives law enforcement authorities the permission to test experimental AI systems on individuals without their knowledge, provided certain conditions are met.

The Act contains rules on how testing in “real world conditions” should take place.[90] Such testing can be undertaken by providers or prospective providers of systems “at any time before the placing on the market or the putting into service of the AI system.” Testing can be done by providers “on their own or in partnership with one or more deployers or prospective deployers.” For example, a company producing AI systems could team up with a public authority, such as a police force, to test an AI system.

To do so, the provider or prospective provider must be legally registered in the EU or have an authorised representative in the EU. They must put in place “appropriate and applicable” safeguards for any personal data processed during the testing that will be transferred to a third state. For example, a provider based in the US might have an authorised representative in the EU. Personal data processed during testing could then be transferred to the US, provided adequate safeguards exist. The provider or prospective provider must also be registered in the EU-wide database of AI systems and providers. They have to draw up a plan for the test, and that plan has to be approved by the relevant supervisory authority.

Individuals subjected to the tests who are “vulnerable” due to “age or disability” must be “appropriately protected” (a provision that excludes other forms of vulnerability), and test subjects must have provided informed consent to the testing. The testing cannot last longer than six months, and it must be possible for any outputs from the AI system being tested to be “effectively reversed and disregarded.” Furthermore, “the testing itself and the outcome of the testing in the real world conditions shall not have any negative effect on the subjects.” All personal data processed must be deleted “after the test is performed.”

Supervisory authorities must ensure that real world testing is carried out in accordance with the law via inspections and checks, though it is not mandated that they do so for every instance of testing. Supervisory authorities must also be informed when testing has finished, and of the results. Providers or prospective providers of the AI system being tested are liable for any damage caused during the tests.

All instances of real-world testing must be registered in the EU-wide database. However, there is a presumption of secrecy with regard to the public. Information will only be accessible to supervisory authorities and the Commission, “unless the prospective provider or provider has given consent for also making the information accessible the [sic] public.”[91]

Information on real world testing for these systems and purposes will likely never be made public. This is due to the transparency exemptions in place for the registration in the EU database of systems for (1) remote biometric identification, biometric categorisation and emotion recognition; and of (2) high-risk AI systems for law enforcement, migration, asylum and border control.

Furthermore, the information that has to be registered by providers or prospective providers regarding (1) and (2) is more limited than for other types of system. There is an exemption from the requirement to register “a summary of the main characteristics of the plan for testing in real-world conditions.”[92] This makes supervisory authorities’ work more complicated, as they will have to go through extra steps to obtain the relevant information (see below). Testing for the purposes of law enforcement is also exempt from the requirement to seek and obtain informed consent from test subjects, if doing so “would prevent the AI system from being tested.”

Even if it is the case that an AI system being tested in “real world conditions” is prohibited from having any negative effects on individuals, and that personal data processed for testing must be deleted as soon as the test has ended, these provisions add a new level of secrecy and opacity to the law enforcement use of AI systems. There is no reasonable justification for allowing law enforcement agencies to test AI systems on the public without their knowledge. Turning the techniques used to detect and investigate crime into state secrets simply increases police impunity.


2.3.5 Conformity assessment

Conformity assessment is the process by which an authority assesses whether a product, material, service or process complies with particular standards. Standards are described by the International Organization for Standardization (ISO) as “a formula that describes the best way of doing something.”[93] They govern all manner of things – components for machinery or vehicles, cybersecurity requirements, and food safety, amongst others.

The AI Act introduces obligations for the development of certain new standards that AI systems must comply with, “prior to their placing on the market or putting into service.”[94] The extensive reliance upon standards has been criticised by Corporate Europe Observatory: companies are lobbying hard to influence the AI Act standards, seeking to ensure that they reflect corporate preferences. There are also broader questions to be asked about relying on standards to define and guide issues concerning fundamental rights.[95]

Under the Act, the European Commission must issue “standardisation requests” for all the requirements related to high-risk AI systems.[96] Such requests would be sent to the European Committee for Standardisation (CEN), the European Committee for Electrotechnical Standardisation (CENELEC), or the European Telecommunications Standards Institute (ETSI).

These bodies would then be responsible for developing the standards in question. Developers or providers of high-risk AI systems would show conformity with those standards in order to demonstrate compliance with the Act. National conformity assessment bodies would, in some cases, be responsible for deciding whether a system is compliant or not. In the case of policing, migration or judicial agencies, however, data protection authorities would undertake the assessment procedure,[97] at both national and EU level.[98]

A standardisation request made by the European Commission may not be met. The Act notes four situations in which this could happen:

  • the request is not accepted by a standardisation organisation;
  • the standards are not provided within the required deadline;
  • the standards do not comply with fundamental rights requirements; or
  • the standards do not comply with the request.[99]

In such cases, the Commission is able to draw up implementing acts.[100] These are used to support the uniform implementation of EU law by member states.[101] These acts would contain “common specifications” serving the same purpose as standards. If a high-risk AI system does not comply with those specifications, however, the provider must adopt “technical solutions” that are “at least equivalent” to them.[102]

External oversight

There is a choice of conformity assessment procedures for providers of remote biometric identification, biometric categorisation, or emotion recognition systems that rely on either harmonised standards[103] or common specifications.[104] These are:

  • internal control, set out in Annex VI of the Act; or
  • assessment by a conformity assessment body, set out in Annex VII of the Act.

The internal control procedure is a self-assessment procedure. It means providers must establish a quality management system (meeting at least 13 different requirements set out in the Act[105]) and produce a range of technical documentation demonstrating compliance with the requirements for high-risk systems. They also have to verify that the system’s design and development, and its “post-market monitoring,”[106] are consistent with that documentation.[107]

Alternatively, providers can request that a conformity assessment body examine whether they comply with the relevant standards or specifications. In certain cases, they are obliged to do so: if standards have only been partially applied by the provider or do not exist at all; if common specifications have not been applied or are not available; or if a standard has been published with a restriction, in which case assessment is required only for the restricted part of the standard.

However, providers of high-risk AI systems for police, migration or judicial purposes are obliged to follow the self-assessment procedure,[108] meaning there is no external oversight of whether or how they conform with relevant standards and requirements. The Commission can change these rules, if the procedure is not deemed effective at preventing or minimising risks to health, safety or fundamental rights.[109]

Ongoing supervision

A high-risk system must undergo a new conformity assessment procedure (even if it has undergone one before) where there is a “substantial modification” to the system. This new assessment must take place even if the system is not to be further distributed, or will continue to be used by the current deployer.

However, there is an exemption to this for high-risk AI systems that “continue to learn after being placed on the market.” That learning shall not be considered as a “substantial modification,” provided that the system’s original conformity assessment and technical documentation set out how it would learn and change.

Urgent authorisation and deployment without authorisation

There are numerous exemptions to the conformity assessment procedures for high-risk AI systems. Market surveillance authorities – data protection authorities, in the case of policing, migration or judicial agencies – can authorise, for a limited period, the deployment of “specific high-risk AI systems… for exceptional reasons of public security or the protection of life and health of persons, environmental protection or the protection of key industrial and infrastructural assets.”[110] However, a full authorisation can only be issued if the authority determines that the system in question meets the Act’s requirements for high-risk AI systems.

As explained above, high-risk AI systems used for policing, migration or judicial purposes are exempt from conformity assessment procedures involving external oversight (with the exception of remote biometric identification, biometric categorisation or emotion recognition systems). It is unclear whether this means they can also self-authorise “urgent” deployments. However, as exceptions to the law must be interpreted strictly,[111] this seems unlikely. The Act also seems to indicate that urgent deployments of any type of system require a request to, and authorisation by, the oversight authority.[112]

When there is this type of “urgent” deployment of a high-risk AI system, conformity assessment procedures must be carried out “without undue delay.” An authorisation must be issued if the system complies with the relevant requirements.

A further exemption allows “law-enforcement authorities or civil protection authorities” to deploy “a specific high-risk AI system” without any kind of authorisation. If they do so, authorisation must be requested “during or after the use without undue delay.” If it is refused, “the use of the high-risk AI system shall be stopped with immediate effect and all the results and outputs of such use shall be immediately discarded.”[113] There is no equivalent provision for the procedure that allows deployment via urgent authorisation.

In either case, the Commission and EU member states must be informed of any authorisations issued in this way, and can object to them. However, they must do so within 15 days of being notified of the authorisation. If they do not object, the authorisation is to be considered justified. In the case of objections, the Commission must enter consultations with the member state in which the system has been deployed, after which it must issue a decision on whether the authorisation is justified or not. If deemed unjustified, the market surveillance authority has to withdraw the authorisation.

It is noteworthy that individuals or organisations cannot object to the “urgent” deployment of high-risk systems – this is a privilege reserved for state authorities or the European Commission.[114] A further observation can also be made. These provisions clearly benefit security agencies in a very direct way, by exempting them from requirements (however limited) to ensure the safety and fundamental rights compliance of high-risk AI systems. There is, however, more to it than this.

Like many other provisions in the Act, these articles contribute to a particular ‘imaginary’ of AI. The very fact that such exceptions exist implies that AI systems will be developed that are capable of dealing with “situation(s) of urgency for exceptional reasons of public security or in the case of specific, substantial and imminent threat to the life or physical safety of natural persons.”[115] From this angle, it would appear that the law itself seeks to further propel the hype that surrounds the development and deployment of AI.


2.3.6 Data protection

Purpose limitation

Purpose limitation is a fundamental data protection principle. It requires that personal data be “collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes.” There may be exceptions to this: for example, an individual can give their consent for their personal data to be processed for another purpose, and EU data protection law provides exemptions for “archiving purposes in the public interest, scientific or historical research purposes or statistical purposes.”[116]

The AI Act introduces further exceptions: personal data collected for one purpose can be used for developing, training and testing AI systems in a “regulatory sandbox,” if certain conditions are met:

  • the AI system being tested must be for “safeguarding substantial public interest by a public authority or other natural or legal person” in one or more of five areas, including “public safety” and the “efficiency and quality of public administration and public services”;
  • the personal data used to train the system must be needed to comply with one or more of the requirements for high-risk AI systems, and can only be used if “anonymised, synthetic or other non-personal data” will not suffice;
  • there must be monitoring mechanisms in place to identify and mitigate any risks to the data subjects;
  • it must be possible to halt the data processing, if necessary;
  • personal data must be stored separately from other data and be accessible only to authorised persons;
  • data can only be shared in accordance with EU law and any personal data generated in the sandbox cannot be shared further;
  • the processing of personal data in the sandbox cannot lead to measures or decisions affecting the data subjects;
  • there must be data security measures in place, and personal data must be deleted once use in the “sandbox” is over, or the data retention period comes to an end;
  • logs must be kept for the duration of participation in the sandbox, unless EU or national law permits otherwise;
  • the rationale and results must be kept, alongside other technical documentation relating to the testing process; and
  • a short summary of the project must be published on the website of the competent authorities.

These requirements also apply to law enforcement authorities undertaking tests in a “sandbox,” with the exception of the requirement to publish a short summary of the project online. This reinforces the general secrecy surrounding law enforcement use of AI allowed by the Act. Just as the public is not to be told if the police are testing an AI system on them, neither are they permitted to know when a system is being tested in a “regulatory sandbox.”


2.3.7 Oversight

Reporting of serious incidents

The Act includes provisions dealing with the reporting of serious incidents. A serious incident is defined as “an incident or malfunctioning of an AI system that directly or indirectly” causes:

  • the death of a person, or serious harm to a person’s health;
  • a serious and irreversible disruption of the management or operation of critical infrastructure;
  • the infringement of obligations under Union law intended to protect fundamental rights; or
  • serious harm to property or the environment.[117]

When a serious incident occurs, the Act requires “providers of high-risk AI systems placed on the Union market” to report it to the supervisory authority in the member state(s) where the incident occurred. A report must also be filed in case of “a widespread infringement,” though this term is not defined. After a report is filed, the provider must conduct an investigation and take corrective action to address the problem.

These requirements are limited to “providers of high-risk AI systems placed on the Union market.” There are separate, and more limited, requirements regarding high-risk systems that are “placed on the market or put into service by providers that are subject to Union legislative instruments laying down reporting obligations equivalent to those set out in this Regulation.” What those legislative instruments are is not explained, and this type of report is limited to serious incidents concerning “the infringement of obligations under Union law intended to protect fundamental rights.”[118]

On the face of it, the use of the phrase “incident or malfunctioning” would appear to exclude any serious incidents caused by the normal functioning of an AI system, though this depends on how the word “incident” is interpreted. It remains to be seen how these reporting systems work in practice. However, as there is no requirement for the Commission or any other supervisory authority to publish any information regarding serious incidents, the public may not get to hear much about it.

Specific supervisory authorities

In EU member states, most supervision of AI systems and their use will be carried out by market surveillance authorities. However, national data protection authorities will take on that role for high-risk systems for law enforcement, migration, border control, asylum, the administration of justice or democratic processes; and remote biometric identification, biometric categorisation and emotion recognition systems used for those purposes.[119] The European Data Protection Supervisor will take on this role in the case of EU institutions, bodies, offices and agencies.[120]

Given that data protection authorities at a national and EU level are already responsible for monitoring how law enforcement and other such authorities process personal data, including by using advanced technologies, this may well make sense. However, data protection authorities are also notoriously under-funded and lacking in resources. An August 2021 study by the European Data Protection Board (EDPB) found that 80% of national data protection authorities had insufficient funding to carry out their statutory tasks. The EDPB itself warned that it was “at risk of no longer being able to fulfil its legal duties.”[121]

It is obvious that whichever authority is tasked with supervising the use of AI systems will need sufficient resources to carry out those tasks. Failing to provide them only further adds to the possibilities for mistaken or malicious uses of dangerous technologies, and limits the opportunities for protecting rights and ensuring redress for affected groups and individuals. It may also be observed that politicians and officials who claim that the AI Act fully respects fundamental rights, whilst failing to provide resources to supervisory authorities, are essentially gaslighting the public.

Ability to prevent exchanges of information between supervisory authorities

The Act requires that the Commission, supervisory authorities and notified bodies exchange information to carry out their supervisory and monitoring tasks. When they do so, they must keep that information confidential, in order to protect:

  • intellectual property rights, confidential business information or trade secrets;
  • the implementation of the Act, in particular with regard to investigations, inspections and audits;
  • public and national security interests;
  • criminal or administrative proceedings; and
  • information that is classified under EU or national law.

Authorities granted a supervisory role by the Act must only request the information “strictly necessary” for undertaking their work and must put in place “effective cybersecurity measures” to protect that information.[122] A requirement to ensure confidentiality of “public and national security” already provides wide grounds to invoke exemptions to transparency, for example in response to freedom of information requests.

When it comes to biometric and emotion recognition systems, and high-risk AI systems used for law enforcement, migration, border control or asylum purposes, the Act contains more restrictive measures. In these cases, information cannot be exchanged between national authorities, or between national authorities and the Commission, “without prior consultation of the originating national competent authority and the deployer” of the system in question. Furthermore, any such exchange of information “shall not cover sensitive operational data”.[123] This gives policing and migration authorities the ability to restrict supervisory authorities’ access to information.

There are no further procedural requirements set out in the Act. It is not clear if, when or how competent authorities can refuse permission for monitoring authorities to exchange information. It is noteworthy that these restrictions apply even though it is made explicit that such exchanges of information cannot include “sensitive operational data.” As remarked above, the lack of a clear definition of this term gives the authorities further leeway to prevent supervisory authorities from obtaining information on the development and deployment of AI systems.

It also appears to contradict the legal powers of data protection authorities to undertake independent investigations. Where data subject rights are restricted, Article 17 of the Law Enforcement Directive provides that those rights may be exercised through the competent data protection authority.[124]

Control over access to technical documentation

Following on from the above, law enforcement, immigration or asylum authorities that provide high-risk AI systems for biometric identification or categorisation, emotion recognition, or for law enforcement, migration, border control or asylum purposes are given control over who has access to the technical documentation relating to those systems.

The Act requires that technical documentation must stay on the premises of the authority that is the provider. Supervisory authorities must be able to request and “immediately” obtain access to that documentation, or a copy of it, provided their staff have the relevant security clearance.[125] However, it is not clear what procedure or penalties might apply if “immediate” access to requested documentation is not provided. This may allow the erection of further barriers to effective monitoring and oversight of high-risk AI systems deployed by police and migration authorities.


2.4 Implementing the Act

The exceptions and exemptions outlined above all exist on paper. What remains to be seen is how they will be implemented in practice. There is an array of guidelines, implementing legislation, delegated acts,[126] standards and other requirements intended to clarify certain elements of the Act – for example, on prohibitions and definitions. Civil society organisations, including Statewatch, have issued calls to ensure those guidelines are centred on protecting fundamental rights.[127]

The European Commission has a central role in the implementation process, providing an obvious route for corporate and state lobbyists to push for their interests. Such lobbying is already taking place in international standardisation organisations, where “standard-setting is being used to implement [the AI Act’s] requirements related to fundamental rights, fairness, trustworthiness and bias.”[128] The European Parliament has also set up its own working group to monitor implementation of the Act.[129]

The law has led to questions from EU agencies themselves on how it should be interpreted. During a meeting of the EU Innovation Hub (section 4.1.2), Frontex asked how the law applies when “the Agency deploys AI technical equipment/capabilities that belong to a Member State?”

The border agency has also sought clarity on:

  • whether the Act applies if a member state passes it data generated by an AI system that is exempt from the law on national security grounds;
  • the use of “remote fingerprint acquisition” technology, one of many advanced biometric technologies that Frontex hopes to deploy in coming years;[130] and
  • how responsibility would be determined if an AI system jointly deployed with a member state breaches the law.[131]

The EU Asylum Agency has also raised questions:

Would EUAA be considered a deployer when an AI tool that is developed by a third party is shared with EU+ countries through a central EU platform?

Is the case of using AI for researching COI [country-of-origin information] considered high-risk? COI reports do assist the examination of asylum cases indirectly by informing the case officers about the situation in a country of origin. They are also quoted in asylum decisions.

If… speech-to-text tools [for transcribing asylum interviews] would be procured, would these be considered high-risk or an AI model with a systemic risk?[132]

In a report on AI and policing, Europol noted that the Act means AI systems already in use may need to be re-evaluated, to determine if and how they comply with the law.[133] The exemptions and exceptions in the law may well keep some of those systems out of reach of the more stringent rules in the AI Act.

As noted in section 4.1.3, EU police forces appear to have set up a specific working group on the implementation of the AI Act. If nothing else, the Act will create a substantial amount of work for lawyers, across the EU and beyond.

Indeed, the Act will undoubtedly be the subject of multiple court cases in the years to come, as governments, state agencies, businesses and individuals seek to challenge or clarify aspects of it. The extensive discrepancies between the Act and the EU’s data protection laws, highlighted in the sections above, are likely to be one topic under examination.

This will be a piecemeal and gradual process, especially as the Act will come into force in a step-by-step manner.[134] Many civil society organisations and lawyers across the EU will seek to ensure the Act is used and interpreted in a way that upholds fundamental rights, to the extent this is possible. Governments and industry may well seek the opposite.

Whatever the outcome of these efforts, the regulatory regime introduced by the Act will frame the use of artificial intelligence in the EU, and perhaps elsewhere in the world, for many years to come. In this context, it is vital to understand ongoing and emerging projects that seek to embed AI in policing, border, migration and criminal justice agencies. These developments are examined in the sections that follow.



Notes

[1] Council of the EU, ‘Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world’, 9 December 2023, https://www.consilium.europa.eu/en/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/

[2] Article 1(1), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#d1e1915-1-1

[3] Article 1(2), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#d1e1915-1-1

[4] Recitals (5) and (26), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689

[5] Article 7, AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_7

[6] Article 50, AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_50

[7] Chapter V, AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#cpt_V

[8] Article 46(2), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_46

[9] Used here as shorthand for policing, border control, immigration, asylum and criminal justice.

[10] Innovation Hub Team, ‘EU Innovation Hub for Internal Security – multi-annual planning of activities 2023-26’, Council doc. 5603/23, LIMITE, 16 February 2023, p.23, https://www.statewatch.org/media/4704/1335957-v1-eu_innovation_hub_for_internal_security_multi-annual_planning_of_activities_2023-2026_st05603_en-public.pdf

[11] Ibid.

[12] Article 20, AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_20

[13] Recital (6), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689

[14] Recital (9), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689

[15] Maria Maggiore, Leila Minano and Harold Schumann, ‘France spearheaded successful effort to dilute EU AI regulation’, EUobserver, 22 January 2025, https://euobserver.com/digital/ardc3193c4

[16] Plixavra Vogiatzoglou, ‘The AI Act National Security Exception’, Verfassungsblog, 9 December 2024, https://verfassungsblog.de/the-ai-act-national-security-exception/

[17] Article 2(1)(c), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#d1e1975-1-1

[18] Article 2(4), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#d1e1975-1-1

[19] Articles 37-38, Law Enforcement Directive, https://eur-lex.europa.eu/eli/dir/2016/680/oj/eng#cpt_V; Articles 48 and 50, EU data protection Regulation, https://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX:32018R1725#cpt_V

[20] ‘Interpol: multi-million dollar “predictive analytics” system under construction’, Statewatch, 21 September 2023, https://www.statewatch.org/news/2023/september/interpol-multi-million-dollar-predictive-analytics-system-under-construction/

[21] ‘Hart Attack: How DHS’s massive biometrics database will supercharge surveillance and threaten rights’, Surveillance Resistance Lab, January 2023, https://surveillanceresistancelab.org/resources/hart-attack-how-dhss-massive-biometrics-database-will-supercharge-surveillance-and-threaten-rights/

[22] For example, through the General Data Protection Regulation or the Regulation on data protection in EU institutions, agencies, offices and bodies.

[23] Giovanni De Gregorio and Simona Demkova, ‘The Constitutional Right to an Effective Remedy in the Digital Age: A Perspective from Europe’ Ch. Van Oirsouw et al., (eds.), European Yearbook of Constitutional Law, 31 January 2024, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4712096

[24] ‘Analysis: Ireland’s AI Act exemptions’, Irish Legal News, 15 January 2025, https://www.irishlegal.com/articles/analysis-irelands-ai-act-exemptions

[25] Article 2(8), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_2

[26] ‘Surveillance Technologies at European Borders: Evros’, Border Violence Monitoring Network, 1 October 2024, https://borderviolence.eu/app/uploads/Border-surveillance-in-Evros.pdf

[27] Article 111(1), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_111

[28] More information on those systems is available in Statewatch’s interactive map: ‘EU agencies and interoperable databases’, https://www.statewatch.org/eu-agencies-and-interoperable-databases/ 

[29] ‘Automated Suspicion: the EU’s new travel surveillance initiatives’, Statewatch, 13 July 2020, https://www.statewatch.org/automated-suspicion-the-eu-s-new-travel-surveillance-initiatives/

[30] Article 111(2), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_111

[31] Article 3(38), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_3

[32] ‘Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on the accounting of greenhouse gas emissions of transport services’, 11 July 2023, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52023PC0441

[33] In a law establishing an online platform to facilitate cooperation in judicial investigations, “operational data” is defined as “information and evidence processed… during the operational phase of a [joint investigation] to support cross-border investigations and to support prosecutions.” See: Regulation (EU) 2023/969 of the European Parliament and of the Council of 10 May 2023 establishing a collaboration platform to support the functioning of joint investigation teams and amending Regulation (EU) 2018/1726, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32023R0969

[34] Recital 51 of the GDPR equates the term “sensitive personal data” with special categories of personal data, which are defined in Article 9 of the GDPR, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32016R0679

[35] Proposal for a Regulation laying down harmonised rules on artificial intelligence, 21 April 2021, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206

[36] Article 5(d), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#d1e2812-1-1

[37] Protect Not Surveil, https://protectnotsurveil.eu/

[38] European Data Protection Supervisor, ‘EDPS comments to the AI Office’s consultation on the application of the definition of an AI system and the prohibited AI practices established in the AI Act launched by the European AI Office’, 19 December 2024, https://www.edps.europa.eu/data-protection/our-work/publications/formal-comments/2024-12-19-edps-ai-offices-consultation-application-definition-ai-system-and-prohibited-ai-practices-established-ai-act-launched-european-ai_en

[39] This is a new body established by the Act. See: ‘European AI Office’, https://digital-strategy.ec.europa.eu/en/policies/ai-office

[40] Article 5(g), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#d1e2812-1-1

[41] Article 5(g), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#d1e2812-1-1

[42] Council of the EU, ‘Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world’, 9 December 2023, https://www.consilium.europa.eu/en/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/

[43] “…esta prohibición no incluye el etiquetado o filtrado de conjuntos de datos biométricos adquiridos lícitamente, como imágenes, basado en datos biométricos ni la categorización de datos biométricos en el ámbito de la garantía del cumplimiento del Derecho.” This translates to: “…this prohibition does not include the labelling or filtering of sets of legally acquired biometric data, such as images, based on biometric data, nor the categorisation of biometric data within the area of law enforcement.”

[44] Recital (7), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689

[45] European Data Protection Supervisor, ‘EDPS comments to the AI Office’s consultation on the application of the definition of an AI system and the prohibited AI practices established in the AI Act launched by the European AI Office’, 19 December 2024, https://www.edps.europa.eu/data-protection/our-work/publications/formal-comments/2024-12-19-edps-ai-offices-consultation-application-definition-ai-system-and-prohibited-ai-practices-established-ai-act-launched-european-ai_en

[46] ‘TZ: Policie již téměř rok využívá analytický nástroj na rozpoznávání tváří. Podrobnosti jeho fungování tají’, Iuridicum Remedium, 12 July 2023, https://digitalnisvobody.cz/blog/2023/07/12/tz-policie-jiz-temer-rok-vyuziva-analyticky-nastroj-na-rozpoznavani-tvari-podrobnosti-jeho-fungovani-ale-pred-verejnosti-taji/

[47] ‘Danish Police Trials Facial Recognition Technology for Crime Investigation Amid Controversy’, en.365Nyt, 31 August 2024, https://en.365nyt.dk/2024/08/31/danish-police-trials-facial-recognition-technology-for-crime-investigation-amid-controversy/; ‘Police in Denmark to implement facial recognition technology to combat violent crimes’, euronews, 14 August 2024, https://www.euronews.com/next/2024/08/14/police-in-denmark-to-implement-facial-recognition-technology-to-combat-violent-crimes

[48] ‘Sweden: Government Bill to Allow Police Use of Facial Recognition and DNA Genealogy Cleared for Parliament’s Consideration’, Library of Congress, 9 October 2024, https://www.loc.gov/item/global-legal-monitor/2024-10-09/sweden-government-bill-to-allow-police-use-of-facial-recognition-and-dna-genealogy-cleared-for-parliaments-consideration/

[49] Svea Windwehr, ‘Germany Rushes to Expand Biometric Surveillance’, Electronic Frontier Foundation, 7 October 2024, https://www.eff.org/deeplinks/2024/10/germany-rushes-expand-biometric-surveillance

[50] According to a report of a meeting between MEPs and the European Commission on the implementation of the AI Act.

[51] ‘Reclaim Your Face’, https://reclaimyourface.eu

[52] European Commission, ‘Impact assessment accompanying the proposal for a Regulation laying down harmonised rules for artificial intelligence’, 21 April 2021, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=SWD:2021:0084:FIN

[53] Article 5(1)(h), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#d1e2812-1-1

[54] Annex II, AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#anx_II

[55] Article 5(5), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#d1e2812-1-1

[56] Abigail Opiah, ‘Swedish proposal tests AI Act’s live public facial recognition limits’, Biometric Update, 4 June 2024, https://www.biometricupdate.com/202406/swedish-proposal-tests-ai-acts-live-public-facial-recognition-limits

[57] ‘Biometric surveillance in the Czech Republic: the Ministry of the Interior is trying to circumvent the Artificial Intelligence Act’, EDRi, 9 October 2024, https://edri.org/our-work/biometric-surveillance-in-the-czech-republic-the-ministry-of-the-interior-is-trying-to-circumvent-the-artificial-intelligence-act/

[58] Article 5(2)(a) and (b), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#d1e2812-1-1

[59] Article 5(2), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#d1e2812-1-1

[60] European Court of Human Rights, ‘Case of Glukhin v. Russia’, 4 October 2023, https://hudoc.echr.coe.int/?i=001-225655

[61] European Data Protection Supervisor, ‘EDPS comments to the AI Office’s consultation on the application of the definition of an AI system and the prohibited AI practices established in the AI Act launched by the European AI Office’, 19 December 2024, https://www.edps.europa.eu/data-protection/our-work/publications/formal-comments/2024-12-19-edps-ai-offices-consultation-application-definition-ai-system-and-prohibited-ai-practices-established-ai-act-launched-european-ai_en

[62] Article 5(2), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#d1e2812-1-1

[63] Article 5(3), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#d1e2812-1-1

[64] Article 74(8), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_74

[65] European Data Protection Supervisor, ‘EDPS comments to the AI Office’s consultation on the application of the definition of an AI system and the prohibited AI practices established in the AI Act launched by the European AI Office’, 19 December 2024, https://www.edps.europa.eu/data-protection/our-work/publications/formal-comments/2024-12-19-edps-ai-offices-consultation-application-definition-ai-system-and-prohibited-ai-practices-established-ai-act-launched-european-ai_en

[66] Annex III, AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#d1e38-127-1

[67] Article 27(1), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_27

[68] Article 27(2), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_27

[69] ‘Comparative review of 10 FRIAs’, Algorithm Audit, September 2024, https://algorithmaudit.eu/knowledge-platform/knowledge-base/comparative_review_10_frias

[70] European Data Protection Supervisor, ‘Necessity & Proportionality’, undated, https://www.edps.europa.eu/data-protection/our-work/subjects/necessity-proportionality_en

[71] Article 27(4), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_27

[72] Article 27(3), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_27

[73] Recital (131), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689

[74] Article 6(4), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_6

[75] Innovation Hub Team, ‘EU Innovation Hub for Internal Security – multi-annual planning of activities 2023-26’, Council doc. 5603/23, LIMITE, 16 February 2023, p.23, https://www.statewatch.org/media/4704/1335957-v1-eu_innovation_hub_for_internal_security_multi-annual_planning_of_activities_2023-2026_st05603_en-public.pdf

[76] Tim Cushing, ‘Public Records Show Cops Are Obscuring Their Use Of Facial Recognition Tech In Criminal Cases’, Techdirt, 21 October 2024, https://www.techdirt.com/2024/10/21/public-records-show-cops-are-obscuring-their-use-of-facial-recognition-tech-in-criminal-cases/

[77] Article 26(11), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_26

[78] Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, https://eur-lex.europa.eu/eli/dir/2016/680/oj

[79] Article 13(3), Directive (EU) 2016/680, https://eur-lex.europa.eu/eli/dir/2016/680/oj

[80] Article 50(1), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_50

[81] Marvin L. Astrada and Scott B. Astrada, ‘Law, continuity and change: Revisiting the reasonable person within the demographic, sociocultural and political realities of the twenty-first century’, undated, https://rutgerspolicyjournal.org/jlpp/wp-content/uploads/sites/26/2017/05/Astrada.pdf

[82] Article 50(1), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_50

[83] Article 50(3), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_50

[84] Article 50(2), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_50

[85] Article 50(4), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_50

[86] Article 2(1)(g), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689

[87] Case C-817/19, 21 June 2022, paras. 194-195, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:62019CJ0817

[88] Case C-817/19, 21 June 2022, para. 210. See also: Evelien Brouwer, ‘Challenging Bias and Discrimination in Automated Border Decisions’, Verfassungsblog, 11 May 2023, https://verfassungsblog.de/pnr-border/

[89] Melanie Fink, ‘The Hidden Reach of the EU AI Act: Expanding the Scope of EU Public Power’, Verfassungsblog, 20 January 2025, https://verfassungsblog.de/the-hidden-reach-of-the-eu-ai-act/

[90] Article 60, AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_60

[91] Article 71(4), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_71

[92] Annex IX, AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#anx_IX

[93] International Organization for Standardization, ‘Standards’, undated, https://www.iso.org/standards.html

[94] Recital (123), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689

[95] ‘Bias baked in: How Big Tech sets its own AI standards’, Corporate Europe Observatory, 9 January 2025, https://corporateeurope.org/en/2025/01/bias-baked

[96] Article 40(2), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_40

[97] Article 43(1), AI Act: “…where the high-risk AI system is intended to be put into service by law enforcement, immigration or asylum authorities or by Union institutions, bodies, offices or agencies, the market surveillance authority referred to in Article 74(8) or (9), as applicable, shall act as a notified body.” Article 74(8) of the Act designates data protection, rather than market surveillance, authorities as responsible for overseeing use of high-risk systems for “law enforcement purposes, border management and justice and democracy.” In those cases: “Member States shall designate as market surveillance authorities for the purposes of this Regulation either the competent data protection supervisory authorities under Regulation (EU) 2016/679 [General Data Protection Regulation] or Directive (EU) 2016/680 [Directive on data protection in law enforcement], or any other authority designated pursuant to the same conditions laid down in Articles 41 to 44 of Directive (EU) 2016/680.”

[98] “…‘national competent authority’ means a notifying authority or a market surveillance authority; as regards AI systems put into service or used by Union institutions, agencies, offices and bodies, references to national competent authorities or market surveillance authorities in this Regulation shall be construed as references to the European Data Protection Supervisor.” See: Article 3(48), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_3

[99] Article 41(1), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_41

[100] Article 41(1), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_41

[101] European Commission, ‘Implementing and delegated acts’, undated, https://commission.europa.eu/law/law-making-process/adopting-eu-law/implementing-and-delegated-acts_en

[102] Article 41(5), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_41

[103] Article 40, AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_40

[104] Article 41, AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_41

[105] Article 17(1), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_17

[106] Annex VI, AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#anx_VI

[107] Annex VI, AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#anx_VI

[108] Article 43(2), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_43

[109] Article 43(6), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_43

[110] Article 46(1), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_46

[111] Court of Justice of the EU, ‘Ordre des barreaux francophones et germanophone and Others v Conseil des ministres’, Case C-718/19, 22 June 2021, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A62019CJ0718

[112] Article 46(1), AI Act: “By way of derogation from Article 43 and upon a duly justified request, any market surveillance authority may authorise the placing on the market or the putting into service of specific high-risk AI systems within the territory of the Member State concerned”.

[113] Article 46(2), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_46

[114] Article 46, AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_46

[115] Article 46(2), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_46

[116] Article 89, GDPR, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32016R0679#d1e6494-1-1

[117] Article 3(49), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#d1e2090-1-1

[118] Article 73(9), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#d1e7140-1-1

[119] Article 74(8), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_74

[120] Article 74(9), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_74

[121] ‘Data protection: 80% of national authorities underfunded, EU bodies “unable to fulfil legal duties”’, Statewatch, 30 September 2022, https://www.statewatch.org/news/2022/september/data-protection-80-of-national-authorities-underfunded-eu-bodies-unable-to-fulfil-legal-duties/

[122] Article 78(1), (2), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#d1e7427-1-1

[123] Article 78(3), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#d1e7427-1-1

[124] Article 17, Law Enforcement Directive. See also: Case C-333/22, 16 November 2023, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:62022CA0333

[125] Article 78(3), AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#d1e7427-1-1

[126] Article 97, AI Act, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689#art_97

[127] ‘EU: Human rights must be “central guiding basis” for new AI guidelines’, Statewatch, 16 January 2025, https://www.statewatch.org/news/2025/january/eu-human-rights-must-be-central-guiding-basis-for-new-ai-guidelines/

[128] ‘Setting the rules of their own game: how Big Tech is shaping AI standards’, Corporate Europe Observatory, 9 January 2025, https://corporateeurope.org/en/2025/01/setting-rules-their-own-game-how-big-tech-shaping-ai-standards

[129] European Parliament, ‘Working Group on the implementation and enforcement of the AI Act’, undated, https://www.europarl.europa.eu/committees/en/working-group-on-the-implementation-and-/product-details/20241113CDT13823

[130] ‘Europe's techno-borders’, Statewatch/EuroMed Rights, 10 July 2023, https://www.statewatch.org/publications/reports-and-books/europe-s-techno-borders/

[131] Frontex, ‘Questions – the AI Act and the EBCG’, undated, https://www.statewatch.org/media/4752/doc-14_edoc-1409502-v1-frontex_ai_act_-_questions_jun_2024_full-access.pdf 

[132] EU Asylum Agency, ‘Key Questions on Implementing the EU AI Act - Draft contribution to DG HOME’, 25 June 2024, https://www.statewatch.org/media/4753/doc-15_edoc-1401527-v1-eu_innovation_hub_ai_cluster_-_euaa_questions_ai_act_implementation_full-access.pdf

[133] Europol, ‘AI and policing’, pp.43-44, https://www.europol.europa.eu/cms/sites/default/files/documents/AI-and-policing.pdf

[134] European AI Office, ‘Implementation Timeline’, updated 1 August 2024, https://artificialintelligenceact.eu/implementation-timeline/

