EU: Artificial Intelligence Act: Council aims to simplify use of mass biometric surveillance by law enforcement

The Council of the EU wants to make it possible for private actors to operate mass biometric surveillance systems on behalf of police forces, and intends to extend the purposes for which such systems can be used under the EU's proposed Artificial Intelligence Act.


The plans are outlined in a progress report on the discussions so far on the Artificial Intelligence Act, recently circulated by the Slovenian Presidency. With regard to justice and home affairs issues, the document says:

"Concerning the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces by law enforcement authorities, it has been clarified that such systems could also be used by other actors, acting on behalf of law enforcement authorities." [emphasis added in all quotes unless otherwise indicated]

Furthermore:

"...the objectives for which law enforcement should be allowed to use ‘real-time’ remote biometric identification, as well as the related authorisation process, have been extended."

This is in line with previous comments circulated in the Council. The Presidency's compromise text (circulated amongst national delegations in the Council today, 29 November) says:

"(d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces by law enforcement authorities or on their behalf for the purpose of law enforcement [is prohibited], unless and in as far as such use is strictly necessary for one of the following objectives:

(i) the targeted search for specific potential victims of crime, including missing children;

(ii) the prevention of a specific, substantial and imminent threat to the critical infrastructure, life, health or physical safety of natural persons or of a terrorist attack;

(iii) the detection, localisation, identification or prosecution of a perpetrator, or suspect or convict of a criminal offence referred to in Article 2(2) of Council Framework Decision 2002/584/JHA and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, as determined by the law of that Member State." [emphasis and strikethrough in original]

With regard to the classification of "high risk AI systems," the Presidency's note says:

"As regards high risk use cases in the area of law enforcement, crime analytics has been removed from the list, and the range of systems for detection of deep fakes falling under the high risk category has been narrowed down."

According to the Presidency, issues that still require further discussion amongst the member states are:

  • Requirements for high-risk AI systems: "To make it less burdensome for businesses to comply, some practical guidance on how to meet them would need to be provided"
  • Responsibilities of various actors in the AI value chain: "it may be relevant to re-evaluate the allocation of responsibilities and roles, in order to better reflect the reality of designing an AI system, putting in [sic] on the market or operating it."
  • Compliance and enforcement: "Some concerns were also raised that the proposed AIA sets out an overly complex compliance framework, which will create significant administrative burden and costs for businesses"

