EU: Council sets the scene to weaken proposed safeguards on artificial intelligence and internal security

National delegations in the Council of the EU are starting to discuss how proposed new legislation governing artificial intelligence should be interpreted from "an internal security perspective," with the Portuguese Presidency highlighting that "limitations and safeguards should be in balance with the possibilities of law enforcement to use and develop AI systems in the future, in line with the rest of the society."


See: NOTE from: Presidency to: Delegations: High risk AI applications: internal security outlook - Exchange of views (Council document 8515/21, LIMITE, 12 May 2021, pdf)


The paper is intended to serve as a basis for an "exchange of views" amongst national ministers at the June meeting of the Justice and Home Affairs Council, the first high-level Council discussion on the Regulation on artificial intelligence published by the European Commission last month.

The provisions covering the use of high-risk AI applications and the use of remote biometric surveillance systems are certainly not as strong as they could be, but the Presidency's paper suggests they are already too much for interior ministries.

The Presidency's paper includes the following:

"...several Member States are calling for strategic autonomy and digital leadership of the EU, and reminding of the need to strike a reasonable balance between inherent risks of AI products and their use, in particular on fundamental rights and freedoms of individuals guaranteed by the Charter and, at the same time, new opportunities for innovation... the countries reminded that "serious risks cannot solely be determined by the sector and application in which the AI application is used" since there is a risk that this kind of an approach would likely categorise too much AI as serious risk. Instead, those Member States consider that the risk assessment should be qualified by both the potential impact and the probability of the risks."

Furthermore:

"Prohibition of (the use of) “real-time” remote identification systems

This sensitive balance is highlighted in article 5 which prohibits, on fundamental rights grounds, certain AI systems (manipulation of human behaviour; exploitation of information to target vulnerabilities; and social scoring). By consequence, other AI systems may be used, under certain conditions, according to a proposed set of classes. Whilst AI systems for “real-time” and “post” remote biometric identification (RBI) of natural persons are classified as high-risk systems in article 6(2) and Annex III, and thus usable as long as the ensuing requirements are followed, the use of “real-time” RBI systems in public spaces for the purposes of law enforcement, such as the use of “real-time” facial recognition tools, would be prohibited as a principle, due to the heightened risks for the rights and freedoms of the persons concerned."

However, there are exceptions, which fall into three categories:

"...situations that involve the search for specific potential victims of crime (e.g. a missing child case); prevention of a specific, substantial and eminent threat to the life or physical safety of natural persons or of a terrorist attack; and the detection, localisation, identification or prosecution of perpetrators or suspects of the criminal offences referred to in Council Framework Decision 2002/584/JHA. Furthermore, each individual use would be subject to a prior authorisation granted by a judicial authority or by an independent administrative authority of the relevant Member State, unless the case were categorised as urgent.

It will be important to assess how proportionate banning the use of RBI systems for law enforcement purposes would be, and how well it would respond to the risks evaluated to be inherent to this specific use, especially when uses for other purposes are not considered as problematic. It is also essential to evaluate how well the exceptions to the prohibition respond to realistic situations where the importance of the substantial public interest, such as a missing child case, can be seen to outweigh the risks inherent to the use."

"High-risk applications for law enforcement

In addition, it is envisaged that the degree of interference with the fundamental rights serves as one of the criteria to assess the potential harm that an AI system could cause, to qualify it as a high-risk system. According to the draft article 6(2), stand-alone high-risk AI systems are listed in Annex III. A variety of law enforcement tools used for example for risk assessment, polygraphs, detection of deep fakes, evaluation of reliability of evidence, prediction of the occurrence or reoccurrence of a criminal offence, profiling and crime analytics is listed as high-risk. Where an AI system is deemed high-risk, providers and users (together: operators) would have to follow an extensive range of obligations."

EU agencies will be covered by the rules:

"It is important to note that certain systems and tools of the JHA Agencies would also fall in the scope of the proposed Regulation and the categorisation of certain systems and tools as high-risk will thus also affect them, for example Europol in relation to certain crime analytics tools, or FRONTEX in the border security context."

The Presidency is also keen not to hinder any cosy relationships between the private and public sectors:

"The detailed implications of such a range of law enforcement tools, including some of those used at the JHA Agencies, becoming listed as high-risk should be closely assessed. It will be particularly important to evaluate whether these requirements covering so many types of essential law enforcement systems could turn into an obstacle that, in practice, may prevent or at least render more difficult private sector involvement in the innovation and development of these tools in the future."

The document also notes potential implications for EU databases and information-processing systems already in use or in development, presumably, although this is not explicitly stated, in relation to the profiling systems being developed for use against travellers to the EU:

"The temporal aspect of the proposal is also relevant. AI applications to be listed as high-risk currently in use by law enforcement authorities, or in use by the date of application, would not be captured in the scope of the proposal. In relation to AI systems that are components of large-scale IT systems in the JHA area managed by eu-LISA, the date of application is one year after the general date of application, unless there are significant changes to the systems. Though the implications for current or mid-term use and development of those systems or their components would be limited, it is highly likely that any new developments in the overall JHA information architecture would need to be evaluated from a different perspective."

Finally:

"Conclusions

The provisions on the prohibitions (article 5) and on the classification of certain AI systems as high-risk and the ensuing requirements (TITLE III) are critical for the protection of fundamental rights, but these limitations and safeguards should be in balance with the possibilities of law enforcement to use and develop AI systems in the future, in line with the rest of the society. The objective should be to equip law enforcement authorities with appropriate modern tools to ensure the security of citizens, with applicable safeguards in place to respect their fundamental rights and freedoms."



Our work is only possible with your support.
Become a Friend of Statewatch from as little as £1/€1 per month.

 
