11 February 2021
A new report by the International Network of Civil Liberties Organisations looks at the use and abuse of facial recognition technology by states across the globe, providing detailed case studies from the Americas, Africa, Asia, Australia and Europe.
The text that follows is the introduction to the report. You can read the full report here (link to pdf).
From Delhi to Detroit, Budapest to Bogotá, Facial Recognition Technology (FRT) is being rapidly deployed in public and private spaces across the world.
As of 2019, 64 out of 176 countries were using facial recognition surveillance systems. In the US alone, more than 50 per cent of all American adults were in a police facial recognition database as of 2016. Law enforcement agencies say they use FRT to assist criminal investigations. For example, the FBI in the US has testified that FRT “produces a potential investigative lead”.
Traditionally, FRT surveillance systems work by locating one or more faces in a moving or still image from a camera before extracting unique facial features from that image. The system then runs that image, without consent, against an existing database or ‘watch list’ of images derived from police mugshot databases in pursuit of a match. Other FRT systems can examine demographic trends or carry out sentiment analysis by scanning crowds, again without consent.
These systems have well-documented ethnic, racial and gender biases against people of colour and women. This means that image-matching FRT systems used by law enforcement agencies are more likely to misidentify people of colour and women than white men. Such inaccuracies were illustrated in 2019, when the UK Metropolitan Police’s FRT system was found to have an error rate of 81 per cent. One can only contemplate the grave implications of such inaccuracies, and the consequent chilling effect on the right to protest, when one considers the indiscriminate screening of crowds during protests. For example, police in India used FRT alongside driving licence and voter identity databases to ‘identify’ 1,900 protesters during riots in Delhi in February 2020.
But the technology is always improving, and serious ethical issues arise no matter how accurate it becomes. One need only consider how surveillance firm Hikvision has come under sustained criticism for allegedly providing FRT equipment in Xinjiang, China, where Uighur Muslims are being forcibly held in detention centres. Just because a tool is accurate does not mean it is ethical.
In June 2020, the Association for Computing Machinery’s US Technology Policy Committee found these biases to be “scientifically and socially unacceptable”. It found they compromise individuals’ fundamental human and legal rights to privacy, employment, justice and personal liberty. The group called for all uses of FRT to be suspended immediately, saying it can cause “profound injury” to the lives, livelihoods and fundamental rights of individuals, particularly the most vulnerable in society.
Pushback against FRT occurred elsewhere in the world in 2020. In the UK, the Court of Appeal found that the use of automated FRT by the South Wales Police was unlawful. In Canada, Clearview AI, which scrapes images from social media sites, builds a database, and offers clients, including law enforcement agencies, access to that database, withdrew from the country after the Privacy Commissioner launched a probe into police use of the technology. In Moscow, protesters lodged a complaint with the European Court of Human Rights over Russia’s use of FRT at protests; and in Israel, refusals by the Israel Police and Israel Defence Forces to reveal their use of FRT in both Israel and the West Bank/Occupied Palestinian Territories have been met with resistance from a civil rights group.
This report focuses on the multiple ways in which the growing use of FRT affects the everyday lives of citizens across 13 countries in the Americas, Africa, Europe, Asia and Australia. These stories from 13 member organisations of the International Network of Civil Liberties Organisations (INCLO) outline how this surveillance can harmfully discriminate and infringe on a plethora of rights including people’s right to privacy and their freedoms of expression, and association and assembly.
Each story is unique to its member country but, considered together, they reveal how this harmful surveillance has become pervasive and entrenched in private and public spheres across the world. They also collectively illustrate the need for public, democratic debate about the use of this technology, and for robust laws to safeguard citizens against its misuse.
Statewatch does not have a corporate view, nor does it seek to create one, the views expressed are those of the author. Statewatch is not responsible for the content of external websites and inclusion of a link does not constitute an endorsement. Registered UK charity number: 1154784. Registered UK company number: 08480724. Registered company name: The Libertarian Research & Education Trust. Registered office: MayDay Rooms, 88 Fleet Street, London EC4Y 1DH. © Statewatch ISSN 1756-851X. Personal usage as private individuals "fair dealing" is allowed. We also welcome links to material on our site. Usage by those working for organisations is allowed only if the organisation holds an appropriate licence from the relevant reprographic rights organisation (eg: Copyright Licensing Agency in the UK) with such usage being subject to the terms and conditions of that licence and to local copyright law.