UK government's AI plans must uphold fundamental rights and democratic values


Thirty civil society organisations, including Statewatch, have published a joint statement calling for the UK government to ensure that its approach to artificial intelligence upholds fundamental rights and democratic values.

Support our work: become a Friend of Statewatch from as little as £1/€1 per month.


Image: Tomizak, CC BY-ND 2.0


The statement was coordinated by the Public Law Project and has been covered by the BBC.

The Government’s approach to regulation of artificial intelligence (AI), as set out in its AI regulation white paper (pdf), misses a vital opportunity to ensure that fundamental rights and democratic values are protected.

In particular, it fails to ensure that adequate safeguards and standards are in place for use of AI by public authorities.

The use of AI in public decision-making offers the promise of greater efficiency and accuracy.

However, there is also a risk of direct or indirect discrimination, and the exacerbation of existing inequalities. Regulation is essential to ensure that AI works for the public good.

As civil society groups who represent individuals and communities impacted by government use of automation and AI across the UK, we urge the UK Government to develop and implement AI regulation at minimum in line with the following principles:

  1. Transparency must be mandatory
  2. There must be clear mechanisms for accountability at every stage
  3. The public should be consulted about new automated decision-making (ADM) tools before they are deployed by government
  4. There must be a specialist regulator to enforce the regulatory regime and ensure people can seek redress when things go wrong
  5. Uses of AI that threaten fundamental rights should be prohibited

These principles will require obligations in statute, which will need to build upon and work with existing data protection safeguards and our human rights framework. Instead of promoting existing standards, the Data Protection and Digital Information (No 2) Bill is weakening them, and threats to leave the European Convention on Human Rights – and legislation which disapplies parts of the Human Rights Act – are putting our human rights framework at risk. Effective AI regulation must strengthen, rather than undermine, existing protections.

  1. Transparency must be mandatory

Effective regulation of how public authorities use AI must have mandatory transparency as its starting point. Individuals and communities whose lives are impacted by AI should know when they are being subject to ADM, and how those decisions are made. Without transparency, individuals and parliamentarians cannot hold decision-makers to account when AI systems produce harmful or discriminatory outcomes. Indeed without such transparency, the Government's own stated objective of increasing public trust in a regulatory framework for AI is unachievable.1

When it comes to transparency, it is not enough for compliance to be optional. Transparency requirements must be in primary legislation, rather than in guidance. Wherever an ADM tool is being used to make, or support, decisions which have a legal, or similarly significant, effect on someone, requirements should include:

  • A statutory duty on the public body to inform the person subject to the decision that ADM has been used, and how it is being used.
  • Mandatory publication of the tool on a register of public use of ADM systems - the current model of optional compliance with the Government’s Algorithmic Transparency Recording Standard is insufficient.
  • A statutory duty to publish a risk assessment (including the data protection, equality, human and child rights impacts) of the tool and measures of impact post-deployment.

  2. There must be clear mechanisms for accountability at every stage

Decision-making which has significant implications for people and which may affect their rights should be undertaken with a ‘human meaningfully in the loop’. More research is required to understand the impact of the use of AI on decision-making by officials, and to determine what meaningful human intervention looks like. Mere rubber-stamping of algorithmic outputs will not lead to proper protection and accountability.

There must be clear division of responsibility between those who develop, own and deploy AI tools, in order to facilitate effective protection and accountability. Those responsible for developing, testing and using AI must be subject to clear statutory obligations – for example, to ensure that at each stage, the necessary checks for and safeguards against discriminatory outcomes have been put in place.

  3. The public should be consulted about new ADM tools before they are deployed by government

Those who will be affected by the use of a new tool - as well as academics and civil society more broadly - should have the chance to participate in that tool’s design and deployment. Alongside robust testing, this would help identify risks and support effective design. It would also build public trust around use of ADM by government and facilitate consensus-building about the role of AI in our society.

  4. There must be a specialist regulator to enforce the regulatory regime and ensure people can seek redress when things go wrong

Individuals and communities who are adversely affected by ADM must have access to quick, accessible, and effective avenues of redress. The existing patchwork of regulatory bodies lacks statutory powers and financial resources. Given the specificity and complexity of this domain, an independent expert regulator is required. This regulator needs to be adequately resourced and given the right tools to enforce the regulatory regime, including powers to proactively audit public ADM tools and their operation.

  5. Uses of AI that threaten fundamental rights should be prohibited

As the white paper itself recognises, AI presents a serious risk to fundamental rights in a number of contexts, including the rights to privacy and non-discrimination.2 Furthermore, "the patchwork of legal frameworks that currently regulate some uses of AI may not sufficiently address the risks that AI can pose".3 It is therefore critical that any proposed AI-specific regulation addresses those risks, including by prohibiting certain uses of AI which pose an unjustified risk to fundamental rights. Such prohibitions must be set out in primary legislation, in order that they are subject to a democratic process. Other jurisdictions (including the EU and the US) are beginning to do so. By failing to prohibit certain uses of AI in law, the UK is falling behind in ensuring adequate protections against the risks of AI, and in providing clarity for those who use and are affected by it.

Signatures include:

Public Law Project

Liberty

Big Brother Watch

Just Algorithms Action Group (JAAG)

Work Rights Centre

the3million

Migrants' Rights Network

Helen Mountfield KC

Monika Sobiecki, Partner, Bindmans LLP

Dr Derya Ozkul, Senior Research Fellow, Refugee Studies Centre, University of Oxford

Child Poverty Action Group

Open Rights Group

Louise Hooper, Garden Court Chambers

Dr Oliver Butler, Assistant Professor in Law, University of Nottingham

Birgit Schippers, University of Strathclyde

Connected by Data

Professor Joe Tomlinson, University of York

Welsh Refugee Council

Association of Visitors to Immigration Detainees (AVID)

Statewatch

Dave Weaver, Chair of Operation Black Vote

Lee Jasper, Blaksox

Sampson Low, Head of Policy, UNISON

Shoaib M Khan

Tom Brake, Director, Unlock Democracy

Asylum Link Merseyside

Fair Trials

Clare Moody, Co-CEO, Equally Ours

Isobel Ingham-Barrow, CEO, Community Policy Forum

Jim Fitzgerald, Director, Equal Rights Trust


Further reading

17 May 2023

Open Letter to the Spanish Presidency of the Council of the European Union: Ensuring the protection of fundamental rights on the AI Act

With the European Parliament and Council of the EU heading for secret trilogue negotiations on the Artificial Intelligence Act, an open letter signed by 61 organisations - including Statewatch - calls on the Spanish Presidency of the Council to make amendments to the proposal that will ensure the protection of fundamental rights.

07 March 2023

France: Proposed Olympic surveillance measures violate international human rights law

Civil society public letter on the proposed French law on the 2024 Olympic and Paralympic Games condemns a legal proposal to deploy algorithmic surveillance cameras in public spaces. The law would make France the first EU country to explicitly legalise such practices, violate international human rights law by contravening the principles of necessity and proportionality, and pose unacceptable risks to fundamental rights, such as the right to privacy, the freedom of assembly and association, and the right to non-discrimination.

02 March 2023

New technologies having devastating impact on rights in counter-terrorism policy, says UN Special Rapporteur

"New technologies, particularly digital technologies, are transforming the ways in which human rights are impeded and violated around the world," says a damning new report by the Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism, Fionnuala Ní Aoláin. The report "addresses the intersection of counter-terrorism and preventing and countering violent extremism with the use of new technologies," and condemns "the elevation of blinkered security thinking that has accompanied a particularly restrictive approach to countering terrorism".

 
