12 July 2023
Secret negotiations between the Council of the EU, European Parliament and European Commission on the Artificial Intelligence Act have begun, more than two years after the legislation was proposed. A statement signed by more than 150 civil society organisations, including Statewatch, calls for fundamental rights to be put at the centre of the talks.
The statement is the latest in a series of calls from a wide range of civil society organisations for legislators to ensure that the AI Act prioritises human rights over corporate profits and state power.
While the Parliament has included a number of protections in its negotiating position, the Council is likely to push back against these in negotiations. For example, the Council has consistently pushed to loosen restrictions on police uses of AI while increasing secrecy around them.
The statement - the full text of which is copied below - notes:
"Without strong regulation, companies and governments will continue to use AI systems that exacerbate mass surveillance, structural discrimination, centralised power of large technology companies, unaccountable public decision-making and environmental damage."
The fact that negotiations will now take place in secret adds an extra barrier for civil society organisations seeking to influence the talks in defence of human rights.
Full text of the statement
EU Trilogues: The AI Act must protect people’s rights
A civil society statement on fundamental rights in the EU Artificial Intelligence Act
As the European Union institutions begin trilogue negotiations, civil society calls on them to ensure that the Artificial Intelligence Act (AI Act) puts people and fundamental rights first.
In Europe and around the world, AI systems are used to monitor and control us in public spaces, predict our likelihood of future criminality, facilitate violations of the right to claim asylum, predict our emotions and categorise us, and to make crucial decisions that determine our access to public services, welfare, education and employment.
Without strong regulation, companies and governments will continue to use AI systems that exacerbate mass surveillance, structural discrimination, centralised power of large technology companies, unaccountable public decision-making and environmental damage.
We call on EU institutions to ensure that AI development and use are accountable and publicly transparent, and that people are empowered to challenge harms:
1. Empower affected people with a framework of accountability, transparency, accessibility and redress
It is crucial that the EU AI Act empowers people and public interest actors to understand, identify and challenge AI systems, and to seek redress when their use exacerbates harms or violates fundamental rights. To do this, the AI Act must establish a framework of accountability, transparency, accessibility and redress. This must include:
2. Draw limits on harmful and discriminatory surveillance by national security, law enforcement and migration authorities
Increasingly, AI systems are developed and deployed for harmful and discriminatory forms of state surveillance. Such systems disproportionately target already marginalised communities, undermine legal and procedural rights, and contribute to mass surveillance. When AI systems are deployed in the context of law enforcement, security and migration control, there is an even greater risk of harm, and of violations of fundamental rights and the rule of law. To maintain public oversight and prevent harm, the EU AI Act must include:
3. Push back on Big Tech lobbying: remove loopholes that undermine the regulation
The EU AI Act must set clear and legally certain standards of application if the legislation is to be effectively enforced. The legislation must uphold an objective process to determine which systems are high-risk, and remove any ‘additional layer’ added to the high-risk classification process. Such a layer would allow AI developers, without accountability or oversight, to decide whether or not their systems pose a ‘significant’ enough risk to warrant legal scrutiny under the Regulation. A discretionary risk classification process risks undermining the entire AI Act: it would shift the legislation towards self-regulation, pose insurmountable challenges for enforcement and harmonisation, and incentivise larger companies to under-classify their own AI systems.
Negotiators of the AI Act must not give in to lobbying efforts of large tech companies seeking to circumvent regulation for financial interest. The EU AI Act must:
1. European Digital Rights (EDRi)
2. Access Now
3. Algorithm Watch
4. Amnesty International
5. Bits of Freedom
6. Electronic Frontier Norway (EFN)
7. European Center for Not-for-Profit Law (ECNL)
8. European Disability Forum (EDF)
9. Fair Trials
10. Hermes Center
11. Irish Council for Civil Liberties (ICCL)
12. Panoptykon Foundation
13. Platform for International Cooperation on the Rights of Undocumented Migrants (PICUM)
14. Academia Cidadã - Citizenship Academy
15. Africa Solidarity Centre Ireland
18. All Faiths and None
19. All Out
20. Anna Henga
21. Anticorruption Center
22. ARSIS - Association of the Social Support of Youth
23. ARTICLE 19
24. Asociación Por Ti Mujer
26. Association for Juridical Studies on Immigration (ASGI)
27. Association Konekt
28. ASTI asbl - Luxembourg
30. Austria human rights League
32. Balkan Civil Society Development Network
33. Bulgarian center for Not-for-Profit Law (BCNL)
34. Bürgerrechte & Polizei/CILIP, Germany
35. Canadian Civil Liberties Association
36. Charity & Security Network
37. Citizen D / Državljan D
38. Civil Liberties Union for Europe
39. Civil Society Advocates
40. Coalizione Italiana Libertà e Diritti civili
41. Comisión General Justicia y Paz de España
42. Commission Justice et Paix Luxembourg
43. Controle Alt Delete
44. Corporate Europe Observatory (CEO)
45. D64 - Zentrum für digitalen Fortschritt
46. D64 - Zentrum für Digitalen Fortschritt e. V.
47. DanChurchAid (DCA)
48. Danes je nov dan, Inštitut za druga vprašanja
49. Data Privacy Brasil
50. Data Privacy Brasil Research Association
51. Defend Democracy
52. Democracy Development Foundation
53. Digital Security Lab Ukraine
54. Digital Society, Switzerland
56. Digitale Gesellschaft
58. Diotima Centre for Gender Rights & Equality
60. epicenter.works - for digital rights
61. Equinox Initiative for Racial Justice
62. Estonian Human Rights Centre
64. EuroMed Rights
65. European Anti-Poverty Network (EAPN)
66. European Center for Human Rights
67. European Center for Not-for-Profit Law
68. European Civic Forum
69. European Movement Italy
70. European Network Against Racism (ENAR)
71. European Network on Statelessness
72. European Sex Workers Rights Alliance (ESWA)
73. Fair Vote
74. FEANTSA, European Federation of National Organisations Working with the Homeless
75. Free Press Unlimited
76. Fundación Secretariado Gitano
78. Greek Forum of Migrants
79. Greek Forum of Refugees
80. Health Action International
82. Homo Digitalis
83. horizontl Collaborative
84. Human Rights Watch
85. I Have Rights
86. IDAY-Liberia Coalition Inc.
87. ILGA-Europe (the European region of the International Lesbian, Gay, Bisexual, Trans and Intersex Association)
89. Initiative Center to Support Social Action "Ednannia"
90. Institute for Strategic Dialogue (ISD)
91. International Commission of Jurists
92. International Rehabilitation Council for Torture victims
94. Ivorian Community of Greece
95. Kif Kif vzw
96. KOK - German NGO Network against Trafficking in Human Beings
98. Kosovar Civil Society Foundation (KCSF)
99. La Strada International
101. LDH (Ligue des droits de l'Homme)
102. Legal Centre Lesvos
104. Ligali / IDPAD (Hackney)
105. Ligue des droits humains, Belgium
106. LOAD e.V.
107. Maison de l'Europe de Paris
108. Metamorphosis Foundation
109. Migrant Tales
110. Migration Tech Monitor
112. Mobile Info Team
113. Moje Państwo Foundation
114. Moomken organization for Awareness and Media
115. National Campaign for Sustainable Development Nepal
116. National Network for Civil Society (BBE)
117. National old folks of Liberia.com
119. Observatorio Trabajo, Algoritmo y Sociedad
120. Open Knowledge Foundation Germany
121. Partners Albania for Change and Development
123. Privacy First
124. Privacy International
125. Privacy Network
126. Promo-LEX Association
127. Prostitution Information Center (PIC)
128. Protection International
129. Public Institution Roma Community Centre
130. Racism and Technology Center
131. Red en Defensa de los Derechos Digitales
132. Red Española de Inmigración y Ayuda al Refugiado
133. Refugee Law Lab, York University
135. SHARE Foundation
136. SOLIDAR & SOLIDAR Foundation
138. Stichting LOS
139. Superbloom (previously known as Simply Secure)
140. SUPERRR Lab
141. SwitchMED - Maghweb
143. TAMPEP European Network for the Promotion of Rights and Health among Migrant Sex Workers.
144. TEDIC - Paraguay
145. The Border Violence Monitoring Network
146. The Good Lobby
147. Transparency International
149. WeMove Europe
The European Parliament, the Council of the European Union and the European Commission engage in inter-institutional negotiations, known as ‘trilogues’, to reach a provisional agreement on a legislative proposal that is acceptable to both the Parliament and the Council.