Open Letter to the Spanish Presidency of the Council of the European Union: Ensuring the protection of fundamental rights on the AI Act

With the European Parliament and Council of the EU heading for secret trilogue negotiations on the Artificial Intelligence Act, an open letter signed by 61 organisations - including Statewatch - calls on the Spanish Presidency of the Council to make amendments to the proposal that will ensure the protection of fundamental rights.



Image: La Moncloa - Gobierno de España, CC BY-NC-ND 2.0

The letter was drafted and coordinated by a group of organisations based in Spain: Lafede.cat; Algorace; Fundación Eticas; CIVIO; Observatorio Trabajo, Algoritmos y Sociedad; Algorights; Institut de Drets Humans de Catalunya; and CECU.


Madrid, May 17th, 2023

We, the undersigned organizations, are writing to draw your attention to a number of serious deficiencies in the Artificial Intelligence Act (AI Act) (COM/2021/206), currently being negotiated by the Swedish Presidency of the Council of the European Union and soon by the Spanish Presidency.

This letter builds on the 2021 position of 123 civil society organizations calling on the European Union to foreground a fundamental rights-based approach to the AI Act. The AI Act is a fundamental piece of legislation that will have a strong impact on the EU population and, in all likelihood, beyond our borders.

Some of our concerns relate to dangerous practices that lead to mass surveillance of the population. By fostering mass surveillance and amplifying some of the deepest social inequalities and power imbalances, AI systems are seriously jeopardizing our fundamental rights and democratic processes and values.

In particular, we draw your attention to essential protections of fundamental rights that can be further strengthened during the so-called trilogue negotiations. Given the role that the Spanish government will play from 1 July 2023, when it assumes the rotating Presidency of the Council, and the interest it has already shown in leading the artificial intelligence race in Europe, we believe that the Spanish Presidency should take an assertive role and ensure that this legislation is future-proof and respectful of fundamental rights.

We call on the incoming Spanish Presidency and Member States to ensure that the following features are reflected in the final text of the AI Act:

1- Expand the exhaustive list of prohibitions on AI systems that pose an “unacceptable risk” to fundamental rights. It is vital that the Article 5 list of “prohibited AI practices” be expanded to cover all systems shown to pose an unacceptable risk of violating the fundamental rights and freedoms of individuals and, more generally, the founding principles of democracy. At the very least, the undersigned organizations believe that the following practices should be banned altogether:

  • Remote biometric identification (RBI) in public spaces, banned for all actors, not just law enforcement, and in both its “real-time” and “ex post” uses, with no exceptions.
  • The use of AI systems by administrative, law enforcement and judicial authorities to make predictions, profiles, or risk assessments for the purpose of crime prediction.
  • The use of AI-based individual risk assessment and profiling systems in the context of migration; predictive analytics for interdiction, restriction, and migration prevention purposes; and AI polygraphs for migration prevention.
  • The use of emotion recognition systems that aim to infer people’s emotions and mental states.
  • The use of biometric categorization systems to track, categorize and judge people in publicly accessible spaces, or to categorize people based on protected characteristics.
  • The use of AI systems that may manipulate people or exploit contexts or situations of vulnerability in a way that causes, or is likely to cause, them harm.

2- Eliminate any discretion in the process of classifying high-risk systems (Article 6). The general approach of the Council and the negotiations in the European Parliament tend to add layers to the risk classification process by imposing additional requirements before the systems listed in Annex III are considered high risk. This would seriously complicate the AI Act, give providers excessive discretion to decide whether a system is high risk, compromise legal certainty and lead to significant fragmentation in the application of the AI Act. Therefore, we request:

  • That all AI systems listed in Annex III of the AI Act be considered high risk, without additional qualifying conditions.

3- Significant accountability and public transparency obligations on public uses of AI systems and on all “deployers” of high-risk AI. To ensure the highest level of protection of fundamental rights, those who deploy high-risk AI systems (“users”, as they are referred to in the original European Commission proposal and in the general approach of the EU Council, or “deployers”, as might be agreed in the European Parliament) should provide public information on the use of such systems. This information is crucial for public accountability, as it allows public interest organizations, researchers and affected individuals to understand the context in which high-risk systems are used. The AI Act should include the following obligations for deployers:

  • The obligation of deployers to register all high-risk AI systems in the Article 60 database.
  • The obligation of public authorities, or those acting on their behalf, to register all uses of AI systems in the Article 60 database, regardless of the level of risk.
  • The obligation of deployers of high-risk AI systems to conduct and publish in the Article 60 database a fundamental rights impact assessment (FRIA) prior to deploying any high-risk AI system.
  • The obligation of deployers of high-risk AI systems to involve civil society and other affected parties in the fundamental rights impact assessment that they must carry out.

4- Rights and redress mechanisms to empower people affected by AI systems. While we have seen some positive steps (for example, the Council's general approach recognizes the possibility of filing complaints with public authorities in cases of non-compliance with the AI Act), we believe it is also necessary to recognize other basic rights that enable people affected by AI systems to understand, challenge and obtain redress. Therefore, we believe the final text of the AI Act should include:

  • The right to receive, upon request, a clear and intelligible explanation of decisions made with the aid of systems within the scope of the AI Act and of how such systems operate, including the right to object to such decisions. This explanation should also be available to children, provided in the language requested and accessible to persons with disabilities.
  • The right of individuals to lodge a complaint with national authorities or to take legal action in court when an AI system or practice affects them.
  • The right to an effective judicial remedy against the national supervisory authority or the deployer to enforce rights recognized under the AI Act that have been violated.
  • The right of access to collective redress mechanisms.
  • The right of public interest organizations to file a complaint with a supervisory authority over non-compliance with the AI Act or over AI systems that violate fundamental rights or the public interest, and the right of individuals to be represented by such organizations in the protection of their rights.

5- Technical standards should not address issues related to fundamental rights, and their development process should include more civil society voices. Civil society is concerned because a large part of the implementation of the AI Act (whose risk-based approach leaves most AI systems almost unregulated, with the exception of high-risk systems, additional transparency obligations for some AI systems, and the recent debate on generative AI) will depend on the development of technical standards and their implementation by manufacturers. Furthermore, given their complexity, standardisation processes are heavily dominated by industry. The undersigned organizations note that it is not clear how these standards could affect the fundamental rights of individuals (e.g., regarding the absence of bias in AI systems). Therefore, we believe it is necessary:

  • That technical standards should not be used to evaluate the possible impact on the fundamental rights of individuals.
  • To provide the necessary means and resources to ensure greater participation of civil society in the standardisation processes related to the AI Act, which are already taking place.

We, the undersigned organizations, request:

(1) The organization of a high-level meeting with representatives of civil society before the beginning of the Spanish Presidency of the Council of the European Union, to ensure that fundamental rights are adequately strengthened and protected in the trilogues related to the AI Act.

(2) Assurances from the Spanish Government on how it expects the highest levels of fundamental rights protection to be achieved in the final text of the AI Act, as noted previously in this letter.

Letter promoted by:

Lafede.cat; Algorace; Fundación Eticas; CIVIO; Observatorio Trabajo, Algoritmos y Sociedad; Algorights; Institut de Drets Humans de Catalunya; and CECU.

Signed by:

  1. #Lovetopia
  2. AAV – Associació Arxivers i Gestors de Documents Valencians
  3. Access Info Europe
  4. ACICOM (Associació Ciutadania i Comunicació)
  5. ACREDITRA
  6. Acurema, Asociación de Consumidores y Usuarios de Madrid
  7. AlgoRace
  8. Algorights
  9. Amnistía Internacional
  10. Archiveros Españoles en la Función Pública (AEFP)
  11. Asociación Científica ICONO 14
  12. Asociación de Consumidores de Gran Canaria -ACOGRAN-CECU
  13. ASOCIACIÓN DE USUARIOS DE LA COMUNICACIÓN (AUC)
  14. Associacions Federades de Famílies d’Alumnes de Catalunya (aFFaC)
  15. Calala Fondo de Mujeres
  16. Col·legi de Professionals de la Ciència Política i de la Sociologia de Catalunya
  17. CooperAcció
  18. DigitalFems
  19. Ecos do Sur
  20. Edualter
  21. EKA/ACUV ASOCIACION DE PERSONAS CONSUMIDORAS Y USUARIAS VASCA
  22. Elektronisk Forpost Norge
  23. enreda.coop
  24. Eticas
  25. European Center for Not-for-profit Law (ECNL)
  26. European Digital Rights (EDRi)
  27. European Network Against Racism (ENAR)
  28. Fair Trials
  29. Federación de Consumidores y Usuarios CECU
  30. Federación de Sindicatos de Periodistas (FeSP)
  31. Fundación Ciudadana Civio
  32. Fundación Hay Derecho (Hay Derecho Foundation)
  33. FUNDACIÓN PLATONIQ
  34. Grup Eirene
  35. Hay Derecho
  36. Institut de Drets Humans de Catalunya
  37. Irídia – Centre per la Defensa dels Drets Humans
  38. Irish Council for Civil Liberties (ICCL)
  39. Komons
  40. La Coordinadora de Organizaciones para el Desarrollo
  41. Lafede.cat – Organitzacions per la Justícia Global
  42. Mujeres Supervivientes de violencias de género.
  43. Novact
  44. Observatori del Treball, Algoritme i Societat (TAS)
  45. Observatori DESC
  46. Oxfam Intermon
  47. Panoptykon Foundation
  48. PLATAFORMA EN DEFENSA DE LA LIBERTAD DE INFORMACION (PDLI)
  49. Political Watch
  50. Reds – Red de solidaridad para la transformación Social (Barcelona – Catalunya)
  51. Rights International Spain
  52. SED Catalunya – Solidaritat Educació Desenvolupament
  53. SETEM Catalunya
  54. SOS Racisme Catalunya
  55. Statewatch
  56. SUDS – Associació Internacional de Solidaritat i Cooperació
  57. THE SCHOOL OF WE
  58. Unión de Consumidores de Las Palmas -UCONPA-CECU
  59. Unión Profesional
  60. Universidad Nacional de Educación a Distancia (UNED)
  61. Xnet


Further reading

12 May 2022

A clear and present danger: Missing safeguards on migration and asylum in the EU’s AI Act

The EU's proposed Artificial Intelligence (AI) Act aims to address the risks of certain uses of AI and to establish a legal framework for its trustworthy deployment, thus stimulating a market for the production, sale and export of various AI tools and technologies. However, certain technologies or uses of technology are insufficiently covered by or even excluded altogether from the scope of the AI Act, placing migrants and refugees - people often in an already-vulnerable position - at even greater risk of having their rights violated.

06 December 2022

Joint statement: The EU Artificial Intelligence Act must protect people on the move

Joint statement signed by over 160 organisations and 29 individuals, in the run-up to votes in the European Parliament on the position to be taken in negotiations with the Council of the EU.

29 September 2022

EU: AI Act: Council Presidency seeks more secrecy over police use of AI technology

The Czech Presidency of the Council has inserted new provisions into the proposed AI Act that would make it possible to greatly limit the transparency obligations placed on law enforcement authorities using "artificial intelligence" technologies. A new "specific carve-out for sensitive operational data" has been added to a number of articles. If the provisions survive the negotiations, the question then becomes: what exactly counts as "sensitive operational data"? And does the carve-out concern just the data itself, or the algorithms and systems it feeds as well?

 
