17 May 2023
With the European Parliament and Council of the EU heading for secret trilogue negotiations on the Artificial Intelligence Act, an open letter signed by 61 organisations - including Statewatch - calls on the Spanish Presidency of the Council to make amendments to the proposal that will ensure the protection of fundamental rights.
The letter was drafted and coordinated by a group of organisations based in Spain: Lafede.cat, Algorace, Fundación Eticas, CIVIO, Observatorio Trabajo, Algoritmos y Sociedad; Algorights, Institut de Drets Humans de Catalunya, CECU.
Madrid, May 17th, 2023
We, the undersigned organizations, are writing to draw your attention to a number of serious deficiencies in the Artificial Intelligence Act (AI Act) (COM/2021/206), currently being negotiated by the Swedish Presidency of the Council of the European Union and soon by the Spanish Presidency.
This letter is based on the position of 123 civil society organizations calling on the European Union to foreground a fundamental rights-based approach to the AI Act in 2021. The AI Act is a fundamental piece of legislation that will have a strong impact on the EU population and, in all likelihood, also beyond our borders.
Some of our concerns relate to dangerous practices that lead to mass surveillance of the population. By fostering mass surveillance and amplifying some of the deepest social inequalities and power imbalances, AI systems are seriously jeopardizing our fundamental rights and democratic processes and values.
In particular, we draw your attention to essential protections of fundamental rights that can be further enhanced during the so-called trilogue negotiations. Given the role that the Spanish government will play as of 1 July 2023 by assuming the rotating Presidency of the Council, and the interest it has already shown in leading the artificial intelligence race in Europe, we believe that the Spanish Presidency should take an assertive role and ensure that this legislation is future-proof and respectful of fundamental rights.
We call on the incoming Spanish Presidency and Member States to ensure that the following features are reflected in the final text of the AI Act:
1- Expand the exhaustive list of prohibitions of AI systems that pose an “unacceptable risk” to fundamental rights. It is vital that the Article 5 list of “prohibited AI practices” be expanded to cover all systems shown to pose an unacceptable risk of violating the fundamental rights and freedoms of individuals, and which more generally undermine the founding principles of democracy. At the very least, the undersigned organizations believe that the following practices should be banned altogether:
2- Eliminate any discretion in the process of classifying high-risk systems (Article 6). The general approach of the Council and the negotiations in the European Parliament tend to add layers to the risk classification process by imposing additional conditions before the systems listed in Annex III are considered high risk. This would seriously complicate the AI Act, give providers excessive discretion to decide whether a system is high risk, compromise legal certainty, and lead to significant fragmentation in the application of the AI Act. Therefore, we request that:
3- Significant accountability and public transparency obligations on public uses of AI systems and on all “deployers” of high-risk AI. To ensure the highest level of protection of fundamental rights, those who deploy high-risk AI systems (i.e. “users”, as they are referred to in the original European Commission proposal and in the general approach of the EU Council, or “deployers”, as might be agreed in the European Parliament) should provide public information on the use of such systems. This information is crucial for public accountability, as it allows public interest organizations, researchers and affected individuals to understand the context in which high-risk systems are used. The AI Act should include the following obligations for deployers:
4- Rights and redress mechanisms to empower people affected by AI systems. While we have seen some positive steps, such as the recognition in the general approach of the Council of the possibility of filing complaints with public authorities in case of non-compliance with the AI Act, we believe it is also necessary to recognize other basic rights that enable people affected by AI systems to understand, challenge and obtain redress. Therefore, we understand that the final text of the AI Act should include:
5- Technical standards should not address issues related to fundamental rights, and their elaboration process should include more civil society voices. Civil society is concerned because a large part of the implementation of the AI Act - whose risk-based approach leaves most AI systems almost unregulated (with the exception of high-risk systems, additional transparency obligations for some AI systems, and the recent debate on generative AI) - will depend on the development of technical standards and their implementation by manufacturers. Furthermore, given their complexity, standardisation processes are heavily dominated by industry. The undersigned organizations note that it is not clear how these standards could impact the fundamental rights of individuals (e.g. regarding the absence of bias in AI systems). Therefore, we believe it is necessary:
We, the undersigned organizations, request:
(1) The organization of a high-level meeting with representatives of civil society before the beginning of the Spanish Presidency of the Council of the European Union, to ensure that fundamental rights are adequately strengthened and protected in the trilogues related to the AI Act.
(2) Assurances from the Spanish Government on how it expects the highest levels of fundamental rights protection to be achieved in the final text of the AI Act, as set out in this letter.
Letter promoted by:
Lafede.cat, Algorace, Fundación Eticas, CIVIO, Observatorio Trabajo, Algoritmos y Sociedad; Algorights, Institut de Drets Humans de Catalunya, CECU.