Joint letter to challenge the delay and limited transparency surrounding the establishment of the AI Act Advisory Forum

Country/Region: EU

Statewatch has co-signed a letter to the head of the EU's AI Office, flagging concerns about the delay and limited transparency surrounding the establishment of the AI Act Advisory Forum.

Support our work: become a Friend of Statewatch from as little as £1/€1 per month.

The letter was coordinated by the European Center for Not-for-Profit Law (ECNL).

Dear Ms Sioli,

We, the undersigned civil society organisations and researchers, are writing to express our concern regarding the delay and limited transparency surrounding the establishment of the AI Act Advisory Forum.

The call for expressions of interest to join the Forum closed in September 2025. Despite several requests to the AI Office for clarification on the timeline and process for finalising the selection and completing the establishment of the Forum, we have not received sufficient information. Seven months after the call ended, the final selection results and the first meeting of the Forum have still not been announced. This delay and lack of clarity create uncertainty about the status and prioritisation of the Forum’s establishment. We call for transparency about the selection of the Forum’s members as well as about the next steps.

The Advisory Forum is foreseen in the AI Act as the only formal mechanism to gather multi-stakeholder input to support the AI Office and the AI Board in the implementation and application of the law. As important implementation processes are already underway or completed, the absence of the Forum limits opportunities for structured, consistent, and inclusive input. While we welcome ad hoc consultations with public interest actors, a formalised Forum - in line with the aim envisioned by the co-legislators - would provide a more predictable and representative framework for engagement, including for underrepresented perspectives. As a shared dialogue between different types of stakeholders, the Forum is also a key transparency measure, enabling increased understanding of each actor’s respective positions. Finally, we highlight that an established Forum would also have been beneficial in providing advice on any changes to the AI Act as well as the schedule of implementation.

The recently adopted EU Civil Society Strategy underscores that meaningful engagement with civil society is a cornerstone of EU policymaking. In this context, we encourage the AI Office to operationalise the Strategy’s guiding principles - particularly predictability and regularity, transparency, and inclusivity - in its approach to stakeholder engagement.

In the absence of the Forum, many implementation discussions - such as those concerning prohibitions, high-risk classification, and fundamental rights obligations - take place primarily between the AI Office and the AI Board. Contributing effectively in this context often requires established networks, institutional knowledge, and proximity to Brussels-based processes. This creates barriers for many civil society actors, particularly those representing or working with communities most affected by AI-related risks.

In addition, civil society organisations and many academic researchers are operating in a constrained and uncertain funding environment. Predictable timelines and transparent processes are therefore essential to enable meaningful participation, including securing and planning resources and ensuring staff availability for sustained engagement.

In this context, we request the AI Office to prioritise the establishment of the Advisory Forum and to provide clarity on the expected timeline and process. We would also welcome the development of concrete measures to foster an enabling environment for civil society participation in AI Act implementation and enforcement - including outside the Forum, in line with the commitments set out in the EU Civil Society Strategy.

We remain at your disposal to share further insights on the challenges faced by civil society actors and to contribute practical suggestions on how inclusive engagement with civil society and researchers can be strengthened in the context of AI Act implementation.

Sincerely,

Organisations:
Access Now
AI Accountability Lab (AIAL), Trinity College Dublin
AlgorithmWatch
Alternatif Bilisim
Amnesty International
ARTICLE 19
Asociația pentru Tehnologie și Internet (ApTI)
Avaaz Foundation
BEUC - European Consumer Organisation
Bits of Freedom
Centre for Democracy and Technology Europe
Citizens Network Watchdog Poland
Civil Liberties Union for Europe (Liberties)
Danes je nov dan, Inštitut za druga vprašanja
Electronic Frontier Norway
European Center for Not-for-Profit Law (ECNL)
European Civic Forum
European Disability Forum (EDF)
European Digital Rights (EDRi)
European Council of Autistic People (EUCAP)
European Network Against Racism (ENAR)
European Public Service Union (EPSU)
Hermes Center
Homo Digitalis
LDH (Ligue des droits de l’Homme)
Panoptykon Foundation
Politiscope
Statewatch
The Swedish Disability Rights Federation
WITNESS
Individuals:
Prof. Lilian Edwards, Emerita Newcastle University
Prof. Douwe Korff, Emeritus Professor of International Law
Dr. Laurens Naudts, AI, Media & Democracy Lab - Institute for Information Law, University of Amsterdam
Krista Ojala, Tukena-foundation
Dr. Plixavra Vogiatzoglou, University of Amsterdam


Further reading

22 April 2026

Europe’s uncertain plans for rolling out the automated border system ETIAS

The European Travel Information and Authorisation System (ETIAS) is due to launch at the end of this year, yet faces criticism from civil society and European institutions. Even Frontex, the EU border agency known for its own poor data practices, has highlighted concerns about the system's compliance with data protection laws. The European Commission’s failure to release legal guidance on compliance and a pending judgment of the Court of Justice of the European Union (CJEU) add to the uncertainty surrounding the planned system's start.

29 April 2026

Counterterrorism and children's data: Child rights implications of data-driven technologies and migration policies

In a world where data is routinely shared across borders, children are at risk of lifelong consequences when their names are placed on security databases or watchlists. Children who are used by armed groups, suspected of links to them, or who simply have family members involved may be screened by official authorities. This often involves recording and storing their sensitive personal data and can lead to them being flagged as a security risk. This article suggests ways to reduce these risks by integrating a child rights-based lens into counterterrorism work.

30 October 2025

Behind closed doors: Europol’s opaque relations with tech companies

As part of its research into the expanding—and largely unchecked—use of AI by EU security agencies, Statewatch delves into largely uncharted territory: Europol’s links with the private sector. A survey of this landscape reveals conflicts of interests, secrecy and opacity, and a whole array of intrusive and invasive technologies that Europol would like to adopt, and make more widely available to European police forces.

 
