A clear and present danger: Missing safeguards on migration and asylum in the EU’s AI Act

The EU's proposed Artificial Intelligence (AI) Act aims to address the risks of certain uses of AI and to establish a legal framework for its trustworthy deployment, thus stimulating a market for the production, sale and export of various AI tools and technologies. However, certain technologies, or certain uses of them, are insufficiently covered by the AI Act or excluded from its scope altogether, placing migrants and refugees - people often already in a vulnerable position - at even greater risk of having their rights violated.

This briefing has been produced as a complementary document to proposed amendments to the AI Act drafted by a coalition of human rights organisations (including European Digital Rights, Access Now, Migration and Technology Monitor, PICUM and Statewatch).

It begins with key points and recommendations, which largely correspond with those in the proposed amendments. A short introduction follows, before an explanation of what the AI Act is, how it deals with migration, and the associated concerns of civil society over its “risk-based approach”.

It goes on to examine the current development and deployment of AI systems by EU institutions and member states for asylum, border and migration control purposes, outlining key use cases, the risks these pose to fundamental rights, and how these would be regulated (or not) by the proposed AI Act.

The briefing then provides a snapshot of the extensive public funding that the EU has provided for the research and development of ‘border AI’, before giving an overview of the key actors and institutions involved in negotiations on the AI Act as it passes through EU institutions.

Full report

Press release
