EU: Advisory group for security research: Report on "AI and security opportunities and risks"

The Protection and Security Advisory Group (PASAG) advises the European Commission on the content of the EU security research programme, which provides funds for research and development on new surveillance and security technologies. PASAG recently published a report entitled 'AI and security opportunities and risks: Towards a trustworthy AI based on European values', which argues that artificial intelligence (AI) "can have extensive application in public security and cyber security, if sufficiently large data sets are available," but calls for more training, research and education to make AI "secure, reliable, unbiased and explainable."

See: PASAG: AI and security opportunities and risks: Towards a trustworthy AI based on European values (pdf)

Recommendations of the report:

1. Basic research is necessary to make AI more secure, reliable, unbiased, and explainable. Current threats such as adversarial machine learning undermine the trustworthiness of AI and mitigations need to be researched. Assessments and metrics are needed to evaluate how reliable a given decision is.
2. AI’s impact on innovation cultures and new business models related to digital economy requires further research and case studies to generate wider understanding of AI’s infrastructural importance to the economy and society.
3. AI is pervasive and can have extensive application in public security and cyber security, if sufficiently large data sets are available. Research projects should explain why they expect significant progress and provide clear KPIs to measure success and error rates.
4. Current basic AI technologies are insecure by design and not trustworthy by default. This does not necessarily affect all use cases, but research projects should be aware of it and provide measures to mitigate these shortcomings where appropriate.
5. The Ethics Guidelines for Trustworthy AI should be used as guidance towards an AI based on European values.
6. Trustworthy AI requires trustworthy computing capabilities. Many AI applications are deployed into the cloud for learning and scalable production. The EU should promote cloud-computing services operating exclusively under EU legislation to protect data from non-EU access.
7. European data pools will make AI much more effective than national or regional ones. This will require responsible trade-offs between effectiveness of AI and fundamental rights such as privacy, especially in the public security sector. The data quality and homogeneity of merged data is crucial for success.
8. Defensive measures should be developed to detect and combat the malicious use of AI, including measures against fake news and deep fakes. This requires an interdisciplinary understanding of attacks against AI and of how AI can be used for attacks.
9. The talent pool for AI experts is very limited. Comprehensive education programmes sponsored by the EU and member states are necessary to achieve competitiveness. The public security sector will need dedicated funding to successfully attract talent for a sustainable deployment of AI within the government sector. Interdisciplinary research is needed to understand the new skill sets that will be required in the future, not only to develop and operate new AI systems but also to identify their potential societal impacts and how these should be addressed.

A previous report by PASAG looked at "synergies" between the next security research programme (part of Horizon Europe, running from 2021-2027) and research undertaken through the forthcoming European Defence Fund (2021-27). The aim is to take advantage of overlapping areas of interest between civil and military research and development.

See: Synergies and "dual-use" in the specific areas of common/dual interest in both the security and defence programme (pdf)

For an overview of issues related to the security research programme and proposals for the new budgets, see: Sci-fi surveillance: Europe's secretive push into biometric technology (The Guardian, link)

Background: Observatory: The European security-industrial complex
