The rapid adoption of Artificial Intelligence (AI) and Generative AI applications raises important questions about the intersection of cybersecurity and AI/ML, including its potential benefits and costs in both offensive and defensive capacities and its governance implications. Each of these angles of the AI-cyber nexus will play a role in ensuring societal resilience, defined by the European Union as “the ability not only to withstand and cope with challenges but also to undergo transitions, in a sustainable, fair, and democratic manner.” Whilst cyber resilience is an established notion across EU and NATO stakeholders, the increasing role of AI calls for a broader approach to resilience that accounts for its cross-cutting role across threat areas, ranging from cyber to information environments and critical infrastructure.
Workstream Objectives
- Short-term objective: Understand the status quo: Take stock of current knowledge and capacities on the interconnections of cyber and AI within the European ecosystem. What resources exist, and where are the gaps? Have recent events (e.g. the European elections) shifted priorities?
- Medium-term objective: Identify synergies and build capacities: What gaps in societal and cyber capacities need to be addressed to foster societal resilience? Identify action areas for fostering holistic societal resilience in line with the objectives of the EU Democracy Shield and the EU’s cyber regulatory agenda.
- Long-term objective: Foster European cyber and AI resilience: Act as a convening force across siloed AI and cyber stakeholder communities to develop common approaches to building capacities and resources, and contribute to the objectives of the EU Democracy Shield and the EU’s cyber regulatory agenda. Encourage policymakers to incorporate notions of resilience into AI governance frameworks.
Thematic Introduction
- What are the new cybersecurity challenges raised by AI?
- AI challenges for cybersecurity: As a dual-use technology, AI/ML has the potential to expand the threat landscape, both through deployment by malicious actors and through new security vulnerabilities and risks unique to AI/ML systems and their regularly changing configurations. Mitigating these emerging risks will require new threat models that equip security teams accordingly.
- AI for scaling cyber and influence operations: Even basic automation allows attackers to work more efficiently and amplify their impact. For example, AI can be used to generate ultra-personalized phishing attacks capable of duping even the most security-conscious users. AI is also fundamentally influencing the information environment, for example through large-scale disinformation campaigns scaled and automated with AI-generated content. 2024 is the year of elections, with almost half of the global population heading to the polls, and generative artificial intelligence has democratized the ability to create realistic fake or altered images, videos, and audio recordings – including of political candidates. Risks include malicious or deceptive deepfakes, especially those targeted at vulnerable communities or seeking to disrupt electoral processes. As these capabilities advance, so too will threat actors’ creative use of these tools. Progress is being made on the technical aspects of these questions, including on content authenticity technology.
- How can AI systems be applied to cybersecurity for defensive capabilities? AI can also be leveraged to enhance protection and thwart attacks, for example by triaging alerts on potential attacks, detecting the ‘fingerprints’ of malware within a computer or on a network, or guiding automated approaches to mitigation. Generative AI can act as an amplifier, enabling AI-enabled cyber threat monitoring, synthetic training data generation, and analytics and reporting. AI can also reduce the number of false positives produced in a security environment, allowing cyber defenders to respond more rapidly and with higher confidence using AI-generated threat analysis. These solutions combine speed, efficiency, and scalability, automating and augmenting defenders’ ability to safeguard systems, networks, and data while ensuring confidentiality, integrity, and availability.
- How does the AI-cybersecurity nexus impact policymaking? The rise of AI poses new questions for policymakers, along with opportunities to shape real-world policy action to ensure societal readiness and resilience. While the EU has robust frameworks and legislation on cybersecurity (NIS2, the Cyber Resilience Act, etc.), new vulnerabilities and enablers linked to AI expand their scope and raise questions about broader societal resilience. The priorities of the new European Commission and the Democracy Shield introduced by President von der Leyen to counter foreign information manipulation and interference online also reference the need to build societal capacities adapted to the shifting AI landscape. The implementation of major digital legislative files in the EU (AI Act, CRA) requires closely exploring the compatibility, interconnections, and challenges of AI and cyber regulation. To complement existing regulation under the Digital Services Act (DSA) and other relevant EU legislation, it will be important to consider how media provenance certification of abusive AI-generated content, pre-bunking, or targeted AI and dis- and misinformation literacy initiatives can help build broader societal awareness and resilience towards cyber- and AI-enabled attempts to undermine the European information environment. Increasing resilience will also require further understanding of how to incorporate AI-cyber challenges into governance frameworks beyond the EU, including international treaties and multilateral mechanisms.
