EU Regulation | AI Governance
EU AI Act:
Cybersecurity Requirements
for AI Systems.
The EU AI Act (Regulation EU 2024/1689) is the world's first binding AI law. It classifies AI systems by risk, mandates cybersecurity measures, and imposes fines of up to EUR 35 million. It applies to any company placing AI on the EU market - regardless of where the company is based.
Last updated: March 2026 - reviewed by certified experts
- Prohibited practices in force
- Feb. 2025
- Maximum fine
- 35 M EUR
- High-Risk obligations
- Aug. 2026
- From minimal to prohibited
- 4 risk classes
Overview
What is the EU AI Act?
The EU AI Act (Regulation EU 2024/1689) is the world's first binding AI regulation. The regulation was published in the Official Journal of the EU on 13 June 2024 and applies directly in all EU member states - without national implementing legislation.
The AI Act follows a risk-based regulatory model: the higher the potential harm of an AI system, the stricter the requirements. Particularly relevant for cybersecurity are the requirements on robustness and resilience under Art. 15, and the obligation to conduct adversarial testing (red teaming) for GPAI models with systemic risk under Art. 55.
The AI Act has extraterritorial reach: it applies to any provider placing an AI system on the EU market, regardless of where the provider is headquartered. This makes it relevant for companies in the US, UK, Asia and beyond.
EU AI Act at a Glance
- EU Regulation
- EU 2024/1689, 13 June 2024
- Bans from
- 2 February 2025
- GPAI rules from
- 2 August 2025
- High-Risk from
- 2 August 2026
- DE Authority
- Bundesnetzagentur + BSI (Cyber)
- Max. fine
- EUR 35 M / 7% global turnover
High-Risk deadline approaching
Providers of high-risk AI systems must fulfil all conformity obligations by August 2026 - conformity assessment, CE marking, EU database registration.
Risk-Based Approach
The 4 Risk Classes of the AI Act
The AI Act classifies AI systems by their risk potential. The higher the risk class, the stricter the requirements - from no obligations to a complete prohibition. The classification determines which cybersecurity and compliance measures you need to take.
AI systems posing unacceptable risks, fully prohibited as of 02 February 2025.
- Social scoring by public authorities or private entities
- Emotion recognition in the workplace and educational institutions
- Biometric categorisation to infer sensitive characteristics
- Manipulation of unconscious behaviour (subliminal techniques)
- Real-time remote biometric identification in public spaces (exceptions for law enforcement)
- Exploitation of vulnerabilities of specific groups
AI systems listed in Annex III with extensive obligations from 02 August 2026 (Art. 6 AI Act).
- Credit scoring and creditworthiness assessment
- HR systems: recruitment and promotion decisions
- Biometric identification and categorisation
- Critical infrastructure operation: power, water, gas networks
- Law enforcement, border control, justice
- Education and vocational training
- Migration and asylum
Transparency obligations: users must know they are interacting with an AI system.
- Chatbots and conversational AI systems
- Deepfakes and AI-generated content (labelling required)
- Emotion recognition systems (outside prohibited areas)
No specific obligations - voluntary codes of conduct recommended.
- Spam filters and email classification
- Recommendation systems (without significant risks)
- AI in video games
- Simple image-processing applications
Which risk class applies to your AI system?
The classification determines your compliance effort. AWARE7 assesses your AI systems definitively and creates an action plan based on your specific risk class.
Art. 15 AI Act
Cybersecurity Obligations for AI Systems
Art. 15 AI Act requires providers of high-risk AI systems to ensure robustness, resilience and cybersecurity. The requirements apply throughout the entire lifecycle - from development to decommissioning.
Robustness against Adversarial Attacks
High-risk AI systems must be resilient against hostile manipulation (Art. 15(1) AI Act). This includes robustness tests against adversarial examples, data poisoning and model inversion. Technical measures must be documented throughout the entire lifecycle.
AI Penetration TestingData Integrity and Data Quality
Training, validation and test datasets must be appropriate in terms of errors, completeness, representativeness and statistical properties (Art. 10 AI Act). Data governance processes and bias checks are mandatory.
Technical Documentation and Logging
Complete recording of all relevant events throughout the entire operational period (Art. 12 AI Act). Logs must enable traceability and comprehensibility of the AI system - for audits, authorities and law enforcement.
Security ConsultingHuman Oversight
High-risk AI systems must be designed so that natural persons can effectively monitor outputs (Art. 14 AI Act). Human-in-the-Loop or Human-on-the-Loop - depending on the risk profile. Override mechanisms must be technically embedded.
Accuracy, Precision and Bias Testing
AI systems must be operated with appropriate accuracy, robustness and cybersecurity throughout their entire lifecycle (Art. 15 AI Act). Regular bias tests and performance monitoring are mandatory, especially for systems making decisions about individuals.
AI Security TestingConformity Assessment and Registration
Providers of high-risk AI systems must conduct a conformity assessment before placing on the market (Art. 43 AI Act) and register the system in the EU database (Art. 49 AI Act). CE marking is required.
Incident Reporting to Authorities
Serious incidents and malfunctions of high-risk AI systems must be reported to the competent national market surveillance authority (Art. 73 AI Act). In Germany this will likely be the Federal Network Agency (Bundesnetzagentur) in coordination with the BSI.
Red Teaming for GPAI Models
Providers of general-purpose AI (GPAI) models with systemic risk must conduct adversarial testing (Art. 55 AI Act). Red-teaming tests uncover security vulnerabilities, abuse potential and undesirable behaviour.
Red Teaming for AINote: The AI Act's cybersecurity requirements for high-risk systems apply from August 2026. Providers of GPAI models with systemic risk must be able to demonstrate adversarial red teaming from August 2025 (Art. 55 AI Act). AWARE7 recommends starting preparation immediately.
Regulatory Landscape
AI Act + NIS-2 + GDPR + CRA: The Full Regulatory Picture
For companies deploying AI, a complex interplay of several EU frameworks arises. All must be observed in parallel - there is no choice.
EU AI Act
Risk classes, robustness, red teaming, logging, human oversight. AI-specific regulation.
NIS-2 Directive
Cybersecurity of the entire infrastructure including AI systems. Applies to affected sectors across the EU.
NIS-2 GuideGDPR
Data protection for AI systems processing personal data. Applies to virtually all AI applications.
GDPR GuideCyber Resilience Act (CRA)
Cybersecurity requirements for products with digital elements - includes embedded AI.
CRA GuideNote for non-EU companies: The AI Act applies to any provider placing AI on the EU market regardless of their location - similar to the GDPR's extraterritorial reach. Importers and distributors in the EU share responsibility for the compliance of the AI systems they market.
Case Studies
Real AI Incidents: What the AI Act Would Have Prevented
These cases demonstrate why the AI Act is necessary - and what consequences uncontrolled AI deployment can bring. Under the AI Act, many of these incidents would have been regulatorily addressed or prevented.
ChatGPT & GDPR - Data Leak at OpenAI
March 2023OpenAI reported a bug in the open-source Redis library that allowed users to see the chat titles of other users. Italy's data protection authority Garante imposed a temporary ban on ChatGPT and initiated proceedings for GDPR violations. The incident illustrates that AI systems are not closed systems - data leaks can arise from unexpected dependencies.
Clearview AI - Biometric Data Without Consent
Ongoing since 2022US AI provider Clearview AI was fined by several EU data protection authorities (France: EUR 20 million, Italy: EUR 20 million, Greece: EUR 20 million). Clearview had scraped biometric data from the internet without the consent of those affected. Under the AI Act, this practice would be classified as prohibited biometric categorisation.
Amazon HR AI - Discrimination in Hiring
October 2018Amazon's internal AI recruitment system systematically favoured male candidates because it was trained on historically male-dominated hiring data. Amazon discontinued the project. Under the AI Act, such an HR system would be classified as high-risk and would have required extensive bias tests and human oversight - before deployment.
Tesla Autopilot - Regulatory Investigations
Ongoing since 2021US regulator NHTSA investigated over 750 accidents connected to Tesla's Autopilot system. In the EU, autonomous driving systems would be classified as high-risk AI (Annex III, transport). The AI Act requires providers to carry out extensive conformity assessments, log every incident and conduct regular safety reviews.
Advisory Services
How AWARE7 Supports AI Act Compliance
From risk classification to adversarial testing - AWARE7 guides you through all AI Act requirements with a focus on technical cybersecurity.
AI Penetration Testing & Red Teaming
Adversarial testing of your AI systems: adversarial examples, data poisoning, model extraction, prompt injection - in line with Art. 15 and Art. 55 AI Act.
Request AI pentestAI Act Gap Analysis & Classification
Definitive risk classification of your AI systems, gap analysis against all AI Act requirements, prioritised action plan with timeline.
Start gap analysisIntegrated Compliance: AI Act + NIS-2 + GDPR
Holistic advice combining AI Act, NIS-2 and GDPR in a single engagement - rather than three parallel projects.
Understand NIS-2 interaction„The EU AI Act is not abstract regulation - it has concrete cybersecurity requirements that demand technical expertise. Anyone deploying AI in safety-critical domains needs adversarial testing, not just legal advice.“
Chris Wojzechowski
Auditor with §31 BSIG audit methodology competence · AWARE7 GmbH
FAQ
Frequently Asked Questions about the EU AI Act
The most important questions about the EU AI Act - answered with technical depth and practical focus.
When does the EU AI Act apply?
Which AI systems are classified as "high-risk" under the AI Act?
What does the AI Act mean for companies using AI tools such as ChatGPT?
How high are the fines under the EU AI Act?
What is a GPAI model and what obligations apply?
How do the AI Act, GDPR and NIS-2 interact?
What is BSI AIC4 and how does it relate to the AI Act?
Do I need to register my AI systems in an EU database?
How does an AI security assessment at AWARE7 work?
Does the AI Act also apply to AI systems developed outside the EU?
Aus dem Blog
Weiterführende Artikel
Alle ArtikelSchedule an AI Security Assessment
In a free 30-minute call, we analyse your AI systems, determine the relevant risk class under the AI Act, and show which cybersecurity measures are specifically required - with a timeline and fixed-price proposal.
Kostenlos · 30 Minuten · Unverbindlich