Skip to content

Services, Wiki-Artikel, Blog-Beiträge und Glossar-Einträge durchsuchen

↑↓NavigierenEnterÖffnenESCSchließen

EU Regulation | AI Governance

EU AI Act:
Cybersecurity Requirements
for AI Systems.

The EU AI Act (Regulation EU 2024/1689) is the world's first binding AI law. It classifies AI systems by risk, mandates cybersecurity measures, and imposes fines of up to EUR 35 million. It applies to any company placing AI on the EU market - regardless of where the company is based.

Last updated: March 2026 - reviewed by certified experts

In force since
01 Aug 2024
Regulation (EU) 2024/1689
Risk classes
4 tiers
Minimal, Limited, High, Unacceptable
Maximum fine
EUR 35 M
or 7% of global annual turnover
Full application
Aug. 2027
Phased rollout from Feb. 2025
Prohibited practices in force
Feb. 2025
Maximum fine
35 M EUR
High-Risk obligations
Aug. 2026
From minimal to prohibited
4 risk classes

Overview

What is the EU AI Act?

The EU AI Act (Regulation EU 2024/1689) is the world's first binding AI regulation. The regulation was published in the Official Journal of the EU on 13 June 2024 and applies directly in all EU member states - without national implementing legislation.

The AI Act follows a risk-based regulatory model: the higher the potential harm of an AI system, the stricter the requirements. Particularly relevant for cybersecurity are the requirements on robustness and resilience under Art. 15, and the obligation to conduct adversarial testing (red teaming) for GPAI models with systemic risk under Art. 55.

The AI Act has extraterritorial reach: it applies to any provider placing an AI system on the EU market, regardless of where the provider is headquartered. This makes it relevant for companies in the US, UK, Asia and beyond.

EU AI Act at a Glance

EU Regulation
EU 2024/1689, 13 June 2024
Bans from
2 February 2025
GPAI rules from
2 August 2025
High-Risk from
2 August 2026
DE Authority
Bundesnetzagentur + BSI (Cyber)
Max. fine
EUR 35 M / 7% global turnover

High-Risk deadline approaching

Providers of high-risk AI systems must fulfil all conformity obligations by August 2026 - conformity assessment, CE marking, EU database registration.

Risk-Based Approach

The 4 Risk Classes of the AI Act

The AI Act classifies AI systems by their risk potential. The higher the risk class, the stricter the requirements - from no obligations to a complete prohibition. The classification determines which cybersecurity and compliance measures you need to take.

Prohibited

AI systems posing unacceptable risks, fully prohibited as of 02 February 2025.

  • Social scoring by public authorities or private entities
  • Emotion recognition in the workplace and educational institutions
  • Biometric categorisation to infer sensitive characteristics
  • Manipulation of unconscious behaviour (subliminal techniques)
  • Real-time remote biometric identification in public spaces (exceptions for law enforcement)
  • Exploitation of vulnerabilities of specific groups
High Risk

AI systems listed in Annex III with extensive obligations from 02 August 2026 (Art. 6 AI Act).

  • Credit scoring and creditworthiness assessment
  • HR systems: recruitment and promotion decisions
  • Biometric identification and categorisation
  • Critical infrastructure operation: power, water, gas networks
  • Law enforcement, border control, justice
  • Education and vocational training
  • Migration and asylum
Limited Risk

Transparency obligations: users must know they are interacting with an AI system.

  • Chatbots and conversational AI systems
  • Deepfakes and AI-generated content (labelling required)
  • Emotion recognition systems (outside prohibited areas)
Minimal Risk

No specific obligations - voluntary codes of conduct recommended.

  • Spam filters and email classification
  • Recommendation systems (without significant risks)
  • AI in video games
  • Simple image-processing applications

Which risk class applies to your AI system?

The classification determines your compliance effort. AWARE7 assesses your AI systems definitively and creates an action plan based on your specific risk class.

Request classification

Art. 15 AI Act

Cybersecurity Obligations for AI Systems

Art. 15 AI Act requires providers of high-risk AI systems to ensure robustness, resilience and cybersecurity. The requirements apply throughout the entire lifecycle - from development to decommissioning.

01

Robustness against Adversarial Attacks

High-risk AI systems must be resilient against hostile manipulation (Art. 15(1) AI Act). This includes robustness tests against adversarial examples, data poisoning and model inversion. Technical measures must be documented throughout the entire lifecycle.

AI Penetration Testing
02

Data Integrity and Data Quality

Training, validation and test datasets must be appropriate in terms of errors, completeness, representativeness and statistical properties (Art. 10 AI Act). Data governance processes and bias checks are mandatory.

03

Technical Documentation and Logging

Complete recording of all relevant events throughout the entire operational period (Art. 12 AI Act). Logs must enable traceability and comprehensibility of the AI system - for audits, authorities and law enforcement.

Security Consulting
04

Human Oversight

High-risk AI systems must be designed so that natural persons can effectively monitor outputs (Art. 14 AI Act). Human-in-the-Loop or Human-on-the-Loop - depending on the risk profile. Override mechanisms must be technically embedded.

05

Accuracy, Precision and Bias Testing

AI systems must be operated with appropriate accuracy, robustness and cybersecurity throughout their entire lifecycle (Art. 15 AI Act). Regular bias tests and performance monitoring are mandatory, especially for systems making decisions about individuals.

AI Security Testing
06

Conformity Assessment and Registration

Providers of high-risk AI systems must conduct a conformity assessment before placing on the market (Art. 43 AI Act) and register the system in the EU database (Art. 49 AI Act). CE marking is required.

07

Incident Reporting to Authorities

Serious incidents and malfunctions of high-risk AI systems must be reported to the competent national market surveillance authority (Art. 73 AI Act). In Germany this will likely be the Federal Network Agency (Bundesnetzagentur) in coordination with the BSI.

08

Red Teaming for GPAI Models

Providers of general-purpose AI (GPAI) models with systemic risk must conduct adversarial testing (Art. 55 AI Act). Red-teaming tests uncover security vulnerabilities, abuse potential and undesirable behaviour.

Red Teaming for AI

Note: The AI Act's cybersecurity requirements for high-risk systems apply from August 2026. Providers of GPAI models with systemic risk must be able to demonstrate adversarial red teaming from August 2025 (Art. 55 AI Act). AWARE7 recommends starting preparation immediately.

Regulatory Landscape

AI Act + NIS-2 + GDPR + CRA: The Full Regulatory Picture

For companies deploying AI, a complex interplay of several EU frameworks arises. All must be observed in parallel - there is no choice.

EU AI Act

Risk classes, robustness, red teaming, logging, human oversight. AI-specific regulation.

Applies from 2025/2026

NIS-2 Directive

Cybersecurity of the entire infrastructure including AI systems. Applies to affected sectors across the EU.

NIS-2 Guide

GDPR

Data protection for AI systems processing personal data. Applies to virtually all AI applications.

GDPR Guide

Cyber Resilience Act (CRA)

Cybersecurity requirements for products with digital elements - includes embedded AI.

CRA Guide

Note for non-EU companies: The AI Act applies to any provider placing AI on the EU market regardless of their location - similar to the GDPR's extraterritorial reach. Importers and distributors in the EU share responsibility for the compliance of the AI systems they market.

Case Studies

Real AI Incidents: What the AI Act Would Have Prevented

These cases demonstrate why the AI Act is necessary - and what consequences uncontrolled AI deployment can bring. Under the AI Act, many of these incidents would have been regulatorily addressed or prevented.

ChatGPT & GDPR - Data Leak at OpenAI

March 2023

OpenAI reported a bug in the open-source Redis library that allowed users to see the chat titles of other users. Italy's data protection authority Garante imposed a temporary ban on ChatGPT and initiated proceedings for GDPR violations. The incident illustrates that AI systems are not closed systems - data leaks can arise from unexpected dependencies.

Clearview AI - Biometric Data Without Consent

Ongoing since 2022

US AI provider Clearview AI was fined by several EU data protection authorities (France: EUR 20 million, Italy: EUR 20 million, Greece: EUR 20 million). Clearview had scraped biometric data from the internet without the consent of those affected. Under the AI Act, this practice would be classified as prohibited biometric categorisation.

Amazon HR AI - Discrimination in Hiring

October 2018

Amazon's internal AI recruitment system systematically favoured male candidates because it was trained on historically male-dominated hiring data. Amazon discontinued the project. Under the AI Act, such an HR system would be classified as high-risk and would have required extensive bias tests and human oversight - before deployment.

Tesla Autopilot - Regulatory Investigations

Ongoing since 2021

US regulator NHTSA investigated over 750 accidents connected to Tesla's Autopilot system. In the EU, autonomous driving systems would be classified as high-risk AI (Annex III, transport). The AI Act requires providers to carry out extensive conformity assessments, log every incident and conduct regular safety reviews.

„The EU AI Act is not abstract regulation - it has concrete cybersecurity requirements that demand technical expertise. Anyone deploying AI in safety-critical domains needs adversarial testing, not just legal advice.“

Chris Wojzechowski

Auditor with §31 BSIG audit methodology competence · AWARE7 GmbH

FAQ

Frequently Asked Questions about the EU AI Act

The most important questions about the EU AI Act - answered with technical depth and practical focus.

The EU AI Act (Regulation EU 2024/1689) entered into force on 1 August 2024. It applies in stages: prohibited AI practices have been in effect since 2 February 2025. Rules for general-purpose AI models (GPAI) such as GPT-4 or Gemini apply from 2 August 2025. High-risk AI obligations under Annex III apply from 2 August 2026. The AI Act applies directly in all EU member states - without national implementing legislation. It also has extraterritorial reach: non-EU companies placing AI systems on the EU market must comply.
High-risk AI systems are listed exhaustively in Annex III of the AI Act. These include: biometric identification and categorisation, operation of critical infrastructure (energy, water, transport), education and vocational training, employment and human resources management (HR systems), access to private and public services (credit scoring, social benefits), law enforcement and border control, migration and asylum proceedings, and the administration of justice. Additionally: AI systems in safety-relevant products subject to EU harmonisation legislation (Annex I, e.g. medical devices).
Companies that only use AI services (deployers/operators) have fewer obligations than providers (developers). Nevertheless they must: ensure that AI systems deployed are compliant (due diligence), inform employees when AI decisions affect them, ensure human oversight for high-risk applications, and for some systems carry out a data protection impact assessment under GDPR. Important: anyone who fine-tunes or adapts a GPAI model for their own purposes may become a provider themselves.
The AI Act provides for three fine categories: violations of prohibited AI practices (Art. 5): up to EUR 35 million or 7% of global annual turnover. Violations of high-risk obligations and GPAI rules: up to EUR 15 million or 3% of global annual turnover. Inaccurate information to authorities: up to EUR 7.5 million or 1.5% of annual turnover. In each case the higher amount applies. Proportionally lower caps apply to SMEs and start-ups.
GPAI (General Purpose AI) are AI models trained for a wide variety of tasks and usable across different application contexts - typically large language models such as GPT-4, Gemini or LLaMA. From 2 August 2025, all GPAI providers must provide: technical documentation, a summary of training data (copyright), and EU code-of-practice compliance. For GPAI models with systemic risk (>10^25 FLOPs training) additional requirements apply: adversarial red-teaming (Art. 55), cybersecurity obligations, reporting of serious incidents to the EU AI Office.
The three frameworks complement and overlap each other: the GDPR governs data protection for AI systems that process personal data - which applies to almost all practical AI applications. NIS-2 requires affected entities to secure their entire IT infrastructure, including AI systems. The AI Act adds AI-specific security requirements (robustness, adversarial attacks, human oversight). Companies must comply with all three simultaneously. AWARE7 provides integrated advice: AI Act compliance plus NIS-2 plus GDPR from a single source.
The German Federal Office for Information Security (BSI) has developed the AI Cloud Service Compliance Criteria Catalogue (AIC4), a catalogue of criteria for the security of AI cloud services. AIC4 addresses security aspects such as data integrity, model robustness, transparency and explainability - covering significant parts of the AI Act requirements under Art. 15 (robustness) and Art. 12 (logging). An AIC4 audit by an accredited testing laboratory can serve as evidence of AI Act compliance where requirements overlap. AWARE7 supports AIC4 audits as preparation for AI Act compliance.
Yes - providers of high-risk AI systems under Annex III must register their system in the EU-operated database before placing it on the market or putting it into service (Art. 49 AI Act). Deployers of high-risk AI systems in certain areas (e.g. public administration) must also be registered in the EU database. Registration serves market transparency and facilitates regulatory supervision. Exceptions apply to law enforcement authorities in certain constellations.
Our AI security assessment follows four phases: (1) Classification - determining the risk class of your AI systems under Art. 5, Art. 6 and Annex III of the AI Act. (2) Gap analysis - comparison against all relevant AI Act requirements (data governance, robustness, logging, human oversight, documentation). (3) Technical testing - adversarial testing (adversarial examples, data poisoning, model extraction) and bias analysis. (4) Report and action plan - documentation for conformity assessment, CE marking and authorities. Typical duration: 4 to 10 weeks.
Yes - the AI Act has extraterritorial effect, similar to the GDPR. It applies to: providers placing AI systems on the EU market (regardless of the provider's location), deployers within the EU, and AI systems whose outputs are used in the EU. This means: US, Chinese or Israeli AI providers marketing their products in the EU must comply with the AI Act. Importers and distributors in the EU share responsibility for the compliance of the systems they market.

Schedule an AI Security Assessment

In a free 30-minute call, we analyse your AI systems, determine the relevant risk class under the AI Act, and show which cybersecurity measures are specifically required - with a timeline and fixed-price proposal.

Kostenlos · 30 Minuten · Unverbindlich