Deepfake Security - AI-Generated Deception as an Attack Tool
Deepfakes are AI-generated fake audio, video, or image content that imitates real people. As attack vectors: voice cloning for CEO fraud (vishing 2.0), video deepfakes for video-conference fraud, and synthetic photos for social engineering. Detection: automated deepfake detectors, biometric liveness detection, watermarking standards (C2PA). Prevention: code words, verification processes, multi-factor authentication.
Deepfakes are AI-generated media (audio, video, or images) so realistic that people cannot distinguish them from genuine recordings. As an attack tool, deepfakes take social engineering to a new level: whereas attackers could previously only imitate a CEO in writing, they can now create deceptively realistic replicas of the CEO's voice (voice cloning) or face (face swap).
Deepfake Attack Vectors
Deepfakes as an attack vector:
1. Voice Cloning – CEO Fraud 2.0:
Traditional CEO Fraud:
→ Fake email from "CEO" → CFO transfers money
→ Detection: Check email domain, callback
→ Countermeasure: DMARC, verification by phone
Voice Clone Attack:
→ Attacker collects: CEO interviews, YouTube videos, earnings calls
→ 3 minutes of real voice → ElevenLabs/TortoiseTTS → highly convincing clone
→ Attacker calls CFO (voice call, not text!)
→ CFO hears familiar CEO voice → transfers funds!
Real-life case: 2019, UK energy company
→ CFO received a call from the "CEO" (including his slight German accent)
→ Authorized a transfer of 220,000 EUR
→ Voice was an AI-generated clone
2. Video deepfakes – video conference fraud:
Scenario:
→ Attacker creates a deepfake video of the CEO
→ "Video conference" with CFO → CEO’s face on attacker’s camera
→ "We need this transaction IMMEDIATELY"
Real-life case: Hong Kong 2024
→ Finance employee in a multi-person video conference
→ All other participants: deepfake videos (colleagues and CFO)
→ $25 million transferred!
3. Synthetic Photos – Identity Fraud:
→ LinkedIn profile with AI-generated photo (thispersondoesnotexist.com)
→ Fake employee → Build trust
→ Or: KYC bypass using synthetic ID documents
4. Text Deepfakes (LLM-generated):
→ Hyper-personalized phishing based on the target person
→ Style imitation: Writing style of a colleague learned from emails
→ Also: Comment bots, social media manipulation
Detection of Deepfakes
Technical detection methods:
Visual artifacts (detectable by laypeople):
→ Unnatural blinking patterns (early deepfakes did not blink)
→ Inconsistent lighting: face ≠ background
→ Blurring at the edge of the face (blending artifacts)
→ Teeth/hair: often unnatural
→ Micro-expressions: AI-generated videos often lack subtle micro-expressions
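One of the cues above, blinking behavior, can be turned into a simple automated heuristic. The sketch below assumes a stream of eye-aspect-ratio (EAR) values from some facial-landmark detector (the detector itself is out of scope here); the threshold and the normal blink-rate range are illustrative assumptions, not standardized values.

```python
EAR_BLINK_THRESHOLD = 0.21       # below this, the eye counts as closed (assumed value)
NORMAL_BLINKS_PER_MIN = (8, 30)  # rough physiological range (assumption)

def count_blinks(ear_values):
    """Count closed-eye episodes (runs of consecutive frames under the threshold)."""
    blinks, closed = 0, False
    for ear in ear_values:
        if ear < EAR_BLINK_THRESHOLD and not closed:
            blinks += 1
            closed = True
        elif ear >= EAR_BLINK_THRESHOLD:
            closed = False
    return blinks

def blink_rate_suspicious(ear_values, fps=30):
    """Flag clips whose blink rate falls outside the normal human range,
    e.g. early deepfakes that never blink at all."""
    minutes = len(ear_values) / fps / 60
    if minutes == 0:
        return False
    rate = count_blinks(ear_values) / minutes
    low, high = NORMAL_BLINKS_PER_MIN
    return not (low <= rate <= high)
```

A one-minute clip at 30 fps with no dip below the threshold would be flagged; a clip with a plausible 10-20 blinks would pass.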
Automatic Deepfake Detection:
Microsoft Video Authenticator (limited availability):
→ Analyzes at the pixel level for GAN artifacts
→ Confidence score 0–100%
Hive Moderation API (commercial):
→ Real-time deepfake detection via API
→ Video, audio, images
→ Integration into: KYC systems, video conferencing platforms
Intel FakeCatcher:
→ Analyzes blood flow patterns in the face (rPPG)
→ Real people: Skin color pulsates subtly
→ Deepfakes: No blood flow → detectable
→ 96% detection rate (as of 2024)
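The rPPG principle behind FakeCatcher can be illustrated in a few lines: the mean skin color of a real face pulsates at the heart rate, so a frequency analysis of that time series should show a clear peak in the heart-rate band. The sketch below uses a brute-force DFT over synthetic data; it is a toy illustration of the principle, not Intel's actual algorithm.

```python
import math

def dominant_pulse_hz(samples, fps=30.0, band=(0.7, 3.0), step=0.05):
    """Find the strongest frequency in the heart-rate band
    (0.7-3.0 Hz, roughly 42-180 bpm) of a mean-skin-color time series.
    A real face shows a clear peak; many deepfakes show none."""
    n = len(samples)
    mean = sum(samples) / n
    centered = [s - mean for s in samples]
    best_f, best_power = None, 0.0
    f = band[0]
    while f <= band[1] + 1e-9:
        # one DFT bin at frequency f
        re = sum(c * math.cos(2 * math.pi * f * i / fps) for i, c in enumerate(centered))
        im = sum(c * math.sin(2 * math.pi * f * i / fps) for i, c in enumerate(centered))
        power = re * re + im * im
        if power > best_power:
            best_f, best_power = f, power
        f += step
    return best_f, best_power / n
```

Feeding in a signal that pulsates at 1.2 Hz (72 bpm) recovers that frequency; a flat signal, as a crude deepfake might produce, yields no peak at all.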
C2PA (Coalition for Content Provenance and Authenticity):
→ Standard for digital content provenance
→ Signatures for authentic media (Nikon, Canon, Adobe)
→ Deepfakes: no valid C2PA signature
→ Supported by: Adobe, BBC, Microsoft, Sony
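A first, very coarse C2PA check can be done without any SDK: in JPEG files, C2PA manifests are embedded as JUMBF boxes in APP11 marker segments. The sketch below only tests for the *presence* of such a segment; real verification must validate the cryptographic signature with a proper C2PA toolkit, and absence of a manifest proves nothing by itself.

```python
import struct

def has_c2pa_manifest(jpeg_bytes: bytes) -> bool:
    """Heuristic presence check: walk the JPEG segment list and look for
    APP11 (0xFFEB) segments whose payload mentions the 'c2pa' JUMBF label.
    NOTE: presence only -- this does NOT validate the signature."""
    if jpeg_bytes[:2] != b"\xff\xd8":            # must start with SOI marker
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break
        marker = jpeg_bytes[i + 1]
        if marker in (0xD9, 0xDA):               # EOI / SOS: stop scanning headers
            break
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        payload = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:  # APP11 carrying a c2pa box
            return True
        i += 2 + length
    return False
```

Files signed by C2PA-capable tools (e.g. Adobe Content Credentials) carry such a segment; an unsigned or stripped file simply returns False.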
Audio Deepfake Detection:
Pindrop Pulse:
→ Voice authentication against voice clones
→ Detects synthetic voices in real time
→ Applications: call centers, banking IVR
ElevenLabs AI Speech Classifier:
→ Proprietary tool detects voices generated by its own TTS
→ Limitation: detects only its own generations!
Protective Measures Within the Company
Organizational Deepfake Defense:
1. Code Words (can be implemented immediately!):
→ Internal company verification password
→ For every unusual financial request by phone:
"What is today’s code word?"
→ The CEO knows it; deepfake attackers do not!
→ Change: daily or weekly
→ MANDATORY for executives and the finance team
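Rotating code words can even be derived instead of distributed: with a shared secret, each side can compute today's word locally, so it never travels over the channel an attacker might be calling on. A minimal sketch, assuming a small illustrative wordlist (a real deployment would use a larger, company-specific list and proper secret management):

```python
import hmac, hashlib, datetime

# Illustrative wordlist -- in practice use a larger, company-specific list
WORDLIST = ["amber", "basalt", "cobalt", "dune", "ember",
            "fjord", "granite", "harbor", "indigo", "juniper"]

def daily_code_word(shared_secret: bytes, day: datetime.date) -> str:
    """Derive the verification code word for a given day from a shared
    secret via HMAC-SHA256, mapped onto the wordlist."""
    digest = hmac.new(shared_secret, day.isoformat().encode(), hashlib.sha256).digest()
    index = int.from_bytes(digest[:4], "big") % len(WORDLIST)
    return WORDLIST[index]
```

Both the CEO's and the CFO's side compute the same word for the same date; a deepfake caller without the secret cannot.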
2. Verification processes:
□ Unusual financial requests:
→ ALWAYS confirm via a second, independent channel
→ Call back using a known/saved number (NOT the one given in the call!)
→ If in doubt: personal confirmation
□ Video conference verification:
→ For unknown participants: ask a random question only the real person knows
→ Request a physical gesture (scratch nose, raise hand)
→ Real-time deepfakes often show noticeable jerkiness or lag during such gestures!
3. Technical Controls:
→ Video conferencing platform: Check for end-to-end encryption
(does not protect against deepfakes, but against MITM)
→ Identity verification for external meetings: Calendar invitation + 2FA
→ Media watermarking for internal videos (C2PA)
4. Awareness Training:
→ Show deepfake examples in security awareness training
→ Employees should ALWAYS be suspicious, even with a "real" voice/video
→ "I recognize the voice" = not a security indicator!
→ Reporting process: report suspicious calls immediately
5. Financial Controls:
→ Dual-control principle for transfers > X EUR (always!)
→ Callback requirement: confirm all transfer requests via email/phone
→ Limits: no instant payments > Y EUR without a 24-hour processing period
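The financial controls above can be enforced in code rather than policy alone. A minimal sketch; the concrete thresholds stand in for the document's unspecified "X EUR" / "Y EUR" limits and are purely illustrative:

```python
from dataclasses import dataclass, field

DUAL_CONTROL_LIMIT = 10_000      # illustrative stand-in for "X EUR"
INSTANT_PAYMENT_LIMIT = 50_000   # illustrative stand-in for "Y EUR"

@dataclass
class TransferRequest:
    amount_eur: float
    approvers: set = field(default_factory=set)
    callback_confirmed: bool = False
    instant: bool = False

def may_execute(t: TransferRequest) -> bool:
    """Enforce callback confirmation, dual control above the limit,
    and the instant-payment ceiling before a transfer may run."""
    if not t.callback_confirmed:
        return False                         # no second-channel confirmation yet
    if t.amount_eur > DUAL_CONTROL_LIMIT and len(t.approvers) < 2:
        return False                         # dual-control principle violated
    if t.instant and t.amount_eur > INSTANT_PAYMENT_LIMIT:
        return False                         # must use the 24-hour process instead
    return True
```

The point of encoding this is that a deepfake caller can pressure a person, but not a payment system that refuses single-approver transfers.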
Detection checklist for employees:
□ Unusual request via phone/video?
□ Urgency and time pressure?
□ Callback confirmed via a known number?
□ Is the content of the request consistent with company policy?
□ Was a code word requested?
→ If in doubt: REFUSE and inform your supervisor/IT!
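The checklist above maps directly onto a small decision helper; the field names are illustrative, the logic mirrors the checklist one-to-one:

```python
def assess_request(unusual_channel: bool, time_pressure: bool,
                   callback_confirmed: bool, policy_consistent: bool,
                   code_word_ok: bool) -> str:
    """Mirror the employee checklist: red flags plus failed safeguards
    mean refuse and escalate; anything unverified means verify first."""
    red_flags = unusual_channel or time_pressure
    safeguards_ok = callback_confirmed and policy_consistent and code_word_ok
    if red_flags and not safeguards_ok:
        return "refuse_and_escalate"
    if not safeguards_ok:
        return "verify_first"
    return "proceed"
```

An urgent call on an odd channel with no confirmed callback lands in "refuse_and_escalate"; only a fully verified request proceeds.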
AI-Generated Threat Landscape
Trends and Developments:
Current AI tools for deepfakes:
ElevenLabs: Voice cloning, 3 minutes of audio sufficient
HeyGen: Video avatars, lip-sync
Synthesia: Professional AI videos
Midjourney: Photo-realistic images
RunwayML: Real-time video deepfakes
FaceSwap: Open-source face swap
Costs for Attackers (March 2026):
Voice Clone: ~$1/month (ElevenLabs Starter)
Video Deepfake: ~$30/video (HeyGen)
→ ROI for attackers: a single successful BEC can net 100,000 EUR or more against an investment of only a few euros!
Future threats:
→ Real-time deepfakes: perfect face swap in video conferences (already happening today!)
→ LLM-powered targeting: automatic personalization of attacks
→ Deepfakes + biometric bypass: circumventing liveness tests
→ Synthetic identities on a large scale
C2PA as an industry standard:
→ Authentic media receive a cryptographic signature
→ Goal: "If no C2PA seal → could be a deepfake"
→ Not yet widespread (2026), but a growing trend
→ Adobe Content Credentials: Photoshop, Stock, Firefly
Regulatory developments:
→ EU AI Act: Mandatory deepfake labeling
→ Germany: Section 184k of the Criminal Code (since 2021) covers non-consensual intimate images; a dedicated deepfake offense is under discussion
→ USA: DEEPFAKES Accountability Act (under discussion)