PID Perspectives

A deepfake detection guide for your business


AI tools have made deepfakes cheap to produce, and the Internet is being flooded with them. They pose a risk not only to individuals but also to organizations: they undermine trust, enable fraud, and spread misinformation. So, what can companies do to safeguard their reputations, assets, and stakeholders?

This guide outlines actionable steps to address the challenges posed by deepfakes to your organization. 

What are deepfakes?

Deepfakes are synthetic media, typically videos or audio recordings, that use artificial intelligence (AI) and machine learning techniques to manipulate or generate content that appears real but is not. The term combines “deep learning,” the subset of AI used to create these deceptive materials, with “fake.”

How do deepfakes work?

Deepfakes are typically created using a machine learning method called generative adversarial networks (GANs). GANs consist of two neural networks:

  1. Generator: Creates fake data, such as altered images or audio.
  2. Discriminator: Evaluates the generated data and determines whether it looks real or fake.

The networks improve iteratively until the discriminator can no longer reliably distinguish between real and fake data.
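To make the mechanism concrete, here is a minimal, illustrative GAN training step written in PyTorch. It is a sketch of the adversarial loop rather than a deepfake pipeline: the layer sizes, latent dimension, and the assumption of flattened 28×28 inputs are placeholders chosen for brevity.

# Minimal GAN sketch (PyTorch). Sizes and data shape are illustrative placeholders.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g., a flattened 28x28 image

# Generator: maps random noise to fake data
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)

# Discriminator: scores data as real (1) or fake (0)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch):
    batch = real_batch.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real data from generated data
    noise = torch.randn(batch, latent_dim)
    fake_batch = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_batch), real_labels) + \
             loss_fn(discriminator(fake_batch), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator
    noise = torch.randn(batch, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

# Repeating train_step over many batches of real data drives both networks to improve
# together, until generated samples become hard to tell apart from real ones.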

Deepfake applications

While often associated with negative uses, deepfakes also have legitimate applications:

Positive Uses
  • Entertainment: Used in movies for special effects or to digitally “de-age” actors.
  • Education and Training: Creating realistic simulations for medical or military training or e-learning.
  • Accessibility: Generating synthesized speech or visual aids for individuals with disabilities.
Negative Uses
  • Disinformation: Spreading fake news or manipulating public opinion.
  • Impersonation: Creating false representations of individuals for fraud or identity theft.
  • Non-consensual content: Use of someone’s likeness without consent in harmful contexts, such as intimate “revenge” videos or smear campaigns against public figures.
How can deepfakes harm your business?

Deepfakes can harm a business whether the attack comes from outside or from within the company, and they are becoming an increasingly common insider threat vector.

The UK audio deepfake scam

As an example of an external threat, in 2019 criminals used a deepfake audio file to impersonate the CEO of a UK-based energy company in a sophisticated social engineering attack. The attackers trained an AI model on recordings of the CEO’s voice obtained from public speeches and interviews, and the resulting deepfake audio could convincingly mimic the CEO’s tone, accent, and speech patterns.

Using the deepfake, the attackers called a senior executive of the company, claiming to be the CEO. The fake CEO requested an urgent transfer of €220,000 to a Hungarian supplier, emphasizing the importance of the payment for a business deal. The impersonation was convincing enough that the executive complied, believing the request was genuine, and the funds were wired to the bank account as instructed. By the time the fraud was uncovered, the attackers had moved the money through multiple accounts, making recovery impossible.

Deepfakes as an insider threat vector

Deepfakes can also be powerful tools for insider threats. They enable malicious actors within an organization to exploit trust, bypass security measures, or damage reputations. Here’s how:

Manipulating Communications

  • False Approvals or Instructions: An insider can use deepfake audio or video to impersonate senior leadership, issuing fraudulent directives or approving unauthorized actions.

  • Disinformation Campaigns: Employees with access to internal media can alter recordings to spread disinformation and sow discord among teams or stakeholders.

Undermining Security Protocols

  • Bypassing Biometric Systems: Deepfakes of authorized personnel can be used to bypass facial recognition or voice authentication systems.

  • Social Engineering: Insiders can create deepfake content to manipulate external partners, vendors, or other employees into divulging sensitive information.

Damaging Organizational Reputation

Malicious insiders could release fake videos or statements attributed to the organization, leading to public backlash or loss of trust.

How can we identify deepfakes?

While deepfakes can be highly realistic, there are ways to identify them. Below, you will find a breakdown: 

Visual clues
  • Inconsistent Blinking: Early deepfakes often fail to replicate natural blinking patterns. While newer models are better, inconsistencies may still appear.
  • Irregular Facial Movements: Lips out of sync with the audio, unnatural facial expressions, or jarring transitions are all indicators of a deepfake.
  • Lighting Issues: Shadows and other details might not align with the scene.
  • Image Artifacts: Look for blurring around the edges of the face and distorted textures in hair, skin, or clothing.
  • Unnatural Eye or Teeth Detail: Unusual reflections in the eyes or overly perfect teeth may indicate manipulation.
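Some of these visual cues can be partially automated for triage. The sketch below uses OpenCV to flag frames whose sharpness jumps abruptly from one frame to the next, which can accompany splicing or face-swap artifacts; the file name and threshold are illustrative assumptions, and this is a prioritization aid for manual review, not a deepfake detector.

# Sketch: flag frames whose sharpness changes abruptly between consecutive frames.
# A crude triage heuristic, not a deepfake detector. Requires OpenCV (cv2);
# the video file name and jump_ratio threshold are illustrative assumptions.
import cv2

def flag_sharpness_jumps(video_path, jump_ratio=2.0):
    cap = cv2.VideoCapture(video_path)
    prev_score, frame_idx, flagged = None, 0, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        score = cv2.Laplacian(gray, cv2.CV_64F).var()  # higher = sharper frame
        if prev_score and (score > prev_score * jump_ratio or score * jump_ratio < prev_score):
            flagged.append(frame_idx)
        prev_score, frame_idx = score, frame_idx + 1
    cap.release()
    return flagged

print(flag_sharpness_jumps("suspect_clip.mp4"))  # placeholder file name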
Audio clues
  • Unnatural Speech Patterns: Robotic or monotone voice, mismatched lip movements, or awkward pauses.
  • Environmental Noise Mismatch: Background noise inconsistencies with the setting.
Behavioral indicators
  • Too Perfect Content: If the content appears unusually dramatic or fits a narrative too perfectly, it might be suspicious.
  • Lack of Source Verification: The absence of credible sources or metadata for videos and images.
AI-based detection tools
  • Microsoft Video Authenticator: Analyzes photos and videos to provide a confidence score on authenticity.
  • Deepware Scanner: A mobile app for scanning media for potential deepfake signs.
  • Sensity AI: Detects and tracks deepfakes used in phishing, disinformation, or fraud campaigns.
Forensic tools
  • Image Forensics: Tools like InVID or Fotoforensics analyze metadata and visual inconsistencies.
  • Audio Analysis: Voice biometrics software like Descript and Resemble AI detects manipulated speech patterns.
Social media and browser plugins

Many platforms are incorporating deepfake detection into their content moderation systems. In addition, browser plugins that flag potential deepfakes are becoming more widely available; examples include DeepFakeProof, DeepFakeDetector, and Verifiction.

A Guide to Deepfake Detection and Mitigation Strategies in Your Organization

The steps below walk through how to detect and mitigate deepfakes across your organization.

1. Build awareness and media literacy

1.1 Train Your Workforce

  • Workshops and Seminars: Conduct regular training sessions to educate employees about deepfakes, their risks, and how to identify them.

  • Educational Materials: Develop and distribute resources, such as infographics, videos, and case studies, on the impact of deepfakes.

1.2 Promote Critical Thinking

  • Encourage employees to evaluate media critically, questioning its authenticity and source.

  • Use real-world examples of deepfakes to highlight red flags.

2.1 Content Verification Tools

  • Adopt AI Tools:

    • Microsoft Video Authenticator: Detects manipulated videos.

    • Deepware Scanner: Scans for deepfake signatures.

  • Metadata Analysis:

    • Use tools like Fotoforensics to analyze metadata and image integrity.

    • For photos, review EXIF data (camera make and model, timestamps, editing software) for gaps or inconsistencies; a minimal sketch follows this list.
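As a minimal sketch of that EXIF check, the snippet below uses the Pillow library to surface a few capture-related tags. The file name is a placeholder, and missing EXIF data is a prompt for further verification rather than proof of manipulation.

# Illustrative EXIF check with Pillow: missing or stripped capture metadata
# is not proof of manipulation, but it is a reason to verify further.
from PIL import Image
from PIL.ExifTags import TAGS

def summarize_exif(path):
    exif = Image.open(path).getexif()
    if not exif:
        print(f"{path}: no EXIF data found (possibly stripped or generated)")
        return
    for tag_id, value in exif.items():
        tag = TAGS.get(tag_id, tag_id)
        if tag in ("Make", "Model", "DateTime", "Software"):
            print(f"{path}: {tag} = {value}")

summarize_exif("suspect_photo.jpg")  # placeholder file name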

2.2 Establish Protocols for Media Verification

  • Require employees to:

    • Cross-check media with trusted sources.

    • Use reverse image or video searches to trace origins.
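The cross-checking step can be partially automated with perceptual hashing, which measures how visually similar a received image is to a trusted original. The sketch below uses the third-party imagehash package; the file paths and the distance threshold are assumptions to adapt to your own media archive.

# Sketch: compare a received image to a trusted original using perceptual hashing.
# Small hash distances suggest the same underlying image; large distances suggest
# heavy editing or a different source. Requires the Pillow and imagehash packages.
from PIL import Image
import imagehash

def looks_like_original(received_path, trusted_path, threshold=8):
    received = imagehash.phash(Image.open(received_path))
    trusted = imagehash.phash(Image.open(trusted_path))
    distance = received - trusted  # Hamming distance between the two hashes
    print(f"Hash distance: {distance}")
    return distance <= threshold

# File names below are placeholders for your own media archive.
if not looks_like_original("incoming_press_photo.jpg", "archive/press_photo_v1.jpg"):
    print("Image differs substantially from the trusted original; escalate for review.")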

2.3 Multi-Layered Review

Implement a multi-step review process for sensitive content before dissemination.

3.1 Integrate AI Detection Systems

Deploy AI tools capable of real-time monitoring and detecting deepfakes within communication channels, such as email and social media platforms.

3.2 Use Blockchain for Content Authentication

Employ blockchain solutions to embed tamper-proof metadata in organizational media, and partner with blockchain-based platforms to verify the integrity of external media sources.
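Whichever platform you choose, the underlying step is to compute a cryptographic fingerprint of each official asset and register it somewhere tamper-evident. The sketch below shows only that first step, a SHA-256 digest; the file name is a placeholder, and anchoring the digest on a specific blockchain or timestamping service is left to the platform you select.

# Sketch: fingerprint official media with SHA-256. The resulting digest is what
# you would register on a blockchain or trusted timestamping service (not shown);
# re-hashing the file later and comparing digests reveals any modification.
import hashlib

def fingerprint(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder file name; in practice you would fingerprint every published asset.
print(fingerprint("official_statement.mp4"))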

3.3 Implement Digital Watermarking

Embed invisible watermarks in official media to validate their authenticity and regularly monitor for unauthorized use of your organization’s branded content.
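Commercial watermarking products are far more robust, but a toy least-significant-bit (LSB) watermark illustrates the principle of hiding an invisible tag in official imagery. In the sketch below the file names and tag string are assumptions, and this simple mark would not survive re-encoding, resizing, or cropping.

# Sketch of a least-significant-bit (LSB) watermark, a toy stand-in for commercial
# invisible-watermarking tools. It hides a short ASCII tag in the blue channel of a
# PNG; robust products survive re-encoding, this example does not.
import numpy as np
from PIL import Image

def embed_tag(in_path, out_path, tag):
    pixels = np.array(Image.open(in_path).convert("RGB"))
    bits = [int(b) for byte in tag.encode() for b in f"{byte:08b}"]
    flat = pixels[:, :, 2].flatten()
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits  # overwrite the lowest bit
    pixels[:, :, 2] = flat.reshape(pixels[:, :, 2].shape)
    Image.fromarray(pixels).save(out_path, "PNG")

def read_tag(path, length):
    flat = np.array(Image.open(path).convert("RGB"))[:, :, 2].flatten()
    bits = flat[: length * 8] & 1
    return bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits), 8)).decode()

embed_tag("logo.png", "logo_marked.png", "ACME-2024")   # placeholder file names and tag
print(read_tag("logo_marked.png", len("ACME-2024")))    # -> ACME-2024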

4.1 Develop Internal Policies

  • Create guidelines for:

    • Handling suspicious media.

    • Reporting potential deepfakes.

  • Define penalties for employees found creating or sharing malicious deepfakes.

4.2 Advocate for Regulatory Support

  • Collaborate with industry bodies to support laws regulating the creation and malicious use of deepfakes.

  • Stay informed about global and local legislation addressing synthetic media.

5.1 Secure Communication Channels

  • Use end-to-end encryption for sensitive communications to prevent manipulation.

  • Monitor for phishing attempts leveraging deepfake technology.

5.2 Multi-Factor Authentication (MFA)

Require MFA for accessing organizational accounts to reduce risks of impersonation.
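MFA is normally enforced through your identity provider or single sign-on platform rather than custom code, but for internal tooling a time-based one-time password (TOTP) check can look like the sketch below, which uses the third-party pyotp package; the account name and issuer are placeholders.

# Sketch: TOTP-based second factor for an internal tool, using the pyotp package.
# In practice MFA should be enforced by your identity provider / SSO platform;
# this only illustrates the mechanism. Account name and issuer are placeholders.
import pyotp

secret = pyotp.random_base32()  # generate once per user and store server-side
totp = pyotp.TOTP(secret)

# The provisioning URI can be rendered as a QR code for an authenticator app.
print(totp.provisioning_uri(name="jane.doe@example.com", issuer_name="ExampleCorp"))

entered_code = input("Enter the 6-digit code from your authenticator app: ")
print("Access granted" if totp.verify(entered_code) else "Access denied")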

6.1 Proactive Threat Monitoring

  • Set up AI-driven monitoring systems to scan social media, news outlets, and internal communication for fake or malicious content.

  • Partner with cybersecurity organizations for continuous threat assessment.

6.2 Establish an Incident Response Team

  • Form a dedicated team to address deepfake-related incidents, ensuring quick identification, containment, and resolution.

  • Develop and test response protocols regularly.

7.1 Partner with Technology Providers

Collaborate with developers of advanced detection tools to stay ahead of evolving deepfake technologies.

7.2 Engage with Industry Groups

Join industry forums and working groups focused on combating synthetic media threats. Share insights and best practices with peer organizations.

8.1 Stay Informed

  • Keep abreast of advancements in AI and deepfake technologies.

  • Update detection tools and protocols to counter new threats.

8.2 Conduct Periodic Audits

  • Review your organization’s policies and technical solutions to ensure they remain effective and relevant.

  • Use audit findings to refine your approach.

Implementing these strategies can significantly reduce your organization’s vulnerability to deepfake threats. 
