
AI against AI: how Artificial Intelligence influences cyberattacks and defends against them in cybersecurity


The use of Artificial Intelligence (AI) is exploding across the Internet. We see it more and more often on our phones, in our cars, and on social media. And while its adoption for entertainment purposes keeps accelerating, it is also changing the way cyberattacks are carried out and the way cybersecurity operates against them. In cybersecurity, AI is already fighting AI.

How AI influences cyberattacks

On the offensive side, attackers make increasing use of artificial intelligence on several fronts:

  • Automated attacks: AI-powered tools can automate attacks such as phishing, brute force, and malware distribution. These tools can find vulnerabilities faster and more effectively than human hackers, reducing the time and effort needed for exploitation.
  • Adaptive malware: AI enables the creation of more sophisticated malware that can evolve. Such malware can learn from defensive measures and change its behaviour, making detection and eradication harder. For instance, AI-powered ransomware can target high-value files and adjust encryption algorithms dynamically.
  • Social Engineering and Deepfakes: Attackers use AI to improve social engineering attacks. By analyzing massive amounts of data, AI can craft more personalized phishing emails or generate deepfake audio and video that manipulate users into divulging sensitive information or authorizing transactions.
  • Vulnerability identification: AI models trained on large datasets of software vulnerabilities can rapidly identify potential security flaws in software, including zero-day vulnerabilities, which attackers can then exploit. AI enables the faster identification of these vulnerabilities across broader attack surfaces.
  • AI against AI: Attackers can reverse-engineer AI-based defence mechanisms and exploit weaknesses in machine learning models used by defenders, such as finding adversarial inputs that make AI misclassify or overlook threats.
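
To make the last point concrete, here is a minimal sketch of an adversarial input against a toy logistic-regression detector. The weights, features, and numbers are all invented for illustration; this is not any real product’s model or a working attack tool.

```python
import numpy as np

# Toy "detector": a hand-set logistic regression over two features
# (e.g. link count and suspicious-word count in an email).
w = np.array([1.8, 2.1])   # assumed learned weights
b = -2.0                   # assumed bias

def malicious_score(x):
    """Probability the detector assigns to 'malicious'."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([1.2, 0.9])                              # a genuinely malicious sample
print(f"original score: {malicious_score(x):.2f}")    # flagged (> 0.5)

# FGSM-style perturbation: step each feature against the gradient of the
# score, i.e. in the direction that lowers the detector's confidence.
epsilon = 0.6
x_adv = x - epsilon * np.sign(w)   # gradient of w @ x + b w.r.t. x is w
print(f"adversarial score: {malicious_score(x_adv):.2f}")  # now below 0.5
```

The nudge is small from the attacker’s point of view, yet it pushes the sample across the decision boundary; real adversarial machine learning operates on far larger feature spaces, but the principle is the same.
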
AI gone rogue: DeepLocker and Emotet

AI-based malicious software is a growing threat as attackers increasingly leverage machine learning (ML) and AI techniques to enhance their capabilities. Two examples are DeepLocker, an AI-powered proof of concept, and Emotet, an AI-enhanced malware strain.

DeepLocker was developed as a proof of concept by IBM Research to demonstrate how AI can be weaponized in cyberattacks. DeepLocker uses AI to remain dormant and undetectable until it reaches its intended target. It is designed to hide its malicious payload inside legitimate applications (like a video conferencing app). It uses AI models to identify the target based on biometric markers like facial recognition, geolocation, voice patterns, or other environmental cues. Only when the target is detected (i.e., the malware’s AI model recognizes the victim’s face or location) will it activate and execute its payload. This precision targeting allows DeepLocker to remain undetected in other environments, avoiding standard security tools and scans.  

Emotet is a notorious malware strain that initially started as a banking Trojan but has evolved into a highly modular, AI-enhanced malware capable of various attacks, including ransomware distribution and credential theft. While Emotet isn’t fully AI-driven, it incorporates machine learning techniques to improve its success rate. In particular, Emotet uses AI to generate polymorphic versions of its malicious code. This means the malware changes its appearance (e.g., file signatures) on the fly, helping it bypass traditional signature-based antivirus tools. Using machine learning, Emotet adapts its behaviour depending on the environment it finds itself in. For example, if it detects that it is in a sandbox or virtual machine (environments typically used by security researchers to analyze malware), it will stay dormant or behave differently to avoid detection. This feature allows it to evade forensics and behavioural analysis. 

How AI impacts cybersecurity

On the defensive side, cybersecurity also takes advantage of artificial intelligence capabilities for: 

  • Threat detection and response: AI is used in security tools to detect patterns of suspicious behaviour in real time. Machine learning models can analyze vast amounts of log data to identify anomalies, potential intrusions, or malicious activities faster and more accurately than traditional methods.
  • Endpoint protection: AI-powered endpoint detection and response (EDR) systems continuously monitor devices for abnormal activity, allowing faster detection of malware or ransomware, especially zero-day attacks. These systems can automatically quarantine infected systems and mitigate damage.
  • Automated Threat Intelligence: AI-driven tools can collect, analyze, and prioritize threat intelligence from various sources in real-time, helping organizations stay ahead of emerging threats. This includes analyzing darknet forums, threat databases, and social media for early warning signs of attacks.
  • Behavioural Analytics: AI models can profile the typical behaviour of users, devices, and network traffic. If users start acting outside their standard patterns, such as accessing sensitive systems at odd hours, AI can trigger alerts or initiate an automated response.
  • Incident response automation: AI can orchestrate and automate incident response processes. For example, once a threat is detected, AI can trigger predefined responses like isolating the affected system, notifying the cybersecurity team, and gathering forensic data for further investigation.
  • Fraud Detection: In sectors like finance, AI systems analyze transactional data in real-time to detect fraudulent activity. AI models are trained to recognize patterns indicative of fraud or suspicious behaviour, significantly reducing the time to detection.
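
As a rough illustration of the threat-detection and behavioural-analytics items above, the following sketch uses scikit-learn’s IsolationForest to learn “normal” activity from log-derived features and flag an outlier. The feature choices and numbers are assumptions for illustration, not any vendor’s implementation.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy anomaly detection over log-derived features per user-hour:
# [logins, MB downloaded, distinct hosts contacted]. Numbers are invented.
rng = np.random.default_rng(0)
normal_activity = np.column_stack([
    rng.poisson(3, 500),          # typical login counts
    rng.normal(50, 10, 500),      # typical download volume (MB)
    rng.poisson(5, 500),          # typical number of hosts contacted
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_activity)

# A new observation: few logins but a huge download spread across many hosts,
# the kind of deviation that might indicate data exfiltration.
suspicious = np.array([[2, 900.0, 60]])
print(model.predict(suspicious))   # -1 means "anomaly", 1 means "normal"
```
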
AI for the Justice League: Darktrace and CylancePROTECT

AI-based cybersecurity software has become a vital tool in helping organizations defend against increasingly sophisticated attacks. Two notable examples are Darktrace and CylancePROTECT. 

Darktrace is an AI-driven cybersecurity platform that uses machine learning and advanced algorithms to detect and respond to cyber threats in real time. It is based on the concept of “self-learning AI,” which means it can learn from the environment it protects without human intervention or predefined rules. Darktrace builds a model of what constitutes normal behaviour in a network. It continuously monitors the behaviour of users, devices, and systems, identifying subtle deviations or anomalies that could signal an emerging threat. By analyzing behaviour patterns, Darktrace can identify insider threats, data exfiltration, and ransomware attacks before they cause significant damage. Darktrace includes a feature called the “Autonomous Response” (Antigena), which can take automated actions to neutralize or contain cyber threats in real-time. For example, if it detects ransomware spreading, it can isolate infected devices without disrupting the rest of the network. All in all, Darktrace’s AI can assist security teams by automatically investigating incidents, providing context, and reducing the manual workload for analysts. 

CylancePROTECT, developed by BlackBerry, is an AI-based endpoint protection platform. Unlike traditional antivirus solutions that rely on signature-based detection, CylancePROTECT uses machine-learning algorithms to predict and prevent threats before they execute. It relies on AI models trained on large datasets of known malware to identify malicious files, applications, or processes, detecting and blocking threats based on file characteristics and behaviour rather than on known malware signatures. Since it doesn’t depend on signatures, it is highly effective at detecting and stopping zero-day exploits, which target vulnerabilities not yet identified by traditional security tools. The AI models are lightweight and can function without constant cloud connectivity, making the product suitable for remote and offline environments, and it operates with low CPU and memory usage, ensuring minimal impact on system performance. Furthermore, CylancePROTECT can detect malicious scripts and prevent memory-based attacks such as fileless malware, which often evade traditional security measures.
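
The following sketch illustrates the general idea of signature-free, feature-based file classification described above. It is not CylancePROTECT’s actual model; the features, data, and labels are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Static file features (all illustrative): [byte entropy, packed flag,
# number of imported APIs, count of suspicious API names].
X_train = np.array([
    [4.2, 0, 120, 1],   # benign examples (label 0)
    [5.0, 0, 200, 0],
    [4.8, 0,  90, 2],
    [7.6, 1,  12, 9],   # malicious examples (label 1)
    [7.9, 1,   8, 7],
    [7.2, 1,  15, 8],
])
y_train = np.array([0, 0, 0, 1, 1, 1])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# A previously unseen file: no known signature, but its static features
# resemble the malicious training examples, so the model flags it.
unknown_file = np.array([[7.7, 1, 10, 6]])
print(clf.predict_proba(unknown_file))   # probabilities for [benign, malicious]
```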

AI vs traditional tools: a shift towards behavioural analysis

If you have read this far, you have probably noticed that we used the word “behaviour” a lot. Behavioural analysis in AI-based cybersecurity software represents a significant evolution from traditional, signature-based tools. Traditional antivirus and intrusion detection systems (IDS) rely on a database of known signatures (patterns, code, or characteristics) corresponding to previously identified malware, viruses, or attacks. These tools scan files, processes, and network traffic for known signatures and flag anything that matches a recognized threat. This approach has several limitations:

  • Inability to Detect Unknown Threats: Since these tools rely on previously seen patterns, they are ineffective against new, unknown threats (such as zero-day vulnerabilities or polymorphic malware that changes its signature).
  • Slow Updates: Signature databases need regular updates to stay current. The time lag between the discovery of a new threat and the release of a corresponding signature leaves systems vulnerable during this period.
  • File-Based Detection: Signature-based tools often struggle with modern, fileless malware that doesn’t rely on executable files but resides in memory or uses legitimate system tools (like PowerShell) to carry out malicious actions.
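
For contrast, here is what the traditional approach boils down to, as a minimal sketch (the hash set is a placeholder): detection is a lookup against known fingerprints, so a sample whose hash has never been catalogued sails through.

```python
import hashlib

# Minimal signature-based detection: hash the file and look it up in a
# database of known-bad fingerprints (placeholder value below).
KNOWN_BAD_SHA256 = {
    "0f343b0931126a20f133d67c2b018a3b1e9b7c914c2e7a5a4b9e6d7c8f0a1b2c",
}

def is_known_malware(path: str) -> bool:
    """Return True only if the file's hash matches a catalogued signature."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in KNOWN_BAD_SHA256

# A brand-new or polymorphic sample produces a hash the database has never
# seen, so this check returns False even if the file is malicious.
```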

AI-based behavioural analysis systems don’t rely on predefined signatures. Instead, they continuously monitor the behaviour of systems, users, applications, and network traffic to detect anomalies or suspicious patterns that deviate from what is considered normal. These systems can learn what typical “good” behaviour looks like and identify subtle deviations that could indicate malicious activity. There are several advantages to this approach:

  • Detection of Unknown and Zero-Day Threats: AI-driven tools can detect previously unknown threats by analyzing behaviours rather than relying on known patterns. If a program suddenly tries to encrypt large numbers of files (as ransomware would), behavioural analysis will flag this action as suspicious, even if the malware is brand new.
  • Real-Time Threat Detection: AI-based tools can detect suspicious behaviour as it happens, enabling organizations to respond in real-time. For example, if an insider suddenly starts accessing files they don’t usually touch, AI can alert security teams or trigger an automatic response, potentially preventing data theft or other damage.
  • Adaptability: Since AI-powered behavioural analysis systems learn and adapt over time, they can recognize new attack vectors as they evolve. The more data they process, the better they identify threats, even if they constantly change.
  • Reduced False Positives: Behavioural analysis reduces false positives compared to traditional systems. While signature-based tools can mistakenly flag legitimate activity as malicious (based on outdated or inaccurate signatures), AI-based systems are more context-aware. They can better differentiate between benign and harmful activity.
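
To make the baseline-and-deviation idea behind these advantages concrete, here is a minimal sketch using only the standard library. The user’s activity figures and the three-sigma threshold are assumptions for illustration.

```python
import statistics

# Toy behavioural baseline: hourly counts of sensitive-file accesses for one
# user over the past two weeks (numbers invented for illustration).
baseline = [2, 3, 1, 2, 4, 3, 2, 1, 3, 2, 2, 4, 3, 2]
mean = statistics.mean(baseline)
stdev = statistics.pstdev(baseline)

def looks_anomalous(observed: int, threshold: float = 3.0) -> bool:
    """Flag behaviour more than `threshold` standard deviations above normal."""
    return (observed - mean) / stdev > threshold

print(looks_anomalous(3))    # False: within the user's normal range
print(looks_anomalous(40))   # True: e.g. sudden bulk access to sensitive files
```
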
Challenges of behavioural analysis

Learning Curve: AI-based behavioural systems need time to learn what constitutes “normal” behaviour for a given environment. During this initial learning phase, there might be more false positives or missed detections until the model becomes familiar with the system’s baseline activity.

Resource-Intensive: Behavioural analysis often requires more computational resources than signature-based detection because it constantly monitors and analyzes large volumes of data in real time. This can strain network and system resources, especially in larger organizations.

Complexity of Interpretation: Behavioural analysis tools may detect anomalies, but interpreting those anomalies still requires skilled security teams. Not all deviations from normal behaviour are malicious, so understanding the context of alerts is crucial to avoid unnecessary disruption.

Hybrid approaches to cybersecurity

Many modern cybersecurity solutions combine both traditional signature-based detection and AI-driven behavioural analysis. This hybrid approach offers the best of both worlds: quick detection of known threats using signatures, combined with the ability to detect novel, unknown, or sophisticated attacks using behavioural analysis. This combination allows security tools to continue detecting straightforward, previously identified threats while leveraging AI to spot emerging or stealthier threats that evade traditional defences.
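
A hybrid pipeline can be as simple as the following sketch: signatures answer first, and a behavioural score acts as the backstop. The function name, score scale, and threshold are assumptions standing in for the components sketched earlier in this post.

```python
def hybrid_verdict(file_hash: str, behaviour_score: float,
                   known_bad: set[str], threshold: float = 0.8) -> str:
    """Combine a signature lookup with a behavioural model's score.

    `behaviour_score` is assumed to come from an ML model rating observed
    behaviour between 0.0 (benign) and 1.0 (malicious).
    """
    if file_hash in known_bad:
        return "block (known signature)"
    if behaviour_score >= threshold:
        return "block (behavioural anomaly)"
    return "allow"

# Example: no known signature, but highly anomalous behaviour.
print(hybrid_verdict("ab12cd34ef56", 0.93, known_bad={"deadbeefcafe"}))
```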

Possible evolution scenarios

It’s hard to predict where adversarial AI strategies will lead the technology. We are already witnessing a continuous back-and-forth in which both attack and defence AI evolve rapidly, each adapting to the other’s advances. Some of the possible outcomes are:

  • Adversarial machine learning (AML), where attackers develop methods to exploit weaknesses in machine learning models. They generate “adversarial inputs”—data designed to confuse or mislead AI systems by making minor modifications that cause significant errors in classification or detection. 
  • AI could enable the creation of highly sophisticated and automated attack toolkits. These tools would allow even less-skilled attackers to launch complex cyberattacks by relying on AI to handle much of the decision-making and execution. In response, defenders would depend on AI to automate and scale defence mechanisms.
  • AI might evolve to a point where entire cyberattacks and defensive responses are conducted autonomously without human intervention. Attack and defence AI could operate at machine speed, launching, adapting, and countering attacks in real time. 
  • AI could further commercialize cybercrime through Cybercrime-as-a-Service (CaaS) platforms, where attackers rent out AI-powered tools to others. These platforms would provide pre-built tools for conducting malware campaigns, launching AI-enhanced DDoS attacks, or generating phishing kits.
  • AI could be used to automate and scale disinformation campaigns by generating and distributing fake content (such as deepfake videos, fake news articles, or social media posts) that influence public opinion, cause confusion, or destabilize organizations.
How do we make the best of AI?

While speculation can take us to very different scenarios, it’s essential to acknowledge that some of them may be only a few steps away. While humans can’t compete with AI in speed of analysis, we can understand AI’s inner workings and steer the technology toward improving human lives.
