New research from Cybersecurity at MIT Sloan (CAMS) and Safe Security examined 2,800 ransomware attacks and found that 80% of them were powered by artificial intelligence. Attackers are using AI to create malware, phishing campaigns, and deepfake-driven social engineering, such as fake customer service calls. Large language models are being employed to generate attack code and phishing content, and AI is also enabling password cracking, CAPTCHA bypass, and more.
Perhaps you’re thinking that the answer is to fight fire with fire by building AI-powered defenses. But that’s only part of what’s needed, according to the researchers.
“AI-powered cybersecurity tools alone will not suffice,” they write. “A proactive, multi-layered approach — integrating human oversight, governance frameworks, AI-driven threat simulations, and real-time intelligence sharing — is critical.”
The researchers argue that a comprehensive approach to combating AI-enabled threats consists of three types of defense, all of which are essential:
- Automated security hygiene, such as self-healing software code, self-patching systems, continuous attack surface management, zero-trust-based architecture, and self-driving trustworthy networks. Automating these routine tasks reduces manual workloads while strengthening protection against attacks that target core system vulnerabilities.
- Autonomous and deceptive defense systems, which use analytics, machine learning, and real-time data collection to learn from, identify, and counteract threats. Examples include automated moving-target defense and the use of deceptive tactics and decoy information. Both kinds of systems let teams take a proactive approach to defense rather than getting stuck in reactive mode.
- Augmented oversight and reporting, which give executives real-time data-driven insights. For example, automated risk analysis uses AI to spot emerging threats and predict how they might impact an organization.
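To make one of these ideas concrete, consider the moving-target defense mentioned above. The sketch below is a toy illustration, not anything from the report: a service and its authorized clients derive the current listening port from a shared secret and the current time window, so the service "moves" on a schedule and a port an attacker scanned earlier is likely stale by the time it is exploited. All names and parameters here are hypothetical.

```python
import hashlib
import time

# Toy moving-target defense: the service and its authorized clients
# both derive the current port from a shared secret plus the current
# time window, so the listening port rotates on a fixed schedule.
# Everything here is illustrative, not taken from the CAMS report.

SECRET = b"shared-deployment-secret"   # distributed out of band
ROTATION_SECONDS = 300                 # rotate every 5 minutes
PORT_RANGE = range(20000, 60000)       # ephemeral range to hop within

def current_port(secret: bytes, now: float,
                 window: int = ROTATION_SECONDS) -> int:
    """Derive the port for the time window containing `now`."""
    epoch = int(now) // window                       # window index
    digest = hashlib.sha256(secret + epoch.to_bytes(8, "big")).digest()
    offset = int.from_bytes(digest[:4], "big") % len(PORT_RANGE)
    return PORT_RANGE.start + offset

# Both sides compute the same port inside a window; after the window
# rolls over, the derived port almost certainly changes.
port_now = current_port(SECRET, time.time())
```

The point of the design is asymmetry in the defender's favor for once: reconnaissance data decays on a fixed clock, so an attacker must rediscover the target within a single rotation window, while legitimate parties pay nothing beyond recomputing a hash.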
What this means for cybersecurity today
Cybersecurity professionals should look at the history of successful cyberdefenses — protections against phishing, social engineering, and malware attacks, for example — and consider how familiar forms of attack could evolve with the addition of AI.
“The autonomous nature of things has caused there to be a reexamination of the way in which we defend ourselves and the way in which we have to look at both old- and new-style attacks,” said Michael Siegel, the principal research scientist and director at CAMS, and an author of the report.
But it remains to be seen how the eternal game of whack-a-mole will change for security teams now that AI is routinely used in both attack and defense.
“Can we crack the asymmetric warfare nature of cybersecurity?” Siegel asked. “Remember that the attacker only needs one point of entry and exploitation, while the defender must stop all entry points and be resilient to all exploitations.”