
Why AI Is Making Ransomware Smarter, Faster, and Harder to Detect
April 23, 2026
Ransomware has been a problem for over a decade. Businesses have lost billions to it. Governments have scrambled to respond. Security vendors have built entire product lines around stopping it.
And yet, in 2026, it is getting worse.
Not because attackers suddenly got more creative, but because they handed their playbook to AI — and AI is running it better than any human team ever could.
The numbers tell a bleak story. IBM’s X-Force Threat Intelligence Index 2026 found that supply chain and third-party breaches have quadrupled over the past five years. Meanwhile, AI-powered attacks are reportedly 40 times more effective than conventional methods, largely because they adapt in real time to whatever defences they hit. That last part is the bit that should worry you most.
How AI Has Changed the Ransomware Game
Traditional ransomware was, in many ways, a blunt instrument. Attackers would craft a phishing email, blast it to thousands of targets, and wait. Maybe 1 in 100 would click. From there, the malware would encrypt files, demand payment, and the attacker would move on. Crude, but profitable.
AI has dismantled that model and replaced it with something far more dangerous.
Phishing that actually reads like a real person wrote it. Gone are the days of broken English and suspicious attachments. AI-generated phishing emails now mimic writing styles, pull context from social media, and personalise content to individual targets. IBM security researchers flagged credential harvesting via AI-assisted phishing as one of the fastest-growing attack vectors heading into 2026. Your finance manager is not getting a generic “click here” email anymore. They are getting something that looks like it came from your CEO, written in the exact tone your CEO uses.
Ransomware that hunts for its moment. Earlier variants would encrypt everything immediately on execution. Newer AI-driven strains are patient. They study network behaviour first — learning what normal activity looks like, mapping high-value systems, identifying backup locations. When the attack finally triggers, it does so strategically: during off-hours, after disabling shadow copies, targeting the assets that will cause the most pain. Defenders call this “dwell time” abuse. Attackers are weaponising it.
Malware that rewrites itself to avoid detection. Signature-based antivirus works by recognising known malware patterns. AI-generated malware is designed to make that irrelevant. It mutates its own code continuously, producing new variants faster than any threat intelligence feed can catalogue them. SentinelOne and CrowdStrike have both documented this shift — traditional detection methods simply cannot keep up with the velocity.
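To see why byte-level signatures are so brittle, consider a minimal sketch. The "signature database" here is just a set of SHA-256 hashes of known samples — a deliberately simplified stand-in for a real AV signature feed, and all names are illustrative. A single changed byte in a functionally identical payload produces an entirely different hash, so the lookup misses it:

```python
import hashlib

# Hypothetical signature database: SHA-256 hashes of known-bad samples.
KNOWN_SIGNATURES = {
    hashlib.sha256(b"original_payload_v1").hexdigest(),
}

def signature_match(sample: bytes) -> bool:
    """Return True if the sample's exact hash is in the known-bad set."""
    return hashlib.sha256(sample).hexdigest() in KNOWN_SIGNATURES

original = b"original_payload_v1"
mutated  = b"original_payload_v2"  # functionally identical, one byte changed

print(signature_match(original))  # → True:  the catalogued variant is caught
print(signature_match(mutated))   # → False: a trivial mutation evades the hash
```

Real engines use fuzzier pattern matching than a plain hash, but the underlying arms race is the same: any static fingerprint can be invalidated faster than it can be catalogued, which is why behavioural detection matters.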
Commercialised attack kits on the dark web. This might be the most unsettling development. Cybercrime prompt playbooks — essentially step-by-step AI-assisted attack frameworks — are being sold on dark web marketplaces. Someone with no technical background can now run a sophisticated ransomware campaign. The barrier to entry has not just lowered. It has nearly disappeared.
What you end up with is ransomware that is faster to deploy, harder to catch mid-operation, and increasingly difficult to attribute to any single group. Security teams are fighting at human speed against attacks running at machine speed. That gap is the real problem.
What Organisations Need to Do Differently
The instinct is to buy a new tool. A better firewall, an upgraded endpoint solution, a threat intelligence subscription. None of that is wrong, but it misses the point.
The organisations holding up best against AI-powered ransomware are not the ones with the biggest security budgets. They are the ones who have fixed the basics — and fixed them consistently.
IBM’s X-Force report puts it plainly: many incidents in 2025 and 2026 trace back to lapses in foundational security hygiene, not sophisticated zero-days. Weak identity management, unpatched systems, absent multi-factor authentication. Attackers do not need AI to beat you if you leave a door unlocked.
That said, the defensive side of AI is equally powerful — and underused. AI-driven threat detection tools can monitor network traffic, user behaviour, and application activity in real time, catching anomalies that no human analyst would spot at scale. The catch is that most organisations deploy these tools without proper configuration or governance. A tool running unsupervised without tuning is nearly as useless as no tool at all.
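The core idea behind that kind of anomaly detection can be sketched in a few lines. This is a toy illustration, not any vendor's method: build a baseline of some per-hour metric (here, a made-up series of failed-login counts) and flag observations that sit several standard deviations above the mean:

```python
from statistics import mean, stdev

# Hypothetical baseline: failed-login counts per hour during normal operation.
baseline = [3, 5, 2, 4, 6, 3, 5, 4, 2, 3, 5, 4, 3, 6]

def is_anomalous(observed: int, history: list[int], threshold: float = 3.0) -> bool:
    """Flag counts more than `threshold` standard deviations above the baseline mean."""
    mu, sigma = mean(history), stdev(history)
    return (observed - mu) / sigma > threshold

print(is_anomalous(5, baseline))    # → False: within a normal hour's range
print(is_anomalous(120, baseline))  # → True:  a credential-stuffing burst
```

Production tools model far richer signals (user, device, sequence of actions) with learned rather than fixed thresholds, but the governance point stands either way: someone has to choose the baseline, tune the threshold, and triage what gets flagged.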
A few things that actually move the needle:
- Enforce phishing-resistant MFA across all systems. Not optional MFA. Not SMS-based MFA. Hardware keys or app-based authentication, enforced without exceptions.
- Implement least-privilege access. Most ransomware spreads laterally because accounts have far more access than they need. Tighten that.
- Test your backups. Regularly. Offsite. Isolated from the main network. Ransomware specifically targets backup systems now — if yours are connected and unprotected, you do not have backups. You have a second attack surface.
- Run tabletop exercises. Know what your response looks like before you need it. Who gets called? Who has authority to take systems offline? Who handles communications? Figuring this out mid-incident is expensive.
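The backup point in the list above is the easiest to automate. A minimal sketch of a restore check, under the assumption that you periodically perform a test restore into a scratch directory: hash every file in the source tree and confirm the restored copy matches byte for byte. All paths and function names here are illustrative:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large backups aren't loaded into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(source_dir: Path, restored_dir: Path) -> list[str]:
    """Return relative paths of files that are missing or differ after a test restore."""
    failures = []
    for src in source_dir.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source_dir)
        restored = restored_dir / rel
        if not restored.is_file() or sha256_of(src) != sha256_of(restored):
            failures.append(str(rel))
    return failures
```

An empty list means the restore reproduced every file; anything else is a finding worth investigating before an incident, not during one. Note that this only proves the backup is readable and intact — it says nothing about whether the backup network is isolated, which is a separate control.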
The threat landscape in 2026 is not the same as it was two years ago. The tools attackers are using have changed. Defences that worked before are less effective now. That is not a reason to panic — but it is a reason to stop treating ransomware preparedness as a once-a-year checkbox exercise.
AI has made ransomware a genuinely harder problem. The answer is not to match attacker sophistication with more complexity on the defence side. It is to be ruthlessly disciplined about the fundamentals, and to use available AI tools thoughtfully rather than symbolically.
The basics, done properly, still stop most attacks. That has not changed.

