Cybercrime 2026: Evolution, Not Revolution
- Richard Tyler


Warnings about artificial intelligence transforming cybercrime often sound dramatic, but the reality is more uneven.
Recent research from major tech companies suggests criminals are experimenting with AI tools in ways that could shift how attacks are carried out. One Google report found that some hackers were using generative systems to tweak malware on the fly, potentially helping it slip past traditional detection methods. At first glance, that points to a significant escalation.
Look more closely, however, and the picture becomes less clear-cut. The malware identified in that research was relatively unsophisticated and easily flagged by existing security systems. Analysts noted that, despite the headlines, there was little evidence that organisations needed to rethink their core defences.
Even so, security experts argue that publishing these findings has value. Exposing early attempts allows defenders to adapt before more capable tools emerge. The concern is not what attackers can do today, but how quickly those capabilities might improve.
There are also signs that AI is being tested as a way to streamline parts of the hacking process. In one case disclosed by Anthropic, its system was used in an attempted intrusion campaign. Yet the operation still relied heavily on human direction. The AI made mistakes, including fabricating information and overstating its success, meaning any results would have required careful verification.
For some specialists, that underlines a broader point. Much of what is being described as new is, in practice, a continuation of long-established techniques. Automation in cyberattacks is not new, and many of the tools now being paired with AI have existed for years. Scepticism remains widespread. Some analysts believe expectations around AI in cybersecurity have outpaced reality, driven in part by hype. Current systems, they argue, fall short of the reliability and precision needed to carry out complex attacks independently.
Where AI is having a clearer impact is in efficiency. Criminals are using language models to draft phishing emails, translate messages, and speed up reconnaissance. These are incremental gains rather than breakthroughs, but they allow campaigns to be carried out at greater scale. That shift may prove significant. Attacks do not need to be highly sophisticated to succeed. Producing convincing messages in large volumes can be enough, particularly when targeting individuals or organisations with weaker protections. A wider net increases the odds that something will slip through.
Some researchers believe this marks the beginning of a broader change. If tools continue to improve, the barrier to entry for cybercrime could fall, enabling less experienced actors to carry out more advanced operations. The risk is not necessarily more complex attacks, but more of them.
Defenders, however, are not standing still. Many of the protections developed over the past decade remain effective, and detection systems increasingly rely on machine learning to identify suspicious patterns at scale. Collaboration between organisations has also improved, with shared frameworks helping to track how AI is being used both by attackers and against them.
For now, the balance has not dramatically shifted. Cybercrime is evolving, but it is doing so in familiar ways. The question is whether that gradual change will accelerate into something more disruptive, or remain an extension of existing threats.