At the end of August 2025, Anthropic published a report demonstrating how AI is rapidly transforming the threat landscape. It highlights how the technology is lowering the barrier to entry for sophisticated cyberattacks, and describes how the organization’s flagship LLM, Claude, is being used to create and sell ‘no-code’ ransomware and to scale data extortion campaigns.
To some extent, Anthropic’s report only underlines what was already becoming clear earlier this year. Security company Palo Alto Networks published a report in July showing how the criminal group Scattered Spider leverages AI in its activities, using everything from deepfaked voice audio for manipulating support staff to AI-powered tools for network navigation and lateral movement.
But what’s particularly striking about the Anthropic report is that it shows AI is expanding the cybercriminal talent pool. Attacks that would require considerable technical acumen, such as scripting basic malware, are now within reach of less-experienced individuals.
So, if cybercriminals can now do more with less, what can organizations do about it? Yes, AI will form a significant part of the solution, but, as we'll outline in this blog post, so will longstanding and proven security practices and principles.
The importance of collaborative defense against AI cybercrime
An essential first step is strengthening internal cybersecurity capabilities. That means investing in talent, whether through recruitment or through growth and training opportunities for existing engineering staff.
But beyond that, it's also important to remember that defending in isolation is likely to be a losing strategy, especially in an ecosystem where attackers collaborate and share AI-powered tools.
After all, the threat landscape is evolving faster than any single organization can manage on its own. This demands a shift from individual resilience to collective defense. Yes, governments can and will set boundaries through legislation, but AI technology moves too fast for legislators to keep pace. Waiting won’t work: real resilience and effective security will come from a new level of self-governance and collaboration.
Cross-industry alliances among AI vendors, cloud providers and cybersecurity firms are essential to raise the cost of misuse for attackers. Public-private partnerships must extend beyond advisory councils to build global, scalable defenses that match the speed of AI-driven threats.
At an organizational level, meanwhile, it's only by bringing security into the conversation early — in design and development phases — that we’ll have a real shot at making the right changes, faster.
This shared responsibility extends to the builders and users of these systems. We need clear accountability to answer the hard question: who is responsible when something goes wrong? In this new era, balancing innovation and security isn't just a technical challenge; it's the way we build the trust required to be successful.
A new readiness is required
This isn’t to say the old playbook is obsolete: now certainly isn't the time to panic and throw out decades of good security practice. In fact, it's vital to embrace the lessons of the past and lean on the practices and principles that have served us over the last few decades, like building security in from the start and rigorous continuous testing.
However, embracing good practice doesn’t mean the battle should remain asymmetrical. The key is to build on what we know while also bringing AI to bear on our efforts to anticipate and fight back against cybercriminals. Indeed, failing to leverage the opportunities of AI may leave businesses and other organizations at a permanent disadvantage — that could ultimately have significant reputational and commercial consequences.
What to do
Talking up these new risks is all well and good, you might think — but what can we actually do? While flexibility and adaptation will be essential, there are a number of important steps technology leaders can take now:
- Recognize that social engineering is the primary breach vector. AI can do sophisticated things, but it’s particularly effective at enabling some very rudimentary social engineering techniques, like creating convincing emails and replicating other people’s voices. Ensure end-user training and awareness keep pace with real-world attack techniques, and continue to invest in technical controls like multi-factor authentication and sophisticated detection and response engineering (a rudimentary sender-domain check is sketched after this list).
- Accelerate investments in identity. Identity has been called the new perimeter in cybersecurity; it’s often the first place any AI-assisted threat will probe. This means organizations need, first and foremost, to treat every human, service and AI agent as an identity that can be spoofed. Move to phishing-resistant MFA, continuous risk-based authorization and just-in-time provisioning for privileged access (a minimal risk-based check is sketched below). Doing so means even convincing AI-generated deepfakes are far more likely to be stopped before they can impact your business.
- Harden your high-value endpoints. Attacks on user endpoints, laptops and publicly exposed servers are likely to become more relentless as AI costs decrease and adoption grows. Prioritize patching, hardening, good network visibility and device assurance controls. Alongside that, endpoints used by platform or data engineering teams, which may have high levels of access to sensitive data, should be protected as a priority (a simple prioritization heuristic is sketched below).
- Find the vulnerabilities before the adversary does. Adopt AI-assisted red team tactics to find weaknesses in your estate. Bringing AI into the picture can not only strengthen your security posture, it can also help you manage the cost of what can otherwise be expensive work. Staying on top of threat intelligence and frequently updating your threat models will need to become the norm (one possible workflow is sketched below).
- Accelerate detection and response. The only realistic way to protect against advanced threats is through an approach that balances protection with detection, response and recovery. Some organizations focus too much on protective controls because legacy technology and established ways of working make it hard to modernize their security approach. The good news is that AI-assisted approaches can support you with proactive threat hunting and targeted modernization (an example hunting heuristic is sketched below).
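To make the social engineering point concrete, here's a minimal sketch of one rudimentary technical control: flagging sender domains that closely resemble, but don't exactly match, domains you trust. The trusted-domain list, similarity threshold and use of Python's standard difflib are illustrative assumptions rather than a product recommendation.

```python
from difflib import SequenceMatcher

# Illustrative list; in practice this would come from your email security config.
TRUSTED_DOMAINS = ["example.com", "examplecorp.com"]

def lookalike(domain: str, threshold: float = 0.85) -> str | None:
    """Return the trusted domain this one imitates, or None if it looks safe."""
    for trusted in TRUSTED_DOMAINS:
        if domain == trusted:
            return None  # exact match: legitimate sender domain
        if SequenceMatcher(None, domain, trusted).ratio() >= threshold:
            return trusted  # close but not equal: likely spoof
    return None

print(lookalike("examp1e.com"))  # -> example.com (flagged as a lookalike)
print(lookalike("example.com"))  # -> None (exact match)
```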
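On identity, the following sketch shows the shape of a continuous, risk-based authorization decision, where stepping up to phishing-resistant re-authentication is a first-class outcome rather than an afterthought. The signals, weights and thresholds are invented for illustration; a real deployment would consume risk signals from your identity provider.

```python
from dataclasses import dataclass

@dataclass
class AuthContext:
    mfa_method: str           # e.g. "webauthn", "totp", "sms"
    new_device: bool          # first time this device has been seen
    impossible_travel: bool   # geo-velocity signal from the identity provider
    privileged_request: bool  # touches admin or production scopes

def risk_score(ctx: AuthContext) -> int:
    """Crude additive score; real systems would consume IdP risk signals."""
    score = 0
    if ctx.mfa_method != "webauthn":  # phishing-resistant MFA lowers risk
        score += 2
    if ctx.new_device:
        score += 2
    if ctx.impossible_travel:
        score += 3
    if ctx.privileged_request:
        score += 1
    return score

def authorize(ctx: AuthContext) -> str:
    """Return a decision, not a boolean: step-up is a first-class outcome."""
    score = risk_score(ctx)
    if score >= 5:
        return "deny"
    if score >= 3:
        return "step_up"  # require phishing-resistant re-authentication
    return "allow"

# A privileged request from a new device using legacy MFA is denied outright.
print(authorize(AuthContext("totp", True, False, True)))  # -> deny
```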
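On endpoint hardening, even a simple heuristic can help order the patching queue: weight each host's patch staleness by whether it has access to sensitive data. The inventory format and weighting below are assumptions made for the sketch; in practice the data would come from your MDM or asset management system.

```python
from datetime import date

# Illustrative inventory; a real one would come from your asset management system.
inventory = [
    {"host": "eng-laptop-01", "last_patched": date(2025, 6, 1),  "sensitive_access": True},
    {"host": "kiosk-07",      "last_patched": date(2025, 8, 20), "sensitive_access": False},
    {"host": "data-eng-03",   "last_patched": date(2025, 5, 12), "sensitive_access": True},
]

def priority(endpoint: dict, today: date = date(2025, 9, 1)) -> int:
    """Patch staleness in days, tripled for hosts with sensitive data access."""
    days_stale = (today - endpoint["last_patched"]).days
    return days_stale * (3 if endpoint["sensitive_access"] else 1)

# Patch the highest-priority hosts first.
for e in sorted(inventory, key=priority, reverse=True):
    print(f'{e["host"]}: priority {priority(e)}')
```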
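On AI-assisted red teaming, the workflow matters more than any single tool: use a model to generate candidate test ideas, then keep a human in the loop to review and approve them before anything runs. In the sketch below, generate is a placeholder returning canned output; it is not a real vendor API, and a real implementation would call your organization's approved LLM client.

```python
# Sketch of an AI-assisted red-team workflow with a human-in-the-loop gate.
# `generate` is a placeholder returning canned output, not a real LLM API.
def generate(prompt: str) -> list[str]:
    # A real implementation would call your organization's approved LLM client.
    return [
        "Probe the login form for missing rate limiting",
        "Test the password reset flow for token reuse",
        "Check session cookies for missing Secure/HttpOnly flags",
    ]

SCOPE = "staging login service (authorized engagement only)"

ideas = generate(
    f"You are assisting an authorized red team. For the system in scope "
    f"({SCOPE}), list weaknesses to probe, ordered by likelihood."
)

# Nothing generated is tested until a human reviews and approves it.
approved = [idea for idea in ideas if input(f"Approve '{idea}'? [y/N] ") == "y"]
print(f"{len(approved)} test ideas approved for execution")
```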
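Finally, on detection and response, proactive hunting often starts with simple per-account baselines. The toy heuristic below flags accounts with an unusually high share of off-hours sign-ins; the log schema and threshold are assumptions, and a real hunt would run equivalent queries against your SIEM at scale.

```python
from collections import Counter

# Toy auth log; field names are assumptions about a generic schema.
auth_events = [
    {"user": "alice", "hour": 3}, {"user": "alice", "hour": 2},
    {"user": "alice", "hour": 4}, {"user": "alice", "hour": 10},
    {"user": "bob", "hour": 10},  {"user": "bob", "hour": 11},
]

OFF_HOURS = set(range(0, 6))  # midnight to 6am, local time

off_hours = Counter(e["user"] for e in auth_events if e["hour"] in OFF_HOURS)
totals = Counter(e["user"] for e in auth_events)

for user, total in totals.items():
    ratio = off_hours[user] / total
    if ratio > 0.5:  # arbitrary threshold for the sketch
        print(f"hunt lead: {user} made {ratio:.0%} of sign-ins off-hours")
```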
It’s not just an AI arms race: it’s about process and practice too
The AI genie is out of the bottle. Both sides of the firewall are armed. What will separate tomorrow’s breach headlines from the footnotes won’t be who owns the shiniest algorithm or the costliest AI tooling — it will be who can bring AI to bear on sensible, measurable safeguards faster than the adversary can weaponize the latest trick.
Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.