Fraud now holds up a mirror to the UK’s economic model. According to the National Crime Agency, fraud accounts for around 44% of recorded crime, making it the country’s largest crime category and a visible drag on efforts to attract and retain investment. When scams and money laundering touch everyday citizens, small businesses and cross-border flows, they don’t just drain accounts. They sap trust in digital channels, blunt appetite for innovation and colour how investors rate the stability of the market.
For years, most boards treated economic crime as a cost of doing business inside the financial sector. That view no longer holds. Criminals now route scams through social platforms, search results and telecoms networks, then cash out through banks and payments systems. Tech companies, telcos and financial institutions share this risk, while law enforcement and regulators raise expectations on speed, accuracy and cooperation.
The UK’s national fraud strategy captures the new ambition: a response to fraud that stays agile, resilient and effective, backed by fresh legislation, better resources and tighter coordination. It pulls tech companies, telcos, law enforcement agencies and financial institutions into a common frame, and it reinforces that sanctions lists don’t stop at people and legal entities but reach assets such as ships and classes of product. Boards that still see economic crime as a narrow compliance matter miss the strategic truth: this now sits on the same agenda as growth, capital allocation and national competitiveness.
The core argument of this article is direct:
Economic crime risk now exceeds the capacity of any single institution or sector. C-suite leaders need to sponsor shared, data-led defence models that cross banks, platforms, telcos and the state. They need to do that now, while they still have room to shape how AI shows up in economic crime rather than react to it.
How we got stuck: Three shifts that broke the old model
The traditional approach to fraud and financial crime rested on three pillars: regulatory expectation, operational loss and reputational damage. That framing drove investment into transaction monitoring, Know Your Customer (KYC) utilities, sanctions screening and case management tools. It also reinforced a defensive reflex: hold data tightly, share as little as possible and optimise controls within the firm.
Three structural shifts now break that logic.
Criminals work across institutions, not inside them. Forum discussions on money mules underlined this. A typical mule controls an average of 3.6 accounts, often spread across banks and subcontracted through informal networks. Many hide behind shell companies that impersonate genuine Companies House registrations. Fraudulent document-generation services that use AI can now create convincing corporate paperwork at pace; authorities now shut down roughly two such services every couple of months.
Crime journeys start upstream of the bank. Scams open on social media, search results, messaging apps and spoofed telecoms channels. Attackers groom victims, hijack devices and harvest credentials long before money moves. By the time funds hit a payment rail, most of the manipulation has already happened. Banks only see the final act in a longer script that tech and telco players host.
Agentic AI bends the playing field. Attackers now use generative and agentic AI to generate identities, documents and interaction patterns that mimic legitimate behaviour. Attacker-run AI agents can test fraud controls, adjust patterns in real time and probe the seams between institutions. The board-level question shifts from “Can we spot suspicious transactions?” to “Can we still tell what a genuine customer, relationship or trade flow looks like at all?”
If leaders continue to treat this as a set of local control issues, they will chase symptoms and fall behind the underlying methods.
Early proof points: Fusion and Global Signal Exchange
The UK has started to assemble a different answer, one that rests on shared data, joint analytics and cross-sector teams.
In 2024, the National Crime Agency and seven UK banks launched a joint project that pooled banking account data indicative of potential criminality into a single analytical environment, often referred to as “Data Fusion.” The NCA and banks seconded subject matter experts and investigators into joint teams and combined banking data with the NCA’s own intelligence. That project has already produced 90 intelligence packages to support NCA and wider law enforcement investigations and helped identify at least eight organised crime networks that exploit the financial system.
Global Signal Exchange provides a parallel track on the open internet side. Launched in 2024 as a collaboration between Oxford Information Labs, Google and the Global Anti-Scam Alliance, it functions as a clearinghouse for scam and fraud threat signals. The founders built it to fix three gaps that practitioners flagged: too little data, no feedback loop and not enough visibility of what actually works. The platform gives trusted parties tools to share threat intelligence in ways that fit their own constraints on security, data protection, internet law and commercial sensitivity, including options for bilateral, anonymous or collated sharing.
These initiatives matter for three reasons. They treat banking data and scam signals as shared resources for public protection, not proprietary artefacts. They prove that joint analytical teams can sit upstream of individual casework and still operate within privacy and proportionality bounds. And they sketch a model that other sectors and jurisdictions can adapt, from insurance to e-commerce.
What shared defence asks of the board
Shared defence sounds attractive in the abstract. It needs translating into a concrete agenda for executives.
1. Set the scope: national risk, not narrow loss
Fraud now functions as macro risk in the UK. When 44% of recorded crime sits in fraud, it shapes citizen trust, investor views and the country’s appeal as a services hub. Boards should treat economic crime as a factor in country risk and capital allocation decisions, not only as a line item in operational loss.
Questions for the board:
Where do fraud and money laundering sit in our narrative to investors about stability and growth?
How do we quantify economic crime in our country and sector exposure, beyond our own loss numbers?
2. Invest in common data models and joint analytical teams
Shared defence requires shared language. That starts with common ways to identify people, entities, devices and assets across organisations, and practical agreements on what counts as a signal worth sharing. Sanctions regimes already give a template: they list people, entities, ships and products as related objects. Extending that approach into fraud and mule detection gives joint teams richer context.
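The sanctions-style idea of treating people, entities, ships and products as related objects can be sketched as a small shared data model. The sketch below is illustrative only: the class names, fields and relation labels are assumptions, not an existing standard or the schema any of these initiatives actually use.

```python
from dataclasses import dataclass, field

# Hypothetical shared data model: people, legal entities, vessels and
# product classes are all first-class, linkable objects, mirroring how
# sanctions regimes list related targets.

@dataclass(frozen=True)
class RiskObject:
    object_id: str     # stable identifier agreed between partners
    object_type: str   # "person" | "entity" | "vessel" | "product"
    label: str

@dataclass
class RiskLink:
    source: str        # object_id of the linking object
    target: str        # object_id of the linked object
    relation: str      # e.g. "owns", "operates", "ships"

@dataclass
class SharedGraph:
    objects: dict = field(default_factory=dict)
    links: list = field(default_factory=list)

    def add(self, obj: RiskObject) -> None:
        self.objects[obj.object_id] = obj

    def link(self, source: str, target: str, relation: str) -> None:
        self.links.append(RiskLink(source, target, relation))

    def related(self, object_id: str) -> list:
        """Return objects one hop away, in either direction."""
        ids = {l.target for l in self.links if l.source == object_id}
        ids |= {l.source for l in self.links if l.target == object_id}
        return [self.objects[i] for i in ids if i in self.objects]

g = SharedGraph()
g.add(RiskObject("p1", "person", "Sanctioned individual"))
g.add(RiskObject("e1", "entity", "Front company"))
g.add(RiskObject("v1", "vessel", "Cargo ship"))
g.link("p1", "e1", "owns")
g.link("e1", "v1", "operates")
```

The point of such a model is that a front company surfaces both its sanctioned owner and the vessel it operates in one hop, which is exactly the cross-object context single-institution records lack.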
Joint analytical teams sit at the heart of this. In Fusion, bank analysts and NCA investigators work together on shared data sets to identify patterns, typologies and networks, then push intelligence back into both law enforcement and bank controls. Similar teams can sit around Global Signal Exchange nodes, combining scam signals from platforms, banks and civil society.
Actions for the board:
Treat joint analytical teams as strategic assets with explicit funding, not pilot projects.
Ask management to show the common data models they use with external partners and how those link back into core systems.
Require that each major data-sharing initiative includes a clear plan for privacy protection, controlled access and audit trails.
3. Handle “suspected fraud” data responsibly
Suspected fraud data creates both power and danger. If firms share it too cautiously, criminals slip through cracks. If they share it without discipline, they risk unfair outcomes, biased models and economic harm to legitimate customers.
Boards need a principled position on this. That includes:
Clear thresholds for when internal suspicion qualifies as a shareable signal.
Governance that checks for bias and unintended side effects when firms include suspected fraud data in models.
Agreements with partners and agencies on how they treat such data, how long they keep it and how they handle exoneration.
Handled well, this turns suspected fraud indicators into early-warning signs across the system. Handled badly, it feeds a cycle of financial exclusion.
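The threshold-and-governance principles above can be made concrete with a simple gate that decides when an internal suspicion becomes a shareable signal. This is a minimal sketch under stated assumptions: the corroboration threshold, retention window and field names are placeholders that a real sharing scheme would negotiate, not anything the article's initiatives prescribe.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Assumed policy parameters, not real regulatory values.
MIN_INDEPENDENT_INDICATORS = 2   # require corroboration before sharing
RETENTION_DAYS = 365             # agreed retention window for signals

@dataclass
class Suspicion:
    account_id: str
    indicators: set          # e.g. {"mule_pattern", "device_reuse"}
    raised_on: date
    exonerated: bool = False

def shareable(s: Suspicion, today: date) -> bool:
    """Share only corroborated, unexpired, non-exonerated suspicions."""
    if s.exonerated:
        return False                     # exoneration removes the signal
    if today - s.raised_on > timedelta(days=RETENTION_DAYS):
        return False                     # expired signals are dropped
    return len(s.indicators) >= MIN_INDEPENDENT_INDICATORS

s = Suspicion("acc-123", {"mule_pattern", "device_reuse"},
              date(2025, 1, 10))
```

Encoding exoneration and retention directly in the gate, rather than leaving them to manual cleanup, is what keeps a shared signal from hardening into permanent financial exclusion.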
4. Put victims and trust at the centre
Financial crime work cannot stop at asset recovery and conviction counts. Victims carry emotional harm and a lasting loss of confidence in digital channels. That mood matters for the wider economy.
Boards should track and manage three dimensions together:
Financial impact: direct fraud losses and secondary costs.
Redress: speed, fairness and clarity of support for victims.
Trust: customer confidence in digital services after incidents, measured through surveys, complaints and usage shifts.
By tying leadership incentives to this broader set of outcomes, firms signal that they care about the human story, not just the ledger.
5. Attack the money mule economy as a sector, not a single bank
The Money Mule Action Plan sets out five steps: build knowledge of the problem and relevant data, improve data sharing and upstream use of data points, strengthen how the financial sector feeds information to law enforcement, reduce harm to victims and improve repatriation of funds. That sequence mirrors how organised crime actually works. Mules move money across several institutions, subcontract accounts and use AI-generated documents to mask identity. Single-bank action therefore achieves only partial effect.
Board-level responses can include:
Sponsoring mule-focused analytics that use external indicators, not just internal behaviour.
Supporting formal joint investigations where banks combine their data with law enforcement intelligence, as in the Santander case where shared analysis of trafficking and slavery indicators led to convictions and then fed into new detection models.
Designing balanced treatment for customers drawn into mule activity, so the system blocks serial abuse without freezing out people who act under duress or deception.
The AI test: From abstract fear to control-layer design
Agentic AI acts as the next stress test. Boards need to tie AI conversations to specific control layers.
Customer acquisition. AI can fabricate convincing identities and histories, complete with synthetic but plausible digital footprints. Controls that rely on document checks and simple identity verification will struggle. Firms need AI models that can cross-check signals from sanctions data, scam signal exchanges and behavioural cues across channels.
Transaction monitoring. Traditional rule-based systems look for patterns in a single institution’s ledger. AI-enabled criminals will fragment flows across institutions and channels. Joint analytical teams that sit on platforms such as Fusion can train models on cross-bank patterns and feed risk scores back to participants in near real time.
Sanctions and embargo enforcement. AI can help both sides: criminals can try to obfuscate ownership chains of sanctioned people, entities, ships and goods, while defenders can use AI to trace asset structures and trade routes. Boards should push for AI models that integrate sanctions lists, trade data and bank transactional data, with clear escalation paths when models spot anomalies.
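One way to picture the three control layers working together is a simple signal-fusion triage: signals from identity checks, scam-signal hubs and sanctions screening feed one score with an escalation threshold. This is a deliberately crude sketch; the signal names, weights and threshold are illustrative assumptions, and a production model would be learned from cross-institution data rather than hand-weighted.

```python
# Hypothetical weights per control layer; values sum to 1.0.
SIGNAL_WEIGHTS = {
    "synthetic_identity": 0.4,   # customer-acquisition layer
    "scam_signal_match": 0.3,    # shared scam/fraud signal hub
    "sanctions_proximity": 0.3,  # sanctions and embargo layer
}
ESCALATE_AT = 0.5  # assumed threshold for human review

def risk_score(signals: dict) -> float:
    """Weighted sum of signal strengths, each expected in [0, 1]."""
    return sum(SIGNAL_WEIGHTS[k] * v for k, v in signals.items()
               if k in SIGNAL_WEIGHTS)

def triage(signals: dict) -> str:
    """Route a case based on its combined cross-layer score."""
    return "escalate" if risk_score(signals) >= ESCALATE_AT else "monitor"
```

The design point is that no single layer needs to be conclusive: a moderate scam-signal match plus a weak sanctions link can clear the escalation bar together, which is the behaviour isolated per-layer controls cannot produce.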
In each layer, AI will only work as well as the data and shared context that feed it. That loops back to the earlier points on signal hubs, common data models and principled sharing.
Answering the objections
Thoughtful leaders raise three main objections to this agenda.
“Data sharing will create legal and privacy risks we can’t carry.” The concern has weight. Yet the NCA’s partnership with banks shows that institutions can design principles that include only accounts with multiple indicators of economic crime and keep the volume of affected customers to a small fraction of the total. Good design and clear governance reduce, rather than increase, legal exposure.
“The economics don’t stack up.” Shared defence can feel like a public good with unclear payback. But fraud losses, remediation spend, regulatory penalties and brand damage already erode returns. Joint analytics and signal sharing help firms focus resources where they cut the most risk, avoid duplicated tool spend and reduce the volume of false positives that waste staff time.
“Our people don’t have capacity.” Teams already juggle KYC backlogs, sanctions updates and new payment schemes. Yet collaboration done well reduces noise. Joint analytical work can identify which patterns matter, so firms can simplify internal controls elsewhere. Boards should treat this as targeted investment to slim down less effective activity, not an extra layer on top.
A forward view: From compliance obligation to shared advantage
Economic crime now shapes how investors view the UK, how citizens experience digital services and how customers judge financial brands. The national fraud strategy, Data Fusion, Global Signal Exchange and the Money Mule Action Plan all point toward a new contract: banks, platforms, telcos and the state share data, intelligence and responsibility to protect the system as a whole.
Boards now need to turn that direction into action. Set clear goals for economic crime that cover financial loss, victim experience and trust in digital channels. Back common data models and joint analytical teams as core infrastructure, not optional experiments. Expect tech partners and telcos to join as peers in shared-defence arrangements, not simply as suppliers. Shape AI investment so it strengthens shared intelligence and accountability, rather than scattering tools across silos.
Economic crime will continue to evolve. Leaders who treat shared intelligence and principled data sharing as strategic capabilities will put their institutions, and the UK market, in a stronger position than those who cling to isolated controls and hope that old habits will somehow hold.
To discuss your BFSI transformation needs, get in touch. We’re here to help your business innovate, adapt and excel.
Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.