Working Smarter, Not Harder: Generative AI’s Edge in Financial Crime Detection
August 29, 2025
With the current pace of technological innovation, financial crime is evolving faster than ever, yet many institutions rely on static, rules-based systems designed for yesterday’s threats. As a result, financial institutions’ (“FIs”) defences can be outsmarted, exposing them to modern financial crime techniques.
Traditional financial crime controls are often inefficient, expensive to implement and maintain, and prone to human error. Furthermore, as criminals adopt new technologies, these controls are no longer adequate to identify and manage today’s complex and dynamic threats. With increased risks and the skyrocketing cost of non-compliance, modernising anti-money laundering (“AML”) frameworks and fraud detection strategies is no longer optional.
With regulators worldwide actively fostering and encouraging innovation, including the Financial Conduct Authority (“FCA”)1,2 and the European Commission,3 firms with data-rich operations and a focus on improving inefficiencies are actively exploring how to harness artificial intelligence (“AI”). Financial services firms spent $35 billion on AI in 2023, with investment estimated to reach $97 billion by 2027.4
As such, 75% of FIs are already using AI5 and are progressively utilising new technologies for financial crime compliance, particularly in fraud detection and customer due diligence. Furthermore, automation of key processes such as Know-Your-Customer (“KYC”) and identity verification operations is increasing efficiencies.6
Investing in new technologies and automation should not stop there. Firms should consider enhancing their framework with proactive and adaptive detections that can be facilitated by Generative AI (“GenAI”), in addition to ensuring their systems are explainable, fair and aligned with AML obligations.
Overlay, Don’t Overhaul
Before the application of AI, firms had invested heavily in implementing specific AML systems to meet regulatory requirements for screening and monitoring. These processes are now part of a legacy set-up that took considerable time and cost to implement, customise and embed. Whilst these systems are often seen as ineffective and inefficient in tackling current financial crime risks, a complete overhaul in favour of an AI-powered solution will not be feasible for many institutions.
A more cost- and time-effective solution is emerging in the form of an AI overlay. This solution can be ‘overlaid’ on top of, or run in parallel with, the results produced by current legacy systems, achieving similar outcomes in a shorter period and with swifter stakeholder approvals.
The overlay approach enables incremental improvement while maintaining regulatory compliance. It can also reduce operational risk by preserving familiar workflows. This also allows firms to avoid the disruption and regulatory risk of full replacement while realising quick wins from AI adoption.
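As a rough illustration of the overlay idea, the sketch below re-prioritises alerts produced by an existing legacy system using a model score, rather than replacing the system itself. All names here (`prioritise_alerts`, `toy_model`, the alert fields) are hypothetical, not a reference to any specific vendor product.

```python
# Hypothetical sketch of an AI "overlay": legacy rule alerts are kept intact,
# and a model score is layered on top to re-prioritise them for analyst review.
# Field and function names are illustrative assumptions.

def prioritise_alerts(legacy_alerts, risk_model):
    """Re-rank legacy alerts by an AI risk score without replacing the legacy system."""
    scored = []
    for alert in legacy_alerts:
        score = risk_model(alert)               # overlay model scores the same alert
        scored.append({**alert, "ai_score": score})
    # Highest-risk alerts reach analysts first; low scores are deprioritised, not deleted
    return sorted(scored, key=lambda a: a["ai_score"], reverse=True)

# Toy stand-in model: weight alerts involving new counterparties more heavily
def toy_model(alert):
    return 0.9 if alert.get("new_counterparty") else 0.2

alerts = [
    {"id": 1, "new_counterparty": False},
    {"id": 2, "new_counterparty": True},
]
ranked = prioritise_alerts(alerts, toy_model)
```

Because the legacy alerts are preserved and only reordered, the familiar workflow and audit trail remain intact, which is what makes this route easier to approve than a full replacement.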
From Triggers to Context
We see a number of our clients struggling with the volumes of false positives that their traditional AML systems flag given their reliance on static and rules-based approaches, such as value-based thresholds, foreign logins or transactions involving high-risk jurisdictions. Whilst these triggers can spot illicit activity, not all of them are truly suspicious, generating false positives that must be examined. This is draining resources and frustrating stakeholders. It is estimated that the labour of financial crime compliance is costing UK FIs £34.2 billion,7 driven heavily by transaction monitoring and AML screening efforts.
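To make the false-positive problem concrete, the following sketch shows the kind of static, context-free trigger described above. The threshold, country codes and field names are placeholder assumptions, not real risk parameters.

```python
# Illustrative sketch of static, rules-based triggers: fixed thresholds applied
# to every customer identically, with no behavioural context.
HIGH_RISK_JURISDICTIONS = {"XX", "YY"}   # placeholder country codes
VALUE_THRESHOLD = 10_000                 # illustrative fixed threshold

def static_triggers(txn):
    """Flag a transaction on fixed rules alone; every hit becomes an alert to review."""
    reasons = []
    if txn["amount"] >= VALUE_THRESHOLD:
        reasons.append("value_threshold")
    if txn["counterparty_country"] in HIGH_RISK_JURISDICTIONS:
        reasons.append("high_risk_jurisdiction")
    if txn.get("foreign_login"):
        reasons.append("foreign_login")
    return reasons

# A routine supplier payment trips the same rules as genuinely suspicious activity
routine_payment = {"amount": 12_000, "counterparty_country": "XX"}
flags = static_triggers(routine_payment)
```

A legitimate recurring supplier payment and a genuinely suspicious transfer produce identical flags here, which is precisely why such rules generate large false-positive volumes.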
The application of GenAI can introduce contextual awareness to financial crime detection by creating behavioural baselines for each customer, merchant or related account and comparing peer group activities. For example, regular £8,000 transfers to a supplier based in China are not suspicious for a fashion retailer, while £8,000 from a retired individual with no prior links to China should be further investigated. In addition, as GenAI evolves into a more agentic form, operating autonomously, systems are capable of planning, reasoning and taking independent actions within pre-set boundaries. For example, payments to a newly linked account that might be sharing similar IP addresses with known fraudsters can be flagged, even if they are under a certain threshold amount that would otherwise not trigger an alert.
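The retailer-versus-retiree example above can be sketched as a simple behavioural baseline: scoring a payment against the customer’s own transaction history rather than a fixed threshold. This is a minimal statistical illustration of the principle, not how any particular GenAI product works; the histories and field names are assumptions.

```python
import statistics

# Minimal sketch of contextual scoring: how far does a payment sit from the
# customer's own behavioural baseline? Histories below are invented examples.

def context_score(amount, history):
    """Return how many standard deviations a payment sits from the customer's norm."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0   # avoid division by zero for flat histories
    return abs(amount - mean) / stdev

retailer_history = [7800, 8200, 8000, 7900]     # regular ~£8,000 supplier payments
retiree_history = [120, 90, 150, 60]            # small domestic spending

retailer_score = context_score(8000, retailer_history)   # low: fits the baseline
retiree_score = context_score(8000, retiree_history)     # high: warrants investigation
```

The same £8,000 payment scores very differently for the two customers, which is the contextual awareness a value-only threshold cannot provide; a production system would add peer-group comparison and many more features on top of this idea.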
Train for Tomorrow
Criminals are leveraging rapid technological advancements, often outpacing traditional AML systems that struggle to identify emerging threats. To keep up, UK financial institutions must prioritise the training of AI-driven systems capable of simulating potential scenarios and detecting financial crime patterns before they materialise.
Whilst the ever-evolving landscape presents a constant challenge, firms can begin by defining a functional target state for their models, with detailed functional and technical capabilities. Models should then be trained to analyse historic and synthetic transactional data for financial crime activity and emerging fraud patterns. For example, combining GenAI with optical character recognition (“OCR”), a technology that uses AI to convert documents into machine-readable data, can support effective fraud recognition, such as detecting duplicate invoices, payment fraud or tampered documents at onboarding. By turning scanned invoices into structured data, firms can identify duplicates, inconsistencies or forgeries before money moves.
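Once OCR has turned scanned invoices into structured records, duplicate detection can be as simple as grouping on key fields, as the hedged sketch below shows. The record structure (`supplier`, `invoice_no`, `amount`, `doc_id`) is an assumption for illustration; real pipelines also need fuzzy matching for OCR errors.

```python
from collections import defaultdict

# Sketch of duplicate-invoice detection on OCR-extracted records.
# Exact-match grouping only; a production system would add fuzzy matching.

def find_duplicates(invoices):
    """Group invoices that share the same supplier, invoice number and amount."""
    seen = defaultdict(list)
    for inv in invoices:
        key = (inv["supplier"], inv["invoice_no"], inv["amount"])
        seen[key].append(inv["doc_id"])
    # Keep only keys submitted more than once
    return {key: docs for key, docs in seen.items() if len(docs) > 1}

invoices = [
    {"doc_id": "A1", "supplier": "Acme Ltd", "invoice_no": "INV-100", "amount": 5000},
    {"doc_id": "A2", "supplier": "Acme Ltd", "invoice_no": "INV-100", "amount": 5000},
    {"doc_id": "B1", "supplier": "Beta plc", "invoice_no": "INV-207", "amount": 1200},
]
dupes = find_duplicates(invoices)   # the two Acme submissions collide
```

Catching the duplicate pair at onboarding, before any payment is released, is the “before money moves” outcome described above.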
Furthermore, firms can train models on historic data to be able to disposition alerts and draft and file Suspicious Activity Reports (“SARs”).
This proactive approach requires not only continuously training models with diverse, high-quality data but also upskilling compliance and fraud teams to interpret and act on AI-generated insights. By investing in both technological and human intelligence, firms can build a forward-looking defence strategy, anticipating threats rather than just reacting to them.
Human Plus AI, Not Human Versus AI
Fully replacing the existing systems and resources is neither feasible nor effective. The optimal approach combines AI speed and scalability with human expertise, creating a collaborative environment with enhanced decision-making and contextual understanding.
AI can support KYC teams and investigators rather than replace them, for example by handling repetitive and often time-consuming tasks. AI can summarise long reports, highlight inconsistencies in customer documentation, and generate a customer profile overview, a rationale for discounting an alert, or SAR drafts. Analysts can then use this output to make informed yet quicker decisions about next steps. Embedding such processes not only optimises the KYC process and strengthens risk management, but also reduces turnaround time and boosts operational capacity.
Validate, Validate, Validate
Rigorous validation of both data and models is essential to ensure accuracy, fairness and regulatory compliance. With the increased regulatory scrutiny our clients are under, using unvalidated, inaccurate, inconsistent or biased data can lead to false positives, missed threats or even discriminatory outcomes for FIs.
Continuous validation of the data, supported by feedback loops, will ensure the AI model not only reflects current, real-world financial behaviours, but also adapts to ever-evolving typologies. For example, our experts validate AI models for conceptual soundness, ensuring the model’s design aligns with business logic and risk objectives, as well as for data quality, implementation testing, model performance and governance to optimise monitoring processes.
Regular testing against known threat scenarios, independent audits and explainability checks are all vital in maintaining trust in AI systems. Validation is not a one-off exercise; it is an ongoing discipline strengthening both effectiveness and accountability.
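Testing against known threat scenarios can be pictured as replaying labelled cases through the model and tracking precision and recall over time. The sketch below assumes a simple labelled-case format and a toy model; both are illustrative, not a prescribed validation framework.

```python
# Hedged sketch of scenario-based validation: replay labelled cases through a
# model and compute precision and recall. Case format is an assumption.

def evaluate(model, labelled_cases):
    """Compare model flags against ground-truth labels for known scenarios."""
    tp = fp = fn = 0
    for case, is_suspicious in labelled_cases:
        flagged = model(case)
        if flagged and is_suspicious:
            tp += 1                 # correctly caught
        elif flagged and not is_suspicious:
            fp += 1                 # false positive: wasted analyst time
        elif not flagged and is_suspicious:
            fn += 1                 # missed threat: the costliest error
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy model: flags any amount over 9,000
model = lambda case: case["amount"] > 9_000
cases = [({"amount": 12_000}, True), ({"amount": 9_500}, False), ({"amount": 3_000}, True)]
precision, recall = evaluate(model, cases)
```

Running this routinely, against a scenario library that grows as typologies evolve, is what turns validation from a one-off exercise into an ongoing discipline.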
Footnotes:
1: “FCA launches 5-year strategy to support growth and improve lives,” FCA (March 2025)
2: “Harnessing AI and technology to deliver the FCA’s 2025 strategic priorities,” FCA (July 2025)
3: “FinTech action plan: For a more competitive and innovative European financial sector,” European Commission (March 2018)
4: “Artificial Intelligence in Financial Services,” World Economic Forum (January 2025)
5: “Artificial intelligence in UK financial services – 2024,” Bank of England (November 2024)
6: “UK Firms Spend £21.4k Per Hour Fighting Financial Crime and Fraud,” LexisNexis (July 2024)
7: “How Entity Resolution Is Redefining the False Positive Problem,” LexisNexis (last accessed July 2025)