A Look at the Global Fight Against Fraud in Digital Assets
April 07, 2026
Agentic and generative AI are being increasingly used as tools to help bad actors defraud individuals and organizations. Law enforcement, investigators, AI experts, professionals across the financial and emerging technology sectors, and consumers will need to partner to stay ahead of the next generation of criminal activity.
Many seasoned investigators remember handling cases involving early social engineering scams that used paper or email versions of fabricated letters claiming to be from a member of royalty or a wealthy family in another country. The letters would describe an unjust imprisonment or loss of access to a fortune that “only” required a small fee to be paid in order to retrieve it. They leveraged the usual tactics: urgency, promises of vast wealth and requests for personal financial information.
As far-fetched as these letters sound, they were, unfortunately, highly effective. Fast forward to today, when the underlying methods of those letters are combined with the resources of AI, research on targets and digital payment methods, and it is easy to see how fraudulent activity continues to flourish and becomes dangerously scalable.
This is not only applicable to those who are vulnerable and susceptible to a romance or pig-butchering scam. Anyone can be a target, and the advent of AI-generated content makes all kinds of schemes exponentially more successful.
AI tools that can be leveraged for fraudulent purposes are widely available, including open-source, publicly accessible tools that can produce generative AI versions of government-issued documents, video or voice messages, advertisements for fake companies, and synthetic identification documents with relative ease and in a very short period of time.
These weapons are effective in any environment, but certain elements of the cryptocurrency landscape heighten vulnerability and contribute to their success. Criminals seek opportunities based primarily on the continued increase in value of many cryptocurrencies and on the speed at which transactions are executed. For example:
- Falsified AI-generated transaction screenshots can misrepresent token performance to facilitate investment schemes.
- AI prompts can be designed to create rapport, leverage emotion or communicate empathy, more effectively targeting lonely or overconfident investors.
- Deepfakes can be used to socially engineer victims and promote fraudulent investments by misrepresenting support from corporate, entertainment or industry influencers.
- Synthetic identities built from harvested or reused credentials can bypass unsophisticated know-your-customer (KYC) controls.
This challenge is likely to continue for the foreseeable future as digital assets become more ingrained in the financial system while remaining a nascent concept in many ways. For example, legislation has allowed 401(k) retirement plans to include digital assets, which will introduce new transaction patterns, data types, trading partners and communication platforms to those involved. This unfamiliarity can create opportunity for AI-driven fraud: market participants may not know what to look for and may assume that a fake transaction, especially one enhanced by AI, is legitimate.
Despite the effectiveness and depth of AI-generated attack methods, similarly designed AI assistants can analyze the context, content and patterns of received data. The most successful defense teams will maintain strong human oversight to check automated processes for accuracy and prevent false positives. Combining the human element with other friction points that do not rely solely on AI solutions (e.g., behavioral biometrics and employee education and training) will provide rigor and readiness.
Blockchain Nexus
It is important to reiterate that the cryptocurrency environment is not uniquely suited to fraudulent activity; that is a myth. Despite misrepresentations of cryptocurrency as exclusively a tool for criminal activity, the truth is that cryptocurrency, and the blockchains on which it is built, can provide an effective defense against AI-generated fraud. Blockchain can serve as a primary technology for verifying the authenticity and integrity of AI-generated data. It can be used to create an immutable record of the data generation process, including the AI model used, the input data and the parameters. This allows the data’s origin to be verified and ensures that it has not been tampered with.
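To make the provenance idea concrete, the sketch below (a minimal illustration, not a production design) builds a canonical record of a hypothetical generation event, including the model name, a hash of the input data and the parameters, and computes a single SHA-256 fingerprint of that record. In a blockchain-backed system, it is this fingerprint that would be written to the immutable ledger; recomputing it later verifies that none of the recorded details were altered. The model name and parameters shown are placeholders.

```python
import hashlib
import json

def provenance_fingerprint(model: str, input_data: bytes, parameters: dict) -> str:
    """Build a canonical provenance record and return its SHA-256 fingerprint.

    The fingerprint is what would be anchored on-chain; re-computing it later
    verifies that the recorded generation details have not been altered.
    """
    record = {
        "model": model,
        "input_sha256": hashlib.sha256(input_data).hexdigest(),
        "parameters": parameters,
    }
    # Canonical JSON (sorted keys, fixed separators) so the same record
    # always serializes, and therefore hashes, identically.
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Hypothetical example: identical records match; any change breaks the match.
fp_a = provenance_fingerprint("gen-model-v1", b"prompt text", {"temperature": 0.7})
fp_b = provenance_fingerprint("gen-model-v1", b"prompt text", {"temperature": 0.7})
fp_c = provenance_fingerprint("gen-model-v1", b"altered prompt", {"temperature": 0.7})
assert fp_a == fp_b  # same provenance record, same fingerprint
assert fp_a != fp_c  # any change to the record changes the fingerprint
```

A real deployment would also record a timestamp and sign the record; the point here is only that a deterministic, canonical serialization is what makes the on-chain fingerprint verifiable.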
AI-generated data can be digitally signed using blockchain-based cryptographic techniques to ensure it is authentic and has not been altered during transmission or storage. Hash functions can create a unique digital fingerprint of the AI-generated data; any change to the data results in a different hash value, allowing tampering or manipulation to be detected. Smart contracts deployed on a blockchain can be programmed to check the integrity and authenticity of AI data before allowing it to be used in a specific application.
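The fingerprint-then-gate pattern described above can be sketched in a few lines. This is a simplified, off-chain stand-in: the in-memory registry below plays the role of the on-chain record, and the gate function plays the role a smart contract would, refusing data whose current hash no longer matches the recorded fingerprint. The asset IDs and data are hypothetical.

```python
import hashlib

# Stand-in for an on-chain registry mapping asset IDs to recorded fingerprints.
RECORDED_FINGERPRINTS: dict[str, str] = {}

def register(asset_id: str, data: bytes) -> None:
    """Record the data's SHA-256 fingerprint (the on-chain write in a real system)."""
    RECORDED_FINGERPRINTS[asset_id] = hashlib.sha256(data).hexdigest()

def accept_if_untampered(asset_id: str, data: bytes) -> bool:
    """Gate use of the data on integrity, as a smart contract might.

    Returns True only when a fingerprint was recorded for this asset and the
    data presented now hashes to the same value.
    """
    recorded = RECORDED_FINGERPRINTS.get(asset_id)
    return recorded is not None and hashlib.sha256(data).hexdigest() == recorded

register("report-001", b"AI-generated market summary")
assert accept_if_untampered("report-001", b"AI-generated market summary")
assert not accept_if_untampered("report-001", b"AI-generated market summary (edited)")
```

A production system would add asymmetric signatures over the fingerprint so the verifier can also confirm who recorded it, not just that the data is unchanged.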
Organizations will need to remain vigilant in the face of bad actors looking to stay ahead of consumers and enforcement. This will include leveraging innovative technologies, including solutions that combine AI and blockchain, to equip experts and investigators so they can more effectively prevent and prosecute criminal activity.