Evidence Tampering, Deepfakes, Image Manipulation and the Impact on Digital Forensic Authentication
When Deepfakes Disrupt Traditional Forensic Authentication: Why Metadata Analysis Matters
April 8, 2025
Falsification and manipulation of digital media have become so sophisticated and widely available that alterations are now hardly detectable to the human eye or ear. Media frenzies surrounding photo modifications by public figures, and deepfake scams costing millions of dollars (such as the recent voice deepfake incident in Hong Kong, in which a company employee was fooled into making a $25 million payment to fraudsters)[1], have prompted an important discussion about how the verifiability of electronic evidence is changing as technology advances.
Until recently, creating convincing deepfakes or believably altering digital media required substantial time, resources and technical skill. Recent technological advances have changed that. The use of fake media for illegal activities such as fraud, libel and harassment is already happening, and as recent cases and scandals have shown, editing tools and artificial intelligence are increasingly being used to alter items that may eventually be entered as evidence. Verifying the authenticity of digital media, or uncovering the story behind its modification and falsification, is increasingly critical to establishing facts and conducting defensible investigations.
Legal teams and digital forensics experts encounter these issues regularly in regulatory and other legal matters, and are increasingly relying on the collection and analysis of digital artifacts and metadata (i.e., the underlying data about the data) to support fact finding. At FTI Technology, our teams have worked on matters in which people digitally manipulated screenshots of messages to support their cases, and in which individuals relied on photos of contracts to make their arguments. The digital forensic ability to validate or disprove the authenticity of such images is critical to establishing facts and reaching fair resolutions.
For example, in matters involving modified images, metadata can confirm the camera that was used to take a photo, the type of lens that was used and exactly when the photos were digitally altered. This metadata can be a rich source of information, especially the EXIF data commonly embedded in pictures taken with digital cameras, which records the camera settings, the location, image metrics and standard date and time attributes. It is possible to manipulate digital media metadata, for instance with EXIF editors that manually change camera and time information, or through "timestomping," which alters the timestamps in metadata. However, these actions leave traces behind, and skilled digital forensics investigators can detect them.
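To make the timestomping idea concrete, the sketch below flags timestamp relationships that a genuine, unaltered capture should satisfy. The field names follow common EXIF conventions (`DateTimeOriginal`, `ModifyDate`), but the values and the specific rules are illustrative assumptions, not a real forensic rule set or an FTI tool:

```python
from datetime import datetime

def find_timestamp_anomalies(meta: dict) -> list[str]:
    """Flag timestamp relationships that a genuine capture should satisfy.

    Hypothetical example: real EXIF extraction would use a forensic tool
    or library; here the metadata is supplied as a plain dict.
    """
    fmt = "%Y:%m:%d %H:%M:%S"  # the date format used inside EXIF fields
    issues = []
    taken = datetime.strptime(meta["DateTimeOriginal"], fmt)
    modified = datetime.strptime(meta["ModifyDate"], fmt)
    fs_created = datetime.strptime(meta["FileSystemCreated"], fmt)
    if modified < taken:
        issues.append("ModifyDate precedes DateTimeOriginal")
    if fs_created < taken:
        issues.append("file appeared on disk before the photo was taken")
    return issues

# Illustrative metadata with one deliberate inconsistency.
suspect = {
    "DateTimeOriginal": "2024:06:01 14:30:00",
    "ModifyDate": "2024:05:20 09:00:00",       # "edited" before capture: a red flag
    "FileSystemCreated": "2024:06:02 08:00:00",
}
print(find_timestamp_anomalies(suspect))
```

In practice an examiner would compare many more sources (filesystem MACB times, application logs, backup copies), since a careful forger may fix the obvious fields but rarely all of them.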
Similar approaches can be used to verify whether an image, video or audio file is a deepfake. Before deepfakes reached their current level of sophistication, investigators could simply look closely at an image or listen carefully to a recording to judge whether it was real. Now, that kind of inspection may not yield conclusive findings, requiring investigators to leverage additional digital forensic tactics to authenticate a suspicious item. Technology is being developed to estimate the probability that content is not organic (i.e., was machine-generated), and FTI Technology experts are monitoring this space.
Still, the issue of deepfakes underscores how instrumental metadata analysis is, as it is extremely difficult to consistently falsify all of the relevant metadata surrounding a file. Consider a photo taken with a smartphone. In addition to capturing the image, the device records the date, time and location of the photo, as well as certain information about the camera or device, and stores that information with the image. In a deepfake, the context or alignment between different pieces of metadata will likely be inconsistent. There may be issues in the timeline (e.g., a creation date later than the last-altered date), in application stamps (e.g., a photo supposedly taken with an iPhone missing the corresponding metadata entry identifying the device), a picture supposedly sent through WhatsApp arriving in the wrong format, and so on.
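These cross-checks can be expressed as simple consistency rules. The sketch below is a hedged illustration of that idea; the field names (`claimed_source`, `Make`, `format`) and the two rules are assumptions chosen to mirror the examples above, not an actual forensic rule set:

```python
def consistency_flags(meta: dict) -> list[str]:
    """Check whether metadata fields agree with a file's claimed provenance.

    Illustrative rules only; a real examination would test many more
    relationships across device, application and transport metadata.
    """
    flags = []
    # A photo claimed to come from an iPhone should carry Apple maker tags.
    if meta.get("claimed_source") == "iPhone" and meta.get("Make") != "Apple":
        flags.append("claimed iPhone photo lacks an Apple 'Make' tag")
    # Images sent through WhatsApp are typically re-encoded as JPEG.
    if meta.get("claimed_transport") == "WhatsApp" and meta.get("format") != "JPEG":
        flags.append("claimed WhatsApp image is not in JPEG format")
    return flags

# Illustrative metadata whose story does not hold together.
sample = {
    "claimed_source": "iPhone",
    "claimed_transport": "WhatsApp",
    "Make": None,       # device maker tag missing entirely
    "format": "PNG",    # unexpected container for the claimed transport
}
for flag in consistency_flags(sample):
    print("FLAG:", flag)
```

No single flag proves forgery; the point is that each additional metadata relationship is one more thing a forger must falsify consistently.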
The lines separating reality from artificiality are more blurred than ever, and the ability of machines to learn, predict and create content is significantly changing the way individuals, companies and governments engage with the world — changes that will inevitably spill over into courtrooms and regulatory matters. In matters where the authenticity of digital evidence comes into question, digital forensics experts who know what to look for, and how to closely examine forensic artifacts, must be involved. Likewise, these professionals will need to reassess their traditional methods. There is now a myriad of complex data considerations to account for in workflows, and an entirely new category of electronic evidence around which new expertise, tools and workflows will need to be developed in the years to come.
This article was developed with contributions from Jerry Lay and Jerry Bui of FTI Technology.
Footnotes:
1: Heather Chen, Kathleen Magromo, "Finance worker pays out $25 million after video call with deepfake 'chief financial officer'," CNN.com (February 4, 2024).