The National Security Agency (NSA), the Federal Bureau of Investigation (FBI), and the Cybersecurity and Infrastructure Security Agency (CISA) released a Cybersecurity Information Sheet (CSI), Contextualizing Deepfake Threats to Organizations, which provides an overview of synthetic media threats, techniques, and trends.
Threats from synthetic media, such as deepfakes, have increased exponentially, presenting a growing challenge for users of modern technology and communications, including National Security Systems (NSS), the Department of Defense (DoD), the Defense Industrial Base (DIB), and national critical infrastructure owners and operators. Between 2021 and 2022, U.S. government agencies collaborated to establish a set of best practices to employ in preparing for and responding to the growing threat. Public concern about synthetic media centers on disinformation operations designed to influence the public and spread false information about political, social, military, or economic issues to cause confusion, unrest, and uncertainty.
The authoring agencies urged organizations to review the CSI for recommended steps and best practices to prepare, identify, defend against, and respond to deepfake threats.
The CSI states, “Organizations should implement identity verification capable of operating during real-time communications. Identity verification for real-time communications will now require testing for liveness given the rapid improvements in generative-AI and real-time rendering. Mandatory multi-factor authentication (MFA), using a unique or one-time generated password or PIN, known personal details, or biometrics, can ensure those entering sensitive communication channels or activities are able to prove their identity. These verification steps are especially important when considering procedures for the execution of financial transactions.”
Basic recommendations include:
- Make a copy of the media prior to any analysis.
- Hash both the original and the copy to verify an exact copy.
- Check the source of the media (i.e., whether the organization or person is reputable) before drawing conclusions.
- Use reverse image searches, such as TinEye, Google Image Search, and Bing Visual Search; these can be extremely useful if the media is a composite of images.
- Visual/audio examination – watch and listen to the media first, as there may be obvious signs of manipulation.
- Metadata examination tools can sometimes provide additional insights depending on the situation.
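The first two steps above (copy the media, then hash both files) can be sketched with Python's standard `hashlib`; the function names are illustrative, and SHA-256 is one reasonable choice of digest:

```python
import hashlib

def sha256_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large media files are never loaded whole into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def is_exact_copy(original_path: str, copy_path: str) -> bool:
    """True only if the original and the working copy hash to the same digest."""
    return sha256_file(original_path) == sha256_file(copy_path)
```

Recording the digest alongside the evidence also lets an analyst later demonstrate that the copy under examination is bit-identical to the original.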
The CSI provided the following advanced recommendations:
- Physics-based examinations – complete checks to verify vanishing points, reflections, shadows, and more, using techniques from Hany Farid and other methods based on fluid dynamics.
- Compression-based examination – use tools designed to look for compression artifacts, recognizing that lossy compression inherently destroys many forensic artifacts.
- Content-based examinations (when appropriate) – Use tools designed to look for specific manipulations when suspected.
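As a lightweight starting point for both metadata- and compression-oriented triage, a JPEG's marker segments can be walked with the standard library alone. This sketch (structure and names are my own, not from the CSI) locates the APP1 segment that typically carries EXIF/XMP metadata and the DQT segments holding the quantization tables used by lossy compression, which some double-compression checks examine:

```python
import struct

# Marker names relevant to forensic triage: APP1 usually carries EXIF/XMP
# metadata; DQT holds the lossy-compression quantization tables.
MARKER_NAMES = {0xE0: "APP0", 0xE1: "APP1", 0xDB: "DQT", 0xC0: "SOF0",
                0xC2: "SOF2", 0xDA: "SOS", 0xD9: "EOI", 0xFE: "COM"}

def jpeg_segments(data: bytes) -> list[tuple[str, int, int]]:
    """Return (marker_name, offset, payload_length) for each JPEG segment,
    stopping at the entropy-coded image data (SOS) or end of image (EOI)."""
    if data[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG: missing SOI marker")
    segments = [("SOI", 0, 0)]
    i = 2
    while i + 2 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        name = MARKER_NAMES.get(marker, f"0x{marker:02X}")
        if marker == 0xD9:                       # EOI: standalone, no length field
            segments.append((name, i, 0))
            break
        if i + 4 > len(data):
            break
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        segments.append((name, i, length - 2))   # length field includes itself
        if marker == 0xDA:                       # SOS: compressed scan data follows
            break
        i += 2 + length
    return segments
```

A missing APP1 segment on a photo that should carry camera EXIF data, or unusual quantization tables, is not proof of manipulation, but it flags the file for closer examination with dedicated tools.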
The CSI further states, “To protect media that contains the individual from being used or repurposed for disinformation, one should consider beginning to use active authentication techniques such as watermarks and/or CAI standards. This is a good preventative measure to protect media and make it more difficult for an adversary to claim that a fake media asset portraying the individual in these controlled situations is real. Prepare for and take advantage of opportunities to minimize the impact of deepfakes.”
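Active authentication techniques range from CAI/C2PA provenance manifests to embedded watermarks. As a toy illustration of the watermarking idea only (production systems use robust, imperceptible schemes that survive re-encoding, which this does not), here is a least-significant-bit embed/extract pair over raw pixel bytes; all names are hypothetical:

```python
def embed_watermark(pixels: bytes, payload: bytes) -> bytearray:
    """Write each payload bit into the least significant bit of successive bytes."""
    bits = [(byte >> (7 - b)) & 1 for byte in payload for b in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("payload too large for cover data")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit           # clear LSB, then set payload bit
    return out

def extract_watermark(pixels: bytes, n_bytes: int) -> bytes:
    """Read n_bytes of payload back out of the least significant bits."""
    bits = [pixels[i] & 1 for i in range(n_bytes * 8)]
    return bytes(
        sum(bit << (7 - b) for b, bit in enumerate(bits[i:i + 8]))
        for i in range(0, len(bits), 8)
    )
```

Because the change is confined to the lowest bit of each byte, the marked media is visually indistinguishable from the original, yet the payload (e.g., an identifier or signature) can be recovered to support a later claim of authenticity.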
To report suspicious activity or possible incidents involving deepfakes, contact one of the following agencies: