FBI Issues Guidance on Identifying ‘Deepfake’ Content

The FBI has released guidance aimed at helping cybersecurity professionals and the general public identify “deepfake” content, which adversaries may use to sway public opinion.

The FBI released the Private Industry Notification on Wednesday in partnership with the Cybersecurity and Infrastructure Security Agency (CISA).

According to the guidance, foreign actors are likely to use synthetic content including deepfakes in the coming months as part of influence campaigns and social engineering tactics.

Deepfakes, produced with generative adversarial network techniques, use artificial intelligence and machine learning to manipulate digital content for fraudulent purposes.

The FBI expects malicious actors to use deepfakes in spearphishing campaigns and Business Identity Compromise attacks designed to imitate corporate personas and authority figures.

“Currently, individuals are more likely to encounter information online whose context has been altered by malicious actors versus fraudulent, synthesized content,” the guidance states. “This trend, however, will likely change as AI and ML technologies continue to advance.”

Adversaries have used deepfake techniques to create fictitious journalists for false news items since 2017, according to the guidance.
