FBI Issues Guidance on Identifying ‘Deepfake’ Content


The FBI has released guidance aimed at helping cybersecurity professionals and the general public identify "deepfake" content, which adversaries may use to sway public opinion.

The FBI released the Private Industry Notification guidance on Wednesday in partnership with the Cybersecurity and Infrastructure Security Agency (CISA).

According to the guidance, foreign actors are likely to use synthetic content including deepfakes in the coming months as part of influence campaigns and social engineering tactics.

Deepfakes rely on generative adversarial network (GAN) techniques, which use artificial intelligence and machine learning to manipulate or synthesize digital content for fraudulent purposes.
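The adversarial idea behind GANs can be sketched in miniature: a generator learns to produce samples that a discriminator cannot tell apart from "real" data, while the discriminator simultaneously learns to tell them apart. The toy below is an illustrative assumption for this article, not anything from the FBI notification: it uses linear one-dimensional models and a Gaussian "real" distribution so the whole adversarial loop fits in plain NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Discriminator output: probability the input is "real".
    return 1.0 / (1.0 + np.exp(-x))

# Toy models (illustrative assumption): generator g(z) = a*z + b,
# discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0      # generator parameters
w, c = 0.0, 0.0      # discriminator parameters
lr, n = 0.05, 64     # learning rate, batch size
real_mu = 4.0        # "real" data is drawn from N(4, 1)

for step in range(3000):
    x_real = rng.normal(real_mu, 1.0, n)
    z = rng.normal(size=n)
    x_fake = a * z + b

    # Discriminator ascent on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * np.mean((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator ascent on log D(fake): learn to fool the discriminator.
    d_fake = sigmoid(w * x_fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# After training, the generator's output mean b has drifted toward the
# real distribution's mean, without ever seeing real_mu directly.
```

Real deepfake systems apply the same adversarial training to deep convolutional networks over images, audio, or video rather than a two-parameter line, which is what makes the resulting forgeries hard to detect.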

The FBI expects malicious actors to use deepfakes in spearphishing campaigns and Business Identity Compromise attacks designed to imitate corporate personas and authority figures.

“Currently, individuals are more likely to encounter information online whose context has been altered by malicious actors versus fraudulent, synthesized content,” the guidance states. “This trend, however, will likely change as AI and ML technologies continue to advance.”

Adversaries have used deepfake techniques to create fictitious journalists for false news items since 2017, according to the guidance.
