The Defense Advanced Research Projects Agency has partnered with the UL Research Institutes’ Digital Safety Research Institute to further the detection, attribution and characterization of artificial intelligence-generated media.
Tackling the Threat of Manipulated Media
DARPA said Wednesday the two organizations entered a cooperative research and development agreement to address the growing threat of manipulated media, or deepfakes. Through the collaboration, DSRI aims to continue DARPA’s Semantic Forensics, or SemaFor, program and sustain efforts to safeguard against manipulated audio, images, video and text. SemaFor, the successor to the Media Forensics program, concluded in September 2024 and has since transitioned its technologies to the government and commercial sectors.
Under the agreement, DSRI will oversee the program’s AI Forensics Open Research Challenge Evaluations, or AI FORCE. It will be tasked with announcing challenge results and awarding research grants. DSRI will conduct these events at academic conferences, which serve as an open scientific research ecosystem for individuals and organizations to share ideas and techniques.
Manipulated media has become a growing problem as automated manipulation technologies have become more accessible and the use of social media to share such content has become more prevalent.
Wil Corvey, program manager of DARPA’s SemaFor, stated, “DSRI’s mission of product testing and evaluation, specifically with respect to the complex and evolving sociotechnical environment in which products will be deployed, makes them an ideal fit for this area of transition.”
Jill Crisman, executive director of DSRI of UL Research Institutes, added, “DSRI aims to enable digital information testing and inspection tools [to] keep pace with the rapid advances of generative AI.”