DHS senior science adviser Brian Henz shares his thoughts on the challenges of applying artificial intelligence.

DHS S&T’s Brian Henz Discusses Explainability, Other Challenges of GenAI


Brian Henz, a senior science adviser on artificial intelligence for the Department of Homeland Security’s Science and Technology Directorate, identified explainability as a major challenge in applying generative AI. 

The technology expert recently participated in a roundtable discussion with other leaders to exchange perspectives on the applicability of generative AI. He provided a summary of the discussion in a blog post on DHS.gov. 

Why AI Explainability Is Critical

Explainability, according to Henz, is especially crucial for AI used in public services, where the impact on an individual can be significant. It enables decision-makers to determine where an AI model made a mistake and to mitigate similar incidents in the future. 

However, the official pointed out that neural networks are “black boxes,” making it difficult to identify the data that led to a particular decision. 

“If Gen AI is applied, by the nature of how it learns and generates new data, it can be difficult to track where things went wrong,” he wrote in the blog post. 

Other Challenges of Generative AI

During the roundtable discussion, Henz and other speakers identified additional challenges and considerations for using the technology. For instance, generative AI tools are designed for specific use cases and may not be repurposed for another application without system adjustments. 

Repurposing code can also create new risks and vulnerabilities, the official noted. 

DHS is exploring the use of AI to aid various agency missions. The agency recently piloted three AI tools that can conduct interviews with refugees and asylum seekers, summarize law enforcement reports and create hazard mitigation plans. According to Henz, the pilots have given DHS decision-makers insight into the real-life impact of generative AI tools.