NIST Issues Draft Publication on ‘Explainable’ AI

The National Institute of Standards and Technology (NIST) is looking for input on its draft publication detailing the core principles of “explainable” artificial intelligence.

NIST said Tuesday that the draft report comes as part of the agency’s efforts to develop trustworthy AI systems and provide transparency on such systems’ limitations and theoretical capabilities.

According to NIST, an explainable AI system must be able to provide evidence or reasons that accompany its outputs, and those explanations must accurately reflect the system’s process for generating them.

In addition, an explainable AI system must produce explanations that are meaningful and understandable to its users, and should operate only under conditions “for which it was designed or when the system reaches a sufficient confidence in its output,” according to the report.
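To make those principles concrete, here is a minimal, hypothetical sketch, not drawn from the NIST report itself, of a system that pairs each output with supporting evidence and abstains when its confidence falls below its designed operating range; the classifier logic, threshold, and field names are all illustrative assumptions:

from dataclasses import dataclass
from typing import Optional

@dataclass
class ExplainedOutput:
    label: str          # the system's decision
    confidence: float   # the system's estimated certainty
    evidence: str       # human-readable justification accompanying the output

# Hypothetical "knowledge limits" cutoff; a real system would set this
# based on the conditions it was designed and validated for.
CONFIDENCE_THRESHOLD = 0.9

def classify(features: dict) -> Optional[ExplainedOutput]:
    """Return a decision with accompanying evidence, or abstain
    when confidence falls below the designed operating threshold."""
    # Stand-in scoring logic in place of a real model.
    score = min(1.0, features.get("signal", 0.0))
    if score < CONFIDENCE_THRESHOLD:
        return None  # decline to answer outside the system's knowledge limits
    return ExplainedOutput(
        label="approve",
        confidence=score,
        evidence=f"signal={score:.2f} met or exceeded threshold {CONFIDENCE_THRESHOLD}",
    )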

"Understanding the explainability of both the AI system and the human opens the door to pursue implementations that incorporate the strengths of each,” said Jonathon Phillips, an electronic engineer at NIST and co-author of the draft report.

“AI is becoming involved in high-stakes decisions, and no one wants machines to make them without an understanding of why,” he noted.

Interested parties have until Oct. 15 to submit feedback on the publication.