NIST Seeks Public Input on Draft Best Practices for Automated AI Benchmark Testing


The National Institute of Standards and Technology is asking industry, government and research stakeholders to weigh in on a new draft framework aimed at improving how language models are evaluated through automated benchmarking.

NIST said Friday that its Center for AI Standards and Innovation, or CAISI, released an initial public draft of NIST AI 800-2, “Practices for Automated Benchmark Evaluations of Language Models,” and is accepting public comments through March 31.

Why Is NIST Issuing Guidance on Automated Benchmark Evaluations?

Automated benchmark evaluations are increasingly used to support AI procurement and deployment decisions, particularly when organizations face limited time or resources. However, NIST cautions that benchmarks are not suitable for every evaluation need. This reflects a growing concern that while these tests have become essential tools for assessing artificial intelligence performance, consistent standards for ensuring valid, reproducible and transparent results are still in their infancy.

The draft organizes guidance around three areas: defining evaluation objectives and selecting benchmarks, implementing and running evaluations, and analyzing and reporting results. It notes that automated benchmarks work best when tasks are structured, verifiable and stable over time, but are less effective for subjective, dynamic or human-in-the-loop evaluations.
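To make the distinction concrete, consider a minimal sketch, in Python, of what a structured, verifiable task looks like: the answer can be checked mechanically against a reference. The items and grading function below are hypothetical illustrations, not examples from the draft.

```python
# Minimal sketch of an automatically gradable benchmark item.
# The items and grader are hypothetical, not drawn from NIST AI 800-2.
items = [
    {"prompt": "What is 17 * 24?", "answer": "408"},
    {"prompt": "What is the capital of France?", "answer": "Paris"},
]

def grade(model_output: str, reference: str) -> bool:
    # Exact-match grading is verifiable and stable over time; subjective
    # qualities such as helpfulness or tone resist this kind of automation.
    return model_output.strip().lower() == reference.strip().lower()

# Grading a stubbed model response for the first item:
print(grade("408", items[0]["answer"]))  # True
```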

What Does CAISI Recommend for Benchmark Design and Reporting?

One of the central recommendations is that evaluators should begin by clearly documenting what they are trying to measure and how results will be used.

CAISI emphasizes that evaluation objectives should specify both the intended use of the measurements and the underlying capability or construct being assessed. It also urges organizations to carefully select benchmarks, documenting what each benchmark actually measures and whether it directly aligns with the evaluation goal or serves only as a proxy.
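As a rough illustration, an evaluator following this advice might keep a record along the following lines. The schema and values below are hypothetical; the draft describes the documentation themes but does not prescribe a format.

```python
# Hypothetical documentation record for an evaluation objective.
# Field names and values are illustrative assumptions only.
evaluation_objective = {
    "intended_use": "pre-deployment screening for a coding-assistant procurement",
    "construct": "ability to write correct Python functions",
    "benchmark": "unit-test pass rate on a held-out problem set",
    "alignment": "proxy",  # related coding skill, not the deployment task itself
    "results_usage": "one input to a go/no-go decision, alongside human review",
}
```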

Beyond benchmark selection, CAISI highlights the importance of evaluation protocol design — the operational procedures that shape results.

The draft identifies several emerging principles (see the sketch after this list), including:

  • Comparability across models
  • External validity tied to real-world use
  • Cost control, since higher reasoning effort can inflate performance
  • Safeguards against evaluation “cheating,” such as models searching for answers online
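One way to picture how such principles translate into operational procedure is a frozen protocol record that every model under test runs against. The sketch below is a hypothetical Python illustration; its field names and defaults are assumptions, not settings taken from the draft.

```python
from dataclasses import dataclass

# Hypothetical evaluation protocol; fields are illustrative assumptions.
@dataclass(frozen=True)
class EvalProtocol:
    benchmark: str                  # benchmark identifier
    temperature: float = 0.0        # fixed decoding settings aid comparability
    max_output_tokens: int = 1024   # caps reasoning effort, containing cost
    allow_internet: bool = False    # network access risks contamination
    prompt_template: str = "plain"  # held constant across all models under test

# Running every candidate model under the same frozen protocol supports
# apples-to-apples comparison and reproducible results.
protocol = EvalProtocol(benchmark="grade_school_math")
```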

CAISI notes that providing internet access during evaluations is a particularly consequential decision, since it can introduce contamination and undermine benchmark integrity.

The draft also calls for stronger norms around statistical analysis and reporting. It recommends that evaluators quantify uncertainty through confidence intervals or standard errors, rather than treating benchmark scores as absolute measures. CAISI further advises that organizations should make qualified claims and avoid overgeneralizing benchmark outcomes beyond their intended scope.
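As a simple illustration of that recommendation, the sketch below reports a benchmark pass rate with a normal-approximation 95 percent confidence interval. The method and numbers are illustrative; the draft recommends quantifying uncertainty but does not mandate a particular technique.

```python
import math

# Illustrative only: pass rate with a normal-approximation 95% interval,
# one simple way to report uncertainty instead of a bare score.
def score_with_ci(passes: int, total: int, z: float = 1.96):
    p = passes / total
    se = math.sqrt(p * (1 - p) / total)  # standard error of the pass rate
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

score, low, high = score_with_ci(passes=172, total=200)
print(f"{score:.1%} (95% CI {low:.1%} to {high:.1%})")
# -> 86.0% (95% CI 81.2% to 90.8%)
```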

The draft reflects CAISI’s growing mission as the federal government’s primary industry-facing hub for testing frontier AI models. Recent CAISI initiatives include seeking AI experts to work on national security risk evaluations, AI red-teaming and secure deployment guidance as part of the Trump administration’s AI Action Plan.

NIST has also separately requested industry input on security risks and safeguards for agentic AI systems, highlighting threats such as backdoor attacks and data poisoning.