The Department of the Treasury has released two new resources intended to standardize artificial intelligence governance and risk management practices across the U.S. financial sector.
The agency unveiled an AI Lexicon and the Financial Services AI Risk Management Framework on Thursday as part of the Trump administration’s AI Action Plan, which calls for clearer standards and risk-based oversight to guide AI deployment.
According to the Treasury, the resources are designed to help financial institutions adopt AI technologies while strengthening consumer protection, cybersecurity and operational resilience.

What Do the Treasury’s New AI Resources Include?
The AI Lexicon establishes common definitions for core AI concepts, capabilities and risk categories. By harmonizing terminology, the agency aims to bridge the communication gap between regulatory, technical, legal and business functions as AI adoption accelerates across the financial sector.
The Financial Services AI Risk Management Framework adapts the National Institute of Standards and Technology’s AI Risk Management Framework to the specific regulatory and operational environment of financial institutions. The sector-specific framework includes 230 control objectives mapped to varying stages of AI adoption. It provides guidance for evaluating AI use cases, managing lifecycle risks and integrating AI governance into existing enterprise risk programs.
The framework is scalable and intended for institutions of different sizes.
“It’s an essential resource for both community and multinational institutions alike, empowering them to effectively manage AI risks while driving growth and innovation,” said Josh Magri, CEO of the Cyber Risk Institute.
How Was the Financial Services AI Risk Management Framework Developed?
The resources were developed through the Artificial Intelligence Executive Oversight Group, a public-private body formed by the Financial and Banking Information Infrastructure Committee and the Financial Services Sector Coordinating Council.
More than 100 financial institutions participated in shaping the framework, with input from U.S. and international agencies, including NIST.
The initiative is slated to produce six AI-related resources in February focused on governance, data practices, explainability, identity and fraud prevention. The resources are intended as practical implementation tools rather than new regulatory mandates.
Why Is Treasury Focusing on AI Governance Now?
As financial institutions expand their use of AI, including generative AI, they face emerging risks such as bias, opacity, cybersecurity vulnerabilities and systemic interdependencies.
By aligning the financial sector’s AI risk practices with national standards while tailoring them to sector-specific requirements, the new guidance is intended to promote responsible adoption without slowing innovation.
