Banking’s AI Rulebook: Turning the Treasury Framework Into Action
March 20, 2026
The AI frontier has been likened to the Wild West: competing priorities, limited oversight, and banks and financial institutions (“FIs”) racing to stake their claim in a modern gold rush. Now, a new regulatory sheriff has arrived. The U.S. Treasury’s Financial Services AI Risk Management Framework (“FS AI RMF”) adapts the 2023 NIST AI Risk Management Framework for financial institutions.[1] It provides the industry with a shared vocabulary and a common control architecture for governing AI – from fraud detection to customer engagement to internal productivity tools. The sheriff isn’t out to shut down these digital boomtowns: the goal is to tame the chaos and to bring accountability and consistency to AI governance.
The framework provides FIs of all sizes with a structured way to evaluate and manage AI risk. The challenge for most institutions is that the framework assumes banks already know where AI lives within their organizations and how it is being used. Those without a baseline inventory of AI usage will face immediate sequencing challenges when applying the framework.
Implementing the framework is a heavy lift, demanding cross-functional coordination and executive backing. Without clear ownership and accountability, organizations may struggle to operationalize a program spanning governance, legal, risk and other functions.
More importantly, the framework is not just a regulatory checkbox. It signals a clear shift in how Chief Risk and Compliance Officers and external counsel are expected to support accountability, operational execution, and advisory work under frameworks like this one. For law firms, maturity classifications are consequential because they reshape downstream regulatory, supervisory, and litigation exposure.
Where AI Risk Meets Operational Controls
The FS AI RMF is structured around four integrated components: an AI Adoption Stage Questionnaire that determines institutional maturity, a Risk and Control Matrix (“RCM”)[2] that maps applicable controls, a Guidebook[3] that provides implementation guidance, and a Control Objective Reference Guide that offers detailed technical support.
The application of the framework begins with the Adoption Stage Questionnaire, a self-assessment that categorizes an institution into one of four maturity levels based on usage: Initial, Minimal, Evolving, or Embedded.[4] Your stage then determines which control objectives in the RCM apply to you. Once the maturity level is determined, controls build cumulatively: each stage inherits the prior stage’s controls while introducing more rigorous requirements.
The numbers below illustrate how the scope quickly expands:
- Initial: 21 control objectives
- Minimal: 126 control objectives
- Evolving: 193 control objectives
- Embedded: all 230 control objectives
The jump from one stage to the next is material and acts as a governance multiplier. The difference between Initial and Minimal alone expands the control environment by more than 100 objectives. The Questionnaire thus scopes the breadth and rigor of your risk management obligations under the framework.
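The cumulative, stage-based scoping described above can be sketched in a few lines of code. This is an illustrative sketch only, not part of the FS AI RMF: the stage names and the cumulative totals (21, 126, 193, 230) come from the framework’s published counts, but the data structure and function are hypothetical.

```python
# Control objectives newly introduced at each stage. The per-stage
# increments are derived from the framework's cumulative totals
# (21, 126, 193, 230); the dictionary itself is illustrative.
NEW_OBJECTIVES_BY_STAGE = {
    "Initial": 21,
    "Minimal": 105,   # 126 - 21
    "Evolving": 67,   # 193 - 126
    "Embedded": 37,   # 230 - 193
}

STAGE_ORDER = ["Initial", "Minimal", "Evolving", "Embedded"]


def scoped_objective_count(stage: str) -> int:
    """Each stage inherits every prior stage's control objectives."""
    idx = STAGE_ORDER.index(stage)
    return sum(NEW_OBJECTIVES_BY_STAGE[s] for s in STAGE_ORDER[: idx + 1])


assert scoped_objective_count("Initial") == 21
assert scoped_objective_count("Minimal") == 126
assert scoped_objective_count("Evolving") == 193
assert scoped_objective_count("Embedded") == 230
```

The cumulative sum makes the "governance multiplier" concrete: moving up one stage never removes obligations, it only adds them.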
You Can’t Govern What You Can’t See
The Questionnaire assumes you know how AI is deployed across your institution. In practice, many organizations do not. Characterizing the organization’s use across key dimensions – business impact, governance, deployment model, third-party AI use, organizational goals, and data sensitivity – is challenging without an existing inventory.
Here’s the catch when it comes to sequencing: building the AI inventory is itself one of the control objectives (GV-1.6),[5] spanning six sub-objectives from shadow IT to portfolio-level risk analysis. Yet while the framework treats the inventory as a control to be implemented, an organization cannot complete the Questionnaire without a base-level inventory of AI usage.
This includes everything from a production chatbot that likely underwent model risk management to an enterprise-wide ChatGPT deployment where employees can be creative and define their own use cases. For the questionnaire to accurately determine the adoption stage and risk exposure, AI usage analysis must be comprehensive.
While the inventory need not be perfect, investing time in building and updating the inventory (including vendor-deployed items, purchased products, and employee tools) will pay dividends when completing the questionnaire.
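A baseline inventory of the kind described above can be as simple as a structured record per AI deployment. The sketch below is hypothetical: the field names are drawn from the dimensions the article lists (deployment model, third-party use, data sensitivity), not from the framework itself.

```python
from dataclasses import dataclass, field


@dataclass
class AIInventoryEntry:
    """Illustrative minimal record for one AI deployment."""
    name: str                  # e.g. "Customer help-desk chatbot"
    owner: str                 # accountable business owner
    deployment_model: str      # "vendor-hosted", "internal", ...
    third_party: bool          # relies on an external provider?
    data_sensitivity: str      # "public", "internal", "customer PII", ...
    use_cases: list = field(default_factory=list)


# Two entries mirroring the article's hypothetical bank.
inventory = [
    AIInventoryEntry("Help-desk chatbot", "Customer Ops", "vendor-hosted",
                     True, "customer PII", ["routing support requests"]),
    AIInventoryEntry("Enterprise ChatGPT", "IT", "vendor-hosted",
                     True, "unknown", []),  # use cases still being mapped
]

# A quick completeness check: every entry needs an accountable owner.
assert all(entry.owner for entry in inventory)
```

Even this minimal shape surfaces the gaps that matter for the Questionnaire: the enterprise ChatGPT entry has an “unknown” data sensitivity and no enumerated use cases, which is precisely the shadow-use problem the framework’s GV-1.6 sub-objectives target.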
The Risks of Underestimating AI Adoption
The questionnaire is a subjective assessment, and opinions within the same organization can vary. Although the framework offers no guidance for resolving borderline maturity classifications, leaning toward the more mature stage is likely the better approach. Underestimating maturity can elevate the organization’s risk by omitting essential controls for AI use and creating organizational blind spots.
Activating the Control Environment
Once you’ve built a working inventory and made a thoughtful call on your adoption stage, the rest of the framework is straightforward. The RCM maps risk statements to specific control objectives for your stage. The Guidebook walks you through implementation. The Control Objective Reference Guide gives you concrete examples of controls and the evidence an assessor would expect.
A Bank’s AI Governance Test Case
Consider a hypothetical mid-market bank that has moved cautiously into AI. It has two deployments: a customer-facing chatbot that routes individuals to help desk resources, and enterprise-wide access to ChatGPT for employees.
These represent distinct risk profiles within the same institution and provide a practical lens for how the framework operates.
Understanding Formal Tools Versus Shadow Use
Before starting the Questionnaire, the bank must document its AI tools. The chatbot is straightforward: procured and owned by someone within the organization. Enterprise ChatGPT is different: available to all employees, its use may be opaque. Is it summarizing loan applications, supporting compliance reporting, or drafting client communications?
Each represents a different data sensitivity and risk profile, many of which may not have formal documentation. The range of use cases, particularly those without a formal request and review process, makes ChatGPT challenging.
Taking the Questionnaire
The Questionnaire evaluates six dimensions – business impact, governance, deployment model, third-party AI use, organizational goals, and data sensitivity – by asking the institution to review statements describing its current practices.
The Questionnaire works from the most mature level of adoption to the least, starting with Stage 4 (Embedded) and moving toward Stage 1 (Initial). You stop at the first stage where any of the six statements applies to your organization.
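The stopping rule described above is, in effect, a first-match search from the most mature stage downward. The sketch below is hypothetical: the boolean answers stand in for the framework’s actual statement wording, and the function name is invented for illustration.

```python
# Evaluate stages from most to least mature; stop at the first stage
# where any of the six dimension statements applies to the institution.
STAGES_MOST_TO_LEAST_MATURE = ["Embedded", "Evolving", "Minimal", "Initial"]


def determine_adoption_stage(applies):
    """`applies[stage]` holds six booleans, one per dimension statement."""
    for stage in STAGES_MOST_TO_LEAST_MATURE:
        if any(applies[stage]):
            return stage  # first (most mature) stage with any match
    return "Initial"  # floor if no statement matches at all


# Example mirroring the article's bank: no Embedded or Evolving
# statement fits, but one Minimal statement does.
answers = {
    "Embedded": [False] * 6,
    "Evolving": [False] * 6,
    "Minimal":  [False, True, False, False, False, False],
    "Initial":  [True] * 6,
}
assert determine_adoption_stage(answers) == "Minimal"
```

Note the asymmetry this rule creates: a single `True` at a higher stage pulls the whole institution up to that stage, which is why one employee pasting customer data into a third-party LLM can change the classification.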
For this bank, Stage 4 and Stage 3 (Evolving) probably don’t fit. AI isn’t driving autonomous decision-making or transforming critical business functions. It’s not processing high-sensitivity data through externally facing solutions – at least not intentionally. But the chatbot is customer-facing, and employees have access to a third-party LLM that could handle sensitive information, depending on how it’s used.
This is where subjectivity comes into play. Does enterprise ChatGPT access constitute “limited use of AI for non-critical tasks” (Minimal)?[6] Or has the bank crossed into something more expansive without fully recognizing it? If even one employee uses ChatGPT with customer data or regulated information, the bank may have greater risk exposure than its governance structure reflects.
The conservative move – classifying as Evolving rather than Minimal – means the bank scopes in 193 control objectives, rather than 126. The bank is also less likely to have control blind spots in areas such as data sensitivity, third-party risk, and external-facing AI governance.
The RCM As the Governance Engine
Once the bank firms up its adoption stage, the RCM becomes the primary working document. Each control objective maps to a risk statement and aligns to the NIST Govern-Map-Measure-Manage structure, with implementation guidance included.
For the chatbot, many of the relevant controls resemble those used in model risk management or vendor oversight. Third-party evaluation and selection (GV-6.1.1), Service Level Agreements with AI vendors (GV-6.2.3), performance monitoring in production (MS-2.4.1), and incident response procedures (MG-4.1.6) all apply.[7] The bank likely has some version of these in place already through existing risk management processes.
Enterprise ChatGPT surfaces a different set of controls. Acceptable use policy (GV-1.2.3) is the obvious starting point. Does the bank have one, and is it specific enough to address what employees can and cannot put into a third-party LLM? Beyond that, the RCM points to controls around data lifecycle and retention (GV-1.1.6), third-party AI risk documentation (GV-6.1.5), and human-AI supervision policies (GV-3.2.1).[8] Institutions may not have built governance around ChatGPT as a productivity tool, exposing them to potential risk.
The gap between current controls and the RCM becomes the roadmap, with adoption-stage scoping guiding prioritization as AI usage matures.
The Real Work
Landing on an adoption stage and identifying the relevant controls is straightforward. For a bank classified as Evolving, what follows is the validation or implementation of 193 specified control objectives – each with its own risk statement, implementation guidance, and evidence expectations.
Execution of this work is not a light lift. It must touch governance, legal, compliance, technology, vendor management, HR, and the board. It requires coordination across functions that may not be used to working together. And it assumes a level of organizational readiness – dedicated resources, clear ownership, executive support – that many institutions are still building.
Footnotes:
1: U.S. Department of the Treasury Press Release, “Treasury Releases Two New Resources to Guide AI Use in the Financial Sector” (Feb. 19, 2026).
2: Cyber Risk Institute, Financial Services AI Risk Management Framework (Feb. 9, 2026).
3: Id.
4: Id.
5: See supra note 2.
6: Id.
7: Id.
8: Id.