Redefining AI Governance: Leveraging Risk and Archetype as Drivers
November 14, 2025
AI governance models have traditionally focused on mitigating risk and protecting the downside: guarding against bias, lack of transparency, and privacy and security failures. But AI-driven business transformation has moved beyond efficiency and productivity gains to a reimagination of operating models and business models. Strategic risks, such as inaction eroding market competitiveness, have become equally important, if not more so, than simply protecting against the downsides of AI deployment. Given the pace and implications of these changes, it is no longer adequate for an organization’s Board of Directors to focus only on traditional risk vectors in AI governance. Boards must be equipped to understand, assess and advise on both defensive and offensive plays for creating and protecting shareholder value through AI adoption.
In prior articles in this series, we explored the state of AI-driven business transformation (The Shape of the Fourth AI Inflection in 2025), the frontiers of AI research and where AI model enhancements are headed (Frontiers of AI Research in 2025), and most recently, how these developments are influencing the AI investment landscape (AI Investment Landscape in 2025: Opportunities in a Volatile Market).1, 2, 3 In this two-part article we review the evolving risk landscape around AI deployment and a governance framework companies can use to manage those risks. Looking across our work with clients, our view is that organizations need to focus less on the “dark side” of AI and more on a practical mapping of the specific risks AI brings into the business and how those risks should be handled.
Awareness of the risks associated with AI and generative AI (GenAI) isn’t new; in fact, it is one of the biggest concerns executives raise as companies consider implementing the technology and how to reap gains from it.4, 5 What is missed in the conversation is that these risks are evolving as AI evolves. To get the most from AI, how leadership at the Board of Directors level thinks about risk needs to transform. More importantly, as organizations implement new AI tools, leadership will need to embrace smart risk-taking alongside risk mitigation: playing offense as well as defense.
A Balancing Act
To strike the right balance between offense and defense, an organization’s leadership needs to understand 1) which risks it is working to mitigate and 2) which governance archetype it falls into. These two pieces are the building blocks of the AI governance program Boards need to adopt, establishing the criticality of AI governance and setting the tone from the top. Let’s start by identifying the risks leaders should be concerned with.
A good AI governance program addresses not just the tactical risks (defense) associated with protecting enterprise value, brand and reputation, but also the opportunity risks (offense) of too little action or change, e.g., failing to adopt AI at all. In short, it should enable the business to play both defense and offense.
Said differently, organizations must now balance robust oversight of the risks the new technology introduces with the need to embrace innovation and change. Organizations that don’t will fall behind.
We break these risks into two categories: tactical risk and opportunity risk. Think of tactical risks as the day-to-day threats to business as usual. These risks require hand-to-hand combat: keeping out bad actors, preventing models from drifting from their purpose, stopping internal data from being unintentionally shared with the world, and so on.
Opportunity risks, on the other hand, are bigger picture or more strategic in nature. These risks are existential to a business and become more visible by asking “what if…” questions. What if we don’t act now? What if we don’t invest enough? What if we don’t have the right brain power? These are just some examples, but you can see why these are critical risks that leadership must consider as businesses and markets continue to evolve.
The Figure Below Captures More Specific Examples of Each Type of Risk:
But knowing what risks to look for is no longer enough. Leadership also needs to understand the context and behavior models that sit behind these risks.
Based on our experience, an organization’s Board of Directors will fit into one of four context and behavior archetypes for leadership when considering AI governance, each with strengths and challenges:
- The Value Driver is defined as focusing on maximizing the strategic value AI can deliver to the organization. Leaders in this role prioritize capturing growth and driving competitive advantage through AI innovation. They view AI as a value creator and push the organization to innovate responsibly, making sure the risks do not undermine value creation. However, leadership that fits into the Value Driver archetype may accept higher risk for the sake of growth, potentially underestimating compliance or operational pitfalls that could arise from aggressive adoption.
- The Strategic Catalyst is defined as leading by fostering organizational transformation through AI. They encourage experimentation and agile adaptation, ensuring the organization remains responsive to changing AI landscapes. They balance risk mitigation with strategic flexibility, acting as change agents who enable the organization to seize new AI-driven possibilities. However, leadership in the Strategic Catalyst archetype can stretch organizational readiness, occasionally sacrificing thorough risk controls for quicker strategic gains or broader change.
- The Guardian is defined as prioritizing tactical risk mitigation, compliance and ethical AI use. Leaders in this role emphasize the protection of the organization and manage risks by establishing robust controls, oversight and transparent AI governance frameworks. The Guardian ensures AI systems are safe, fair and accountable, serving as the protector against operational and reputational harm while enabling trust in AI adoption. However, leadership in the Guardian archetype may inhibit innovation by prioritizing caution, slowing opportunity realization and possibly causing resistance to beneficial change if too risk-averse.
- The Advisor is defined as providing expert guidance or subject matter expertise and oversight, bridging the gap between AI technical teams and business leadership. This archetype supports decision-making by clarifying complex AI issues and helping align AI initiatives with organizational interests. However, leadership in the Advisor archetype may lack direct ownership and can therefore struggle to drive action or resolve tension between risk and opportunity if leaders defer decisive moves.
Mapping these archetypes against opportunity and tactical risks can help organizations understand where they are today and where they would like to move with respect to the tradeoff between value creation and risk mitigation.
To Move Forward, Organizations Should Ask Themselves Two Questions:
- Which AI governance archetype does leadership currently fall into?
- Which AI governance archetype does leadership aspire to as the organization moves into the future?
When deciding which archetype your organization’s leadership wants to model, weigh the benefits against the potential drawbacks. Ultimately, it is about matching the organization’s strategic risk appetite to a target archetype; together, these determine the right AI governance strategy for your organization. But reaching a decision is critical in today’s environment.
What’s Next
Boards need to use tactical and opportunity risk considerations, together with their chosen archetype, to set the tone from the top for AI adoption. Only with these established can the organization make decisions that turn risk to its advantage and set the course for broader use of AI.
The next article will move from this overhead view of how leadership evaluates and manages risk to a ground-level view of how AI risk is managed day to day, through a dynamic framework fit for AI’s pace of change.
Footnotes:
1: Gupta, Sumeet, “The Shape of the Fourth AI Inflection in 2025,” FTI Consulting (January 27, 2025).
2: Gupta, Sumeet, “Frontiers of AI Research in 2025,” FTI Consulting (February 28, 2025).
3: Gupta, Sumeet, “AI Investment Landscape in 2025: Opportunities in a Volatile Market,” FTI Consulting (April 17, 2025).
4: Jagtap, Jiva, Lars Faeste, Roy Huang, Sam Aguirre, Keith McGregor, “2025 Private Equity Value Creation Index,” FTI Consulting (2025).
5: IBM Institute for Business Value, “The enterprise guide to AI governance: Three trust factors that can’t be ignored,” IBM (October 17, 2024).