AI Accountability: 14 Questions the General Counsel Should Ask
September 17, 2024
As generative artificial intelligence has become a focal point of corporate strategy and digital transformation initiatives, “AI governance” has become a buzzword in its own right. The concept remains loosely defined within most organizations; many leaders know only that they should be doing something about AI governance, not what they should be doing or how. While standards and frameworks for AI governance will mature as AI implementations evolve, the question of who is accountable for AI governance will likely remain ambiguous.
This ambiguity surrounding AI accountability presents a significant opportunity for general counsel. At a time when general counsel have gained traction in establishing themselves not merely as risk mitigators, but also as strategic business partners, they are poised to step in and assume the lead role in upholding accountability for their organization’s AI projects.
To do this effectively, legal teams must engage with the tools their business is evaluating, piloting and deploying. Developing firsthand knowledge of the technology allows general counsel to recognize its effectiveness and potential, as well as its inherent risks. Additionally, general counsel must understand who their key counterparts should be in the AI governance ecosystem, such as the chief data officer, chief technology officer, chief information security officer, chief digital transformation officer and chief privacy officer. Because not all these roles exist in every organization, assigning AI governance ownership can be complicated. Still, by serving as one of several internal authorities across the spectrum of benefits and pitfalls for any given AI use case, general counsel can help their business partners pursue innovation responsibly.
Alongside their counterparts across IT, security, privacy and related functions, general counsel should work to address four central categories of AI accountability: the groups and individuals to whom the organization is accountable, organizational risk, ethical responsibility and financial implications. Essential questions the general counsel can ask to strengthen governance across these pillars are outlined here.
Groups and Individuals to Whom the Organization Is Accountable
- How is employee data impacted by the AI tools in use? If employee data is being fed into a model, or employees are using AI to do their jobs, the organization is accountable for providing usable tools that can be trusted as safe and ethical.
- What are the expected outcomes for customers and/or clients? AI can be a tool for maximizing value to customers and improving offerings, but if customer data is being used in the model, a range of privacy, ethical, security and transparency considerations will need to be addressed.
- Does the use of AI stand to affect the company’s value in any way? There is a delicate balance between mitigating risk and safeguarding business value for investors, shareholders and/or private equity participants. Failures in AI use can lead to costly business disruptions or penalties, yet moving too slowly may come at the cost of losing market opportunity to competitors.
- Will any of the AI use cases change the standing of or parameters around strategic partnerships? Depending on the use case and the technology involved, an AI deployment could conflict with existing partner agreements or change the business in ways that disrupt the nature of its partnerships. Missteps that result in reputational damage could likewise harm important partner relationships. Vendor relationships must also be considered from the perspective of what each partner’s AI is doing, so it can be evaluated for compliance with third-party risk management policies.
- Are the board and chief executives in agreement with the strategy? Alignment across business functions, the C-suite and the board is critical to protecting the organization on all fronts.
Organizational Risk
- Are there existing or impending laws that regulate the technology? AI regulation has already taken effect in Europe via the EU AI Act, and existing data protection laws also reach AI use. Lawmakers around the world are developing or enacting regulations that will govern many aspects of AI use by businesses, making this an area the general counsel must continuously monitor to understand evolving accountability and transparency requirements. Organizations should expect ongoing policy changes and wide variation in requirements across jurisdictions.
- Are the potential regulatory and technological risks understood? General counsel are responsible for protecting their organizations against known and future risks, and therefore must engage in thorough risk assessments of all new AI tools and uses within their organization.
- What data will be generated, stored or shared? As part of a risk assessment, general counsel should map the data underlying their organization’s AI systems to determine whether there are conflicts with regulatory requirements, customer notices, ethical boundaries and other data-related issues.
- Has the full scope of necessary governance, risk and compliance controls been evaluated? Like any other enterprise technology or critical business function, new AI tools will require rigorous attention to all aspects of governance, risk and compliance (“GRC”), and in many cases, will require adjustments or additions to existing processes and policies.
Ethical Responsibility
- Who or what might be harmed? Most immediately, general counsel will be concerned with the potential for AI to harm employees, customers, partners and the company’s value. While these are important, general counsel should also think about ethics beyond the walls of their organization, as there may be impacts on reputation and environmental, social and governance (“ESG”) positioning.
- How is the organization monitoring for and managing bias and drift in the AI models? The data used to train AI models, and the algorithms underlying these systems, often have biases embedded in them. Left unchecked, models can also drift from their intended purpose. General counsel have an opportunity to be vocal about these issues and work with their counterparts to manage them proactively.
- What training and education are needed to ensure appropriate use? Strong governance starts at the foundation with employees. As new AI tools are deployed, employees will need to be trained on the ethics of these tools, taught how to avoid misuse and educated on the potential for harm.
Financial Implications
- What is the cost of the tool and its potential risk vs. expected value? General counsel can contribute to discussions about the financial soundness of AI projects by calculating the approximate financial downsides that could result from AI missteps (such as potential regulatory violations, reputational harm or litigation).
- How will success be measured? General counsel and their teams are working through their own cost-benefit analyses for AI in legal use cases, including evaluating whether new tools are justified and defining key performance indicators that will measure success. General counsel can apply the lessons from these exercises to support financial accountability for AI deployments across other parts of their organization as well.
With AI governance, general counsel have the opportunity to exert significant influence over a developing and important aspect of business strategy and innovation. They recognize that, given the rapid pace of technological advancement, they cannot lag behind on AI adoption. Still, they will always be accountable for risk, and therefore must ask the right questions across their business to support the ethical, responsible and safe pursuit of digital transformation.