AI Compliance in an Uncertain Regulatory Landscape: What General Counsel Need to Know
March 20, 2025
The inconsistent nature of artificial intelligence (“AI”) regulations worldwide presents significant and immediate challenges for organizations and their legal counsel. As businesses harness AI’s transformational power, general counsel (“GC”) everywhere must navigate a labyrinth of new regulations that are not just divergent but, in some cases, diametrically opposed. They must ensure regulatory compliance while contending with competing stakeholder interests, international tensions and risks that change with the latest innovations in AI. Successful organizations will be those that identify AI’s true business value while implementing appropriate safeguards for high-risk applications.
The European Union (“EU”) has emerged as a regulatory pacesetter, advancing stringent, rights-based rules like the AI Act [1] — a comprehensive, risk-based regulatory framework with strict compliance requirements and hefty penalties. The United States, by contrast, relies on a fragmented, sector-by-sector approach of agency guidelines and state actions addressing privacy and bias rather than a unified AI law. This divergence creates a patchwork of compliance burdens — particularly for UK-based GCs — in high-stakes and highly regulated industries like healthcare, finance and insurance.
Adding to the challenges GCs face is the rapid pace of AI innovation, which has intensified the debate over how to balance innovation with ethical oversight. The recent rollback of AI safety protocols [2] by the new U.S. presidential administration underscores the difficulty of spurring innovation while upholding ethical standards like fairness and accountability — which, along with healthy competition and supportive copyright laws, are necessary to maintain public confidence.
Amid these changes, GCs worldwide must navigate this new policy frontier with agility and urgency — supporting their company’s AI ambitions while ensuring that safeguards exist for high-risk uses. Failing to comply can result in severe financial penalties, and regulators are increasingly scrutinizing AI uses that have the potential to affect human rights, safety and fairness. To help GCs keep pace with this complex, continuously changing regulatory landscape, this article recommends a risk- and use-case-focused approach to compliance.
Rapid Corporate Implementation of AI Poses Risks
UK-based GCs find themselves in a particularly tough spot — caught between the EU’s comprehensive regulatory standards and the United States’ more fragmented approach. At the same time, they must contend with the rapid implementation of AI across organizations. As AI plays an increasingly major role in corporate functions such as human resources (“HR”) [3] and asset management [4], GCs must adapt their compliance strategies to keep pace with regulatory change.
One example of AI being woven into corporate functions is the annual performance review. New AI tools can summarize the feedback that employees receive from peers and managers and provide insights into their strengths, areas for improvement and goal-setting opportunities [5]. However, when these AI-assisted reviews influence legally sensitive decisions like pay, promotion or termination, organizations must establish robust safeguards. Human oversight is crucial to ensure that the AI models used by HR produce consistently accurate results and that any biases are mitigated.
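For GCs working with technical teams, a simplified sketch may help make this oversight requirement concrete. The Python snippet below shows one way a review pipeline might gate AI-generated summaries before they can influence employment decisions. It is a minimal illustration only, assuming a hypothetical workflow: the ReviewSummary type, the decision categories and the route_for_human_review function are placeholders, not drawn from any particular HR platform.

```python
from dataclasses import dataclass

# Decision types where an AI-assisted summary is legally sensitive and
# therefore must not be acted on without documented human sign-off.
# These categories are illustrative, not exhaustive.
SENSITIVE_DECISIONS = {"compensation", "promotion", "termination"}

@dataclass
class ReviewSummary:
    employee_id: str
    ai_generated_text: str
    decision_type: str              # e.g., "goal_setting", "promotion"
    human_reviewed: bool = False
    reviewer_id: str | None = None  # named reviewer for the audit trail

def route_for_human_review(summary: ReviewSummary) -> str:
    """Return the next workflow step for an AI-generated review summary.

    Low-stakes uses (e.g., goal-setting suggestions) may proceed, while
    summaries feeding legally sensitive decisions are held until a named
    human reviewer has signed off.
    """
    if summary.decision_type in SENSITIVE_DECISIONS and not summary.human_reviewed:
        return "HOLD: requires human sign-off before use"
    return "PROCEED"

# Example: a promotion-related summary is held pending human review.
summary = ReviewSummary("E-1042", "Consistently exceeds targets...", "promotion")
print(route_for_human_review(summary))  # HOLD: requires human sign-off before use
summary.human_reviewed, summary.reviewer_id = True, "M-007"
print(route_for_human_review(summary))  # PROCEED
```

The design point is simply that oversight should be enforced by the workflow itself and leave an audit trail (the named reviewer), rather than relying on managers to remember to review.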
The use of AI in HR introduces unique risks due to its often-opaque “black box” reasoning and its resulting decisions, which may be more pervasive, widespread or harder to account for than individual human biases [6]. This raises liability issues akin to those surrounding self-driving cars: with autonomous vehicles, the question is who is liable for accidents; in HR, it is who bears responsibility when systemic discrimination occurs. To foster trust and reduce such risks, fairness, transparency and accountability must be embedded into all AI-driven processes.
In many organizations, the use of AI also extends far beyond HR and other internal processes. For instance, AI is widely used to improve customer support tools like chatbots [7]. Modern chatbots can respond to unexpected questions and tailor their answers to each customer, with the goal of creating a more satisfying customer experience. While the legal risks associated with AI agents in customer support are generally modest, landmines still exist: poorly trained AI agents may provide inaccurate information [8], and errors and unsatisfactory interactions can frustrate customers, potentially damaging customer relationships and the organization’s reputation.
Global Regulatory Challenges Ahead
While implementing AI presents reputational and ethical concerns, these are intensified by the broader challenges that GCs face in navigating new AI regulations. Given the complexities of this transformative technology, many of the organizations they advise are not fully prepared for the regulatory challenges ahead. Whether counseling multinational corporations or UK-based organizations straddling divergent EU and U.S. regulations, GCs must understand the magnitude of this challenge.
In the United States, the decentralized regulatory approach has led to a mix of divergent state laws — a situation exacerbated by the new presidential administration’s deregulatory approach. This makes compliance a moving target. And while the current EU rules are clearly defined, they also require rigorous adherence, leaving little room for error. Complicating matters further is the reality that AI is evolving at a breakneck pace.
To address such challenges, organizations should use analytics and other advanced, data-driven tools to identify compliance gaps and adapt to policy changes. In addition, fostering collaboration across legal, compliance and IT teams is crucial to implementing AI initiatives seamlessly and staying in compliance as regulation evolves [9].
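As a rough illustration of what such a data-driven gap analysis might look like, the sketch below screens an internal inventory of AI use cases against risk tiers loosely modeled on the EU AI Act’s risk-based categories. The inventory entries, the tier assignments and the required-control names are hypothetical placeholders, not a statement of what any regulation actually requires.

```python
# Hypothetical AI use-case inventory screen: flag entries whose required
# controls (tiers loosely inspired by the EU AI Act's risk-based approach)
# are not yet in place. Tier and control names are illustrative only.
REQUIRED_CONTROLS = {
    "high": {"human_oversight", "bias_audit", "documentation", "logging"},
    "limited": {"transparency_notice"},
    "minimal": set(),
}

inventory = [
    {"use_case": "CV screening for hiring", "tier": "high",
     "controls_in_place": {"documentation", "logging"}},
    {"use_case": "customer-support chatbot", "tier": "limited",
     "controls_in_place": {"transparency_notice"}},
    {"use_case": "internal meeting-notes summarizer", "tier": "minimal",
     "controls_in_place": set()},
]

def compliance_gaps(inventory):
    """Yield (use_case, missing_controls) for entries with open gaps."""
    for entry in inventory:
        missing = REQUIRED_CONTROLS[entry["tier"]] - entry["controls_in_place"]
        if missing:
            yield entry["use_case"], sorted(missing)

for use_case, missing in compliance_gaps(inventory):
    print(f"{use_case}: missing {', '.join(missing)}")
# -> CV screening for hiring: missing bias_audit, human_oversight
```

Even a simple inventory of this kind gives legal, compliance and IT teams a shared artifact to review together, which is where the cross-functional collaboration described above tends to start.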
At the same time, regulations, risks and opportunities will keep changing, so GCs need to stay flexible and proactive. Global GCs, in particular, must align their compliance efforts across regions with very different regulatory philosophies, even as they prepare for future shifts. Taking part in industry discussions with policymakers can also help shape future AI governance frameworks.
A Use-Case-Focused Approach to Compliance
For many GCs, the most effective way forward will be to tackle the current tangle of regulations according to specific use cases. GCs should prioritize the AI use cases that pose the greatest risk to individuals, organizations or society at large, such as hiring, employee performance evaluations and high-stakes decision-making that impacts consumers in regulated industries like healthcare, insurance and financial services. By identifying and addressing high-risk AI applications, organizations can not only ensure they comply with applicable regulations and ethical standards but also foster innovation while mitigating legal risks.
The following three use cases offer a useful illustration of this approach:
- AI-Powered Hiring Tools: AI systems used in recruitment and hiring processes must be carefully monitored to ensure they do not perpetuate biases or violate anti-discrimination laws. Regular audits of hiring algorithms, combined with human oversight, are essential to maintaining fairness and compliance (a minimal audit sketch follows this list).
- AI in Financial Decision-Making: In financial services, AI tools used for credit scoring, loan approvals or investment decisions must be held to rigorous standards of accuracy and fairness. GCs can play a pivotal role in implementing robust validation processes to ensure these tools meet both regulatory and ethical benchmarks.
- AI in Healthcare Diagnostics: AI applications in healthcare, such as tools that diagnose illnesses or recommend treatments, carry life-and-death implications. Organizations must prioritize transparency, ensuring that these tools are thoroughly tested, explainable and compliant with relevant healthcare regulations. This means keeping clear records of how the AI reaches its conclusions and complying with industry rules like the U.S. Health Insurance Portability and Accountability Act, which protects patient privacy.
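For the first use case above, one widely cited screen that such an audit might include is the “four-fifths rule” from the U.S. EEOC’s Uniform Guidelines, under which a selection rate for any group below 80% of the highest group’s rate may indicate adverse impact. The sketch below is a minimal, hypothetical illustration of that single check; the sample data are synthetic, and a real audit would involve far more rigorous statistical analysis and legal review.

```python
from collections import Counter

def adverse_impact_ratios(outcomes):
    """Compute selection rates per group and flag four-fifths rule concerns.

    `outcomes` is a list of (group, selected) pairs, e.g. drawn from a
    hiring tool's decision log. Returns {group: (rate, flagged)}, where
    `flagged` means the group's selection rate falls below 80% of the
    highest group's rate.
    """
    applicants = Counter(group for group, _ in outcomes)
    selected = Counter(group for group, sel in outcomes if sel)
    rates = {g: selected[g] / n for g, n in applicants.items()}
    best = max(rates.values())
    return {g: (rate, rate / best < 0.8) for g, rate in rates.items()}

# Illustrative decision log (synthetic data, for demonstration only).
log = [("A", True)] * 60 + [("A", False)] * 40 \
    + [("B", True)] * 35 + [("B", False)] * 65
for group, (rate, flagged) in adverse_impact_ratios(log).items():
    status = "POTENTIAL ADVERSE IMPACT" if flagged else "ok"
    print(f"group {group}: selection rate {rate:.0%} -> {status}")
# group A: selection rate 60% -> ok
# group B: selection rate 35% -> POTENTIAL ADVERSE IMPACT
```

A flagged ratio is a trigger for investigation, not proof of discrimination; the value of running such checks regularly is that disparities surface before they harden into the systemic outcomes regulators scrutinize.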
Even as GCs work to address these use cases, they must remain vigilant about “AI-washing” [10], whereby organizations overstate the capabilities or ethical soundness of their AI systems. Avoiding it requires transparency, accurate communication and a commitment to validating the actual performance and limitations of AI tools.
Conclusion
The rapid implementation of AI in corporate processes and the fragmented nature of global regulation present significant challenges for organizations and their GCs, while simultaneously creating opportunities for thoughtful, proactive governance and innovation. The challenge is compounded by the rapid evolution of AI models and use cases, which often outpaces lawmakers’ ability to draft rules ensuring their safe and fair use.
By adopting a use-case-focused approach, GCs can bring clarity to this complex and quickly changing regulatory landscape. Prioritizing high-risk applications and implementing strategies that balance innovation with compliance becomes even more critical at a time when geopolitical variables — from the new administration’s deregulatory agenda to China’s state-driven ambitions — could further alter the legal terrain. GCs who successfully navigate these dynamics will enable the organizations they serve to thrive as AI and the regulations that govern this transformational technology evolve.
Footnotes:
[1] “European Artificial Intelligence Act comes into force,” European Commission (July 2024)
[2] Adam Aft et al., “AI Tug-of-War: Trump Pulls Back Biden’s AI Plans,” The Employer Report (January 2025)
[3] Nikita Sood et al., “Getting Ahead of AI Disruption in the Workforce: The HR Opportunity,” FTI Consulting (February 2025)
[4] Söhnke M. Bartram, Jürgen Branke and Mehrshad Motahari, “Artificial Intelligence in Asset Management,” CFA Institute Research Foundation (2020)
[5] Michelle Gouldsberry, “Your Guide to Using AI for Performance Reviews,” Betterworks (June 2024)
[6] Zhisheng Chen, “Ethics and discrimination in artificial intelligence-enabled recruitment practices,” Nature (September 2023)
[7] Kent Mao, “AI Chatbots in Customer Service: A Guide,” ComputerTalk (February 2024)
[8] Cade Metz, “Chatbots May Hallucinate More Often Than Many Realize,” The New York Times (November 2023)
[9] Nicole P. Wells, Adam T. Berry and Andrea B. Levine, “Updated DOJ Guidance Highlights the Importance of AI & Data Analytics,” FTI Consulting (February 2025)
[10] Kelly Miller, Meredith Brown and Claudio Calvino, “AI Washing Erodes Consumer and Investor Trust, Raises Legal Risk,” Bloomberg Law (October 2024)