Navigating Global AI Regulation and Innovation
March 20, 2025
There is a growing need for regulatory clarity surrounding the operational implementation of artificial intelligence, particularly as new AI-specific laws emerge and begin to take effect globally. The gap between abstract guidance and the practical, technical aspects of leveraging AI systems makes it difficult for organisations to innovate responsibly. A pragmatic framework for AI governance is critical to integrate AI into an enterprise’s broader approach to risk management and to support safe AI use even in the midst of regulatory uncertainty.
In Europe, initial foundational guidance from the European Commission is emerging to address areas such as the definition of AI [1] and prohibited AI practices [2]. This guidance complements the recent entry into force of the EU AI Act provisions prohibiting certain AI systems and mandating AI literacy requirements [3]. Companies operating within the regulated AI space will benefit from continued guidance on how they might be expected to implement their substantive obligations in practice, particularly with the Act’s generative AI, governance and confidentiality provisions becoming applicable in August 2025, alongside its penalty provisions.
Emerging AI laws appear designed to introduce product-level regulation for certain types of systems considered to pose an unacceptable or high risk to individuals and broader society. While there are some transparency obligations associated with limited-risk AI systems, the reality is that an entire category of systems falling outside the unacceptable and high-risk categories remains largely unregulated by these AI frameworks. In fact, according to the European Commission, “the vast majority of AI systems currently used in the EU fall into this [minimal or no risk] category” [4]. These systems can still give rise to high organisational risks, independent of AI compliance obligations, which leaves many businesses with the challenging task of self-regulating their AI practices.
A common example is an AI chatbot used in customer service, where a host of considerations, including potential data mishandling, remain largely unregulated, particularly where organisations are deploying third-party products. Businesses developing homegrown generative AI models to power chatbots will be subject to targeted obligations to maintain technical product documentation, publish detailed summaries of their training datasets and maintain a copyright policy. However, these obligations do not extend to the broader risks associated with the use of chatbots and ultimately do not apply directly to deployers.
For example, system input based on inaccurate data could result in misleading or false responses that may lead to a high volume of customer complaints or potential legal challenges. Information security is another concern, depending on the system access provisioned to the chatbot. Wider societal impacts are also of paramount concern, as recent chatbot failures have ranged from swearing at and threatening users to giving harmful health advice and advising users to break the law. These considerations are just the tip of the iceberg in terms of potential issues that could stem from poor governance of AI applications generally classified as limited or minimal risk.
Ultimately, organisations should take a holistic view of what they consider high risk to them, in addition to the regulatory classification of the AI system concerned. A pragmatic way to begin is to first define the organisation’s unique AI risk profile, based on the types of AI in use, their individual use cases, the strategic goals and AI roadmap of the business, the organisation’s acceptable risk tolerance, the industry context, applicable regulations and other defining factors. From there, teams can begin to integrate AI risk management standards, protocols and controls into enterprise risk governance frameworks, where these already exist within the organisation.
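To make this concrete, a risk-profile entry of the kind described above can be captured as a structured record. The sketch below is a minimal, hypothetical Python representation; the field names, risk tiers and example values are illustrative assumptions, not terms prescribed by the EU AI Act or any standard.

```python
from dataclasses import dataclass, field
from enum import Enum

class RegulatoryTier(Enum):
    """Illustrative regulatory risk tiers, loosely mirroring the EU AI Act."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemProfile:
    """One entry in an organisation's AI risk register (illustrative fields)."""
    name: str                        # e.g. "customer-service chatbot"
    use_case: str                    # business purpose of the system
    provider: str                    # in-house build or third-party vendor
    regulatory_tier: RegulatoryTier  # classification under applicable AI law
    internal_risk_rating: str        # the organisation's own view, e.g. "high"
    applicable_controls: list[str] = field(default_factory=list)

# Example: a system that is minimal risk under the Act but that the
# business rates as high risk internally, so extra controls attach.
chatbot = AISystemProfile(
    name="customer-service chatbot",
    use_case="first-line customer support",
    provider="third-party vendor",
    regulatory_tier=RegulatoryTier.MINIMAL,
    internal_risk_rating="high",
    applicable_controls=[
        "vendor due diligence",
        "output accuracy monitoring",
        "data-handling review",
    ],
)
```

A register of such records gives risk and compliance teams a single view of where regulatory classification and internal risk appetite diverge, which is precisely where self-regulation has to do the heavy lifting.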
Technology solutions can support these efforts by automating certain aspects of process integration and streamlining compliance activities. Many organisations will already have tools readily available that can be leveraged for this purpose, whether a dedicated compliance tool or an enterprise-wide solution the organisation has previously invested in, repurposed to support AI governance activities. Importantly, the efficacy of these tools often depends on expert, custom configuration and on the extent to which supporting processes exist to embed them within the business.
The overall goal of these activities is to facilitate responsible innovation and growth. Adapting existing governance processes can help businesses move forward with exploring the disruptive competitive opportunities that AI technologies present, while minimising an array of associated financial, operational and reputational risks. To that end, organisations can prioritise AI governance adjustments across the following areas:
Enterprise governance. Defining the corporate strategy for AI, including documenting the following:
- Target operating models with clear roles and responsibilities for AI risk.
- Compliance assessments to establish programme maturity and remediation priorities.
- Accountability processes to record and demonstrate compliance.
- Policies and procedures setting out policy standards and the operational steps to implement them.
- Horizon scanning to enhance and align the programme to ongoing regulatory developments.
Product governance. Enterprise policy standards must also be applied at the product level. Organisations can ensure that their AI products match their enterprise strategy by using:
- System impact assessments to identify and address risk prior to product development or deployment.
- Quality management procedures tailored to the software development lifecycle to address risk by design.
- Risk and controls frameworks defining AI risk and treatment based on widely recognised standards such as ISO and NIST.
- Conformity assessments and declarations to demonstrate that products are compliant.
- Technical documentation, including standardised instructions for use and technical product specifications.
- Post-market monitoring plans to track product compliance following market launch.
- Third-party due diligence assessments to identify possible external risk and inform selection.
Operational governance. The AI strategy will need to be operationalised across the business through the development of:
- Performance monitoring protocols to ensure that systems perform adequately for their intended purposes (a minimal sketch follows this list).
- Transparency and human oversight initiatives to ensure individuals are aware when they are interacting with AI systems and can make informed choices when AI-powered decisions are made.
- Incident management plans to identify, escalate and respond to serious incidents, malfunctions and national risks impacting AI systems and their operation.
- Communication strategies to ensure transparency towards internal and external stakeholders in relation to the organisation’s AI practices.
- Training and awareness programmes to enable staff with roles and responsibilities for AI governance to understand and perform their respective roles.
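As flagged in the performance monitoring item above, the following is a hypothetical sketch of how a monitoring protocol might connect to an incident management plan: a sampled quality metric is compared against an agreed threshold, and a breach triggers escalation. The threshold, function names and escalation hook are illustrative assumptions, not a prescribed mechanism.

```python
import logging

logger = logging.getLogger("ai_governance")

# Illustrative threshold: in practice the acceptable level would come from
# the system's intended-purpose documentation and risk assessment.
ACCURACY_THRESHOLD = 0.90

def raise_incident(system_name: str, metric: float) -> None:
    """Placeholder for the escalation route defined in the incident
    management plan (ticketing, on-call review, notification duties)."""
    print(f"INCIDENT: {system_name} performance degraded to {metric:.2f}")

def check_performance(system_name: str, sampled_accuracy: float) -> None:
    """Compare a sampled accuracy metric against the agreed threshold
    and escalate if the threshold is breached."""
    if sampled_accuracy >= ACCURACY_THRESHOLD:
        logger.info("%s within tolerance (accuracy=%.2f)",
                    system_name, sampled_accuracy)
        return
    logger.warning("%s below threshold (accuracy=%.2f < %.2f); escalating",
                   system_name, sampled_accuracy, ACCURACY_THRESHOLD)
    raise_incident(system_name, sampled_accuracy)

# Example run: a sampled accuracy of 0.82 breaches the 0.90 threshold.
check_performance("customer-service-chatbot", sampled_accuracy=0.82)
```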
Many of these issues were discussed at length in the 2024 AI Governance in Practice Report, which provides additional detail to support the creation of governance frameworks in line with emerging best practices and AI-specific laws. Regarding AI regulation specifically, organisations must recognise that the EU AI Act is already in force, with the first set of prohibitions having taken effect in February 2025 [5]. One key lesson from other large-scale EU regulations, which can be expected to hold for the EU AI Act, is that there is typically a long tail of enforcement activity and potential follow-on litigation.
For example, whilst the General Data Protection Regulation (“GDPR”) became effective in 2018, regulators were still issuing substantial fines more than five years later for approaches taken in 2018-2019 [6]. As such, organisations need to take a robust yet pragmatic approach to making the right decisions now; otherwise, they may pay for poor decisions, especially relating to bias, fairness and transparency, several years from now.
As lawmakers around the world continue to enact regulations that will govern many aspects of AI, privacy and compliance officers will have to navigate the complex task of enabling progress, innovation and efficiency whilst balancing risk to the business and to individuals.
Each organisation’s AI governance needs will depend on its strategy for investing in the technology. Different companies will also have different entry points into the AI governance journey. For example, businesses looking to pilot an AI tool in isolation are likely to find a product-centric approach to be more suitable, wherein the focus is on vendor due diligence and system impact assessments. Conversely, an organisation looking to develop or deploy numerous AI tools for a variety of use cases will likely need to adapt its existing corporate governance frameworks at the enterprise level first.
Having a clearly defined strategy, supported by a governance framework that maps directly to it, is key. Defining a framework that does not block innovation but rather supports it in a responsible way at the early stages of the company’s AI journey will enable a balanced approach now and in the face of new and evolving regulation.
Footnotes:
1: European Commission, “The Commission publishes guidelines on AI system definition to facilitate the first AI Act’s rules application” (February 2025).
2: European Commission, “Commission publishes the Guidelines on prohibited artificial intelligence (AI) practices, as defined by the AI Act” (February 2025).
3: European Parliament, “EU AI Act: first regulation on artificial intelligence” (February 19, 2025).
4: European Commission, “AI Act” (last updated February 2025).
5: Ibid.
6: European Commission, “Data protection in the EU” (last accessed March 2025).