A Targeted Approach is Key to Implementing AI
January 17, 2024
One of the biggest news stories of the past year has been the rise of artificial intelligence (“AI”). Although AI technology has been around for many years now, growing public access to large language models (“LLMs”) like ChatGPT has thrust AI into the spotlight. Likewise, text-to-image models like DALL-E have captured the imagination of the public. Amid rising interest in the perceived benefits and threats of AI, governments and politicians have scrambled to regulate the technology — or at least discuss how to regulate it.
Unfortunately, fear dominates much of the conversation. Yet the world may be more afraid of AI than it should be, and might benefit instead from embracing it in a targeted and sophisticated fashion. Significant opportunity remains, especially for corporates, to leverage the technology in ways that aid corporations and the public alike. Corporate leaders who assume that implementing AI requires an all-encompassing solution, executed quickly and drastically to achieve maximum results, are misguided; rather, they must foster an ambitious, data-oriented culture that encourages the creation of successful applications. And although the capabilities of AI are certain to grow in the coming years, it remains a powerful tool, not one yet capable of replacing human judgment. Nor is it certain that it ever will be. Instead, AI is likely to modify and narrow what we will be required to judge.
Understandably, with so much interest in AI, it is one of the themes of this year’s World Economic Forum, under the banner “Artificial Intelligence as a Driving Force for the Economy and Society.” Government, business and nonprofit leaders will come together, bringing vastly different perspectives to this fascinating and wide-ranging topic. The timing is notable: the conversation takes place as the EU prepares to implement its AI Act,1 which will mandate a risk-based approach to artificial intelligence in the bloc. The groundbreaking legislation is being watched closely in the United States and beyond as government officials worldwide grapple with how to regulate AI.
Further regulations2 will surely emerge, and the EU’s legislation may have a so-called “Brussels Effect,” in which other countries adopt frameworks similar to that of the European bloc. Companies should therefore remain cautious and ensure they comply with applicable regulations to avoid negative, unintended consequences. The insurance3 and banking industries are two examples of sectors already facing scrutiny over their use of AI and the potential bias such systems may propagate.4
However, a desire to comply with the law and mitigate perceived risk should not lead corporates to treat AI as a liability; avoiding the technology is a luxury they do not have. Rather, C-suites should approach AI as a great opportunity, so long as they are targeted and agile. Companies can stay proactive by piloting their own specific use cases and point solutions. AI has the potential to kickstart growth for the world’s leading corporations, but they should not rush to adopt entire frameworks wholesale; instead, organizations should experiment and research to learn what works and what doesn’t.
Companies should also be ready to seek external help. Readiness assessments can help companies identify where they are in their journey to implementing and applying AI. External partners can help companies better understand their investments and ensure that they are consistent with overall objectives. If they aren’t already doing so, now is a good time for corporates to strategize on how to boost their competence around AI — while also focusing on compliance in an evolving and often murky regulatory environment.
1: Claudio Calvino, et al. “The Four Risks of the EU’s Artificial Intelligence Act: Is Your Company Ready?” FTI Consulting. (July 2023).
2: “New (and Old) Regulations GCs Need To Know.” FTI Consulting. (November 2023).
3: Claudio Calvino, et al. “The Evolving Impact of AI on the Insurance Industry.” FTI Consulting. (October 2023).
4: Marc Zimmerman, et al. “Ethical Use of AI in Insurance Modeling and Decision-Making.” FTI Consulting. (March 2023).