Public-Private Dialogue Will Ensure Stability in a High-Tech World
January 16, 2024
Last April, I wrote about how to manage the so-called “Fourth Industrial Revolution,”1 in which the world is becoming increasingly digitized. I highlighted a white paper to which I contributed, published last January by the World Economic Forum in collaboration with FTI Consulting. It argued that we need robust public-private dialogue to facilitate discussion, information-sharing and incentives that align business and government stakeholders across the world, ensuring that societal benefits are realized. Less than a year later, the role of artificial intelligence (“AI”) in this challenge has only grown.
Unsurprisingly, AI is one of the main themes of discussion at this year’s World Economic Forum. But it also relates significantly to another theme being discussed there, “Achieving Security & Cooperation in a Fractured World,” as well as the overall central theme of this year’s conference, “Rebuilding Trust.”
The past year has seen warnings about AI technology and a high-profile call for a global pause2 on development. These have not seriously slowed the accelerating rate of AI development. Still, I remain optimistic about our potential to navigate this emerging technology productively and effectively.
Most importantly, people are already convening to discuss the implications of AI — and this is only the beginning of the conversation. It’s very encouraging that the first-ever AI Safety Summit3 took place this past November in the UK and is set to become a recurring event. Meanwhile, the EU has reached agreement on the AI Act4 and the White House has issued an executive order on AI. Ongoing discussions must delve into every sector and industry to understand the highly specific opportunities and risks at play. Subject-matter experts should understand the capabilities of AI, experiment with it in controlled settings where mistakes carry low stakes, and speak with consumers directly to understand their needs. Such experts must have extensive access to AI so that the benefits can accrue to as many people as possible, in as many areas of the economy and world as possible. We need to make sure that AI remains accessible to all, from small and medium-sized businesses to tech giants, so that the conversation around it is wide and varied in scope and perspective. The worst outcome would be little conversation or oversight. Luckily, this is not the case.
On top of this, unlike the previous industrial revolution around personal computing, which produced many unintended consequences in online communication and social media platforms, AI will quickly and greatly impact existing, longstanding industries like financial services and energy. This means the implementation of AI is likely to be highly regulated5 as new laws intersect with existing ones in these sectors. Companies in these spaces have a major incentive to get things right: the trust they have earned could quickly dissolve if AI is implemented in a way that unintentionally harms customers and society, and they cannot afford to lose it. This transition will resemble the adoption of online banking, in which major players sought to develop safe platforms, implement multifactor authentication and bring the trust they had already earned at brick-and-mortar locations to consumers on the internet.
There will be challenges along the way, of course — AI is already being accused of bias.6 However, fostering a productive conversation among a range of enthusiastic stakeholders will help ensure that AI is implemented in an effective and reasonable way, with an eye toward safeguarding against possible future risks.
In the end, society will require a framework for adopting AI that emerges from public-private partnership. Putting the brakes on AI development is unlikely to prove helpful, because not everyone will participate, and even among those who do, motivations will vary. But with measured, global cooperation that brings government and business together to build guardrails for specific sectors and industries, we can improve the likelihood that AI has a positive impact on society. The conversation around AI should not be binary (excuse the pun) in a way that pits humans against machines; a better future will instead be driven by augmented intelligence, in which the complementary skills of machines and humans create better outcomes and build trust.
1: Charles Palmer. “Getting the Fourth Industrial Revolution Right.” FTI Consulting. (April 2023).
2: “Pause Giant AI Experiments: An Open Letter.” The Future of Life Institute. (March 2023).
3: “AI Safety Summit 2023.” UK Government.
4: Emmanouil Patavos. “Five Points to Keep in Mind Before the EU’s AI Act Takes Effect.” FTI Consulting. (May 2023).
5: “New (and Old) Regulations GCs Need To Know.” FTI Consulting. (November 2023).
6: Claudio Calvino et al. “Mitigating Artificial Intelligence Bias Risk in Preparation for EU Regulation.” FTI Consulting. (November 2022).