The Criticality of AI Impact Assessments to Cybersecurity
September 17, 2024
The benefits of artificial intelligence (“AI”) are undeniable, from analyzing large datasets in a matter of seconds to reducing human error, but these added efficiencies may carry corresponding cybersecurity risks, and that balance of benefits and risks must be weighed carefully. Conducting an impact assessment is a comprehensive approach to identifying both AI’s cybersecurity threats and their associated outcomes, such as data privacy concerns, security vulnerabilities, bias, and performance issues, which can have ethical impacts as well.
An organization’s General Counsel (“GC”) should require an impact assessment before AI is deployed into existing systems and networks. This ensures GCs have time to work with the appropriate departments in their organization to mitigate the identified risks and implement controls that lessen the organization’s exposure to cybersecurity attacks or data leaks.
In addition to aiding with security and data privacy risk mitigation, an impact assessment helps GCs and their organizations maintain compliance with current and future regulatory obligations and reduces exposure to litigation. It also has the downstream effect of reducing the need for post-deployment changes to the organization’s AI system, which can be time consuming and difficult to implement.
Concept of an AI Impact Assessment
Improperly configured AI can lead to the exposure of sensitive customer information, unauthorized access or security breaches due to a lack of security controls, inaccurate outputs driven by unmitigated biases inherent in the data or created by the system itself, and compliance violations. Each of these issues can disrupt operations and negatively affect the existing cybersecurity program.
Performing an AI cybersecurity impact assessment prior to implementation identifies vulnerabilities that, once addressed, strengthen an organization’s overall cybersecurity posture. This is vital because “deploying AI in your enterprise introduces a new attack surface that’s very different,” according to Apostol Vassilev, a Research Team Supervisor at the National Institute of Standards and Technology (“NIST”).1 Threat actors attempt to manipulate or corrupt the source information used by AI systems, targeting either an organization’s own AI models or the third-party vendors of AI platforms it relies on. This tactic is known as AI poisoning.
AI poisoning attacks pose significant threats, as manipulating data can result in incorrect outputs, backdoor access to networks and systems, and challenges with the AI system’s availability. Due to the resources and sophistication required to conduct an AI poisoning attack, nation-states are the most likely culprits. This is concerning, as nation-state cyber attacks are traditionally difficult to defend against. Conducting an AI impact assessment offers organizations an opportunity to get out in front of an AI poisoning attack.
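To make the mechanics of poisoning concrete, the sketch below is a minimal, purely illustrative example (assuming Python with NumPy and scikit-learn; the dataset and poisoning fraction are hypothetical) of a label-flipping attack, in which corrupting a portion of the training labels measurably degrades the resulting model’s accuracy. Real poisoning of enterprise AI systems is far more subtle, but the example shows why training-data integrity is a core line of inquiry in an impact assessment.

```python
# Illustrative sketch only: a label-flipping poisoning attack against a
# simple classifier, using a synthetic dataset. Figures are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data standing in for an organization's
# training pipeline.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

def held_out_accuracy(labels):
    """Train on the (possibly poisoned) labels and score on clean test data."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# Baseline: model trained on clean labels.
print("clean accuracy:   ", round(held_out_accuracy(y_train), 3))

# Attack: flip 25% of the training labels, simulating a threat actor who
# has corrupted part of the data supply chain.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip_idx = rng.choice(len(poisoned), size=len(poisoned) // 4, replace=False)
poisoned[flip_idx] = 1 - poisoned[flip_idx]
print("poisoned accuracy:", round(held_out_accuracy(poisoned), 3))
```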
Like traditional cyber attacks, an AI attack can aim to disrupt operations, access sensitive data or intellectual property, or hold information hostage until a ransom is paid. Threat actors are adept at identifying new opportunities for attacks, tailoring their methods to exploit new vulnerabilities. Developing an awareness of these threats is a critical step in safely implementing AI, and an impact assessment supports that awareness.
Another vital element of an impact assessment is evaluating how the AI is trained and how it makes decisions, then analyzing its output to determine whether any biases exist. This process includes determining what controls are in place to mitigate bias and ensuring that outputs are not disparate, unfair, or discriminatory. Biased outputs can present significant reputational, legal, and regulatory risk, further emphasizing the importance of identifying and understanding potential impacts in advance.
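For illustration, one concrete check an assessment of AI outputs might include is a disparate impact calculation across groups defined by a protected attribute. The sketch below assumes Python with pandas and uses hypothetical decision data; the 0.8 threshold reflects the common “four-fifths” rule of thumb, not a legal standard.

```python
# Illustrative sketch only: computing a disparate impact ratio from
# hypothetical model decisions grouped by a protected attribute.
import pandas as pd

# Hypothetical outputs: 1 = favorable decision (e.g., an approval).
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Favorable-decision rate for each group.
rates = decisions.groupby("group")["approved"].mean()

# Disparate impact ratio: lowest rate divided by highest rate.
ratio = rates.min() / rates.max()
print(rates.to_dict())                       # {'A': 0.75, 'B': 0.25}
print(f"disparate impact ratio: {ratio:.2f}")

# A common rule of thumb flags ratios below 0.8 for further review.
if ratio < 0.8:
    print("Flag for review: potential disparate impact in model outputs.")
```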
Government Guidance
While a comprehensive federal law that governs AI in the U.S. does not yet exist, states like Colorado and Utah have passed legislation in the interim.2,3 State-level laws are generally focused on preventing discrimination and bias, protecting consumer data and privacy, and ensuring AI usage is properly disclosed. These laws also demonstrate a legislative desire to enforce either internal or external assessments to ensure that AI issues are discovered before they impact the public.
States are not alone in their efforts to regulate the use of AI. The U.S. Federal Trade Commission (“FTC”) has “decades of experience enforcing three laws important to developers and users of AI,”4 and has issued guidance on using AI, which includes “lessons about how companies can manage the consumer protection risks of AI and algorithms.”5 While this guidance may be viewed as a recommended strategy for approaching AI implementation, the FTC warns that AI misuse can result in violations of the FTC Act and carry corresponding penalties.6
In addition to laws and regulations that suggest proper risk management, U.S. government agencies have published AI-related guidance to assist organizations with implementation and risk management strategies. In January 2023, NIST published its “Artificial Intelligence Risk Management Framework,” designed to offer practical guidelines – not obligations – that “increase the trustworthiness of AI systems, and to help foster the responsible design, development, deployment, and use of AI systems over time.”7 This framework was followed by another from NIST, focused specifically on Generative AI, with similar guidance on how to manage corresponding risks.8 While a solid starting point for GCs and organizations interested in implementing AI and accounting for its cybersecurity risks, NIST’s guidance is not yet robust, and organizations and their GCs will need to go a step further by determining how best to implement controls.
Without a federal law, state governments will likely continue introducing bills requiring AI risk management, and the growing use of AI systems may also pressure regulatory bodies and other agencies to publish more detailed recommendations and obligations. An AI impact assessment is necessary to stay ahead of both threats and compliance requirements.
Conclusion
There is no denying the beneficial aspects of AI, especially as tools and platforms are refined and updated for peak performance. Yet failing to conduct an impact assessment prior to implementation exposes organizations to myriad cybersecurity and privacy risks that can negatively impact strategic interests and invite regulatory scrutiny. Threat actors are savvy and understand how to leverage emerging technology to their benefit, including by exploiting vulnerabilities created by improperly implemented AI-related controls.
With increased consumer and government attention regarding how organizations leverage AI, organizations should carefully consider potential implications identified through an impact assessment and use the results to remediate and mitigate cyber threats. Implementing AI might seem like an easy decision for an organization, but without a full understanding of the corresponding new threats and vulnerabilities, the costs can quickly outweigh the benefits.
Footnotes:
1: Mary K. Pratt, "AI System Poisoning Is a Growing Threat: Is Your Security Regime Ready?", CSO Online (June 10, 2024).
2: Senators Rodriguez, Cutter, Michaelson Jenet, Priola, Winter F., Fenberg and Representatives Titone and Rutinel, Duran, “Concerning Consumer Protections In Interactions With Artificial Intelligence Systems,” General Assembly of the State of Colorado (May 17, 2024).
3: Kirk A. Cullimore, Jefferson Moss, “Artificial Intelligence Amendments,” State of Utah (March 13, 2024).
4: Elisa Jillson, “Aiming for truth, fairness, and equity in your company’s use of AI,” Federal Trade Commission (April 19, 2021).
5: Andrew Smith, “Using Artificial Intelligence and Algorithms,” Federal Trade Commission (April 8, 2020).
6: Ibid.
7: Gina M. Raimondo, Laurie E. Locascio, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” National Institute of Standards and Technology (January 2023).
8: Gina M. Raimondo, Laurie E. Locascio, “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile,” National Institute of Standards and Technology (July 2024).