From Deepfakes to Agents: How AI Is Rewriting the Threat Playbook
-
April 02, 2026
-
As artificial intelligence (“AI”) rapidly evolves, organizations are examining how these new technologies may impact their cybersecurity landscape. While AI introduces new risk considerations, many of the underlying challenges remain consistent with longstanding cybersecurity principles.
AI is shifting the cyber threat landscape by significantly altering the speed, scale and personalization of cyber threats. In addition to enhancing existing cybersecurity threats like phishing attacks, businesses rushing to adopt AI can inadvertently introduce new challenges and risks such as prompt injection attacks and incidents leveraging or exploiting agentic AI. Along with practical risk management advice, this article will provide an overview of three risks present in the AI threat landscape today: AI-enabled impersonation, prompt injection attacks and the abuse of agentic AI.
AI-Enabled Phishing and Impersonation: Lower Cost, Higher Credibility
AI has significantly simplified and enhanced the process for threat actors to create convincing phishing lures through the evolution of image, voice and video generation. Where phishing emails and text messages were once generic and easily identifiable with poor grammar and unbelievable messaging, accessible and user-friendly AI tools have allowed for personalized messaging at scale, including highly convincing deepfakes. Phishing tactics that once required technical skills or native language expertise can now be carried out using AI, lowering the barrier of entry for threat actors.
Regardless of advancing techniques, phishing continues to exploit reoccurring weaknesses, including gaps in identity verification and limitations in employee security awareness and controls. The key human elements that make someone click or act with urgency, such as relationship trust and personal judgement, also remain intact as advances in AI make it more difficult to distinguish real from fake communications. As these techniques evolve, organizations may reassess how identity verification and out-of-band validation procedures are applied to higher-risk requests, including confirming legitimacy through a separate communications channel where appropriate.
Prompt Injection: A Structural Challenge in AI Systems
Prompt injection is a technique where malicious instructions are hidden in content that an AI system processes, attempting to manipulate the AI into ignoring its intended rules or producing unintended outputs. This attack can be used by threat actors to exfiltrate sensitive data, override guardrails and trigger unauthorized actions through integrated tools and APIs. Prompt injection presents a continuing challenge due to AI’s inherent difficulty in distinguishing trusted user input from untrusted content.
While prompt injections are a known risk, it remains difficult to prevent and fix due to blurred boundaries between data, instructions and context. Common injection vectors including user inputs, retrieved documents, tool responses and external content are hard to monitor and control, and AI models are often unable to identify which are unauthorized on their own. Additionally, the development and integration of AI copilots such as email and meeting assistants within enterprise tools provide potential points of entry for prompt injection attacks.
Agentic AI: When Automation Gains Access, Privilege and Autonomy
Agentic AI refers to AI systems that can take actions autonomously to achieve a defined objective, often requiring little human input or interaction to execute tasks after the agent’s goals are provided. Agentic AI presents significant new risks to organizations integrating the technology into their existing workflows. Unlike generative AI such as chatbots, agentic AI can act independently and often requires the use of sensitive data via access credentials and third-party tools in order to effectively accomplish its goals. For example, an AI agent responsible for resolving a support ticket may require access to the support ticket system to monitor incoming requests, an email environment to draft responses and schedule meetings, and other internal systems used to manage support tickets.
As organizations strive to remain competitive by implementing agentic AI before others in the market, security risks can be overlooked, resulting in over-privileged AI agents, poorly scoped access and limited visibility into agent actions. Should a threat actor then gain access to an agentic AI system (through API compromise, prompt injection, or otherwise), it could lead to abuse of agent permissions or even data exfiltration via legitimate integrations.1,2
Agentic AI presents a new access vector, but the security risks are often rooted in traditional cybersecurity gaps. Issues like over-privilege and misconfiguration can be exploited by threat actors to gain access to networks, exfiltrate data and poison AI workflows. However, these risks can be mitigated through controls commonly associated with broader cybersecurity risk management such as (agent) identity confirmation, least privilege access, logging and monitoring. The key is implementing these cybersecurity controls in the appropriate context of the AI deployment.
Practical Risk Management
While it may be difficult to predict the next evolution of AI and the risks that will accompany it, organizations can better position themselves to stay ahead of challenges with a focus on proactive risk management. This approach involves AI systems being evaluated as a part of the enterprise attack surface, with established cybersecurity principles applied to the underlying systems and AI architectures where appropriate.
- Treat agents as an extension of your workforce. Strong identity management strategies should be implemented for AI agents. Similar to human or service accounts, least-privilege access should be applied to AI agents, limiting access to only the tools and integrations necessary to perform their intended functions. Organizations can also enable visibility into the actions of agentic AI through logging and monitoring.
- Implement strong AI governance across the organization. Cybersecurity industry standards such as the National Institute of Standards and Technology (“NIST”) AI Risk Management Framework (“RMF”) and ISO 42003 provide guidelines for secure implementation of AI and best practices for mitigating risks.3,4 As new AI regulations, such as the EU AI Act, and state laws across the United States continue to emerge, organizations should stay abreast of requirements to meet compliance deadlines.5
- Provide user training at all levels. Employees with access to AI tools should be properly trained in how to use the tools and informed about emerging risks associated with AI-enabled technologies. Common training topics may include evolving phishing techniques, recognizing potential deepfakes, responsible use of AI and awareness of sensitive data that may be inappropriate as an input for AI models.
Familiar Fundamentals in an AI-Driven Threat Landscape
As AI evolves, it will continue to change how cyber attacks scale and impact the tools at the disposal of threat actors. However, many of the underlying cybersecurity challenges remain the same. At their core, AI-related risks often stem from foundational cybersecurity issues that organizations have historically faced. Organizations that proactively apply core cybersecurity principles to their AI systems and infrastructure are not only better positioned to respond to and defend against emerging threats, but they will also build the necessary resilience to innovate securely and confidently in a future driven by AI.
Footnotes:
1: Xiao, Weixuan, “OpenClaw security vulnerabilities include data leakage and prompt injection risks,” Giskard (Feb. 4, 2026).
2: Sharbat, Tamir Ishay, “AgentFlayer: Discovery Phase of AI Agents in Copilot Studio,” Zenity Labs (Jun. 11, 2025).
3: “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” National Institute of Standards and Technology (Jan. 2023).
4: “ISO/IEC AWI 42003: Information technology — Artificial intelligence — Guidance on the implementation of ISO/IEC 42001,” International Organization for Standardization (Mar. 2025).
5: EU Artificial Intelligence Act, Regulation (EU) 2024/1689, OJ L 2024/1689 (July 12, 2024).
Related Insights
Published
April 02, 2026
Key Contacts
Managing Director
Most Popular Insights
- Beyond Cost Metrics: Recognizing the True Value of Nuclear Energy
- Finally, Pundits Are Talking About Rising Consumer Loan Delinquencies
- A New Era of Medicaid Reform
- Turning Vision and Strategy Into Action: The Role of Operating Model Design
- The Hidden Risk for Data Centers That No One is Talking About