Navigating National Security Risks in AI-Related Investments
Policy Recommendations Aim To Streamline Regulation and Accelerate U.S. Innovation in AI
October 08, 2025
In July 2025, the White House released its “AI Action Plan” (the “Plan”) of policy recommendations in furtherance of the White House’s vision for “winning the AI race.”1 The spirit, tone, and letter of the Plan make clear that the White House largely seeks to remove regulation and other forms of government oversight in the belief that doing so will accelerate AI innovation in the United States. The Plan says, “America’s private sector must be unencumbered by bureaucratic red tape.”2 While this may come as welcome news to some, business leaders should still recognize that the AI industry carries many risks, particularly the national security issues arising from investments in AI companies and related technology. Deal teams must understand the push and pull between deregulation and U.S. national security for their investments to be successful.
Unique Risks Posed by Investments in AI
Investments in AI and related technology, whether inbound or outbound, can have significant national security ramifications. These include the potential for unauthorized data exposure, access by foreign adversaries to sensitive technology or locations, introduction of supply chain vulnerabilities, and theft of intellectual property. Many of these issues also boil down to cybersecurity risks, such as the compromise of proprietary tools, algorithms, and extensive datasets. Funding directed at AI opens the door to these kinds of risk because it creates opportunities for threat actors to exploit weaknesses and forces companies to contend with cross-border information-sharing practices.
Threat actors may be particularly drawn to AI-related investments because of the potential for AI to fuel sophisticated, believable, and effective cyberattacks launched at large scale using seemingly authentic content. In an effort to cause geopolitical disruption, nation-states, strategic competitors, and other adversaries may leverage their investments in AI to create disinformation campaigns that aim to spread false narratives, influence opinions, and undermine trust. “Deepfakes,” highly realistic, AI-generated videos that falsely depict people’s words or actions, are a frightening example of how such campaigns deceive and manipulate audiences.
This dynamic landscape highlights how investments in AI and related technologies that implicate national security require revamped compliance and risk-management programs and heightened oversight from corporate executives, even if other impediments to investment fade away.
Policy Developments and Corporate Expectations
The Plan notes that the Office of Management and Budget will coordinate with Federal agencies to potentially repeal regulations “that unnecessarily hinder AI development or deployment,” but we do not believe national security regulations will be eased, and repealing certain authorities would require congressional action.3 Several long-standing regulations, as well as new ones, are likely to remain crucial for deal teams.
For example, the Bureau of Industry and Security of the U.S. Department of Commerce, which administers the Export Administration Regulations (EAR), has established rules governing the cross-border transfer of advanced semiconductors, supercomputers, and other emerging technologies critical to the AI industry, as well as detailed guidance to prevent the diversion of such items to countries and parties that may act in a manner contrary to U.S. national security and foreign policy interests.4 These controls carry direct implications for investments in AI, including the development of data centers abroad. As a result, corporate executives must proactively determine whether AI-related transactions involve “controlled technologies” or destinations or parties of concern, obtain necessary licenses and approvals, and comply with the detailed conditions specified in such approvals. Penalties for non-compliance can be significant, including the transaction being blocked and the potential loss of export privileges.
Corporate executives involved in foreign investments in AI will also likely continue to contend with the Committee on Foreign Investment in the United States (CFIUS), an interagency committee authorized to review certain transactions for their potential effect on national security.5 In addition, the new Outbound Investment Security Program (OISP) regulates certain transactions involving AI when such deals have a nexus with certain countries of concern.6 The OISP outright prohibits certain investments in AI abroad.
Another recent example of the U.S. government’s scrutiny of AI on national security grounds is the Department of Justice’s Data Security Program (DSP).7 Under the DSP, certain non-passive investments involving a handful of foreign countries that meet certain thresholds trigger new data security compliance obligations. These include working with an independent auditor to certify that specific security requirements are met.
U.S. businesses investing in and developing emerging technologies like AI have historically faced increased scrutiny and investigations from regulators. As a result, those companies have needed more robust compliance programs, meticulous due diligence to prepare for cross-border deals, and documentation demonstrating that their activities do not create national security risks. The U.S. government’s expectations in this regard will likely persist even as other regulations are de-prioritized or rescinded.
Actionable Insights
The policies within the Plan highlight the role corporate executives can play in proactively identifying risks from inbound and outbound investments in AI, and in ensuring these risks are appropriately mitigated. Actionable best practices include:
- Conducting risk analysis. Assessments should be performed regularly so that they account for the organization’s unique threat profile and produce output that can drive concrete mitigation decisions.
- Developing or enhancing a risk-based management and compliance program. This framework should be comprehensive, detail policies for uncovering, assessing, and addressing risks, and explicitly state the procedures to follow.
- Coordinating across the enterprise. Collaborating with cross-functional leaders (legal, technology, compliance, risk, etc.) and relevant stakeholders to determine all potential risks and ensure that corresponding mitigation plans are implemented is essential.
- Integrating protections into investment decisions. Integrating cybersecurity and data privacy considerations into all investment decisions will help uncover potential national security risks and compliance challenges in advance of a transaction.
- Staying in the know. Learn about evolving threats and regulatory demands and implement updates and changes to investment plans as needed.
The Path Forward
Corporate executives today face the challenge of balancing how to foster AI innovation, manage investments in AI, remain competitive in the market, and ensure compliance with U.S. national security-related regulations. This can be achieved through a multidisciplinary approach that accounts for various business perspectives and associated guidance. Not only does a unified, cross-functional framework create strategies that combine innovation, competitiveness, compliance, and security, but it also helps anticipate future challenges and determine effective responses. Companies that know how to align their investment and business objectives with national security priorities are more likely to succeed in this new era of AI advancement.
1: The White House, “Winning the Race: America’s AI Action Plan,” (July 2025).
2: Ibid., 3.
3: Ibid.
4: Bureau of Industry and Security, “Export Administration Regulations (EAR),” (accessed August 26, 2025).
5: U.S. Department of the Treasury, “The Committee on Foreign Investment in the United States (CFIUS),” (accessed August 26, 2025).
6: U.S. Department of the Treasury, “Outbound Investment Security Program,” (accessed August 26, 2025).
7: U.S. Department of Justice, “Justice Department Implements Critical National Security Program to Protect Americans’ Sensitive Data from Foreign Adversaries,” (April 11, 2025).