Washington, D.C. – The U.S. Department of Defense has delivered a clear ultimatum to leading AI companies: in matters of national security, privately developed artificial intelligence could be requisitioned by the state. The message crystallized in a high-stakes confrontation with Anthropic, in which Defense Secretary Pete Hegseth demanded unrestricted access to the company’s Claude AI models, free of the firm’s ethical restrictions on uses such as mass surveillance and autonomous weapons. Hegseth met with Anthropic CEO Dario Amodei on February 24, 2026, and set a compliance deadline of 5:01 p.m. Friday, threatening contract termination, designation as a “supply chain risk,” or invocation of the Defense Production Act (DPA) to compel the company to adapt its models without safeguards.
The DPA, a Korean War-era law enacted in 1950, grants the president broad powers to prioritize or compel private industry to meet national defense needs. Historically it has been used for procurement queue-jumping or information gathering; former President Biden’s 2023 Executive Order 14110 invoked it to require AI firms to report on high-risk model training, red-teaming, and model weights, though that order was rescinded in 2025. Now the Pentagon seeks to apply the DPA’s Title I compulsion authority to AI, potentially forcing modifications to permit “any lawful use,” as outlined in Hegseth’s January 2026 AI strategy memo.
Anthropic, founded in 2021 by former OpenAI executives, secured a $200 million Pentagon contract in July 2025 to prototype AI for national security, alongside Google, OpenAI, and xAI. While competitors like xAI agreed to broader access, Anthropic maintains “red lines” against harmful applications, integrated into its models and contracts. A senior Pentagon official described Anthropic’s guidelines as “woke AI,” emphasizing the need for unrestricted tools in operations.
The standoff follows AI’s reported role in the January 3, 2026, U.S. raid that captured Venezuelan President Nicolás Maduro, in which Anthropic’s models reportedly supported the classified systems used to apprehend him on drug trafficking charges. Maduro, now detained in Brooklyn, has pleaded not guilty, sparking international backlash and protests in Venezuela.
Experts warn of broader implications: invoking the DPA could set a precedent for “quasi-nationalization” of AI labs, chilling innovation and raising ethical concerns over military AI in surveillance or warfare. Advocacy groups urge congressional oversight, arguing it risks violating civil rights and international norms. As the deadline looms, the dispute underscores tensions between private ethics and state imperatives in the AI era.