Anthropic Sues Trump Administration Over Pentagon Blacklisting of Its AI Technology
Artificial intelligence company Anthropic filed two federal lawsuits Monday challenging the Trump administration’s decision to label the company a national security “supply chain risk” and cut off use of its technology across the federal government. The company argues that the action was unlawful retaliation for its refusal to remove safeguards preventing its AI systems from being used for autonomous weapons or domestic surveillance.
The San Francisco-based company filed one lawsuit in the U.S. District Court for the Northern District of California and a second case in the U.S. Court of Appeals for the District of Columbia Circuit. The lawsuits ask the courts to overturn the Pentagon’s designation and block federal agencies from enforcing the restrictions. According to the filings, the action threatens billions of dollars in revenue and could damage Anthropic’s reputation with customers and partners.
The dispute follows months of negotiations between Anthropic and the Defense Department over how the military could use the company’s flagship artificial intelligence system, Claude. According to the complaint, Anthropic insisted the technology should not be used to operate fully autonomous weapons systems capable of selecting and attacking targets without direct human control.
The company also objected to the technology’s use for domestic surveillance in the United States. Defense officials, including Defense Secretary Pete Hegseth, maintained that the system must remain available for all lawful military uses.
After negotiations collapsed, the Pentagon formally designated Anthropic a “supply chain risk to national security.” The designation bars the company from working with the Defense Department and its contractors.
President Donald Trump also stated that federal agencies should stop using the company’s technology across government networks. Anthropic filed challenges in two courts because the Pentagon relied on separate legal authorities in issuing the designation, according to the company’s filings.
Anthropic argues the decision amounts to an unlawful campaign of retaliation. Court filings say the designation violates the company’s constitutional rights and exceeds federal authority. The company also claims the government’s actions are disrupting contracts and negotiations with customers.
The lawsuits challenge the government’s actions on constitutional grounds. One of the central claims involves the First Amendment, which protects individuals and companies from government retaliation based on their speech or viewpoints. Anthropic argues that its stance on limiting certain uses of artificial intelligence is protected expression, and courts have long held that the government generally cannot punish a company or individual for expressing a constitutionally protected viewpoint.
The company also argues that the government violated due process protections under the Fifth Amendment. The Constitution generally requires federal agencies to follow established procedures before imposing restrictions that could seriously affect a company’s business or reputation. Anthropic claims the national security designation was imposed without the safeguards normally required before the government takes such actions.
A key part of the dispute involves the legal authority used by the Pentagon to label the company a “supply chain risk.” The designation relies on national security powers that allow the government to block companies from supplying technology to sensitive defense systems if officials believe it could threaten national security. These authorities are most often used against companies connected to foreign governments. Anthropic argues that applying the designation to a U.S. technology company involved in a dispute with federal officials represents an unprecedented use of that power.
Artificial intelligence technology has expanded rapidly in recent years, with governments and military agencies increasingly adopting it for operational tasks. AI tools are now used in areas such as intelligence analysis, battlefield simulations, and large-scale data analysis. The company’s AI systems have been deployed on classified government networks to assist with some of these operations.
The company maintains that current AI systems are not yet reliable enough to safely operate fully autonomous weapons, while emphasizing its commitment to supporting national security efforts. Anthropic has also argued that using advanced AI systems for widespread domestic surveillance could raise constitutional concerns.
The lawsuits name several federal agencies and officials as defendants, including the departments of Defense, Treasury, State, and Commerce. The company is asking the courts to declare the designation unlawful and to block the government from enforcing the restrictions while the cases proceed.