A federal court has temporarily blocked the U.S. government from banning Anthropic’s AI products for federal use and from formally labeling the company as a supply chain risk. This preliminary injunction halts actions taken by the Trump administration after Anthropic refused to alter its contract terms to permit the use of its technology for mass surveillance and autonomous weapons development.

The dispute began when Anthropic declined a Pentagon request to modify its standard service agreement, which would have allowed the military to use Claude, Anthropic’s AI assistant, for unrestricted purposes, including potential warfare applications. In response, President Trump issued an order directing federal agencies to cease using Anthropic services. The Defense Department escalated by designating Anthropic as a supply chain risk, a label typically applied to foreign entities from adversarial nations, and warned other government contractors to sever ties with the AI firm.

Anthropic swiftly challenged these actions in court, arguing that the designation was unlawful and violated its First Amendment and due process rights. The company sought a pause on the ban while its lawsuit proceeds. In its defense, the Pentagon argued that allowing Anthropic continued access to defense infrastructure would introduce an unacceptable risk to national security supply chains.

However, Judge Rita F. Lin of the U.S. District Court for the Northern District of California saw the government’s measures differently. In her decision, she stated that the actions appear designed to punish Anthropic for its contractual stance and for bringing public scrutiny to the government’s position. Judge Lin wrote that punishing a company for criticizing the government in the press constitutes classic illegal First Amendment retaliation. She further found the supply chain risk designation to be contrary to law, arbitrary, and capricious.
The judge dismissed the government’s argument that Anthropic showed subversive tendencies by questioning the use of its technology, noting that nothing in the governing statute supports branding an American company a potential adversary for disagreeing with the government. Anthropic expressed gratitude for the court’s swift action and stated that it remains focused on working productively with the government to ensure Americans benefit from safe and reliable AI. The company’s underlying lawsuit continues, and a final court decision is pending. Judge Lin indicated that Anthropic has shown a likelihood of success on its core First Amendment claim.

This case highlights the growing tension between leading AI developers and government agencies over ethical boundaries, particularly concerning military and surveillance applications. The court’s intervention underscores the legal protections for corporate speech and dissent, even when confronting national security arguments from the executive branch. The outcome of the ongoing lawsuit could set a significant precedent for how the U.S. government regulates and interacts with domestic technology firms on matters of security and ethics.

