This article was first published on TurkishNY Radio.
A US federal judge in San Francisco has granted Anthropic temporary relief in its dispute with the Pentagon. The ruling blocks, for now, the Pentagon’s “supply chain risk” label and halts enforcement of a Trump directive that would stop federal agencies from using Claude.
Judge Rita Lin issued a preliminary injunction against the Pentagon on March 26. She said the administration’s actions appeared arbitrary, retaliatory, and likely unlawful under the record before the court.
Anthropic Lawsuit Challenges Pentagon Pressure on Claude
The Anthropic lawsuit centers on whether the government punished the AI company after it refused to permit unrestricted military use of Claude "for all lawful purposes." Anthropic has said it would not support lethal autonomous weapons or mass domestic surveillance of Americans.
The Anthropic lawsuit began after the Defense Department labeled the company a national security supply-chain risk and President Donald Trump ordered federal agencies to stop using Anthropic products.
Anthropic sued on March 9, arguing that the government exceeded its authority and retaliated against the company for its public stance on AI safety.
The case has drawn wide attention because it touches both AI policy and constitutional law. It also carries business risk for Anthropic, which Menlo Ventures said held 32% of enterprise LLM usage in mid-2025, ahead of OpenAI at 25%.

Judge Blocks Pentagon’s Supply-Chain-Risk Label
The court’s order temporarily blocks the Pentagon from branding Anthropic a supply-chain risk. It also pauses the government’s effort to push federal agencies away from Claude while the case moves forward.
Judge Lin wrote that the statute did not support branding an American company a saboteur for disagreeing with the government. She also said the broad measures against Anthropic appeared to be an abuse of discretion.
Anthropic Lawsuit Grew Out of a Defense Contract Fight
The Anthropic lawsuit stems from a July 2025 agreement with the Pentagon tied to Claude’s use on classified networks. The dispute later deepened when the Pentagon sought broader military rights over the model.
According to reports on the case, negotiations broke down in February after the Pentagon pushed for use of Claude “for all lawful purposes” and without Anthropic’s restrictions. Anthropic refused, saying its model should not be used for lethal autonomous weapons or mass surveillance.
Trump Directive Became Part of the Fight
The legal battle widened on February 27, when Trump ordered federal agencies to stop using Anthropic products. Defense Secretary Pete Hegseth also moved to classify the company as a supply-chain risk, a step Anthropic says damaged its reputation and business.
That made the Anthropic lawsuit about more than one Pentagon label. It became a challenge to a broader federal response that could have cut the company off from major government work.
Court Focused on Retaliation Claim
At a March 24 hearing in San Francisco, Judge Lin questioned government lawyers on whether Anthropic was being punished for publicly criticizing the Pentagon’s contracting position.
The later ruling said punishing Anthropic for bringing public scrutiny to the government’s stance looked like classic First Amendment retaliation.
That point is central to the Anthropic lawsuit. The court said the current record suggests the government may have acted not because Anthropic posed a real security threat, but because it challenged the Pentagon in public.
Business Stakes Extend Beyond Washington
The ruling matters beyond federal procurement. Anthropic has become a major enterprise AI provider, and a government-wide ban could have hurt its standing with customers and contractors.
Menlo Ventures reported in July 2025 that Anthropic led enterprise LLM usage with 32%, while OpenAI held 25%. That backdrop explains why the Anthropic lawsuit has drawn close attention across the AI sector.
Pentagon Can Still Choose Other Vendors
The injunction does not force the Pentagon to keep using Anthropic products. It only blocks the government, for now, from using the supply-chain-risk designation and related broad measures in the way challenged by Anthropic.
That means the case is still in its early stage. Reuters reported that Judge Lin delayed the order’s effect for seven days to allow for appeal, so the legal fight is likely to continue.

Anthropic Responds After Early Court Win
Anthropic said it was grateful the court moved quickly and pleased that the judge agreed the company was likely to succeed on the merits. The company framed the ruling as an important check on government retaliation.
For now, the Anthropic lawsuit has delivered a clear early win for the company. But the broader question of how AI firms and the US defense establishment will negotiate military use, safety limits, and speech rights remains unsettled.
Conclusion
The Anthropic lawsuit has quickly become one of the most closely watched legal fights in AI. Judge Rita Lin’s ruling gives Anthropic temporary relief and raises sharp questions about executive power, procurement law, and First Amendment retaliation.
The case now moves into its next phase with high stakes for both Washington and the AI industry. Its outcome could influence how future disputes over military AI use are handled across the federal government.
Appendix: Glossary of Key Terms
Supply chain risk: A designation indicating that a vendor or product may pose a security threat to government systems.
Claude: Anthropic’s AI chatbot and large language model product.
Pentagon: The headquarters of the US Department of Defense, commonly used as shorthand for the department itself.
First Amendment retaliation: Punishment for speech or public criticism of the government.
Classified networks: Secure government systems used for sensitive information.
Enterprise AI market: The market for AI tools used by businesses and institutions.
Frequently Asked Questions About the Anthropic Lawsuit
1- What is the Anthropic lawsuit about?
The Anthropic lawsuit challenges the Pentagon’s decision to label the company a supply-chain risk and the Trump administration’s order stopping federal agencies from using Claude.
2- Who issued the ruling?
US District Judge Rita Lin of the Northern District of California issued the preliminary injunction.
3- Why did the dispute start?
The fight grew out of contract talks over military use of Claude. Anthropic opposed unrestricted use for lethal autonomous weapons and mass domestic surveillance.
4- Did the ruling fully end the case?
No. The ruling gives temporary relief, but the case is still active and could be appealed.