Global News Bulletin
National News

Anthropic to fight US govt in court over ‘supply-chain risk’ label: Behind the standoff, and what it means for Claude AI

By editorial · March 9, 2026 · 7 min read

On January 3, 2026, when the US captured Venezuelan President Nicolás Maduro as part of Operation “Absolute Resolve” without sustaining any American casualties, it integrated an unexpected asset into its field operations: Claude, the AI system built by artificial intelligence firm Anthropic. The episode ignited a publicised standoff between the US Department of Defense (DOD) and Anthropic over Claude’s inclusion in military operations.

Now that the Pentagon has branded it a “supply-chain risk”, Anthropic’s ability to work with either the DOD or any other institution under contract with the US government has been effectively revoked. Even as the White House announced a six-month phase-out period for all uses of Claude in existing systems, the company has vowed to challenge the Pentagon’s risk designation in court.

So, why did Anthropic choose to let itself be cut out of the world’s largest military budget in the first place? We explain.

Anthropic vs Pentagon

Reports of Claude’s involvement in the January raid prompted Anthropic to ask the Pentagon questions it was not particularly keen to answer, escalating into a standoff between US Defense Secretary Pete Hegseth and Anthropic CEO Dario Amodei. The DOD issued a strict ultimatum to Anthropic: drop its AI safeguards by 5:01 pm on February 27 and allow the military to use Claude for “all lawful purposes”, or face severe retaliation. When Anthropic refused to cross its own red lines — specifically regarding fully autonomous weapons and mass domestic surveillance — Hegseth officially designated the firm a “supply-chain risk” under Section 3252 of Title 10, United States Code, the principal body of military law governing the US armed forces. The section outlines requirements for information relating to supply chain risk.

Even as OpenAI swiftly swooped in to take over the Pentagon contract, Amodei wrote a scathing internal memo to his employees that was subsequently leaked to the press. Dismissing OpenAI’s safety guardrails as “maybe 20% real and 80% safety theatre”, he argued that OpenAI was merely placating the government while Anthropic cared about preventing abuse. He also suggested that the Trump administration had retaliated because Anthropic had failed to donate to the president’s campaign. Although he apologised for the memo in an interview with The Economist, the episode has not deterred Anthropic from a possible legal challenge.

Making the Pentagon blacklist

Anthropic’s decision to reject the Pentagon is rooted in its recent market valuation and the economic importance (or lack thereof) of its $200 million contract with the DOD. Last month, Anthropic’s Series G (7th round) funding raised approximately $30 billion, valuing the firm at $380 billion. According to AI investments facilitator MLQ.ai, this latest funding released Anthropic from the leverage the US government had over it.

With an estimated $14 billion in annual recurring revenue, Anthropic — unlike defence majors Raytheon or Northrop Grumman — does not have the US government as its dominant single buyer. And its ability to fight the “supply-chain risk” designation in federal court for years is untouched.


Secondly, the company’s adoption of a global business-to-business (B2B) model draws completely different boundaries for it compared with OpenAI or Google, which directly interact with consumers instead. In the B2B market, a predictable regulatory framework is critical, with customer enterprises meticulously drafting plans years in advance. They expect the software they invest in to remain clear of compliance breaches and government fines.

After its latest round of funding, Anthropic confirmed that eight of the top 10 companies (by revenue) on the Fortune 500 list use Claude. These companies legally bind Claude to strict international frameworks, especially the European Union’s Artificial Intelligence Act of 2024. The AI Act draws red lines against mass surveillance and using biometric data to categorise individuals, besides mandating rigorous, case-by-case proportionality tests that force authorities to weigh the scale of harm against individual rights.

OpenAI CEO Sam Altman during the Express Adda in New Delhi on February 20, 2026. Photo: Abhinav Saha

The Pentagon’s demand that Claude be available for “all lawful purposes” attempted to bypass these constraints, illustrating how US military doctrine and Anthropic’s business model are incompatible. For Anthropic, this creates a glaring commercial risk: if the company compromised its algorithms to grant the US military unrestricted operational freedom, it would destroy the guardrails that ensure compliance with the EU’s stringent rules and jeopardise its clients’ use of its products. In its defiance, Anthropic was protecting its international market share.

Corporate clients also fear intellectual property leaks and the compromise of internal company data, as a recent Gartner report suggests. Ironically, the Pentagon’s blacklist serves as an assurance to global enterprises that Anthropic’s core ideology remains steadfast even in the face of State intrusion.

‘All lawful purposes’


As to why building two separate models — one for enterprise customers as per EU regulations and another for the Pentagon — remains unfeasible, one must understand the architecture of AI models. The US military’s mandate for “all lawful purposes” (which also includes the use of autonomous lethal targeting) cannot be accommodated with a simple software patch. To avoid compromising its commercial enterprise product, Anthropic would be forced to maintain a separate, unconstrained model for the military.

Maintaining parallel models also presents a grave cybersecurity risk. Research has shown that AI models inevitably memorise and leak training data. If Anthropic attempted to save compute costs by having the military and commercial models share any foundational architecture, the risk that classified data from the military model could bleed into the other would be so massive that no insurance agency would cover it, leaving Anthropic liable for billions of dollars in damages out of its own pocket.

OpenAI swoops in

Finally, the unpredictability of Anthropic’s human capital — the intangible economic value of its workforce’s skills and knowledge — cannot be ignored. Instances such as engineer revolts forcing Google to abandon Project Maven, which was dedicated to developing the Pentagon’s drone capabilities, in 2018 have highlighted that the ideological rift within Silicon Valley is the industry’s ultimate chokepoint.

Considering Anthropic was founded by individuals who defected from OpenAI, prioritising safety over the commercialisation of AI, there is always a risk that building an unbridled military system could trigger an exodus of the scarce engineering talent required to maintain the $380-billion enterprise business.


With Anthropic’s exit leaving a vacuum, rivals like OpenAI aggressively moved in. Where Anthropic CEO Dario Amodei drew a hard line at integration with lethal use, OpenAI CEO Sam Altman publicly tweeted his support for equipping the US and its allies. Altman’s statement of being “terrified of a world where AI companies act like they have more power than the government” was perceived as an indication that OpenAI agreed to provide the DOD with the customised AI architecture that Anthropic refused to.

Moreover, while both companies proclaim their refusal to participate in domestic mass surveillance, foreign surveillance is a different matter. The operation against Maduro did not seemingly cross Anthropic’s red line, even though Claude’s integration into a working military kill chain — coupled with the Pentagon’s demand for untargeted surveillance capabilities — would instantly trigger violations under the EU’s AI Act.

By allowing OpenAI to monopolise the Pentagon’s AI contracts — and absorb the massive regulatory, reputational, and international liabilities that come with being the US military’s official AI — Anthropic appears to have cemented itself as the strictly neutral, sovereign architecture globally, making Claude the default, risk-free choice for the rest of the world’s enterprise economy.
