One AI company said no to the Pentagon — and the world is watching what happens next.
What does it mean when the most powerful military in the world gets into a public brawl with a tech startup over a clause in a contract? In February and March 2026, that question stopped being theoretical. A fight between the U.S. Department of Defense and Anthropic — the San Francisco company behind the AI model Claude — exploded into a political firestorm that now defines the stakes of AI in the age of war. This is not just a story about contracts. It is a story about who controls the most powerful technology in human history, and what lines, if any, cannot be crossed.
In February 2026, Anthropic CEO Dario Amodei refused a Pentagon demand that the company's AI model Claude be made available for any “lawful purpose,” including potential mass domestic surveillance and fully autonomous weapons. President Trump blacklisted Anthropic on February 27, and Defense Secretary Pete Hegseth declared the company a supply-chain risk to national security. Within hours, OpenAI struck its own Pentagon deal, drawing fierce backlash and accusations of opportunism, even from its own employees. OpenAI later revised its contract language after public criticism. As of March 5, 2026, Amodei had returned to negotiations. The dispute has exposed the deepest fault lines in global AI governance: who sets the ethical limits of military AI, and who, if anyone, has the power to enforce them.
How Claude Became the First Frontier Model on Classified Networks
In July 2025, the U.S. Department of Defense awarded contracts worth up to $200 million each to four leading AI companies: Anthropic, OpenAI, Google, and Elon Musk’s xAI. The purpose was to prototype frontier AI capabilities that would advance national security. Of the four, only Anthropic’s Claude was deployed into the government’s most sensitive, classified networks — made possible through a partnership with data analytics firm Palantir Technologies.

Claude was used in intelligence analysis, operational planning, and cyber operations, and reportedly played a role in the U.S. military operations in Venezuela that led to the capture of former President Nicolás Maduro. It was also being used in active operations against Iran. Anthropic, founded in 2021 by former OpenAI researchers including CEO Dario Amodei and his sister Daniela Amodei, had built its entire identity around one idea: that AI development had to be safe, or it should not happen at all.
The Ultimatum No One Expected
The crisis began to take shape in January 2026, when Defense Secretary Pete Hegseth issued an AI strategy memorandum requiring all Department of Defense AI contracts to incorporate standard “any lawful use” language within 180 days. For most tech companies, this would have been routine. For Anthropic, it was a direct collision with the company’s founding principles. The tension broke into the open on February 25, 2026, when Hegseth gave Anthropic CEO Dario Amodei a stark choice: remove the company’s safety guardrails from its military contract or lose the $200 million deal entirely and face a government blacklist.

Amodei, who had just attended the AI Impact Summit in New Delhi with French President Emmanuel Macron on February 19, flew back into the storm. On Friday, February 27, hours before the Pentagon’s 5:01 p.m. deadline, Amodei released a public statement that was quiet in tone but thunderous in consequence: Anthropic could not, he wrote, “in good conscience” accede to the Pentagon’s request. The two issues he refused to bend on were clear — no use of AI in fully autonomous weapons systems that fire without human involvement, and no use of AI for mass domestic surveillance of American citizens.
The Blacklist Heard Around the World
Within hours of the deadline passing, President Donald Trump took to Truth Social and ordered every federal agency in the United States to “immediately cease” all use of Anthropic’s technology. Defense Secretary Hegseth went further. He posted on X — using the Pentagon’s new “Department of War” rebranding — that he was designating Anthropic a “Supply-Chain Risk to National Security,” a label previously reserved almost exclusively for foreign adversaries like China’s Huawei. “Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” Hegseth wrote.

Trump’s Truth Social post called Anthropic a company run by “radical leftists” who had no idea about “the real world.” Emil Michael, the Under-Secretary of Defense for Research and Engineering who had been leading Pentagon negotiations, called Amodei a “liar” with a “God complex” who was “ok putting our nation’s safety at risk.”
Anthropic fired back. In a blog post, the company cited a federal statute and said Hegseth lacked the legal authority to restrict companies from doing business with Anthropic for non-Pentagon work. It also announced it would challenge the designation in court. Then came the twist: Claude reportedly continued to be used in strikes on Iran even after the ban was issued, underscoring just how deeply embedded the technology had become and how messy any phase-out would truly be.
OpenAI Moves In — and the Backlash Begins
Later that same chaotic Friday, OpenAI CEO Sam Altman announced that his company had struck its own deal with the Pentagon. The timing looked deeply calculated. Altman had spent the morning publicly saying he shared Anthropic’s “red lines” on autonomous weapons and surveillance — then revealed he had been quietly negotiating his own contract. The public backlash was immediate and brutal. Claude surged to the number-one spot on Apple’s App Store, while ChatGPT reportedly saw a wave of uninstallations.

Even inside OpenAI, the reaction was raw. According to CNN, many employees “really respect” Anthropic for standing up to the Pentagon and were deeply frustrated with their own leadership. Amodei did not hold back either. In a memo seen by the Financial Times, he called OpenAI’s deal “safety theater” and said the messaging around it was “just straight up lies.” He wrote that “the main reason they accepted the deal and we did not is that they cared about placating employees, and we actually cared about preventing abuses.” Altman, to his credit, acknowledged that OpenAI “shouldn’t have rushed” and later said the optics “don’t look good.” By Monday, March 3, OpenAI had revised its contract terms to include explicit language stating the AI “shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”
Safety Theater or Real Protection?
OpenAI’s position was nuanced but controversial. Rather than seeking hard contractual prohibitions like Anthropic, OpenAI argued that its safety guarantees came from architecture — cloud-only deployment, a proprietary safety stack the Pentagon agreed not to override, and cleared OpenAI engineers embedded in the process. Altman wrote that Anthropic “seemed more focused on specific prohibitions in the contract, rather than citing applicable laws.”
But critics quickly picked apart the approach. Techdirt’s Mike Masnick argued that OpenAI’s contract still permitted domestic surveillance by complying with Executive Order 12333, a Cold War–era intelligence directive that allows the NSA to capture communications outside the United States — including those of American citizens — without a warrant. MIT Technology Review wrote plainly that OpenAI had pursued a “pragmatic and legal approach that is ultimately softer on the Pentagon.” The Electronic Frontier Foundation made the broader point that privacy protections for hundreds of millions of people should not depend on the judgment of a few powerful executives behind closed doors.
Brendan Carr, Chairman of the Federal Communications Commission, sided with the Pentagon on March 3, telling CNBC that Anthropic “made a mistake” and should “try to correct course as best they can.” Meanwhile, Piper Sandler analysts noted that Anthropic was “heavily embedded” in military and intelligence operations, and that moving away from Claude would “pose some short-term disruptions” for Palantir, which relies on the government for close to 60% of its U.S. revenue.
A Deal Still Unfinished — and a World Still Watching
As of March 5, 2026, the story had not ended. Reports from the Financial Times and Bloomberg confirmed that Amodei had resumed talks with Emil Michael — the same official who publicly called him a liar just days earlier — in a last-ditch attempt to find common ground.

A tech industry group whose members include Nvidia, Google, and Anthropic had sent a letter to Hegseth expressing concern that designating a U.S. company as a supply-chain risk would send a chilling signal to the entire AI industry. Both sides, it seemed, had reasons to want a deal. The Pentagon was already dependent on Claude and faced a painful, complex transition. Anthropic, which draws roughly 80% of its revenue from enterprise customers, could not afford indefinite exclusion from the government’s vast procurement ecosystem.
The fight, in the end, is not between good and evil. It is between two sincere and incompatible visions: one that sees AI safety as non-negotiable ethics baked into code, and another that sees national security law as a sufficient guardrail for a technology that moves faster than any law can follow. The question of who is right may not be answered in a contract. It may be answered in a courtroom — or on a battlefield.
What This Means for Southeast Asia and the World
The geopolitical ripples of this dispute reach far beyond Washington, D.C. — and nowhere is that more important than Southeast Asia, a region standing at the crossroads of U.S.-China AI competition, rapid digital adoption, and governance frameworks that are still being written.
Nations like Singapore, Indonesia, Thailand, Malaysia, Vietnam, and the Philippines are not passive observers of this moment. They are active users of the AI tools at the center of this storm. Governments across the region are deploying AI in everything from immigration systems and social services to military intelligence and public health. The question of whether powerful AI models can be used for mass surveillance without explicit legal guardrails is not a hypothetical in this region — it is a live policy question in countries where democratic institutions are still being strengthened and civil liberties still negotiated.
For international businesses and investors watching the Anthropic-Pentagon conflict, the stakes are equally concrete. The blacklisting of a U.S. company — not a foreign adversary — by its own government over a contract dispute signals that the AI industry’s relationship with the state is entering a raw, unpredictable phase. Enterprises in Southeast Asia and globally that have built workflows on Claude, or that operate in industries dependent on U.S. defense contractors, are now recalibrating their AI vendor strategies. Companies that assumed political neutrality was a given in Big Tech are now aware that the platform beneath their feet can shift without warning.
What Anthropic’s stand ultimately offers the world is something rarer than technology: a precedent. A major AI company looked a superpower in the eye and said there are lines we will not cross — not because we were forced to, but because we believe it is wrong. Whether or not Amodei eventually signs a deal, that act of refusal has permanently altered the conversation about what AI companies owe to the societies that allow them to exist.
For readers tracking how technology, power, and geopolitics are reshaping the world, this is only the beginning of the story.