The Pentagon's Chief Digital and AI Officer, Cameron Stanley, described something revolutionary on March 12 at Palantir's AIPCon 9 conference: The Maven Smart System, Palantir's AI-powered targeting platform, had consolidated what used to require eight or nine separate systems into a single visualization interface.

Chad Wahlquist, a Palantir architect, added the human dimension: What once required roughly 2,000 intelligence officers is now managed by approximately 20 operators working in rapid succession. 

According to reports by The Washington Post and NBC News, citing anonymous sources, Maven helped generate around 1,000 prioritized targets in the first 24 hours of Operation Epic Fury, also known as Roaring Lion, the US-Israeli strike campaign against the Islamic Republic of Iran that began in the early hours of February 28. At the operational core of that campaign was Claude, the AI model built by Anthropic — the very technology the US government had banned the day before.

On February 27, President Trump ordered every federal agency to stop using Anthropic's AI, calling the company "left-wing nut jobs."

Defense Secretary Pete Hegseth designated Anthropic a “supply-chain risk to national security,” a label historically reserved for foreign adversaries. Hours later, American and Israeli forces launched the largest military operation in the Middle East since the 2003 invasion of Iraq. Pentagon officials have privately acknowledged that replacing Claude would take months. One expert called it “open-heart surgery.” The ban was real. The dependency was even more so.

US President Donald Trump, Secretary of State Marco Rubio, and CIA Director John Ratcliffe meet in Mar-a-Lago, Florida, to oversee Operation Epic Fury, February 28, 2026. (credit: Daniel Torok/White House via Getty Images)

The brake in the kill chain

The modern kill chain runs from Find through Fix, Track, Target, Engage, and Assess. AI has made it faster than any human planner could have imagined. Maven compresses what used to take weeks into hours. But speed without scrutiny is not precision. It is recklessness at scale.

This is where the conventional reading of the Anthropic-Pentagon dispute gets it wrong. The headlines frame Dario Amodei's red lines as moral obstruction, a CEO slowing down the war machine because of philosophical squeamishness. But look at it from inside the kill chain itself. Every well-designed system has a quality control step, a moment where someone asks: Should we? Not just: Can we? In the kill chain, that step lives between Target and Engage. It is the last chance to prevent a catastrophic error before metal meets flesh.
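Read as a process rather than a polemic, the argument is that the chain needs an explicit gate, not just a faster pipeline. A minimal sketch of that idea follows; the stage names are the F2T2EA chain described above, but the gate logic, confidence threshold, and data structure are illustrative assumptions, not a description of Maven's actual design:

```python
from dataclasses import dataclass

# The six stages of the modern kill chain (F2T2EA).
STAGES = ["Find", "Fix", "Track", "Target", "Engage", "Assess"]

@dataclass
class Candidate:
    target_id: str
    ai_confidence: float        # model's object-recognition confidence (illustrative)
    human_reviewed: bool = False

def may_engage(c: Candidate, threshold: float = 0.95) -> bool:
    """The gate between Target and Engage: the 'Should we?' step.

    Requires both high machine confidence AND human sign-off;
    speed without this check is what the text calls recklessness at scale.
    """
    return c.ai_confidence >= threshold and c.human_reviewed

# A queue of 1,000 AI-generated candidates, none yet reviewed by a human:
queue = [Candidate(f"T{i:04d}", ai_confidence=0.60) for i in range(1000)]
cleared = [c for c in queue if may_engage(c)]
print(len(cleared))  # nothing clears the gate without review
```

The point of the sketch is structural: however fast the first four stages run, the gate's second condition cannot be parallelized away without removing the human from the loop.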

And the numbers demand that pause.

In US military testing as of 2024, Maven's object-recognition accuracy stood at roughly 60 percent, compared with approximately 84 percent for human analysts, according to reporting by Bloomberg and Tech Brew. No updated figures have been officially published.

When you are generating a thousand targets a day, even that gap translates into hundreds of potential misidentifications. Can a team of 20 operators genuinely maintain meaningful human oversight at that pace? Or are we building a system in which the human in the loop becomes a rubber stamp?
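The back-of-envelope arithmetic behind that claim can be made explicit. The 60 percent and 84 percent accuracy figures are the ones cited in the Bloomberg and Tech Brew reporting above, and the 1,000-targets-per-day volume follows the campaign reports; the calculation assumes each classification errs independently, so it is an order-of-magnitude estimate, not a model of actual targeting error:

```python
# Expected misidentifications per day at a given object-recognition
# accuracy, assuming independent classifications (illustrative only).
def expected_misidentifications(targets_per_day: int, accuracy: float) -> float:
    return targets_per_day * (1.0 - accuracy)

daily_targets = 1_000
ai_errors = expected_misidentifications(daily_targets, 0.60)     # ~400/day
human_errors = expected_misidentifications(daily_targets, 0.84)  # ~160/day

print(f"AI-only: ~{ai_errors:.0f} potential misidentifications per day")
print(f"Human analysts: ~{human_errors:.0f} potential misidentifications per day")

# Review load if 20 operators must vet every generated target:
print(f"Targets per operator per day: {daily_targets / 20:.0f}")
```

Fifty targets per operator per day leaves, on an eight-hour shift, under ten minutes of scrutiny per strike decision — which is the substance of the rubber-stamp worry.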

Anthropic's insistence on guardrails is not a brake that stops the machine. It is the brake that keeps the machine on the road.

Twenty-four objections

The Talmud tells the story of Rabbi Yochanan and Reish Lakish, two scholars who formed one of the most celebrated study partnerships in Jewish intellectual history. For every argument Rabbi Yochanan made, Reish Lakish raised 24 objections, forcing 24 answers. The friction was the engine.

When Reish Lakish died, the rabbis sent Rabbi Elazar ben Pedat, a brilliant and agreeable scholar, to be Rabbi Yochanan’s new partner. For every statement Rabbi Yochanan made, Rabbi Elazar offered supporting proof.

Rabbi Yochanan wept, saying: "When I stated a law, Reish Lakish would raise 24 objections and I would give 24 answers, and the understanding of the law would expand. But you only tell me that a teaching supports me. Do I not already know that I am right?"

The Pentagon is making the Rabbi Elazar choice.

By blacklisting Anthropic and rushing to embrace alternatives, the Defense Department is exchanging the partner that challenges for the partner that complies. The New York Times has noted that xAI's Grok “is not considered as advanced or as reliable” as Anthropic's model.

OpenAI struck its own Pentagon deal on the exact same Friday that Anthropic was banned, even as its CEO, Sam Altman, publicly claimed to share Anthropic's ethical “red lines.” A compliant partner that signs the contract while whispering its reservations is pleasant for the Pentagon to use. It provides the illusion of ethical alignment without the friction of actual pushback. But a system that never truly argues is ultimately blind to its own errors.

But honesty demands harder scrutiny of Anthropic too.

Reish Lakish's power came from the fact that he was fully committed to the partnership. He argued fiercely, but he never walked away from the study hall. You cannot build the most powerful AI tool ever deployed in warfare, integrate it into classified military systems, profit from a $200 million Pentagon contract, and then act shocked that the military wants to use it like a military tool.

If Amodei believes deeply in using AI to defend democracies, as he has publicly stated, then sitting in a fortress of red lines while soldiers rely on his technology in active combat is not moral courage. It is moral convenience.

The real product is not the AI

Alex Karp, Palantir's CEO, revealed the deeper strategic picture at AIPCon. Palantir does not sell AI. It sells the orchestration layer. The language models powering Maven, whether from Anthropic or OpenAI or anyone else, are replaceable components. The pipeline that Palantir built for the battlefield is the lock-in.

This reframes the entire dispute. The Pentagon has six months to phase out Claude and integrate a replacement while conducting active combat operations in Iran. It is the equivalent of changing an engine on a plane in flight. If Palantir pulls it off without degrading performance, it will prove definitively that the large language model is a commodity. If the transition stumbles, Amodei will hold leverage that no one anticipated.

And the legal dimension may prove equally consequential.

On March 9, Anthropic filed two federal lawsuits against the Trump administration, alleging the supply-chain risk designation violates its First Amendment rights and exceeds the Pentagon's statutory authority.

If a court rules that the government cannot blacklist an American company for its usage policies, every AI company gains legal protection for its red lines. If the Pentagon prevails, the message to Silicon Valley will be clear: Standing your ground against the government costs billions in contracts.

Where there is difficulty, there is progress

While all of this plays out, China and Russia are building AI weapons systems with no internal debate at all. There is no Chinese Anthropic refusing to build autonomous weapons. Russia's autonomous combat systems face no ethical review. China is converting retired fighter jets into AI-controlled unmanned aircraft. The Pentagon requested a record $14.2 billion for AI and autonomous systems for 2026.

The Western tension between state and corporation is not a weakness. It is the feature that produces superior technology, sharper ethical frameworks, and more resilient systems. But only if both sides show up with the seriousness the moment demands. A government that calls its most advanced AI partner “nut jobs” is not conducting a negotiation. It is breaking dishes. And a company that fortifies itself behind moral absolutes while its technology is selecting targets in an active war zone is not being principled. It is being absent at the moment when being present matters most.

The paradox is the power. The AI that was banned on Friday was bombing on Saturday. Democracy is stronger because of the argument, not despite it. But friction only works when both hands stay on the machine.