To strike approximately 1,000 targets within the first 24 hours of an attack on Iran, the U.S. military relied on the most advanced artificial intelligence ever deployed on the battlefield. It is a system the Pentagon will find difficult to give up, even after severing ties with the company that developed it.

The U.S. military’s Maven system, built by data-mining company Palantir, integrates Claude, the artificial intelligence model from Anthropic. According to a report first published by the Wall Street Journal, the system processes massive amounts of classified data from satellites and intelligence sources, providing real-time target scoring and prioritization. During attack planning, Claude suggested hundreds of targets, provided precise coordinates, and even assessed the strikes’ outcomes afterward, significantly reducing Iran’s ability to respond. The model has previously assisted in thwarting terrorist plots and in the raid to capture Venezuelan President Nicolás Maduro, but this is the first time it has managed a large-scale military operation.

The irony is that this unprecedented use is taking place amid a severe conflict. Just hours before the airstrikes on Iran began, U.S. President Donald Trump announced a coming ban on the use of Anthropic tools by government agencies, giving the Pentagon six months to remove them from service entirely. The dramatic move followed a dispute with Anthropic CEO Dario Amodei over the use of these tools for mass domestic surveillance and autonomous weapons. Military commanders, however, are so dependent on the system that U.S. officials indicated the government would use its authority to seize the technology if Amodei halts its operation. “His decisions cannot cost the life of a single American,” noted a source familiar with the matter.

The system, integrated into the Pentagon at the end of 2024, now serves over 20,000 military personnel. In parallel with the American strikes, the Israel Defense Forces reported thousands of hours of close cooperation with the U.S. military in building an extensive target database. Experts such as Paul Scharre of the Center for a New American Security warn that while the system enables planning “at machine speed instead of human speed,” humans must supervise it because it “sometimes makes mistakes.” Now, with Claude on its way out, giants such as xAI and OpenAI have already signed agreements to take its place at the heart of the American war machine.