Artificial intelligence (AI) is already deeply embedded in how modern militaries sense, analyze, decide, and operate. From AI-assisted intelligence analysis and mission planning, through predictive logistics and maintenance, to autonomous platforms and battlefield decision-support systems, AI is rapidly becoming foundational to defense effectiveness.

This transformation is not happening in isolation. Much of today’s defense innovation relies on dual-use technologies: systems originally developed for civilian markets and later adapted for military applications. Commercial cloud computing, advanced chips, computer vision models, and autonomous capabilities are now central components of modern defense architectures.

In recent months, I have spoken with many start-ups and technology companies developing defense and dual-use capabilities. All of them are chasing the soaring defense market, and nearly every one stakes its claim to fame on profound, modern, and effective AI systems.

Are AI-driven defense systems resilient enough?

The speed of this convergence is unprecedented. So is the question it raises: Are AI-driven defense and dual-use systems secure enough for the role they are now being asked to play?

The answer is crucial for national resilience, because while AI dramatically expands operational capability, it also reshapes the cyber risk landscape in ways that traditional defense models were never designed to handle. This tension is becoming increasingly difficult to ignore.

AI-powered drones (credit: XTEND)

According to the World Economic Forum’s (WEF) Global Cybersecurity Outlook 2026, AI is now the dominant force reshaping cyber risk worldwide. An overwhelming 94% of cyber leaders surveyed said AI would be the most significant driver of change in cybersecurity risk this year.

Among the respondents, 87% identified AI-related vulnerabilities as the fastest-growing cyber risk in 2025. Addressing the practical challenges of cybersecurity for AI, 54% of organizations identify insufficient skills to deploy AI for cybersecurity, and 41% point to the need for human oversight of AI operations.

These figures describe a world in which technological capability is accelerating faster than the structures meant to secure and regulate it.

This gap has direct operational consequences. Modern defense systems no longer function as closed, siloed, air-gapped platforms. They operate as complex digital ecosystems built on software updates, remote connectivity, distributed sensors, and shared data pipelines. AI models sit at the core, and sometimes at the edge, of this architecture, correlating sensor inputs, recommending actions, prioritizing threats, and supporting real-time decisions. Eventually, AI may be allowed to act upon these decisions.

This is especially evident in dual-use systems. The same computer vision algorithms that enable autonomous vehicles on civilian roads are now embedded in unmanned aerial systems. Commercial satellite imagery platforms feed military intelligence workflows. Cloud-based analytics engines support command-and-control environments.

These technologies bring enormous advantages. They reduce cost, shorten development cycles, and allow defense forces to benefit from the pace of commercial innovation. But they also import commercial vulnerabilities directly into military systems.

AI expands the attack surface in unprecedented ways. Models can be manipulated through poisoned training data or adversarial inputs. Automated systems can be deceived at machine speed. Shared software libraries and open-source components create hidden dependencies that adversaries can exploit.
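
A deliberately simplified sketch makes this concrete. The toy classifier, its weights, and the inputs below are invented for illustration and stand in for no fielded system; the point is only that a small, bounded nudge to an input, chosen in the right direction, is enough to flip a model's verdict.

```python
import numpy as np

# Toy "threat classifier": score = sigmoid(w . x + b); score > 0.5 reads "threat".
# Weights and inputs are invented for demonstration only.
w = np.array([0.9, -1.2, 0.5])
b = -0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def classify(x):
    return sigmoid(w @ x + b)

x = np.array([0.2, 0.6, 0.1])             # a "benign" sensor reading
print(f"clean score: {classify(x):.3f}")   # ~0.357 -> below the 0.5 threshold

# Adversarial nudge (FGSM-style): move each feature a small step epsilon in
# the direction that increases the score; for a linear logit that direction
# is simply sign(w).
epsilon = 0.3
x_adv = x + epsilon * np.sign(w)
print(f"adversarial score: {classify(x_adv):.3f}")  # ~0.547 -> now flagged
```

Production models are vastly more complex, but the same principle, that gradients point attackers toward the decision boundary, applies at scale.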

In effect, dual-use AI systems blur the boundary between civilian and military cyber domains. An AI-related vulnerability discovered in a commercial environment may have implications far beyond it. This reality is forcing a redefinition of what defense security actually means.

Needed: A redefinition of what defense security means

Historically, defense superiority was measured in platforms, munitions, and physical reach. Today, it increasingly depends on the integrity of algorithms, data pipelines, and decision systems. A compromised sensor feed or manipulated AI model can degrade situational awareness just as effectively as physical sabotage.

Simultaneously, AI is becoming deeply embedded in command and control environments. Decision-support systems now synthesize streams of information that no human team could process alone. Autonomy enables a single operator to manage multiple platforms simultaneously. These capabilities are essential in modern multi-domain operations, but they also introduce new failure modes.

When an AI system misinterprets context, amplifies noise, or obscures uncertainty, the human operator may not immediately recognize the error. In high-tempo environments, trust in automation can harden into dependence.

It’s important to emphasize that this is not an argument against AI. It is an argument against deploying AI without adequate governance, resilience, cybersecurity, and verification.

The WEF data underscores how unprepared many organizations remain. The combination of high incident likelihood, inadequate governance, inexperience in responding to AI-related cyberattacks, and skills shortages suggests a structural vulnerability. In defense environments, where consequences escalate rapidly, this vulnerability may have strategic implications.

A key challenge lies in governance. Traditional defense certification models assume deterministic systems whose behavior can be exhaustively tested. AI systems do not behave that way. They learn from data, adapt to patterns, and may respond unpredictably to unfamiliar conditions.

This creates difficulties in validation, explainability, governance, and accountability. Military commanders must be able to understand why a system produced a recommendation. Engineers must be able to verify performance under adversarial conditions. Policymakers must be confident that automated systems align with doctrine, law, and ethical and operational constraints. Without these guardrails, AI risks becoming a powerful but opaque layer inserted between decision-makers and reality.

The cybersecurity dimension intensifies this challenge. Attackers are already using AI to automate reconnaissance, craft adaptive malware, and scale intrusion attempts. Defenders increasingly rely on AI to detect anomalies and respond at machine speed. This creates an accelerating feedback loop in which both offense and defense become more automated, more complex, and less transparent.

For defense systems built on dual-use foundations, that loop becomes particularly dangerous. Commercial AI platforms were not designed with contested battlefields in mind. They were optimized for performance, scale, and efficiency. Bridging that gap requires deliberate action.

AI security is a strategic defense issue

AI security must be treated as a strategic defense issue, not only as a technical one. Cyber risk associated with AI should be elevated to national security frameworks and integrated into defense planning, alongside traditional threats. Procurement decisions must weigh not only capability and cost, but also resilience under cyberattack.

Defense organizations must invest in AI-literate cyber expertise. The WEF’s finding that 54% of organizations identify skills shortages should serve as a warning. Securing AI systems requires professionals who understand AI, cybersecurity, and adversarial threat models.

Explainability and continuous validation must become non-negotiable. AI systems deployed in defense environments should be transparent, testable, and continuously monitored for drift or manipulation. Trust must be engineered and verified. It cannot be assumed.
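
What continuous monitoring for drift can look like, in deliberately simplified form: the sketch below compares the live distribution of one model input against its training-time baseline using the Population Stability Index, a common drift metric. The synthetic data, the bin count, and the 0.2 alert threshold are illustrative assumptions, not a defense standard.

```python
import numpy as np

def psi(baseline, live, bins=10):
    """Population Stability Index between two 1-D samples (higher = more drift)."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf   # catch live values outside the baseline range
    b_frac = np.histogram(baseline, bins=edges)[0] / len(baseline)
    l_frac = np.histogram(live, bins=edges)[0] / len(live)
    # Clip to avoid division by zero / log(0) in empty bins.
    b_frac = np.clip(b_frac, 1e-6, None)
    l_frac = np.clip(l_frac, 1e-6, None)
    return float(np.sum((l_frac - b_frac) * np.log(l_frac / b_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)   # feature values seen during training
live = rng.normal(0.4, 1.2, 5000)       # shifted values observed in the field

score = psi(baseline, live)
print(f"PSI = {score:.3f}")
if score > 0.2:                          # a common rule-of-thumb alert level
    print("Significant drift: trigger revalidation and human review.")
```

In practice, such checks would run continuously across many features and model outputs, feeding exactly the kind of human oversight the WEF respondents called for.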

Finally, collaboration across government, industry, and allied nations is essential. AI-centric cyber risk does not respect institutional or state boundaries. Threat intelligence, defensive techniques, and best practices must flow across sectors that increasingly share the same technological foundations.

Israel is uniquely positioned in this landscape. Its defense sector is deeply intertwined with a world-class technology ecosystem, and its experience operating under persistent cyber threat provides important perspective. Leadership in AI-driven defense will depend not only on the speed of innovation, but also on the ability to secure what is built.

The strategic competition unfolding today is not solely about who develops the most advanced AI models. It is about who can deploy them reliably, protect them under pressure, and maintain trust in their operation when conditions deteriorate.

The lesson emerging from the World Economic Forum’s cybersecurity outlook is clear: The future of defense will also be defined by its cyber and AI security. In the AI era, the strategic advantage belongs to those who can ensure that the systems guiding their decisions remain trustworthy, resilient, and secure when it matters most.

Esti Peshin is a global cybersecurity, AI, and aviation executive and former VP and General Manager of the Cyber Division at Israel Aerospace Industries. She is a licensed general aviation and ULM/LSA pilot and flight instructor and a member of Forum Dvora.