The 21st-century battlefield is undergoing a revolution that is changing the face of modern warfare. UAVs and drones with varying levels of autonomy patrol above the Gaza Strip and in virtually every sector where Israel is fighting. Robots roll through Hamas tunnels, and smart weapon systems provide support and assist in real-time decision-making.

Israel is one of the most advanced countries in military technology, and the future is already a reality. It’s a technological revolution, but also, and perhaps primarily, it’s a social revolution.

One of the central questions facing the IDF, and advanced militaries worldwide, is not what robots can do, since they can perform almost any task, but how humans and robots will work together as a team, especially when life-and-death decisions must be made on a battlefield as dynamic and complex as the multiple fronts on which the IDF operates.

A New Definition for the Combat Team: MUMT

The term MUMT – Manned-Unmanned Teaming – defines the future of military warfare, in which joint teams of humans and machines operate toward a shared objective. This is not a UAV or drone operator sitting in a control trailer and flying the system remotely, but a complex arrangement in which human and machine share responsibility, initiative, and decision-making, each depending on the other and operating as a team.

A concrete example: an air defense system like the Iron Dome identifies an enemy missile aimed at a civilian community. The system analyzes the threat level, selects a fire command from among those pre-programmed into it, and presents a recommendation for action.

The Arrow 3 air defense system, used for the first time on November 9, 2023, to intercept a missile fired at Eilat by Iran-backed Houthis in Yemen. (credit: MINISTRY OF DEFENSE)

The human – in real time and under pressure – chooses whether to accept the recommendation, confirms the identification of the target as hostile, and gives final authorization to fire. This is dynamic cooperation in which each side contributes its unique strengths: the machine in data-processing speed and recommendations, the human in judgment, contextual understanding, and "out of the box" thinking when necessary.
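The division of labor described above – machine recommends, human authorizes – can be sketched in a few lines of Python. All names, thresholds, and labels here are hypothetical illustrations of the human-in-the-loop pattern, not the logic of any real system:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    track_id: str
    predicted_impact: str   # e.g. "populated area" or "open field"
    confidence: float       # classifier confidence, 0.0 to 1.0

def machine_recommendation(threat: Threat) -> str:
    """The machine's role: fast analysis and a recommendation."""
    if threat.predicted_impact == "populated area" and threat.confidence > 0.9:
        return "INTERCEPT"
    return "MONITOR"

def human_decision(recommendation: str, operator_confirms_hostile: bool) -> str:
    """The human's role: judgment, context, and final authorization.
    Nothing fires unless the human explicitly confirms."""
    if recommendation == "INTERCEPT" and operator_confirms_hostile:
        return "FIRE_AUTHORIZED"
    return "HOLD"

threat = Threat("T-042", "populated area", 0.97)
rec = machine_recommendation(threat)                              # machine recommends
decision = human_decision(rec, operator_confirms_hostile=True)    # human decides
print(rec, decision)
```

The point of the sketch is structural: the machine never reaches "FIRE_AUTHORIZED" on its own; that string can only be produced by the function representing the human.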

Manned Aircraft Alongside Machines: CCA and LW

In the world's advanced air forces, this cooperation between humans and machines is already visible. Pioneering concepts such as Loyal Wingman (LW) and Collaborative Combat Aircraft (CCA) demonstrate how manned and unmanned aircraft can operate as a team in a real combat environment while leveraging the relative advantages of each partner.

The human pilot serves as the overall mission manager, sets objectives, assesses strategic risks, and makes more complex decisions.

Meanwhile, the autonomous escort aircraft perform specific and sometimes more dangerous missions: advanced reconnaissance, electronic warfare to deceive the enemy, even drawing enemy missiles toward themselves so as not to expose the manned aircraft's location or endanger the human pilot, and of course attack missions as well.

The key to implementing these models – which differ in their level of autonomy (higher in the CCA model) – is not just the technology, but what are called 'shared mental models': a deep mutual understanding of how each team member thinks and behaves, how they operate and react under pressure, and whether their actions are predictable enough to be relied upon. In other words, mutual trust.

The Central Challenge: Building Trust on the Battlefield

Building trust is composed of multiple layers. Is the person in the loop open to adopting new technology, or do they fear change and the technology itself?

Generally, today's generation of fighters is very open to trusting a robot – sometimes too open, as happens with Tesla's semi-autonomous driving mode, where drivers stop watching the road and sometimes even fall asleep because of the trust they place in the machine.

As the fighter comes to understand the robot's capabilities and can predict how it will behave, cognitive trust forms between human and machine: the human's grasp of the robot's capabilities, the logic behind its decisions, and its limitations. A person who understands why their drone chose a specific route or recommended specific armament will rely on it more in similar situations in the future.

The next stage in building trust is personal, emotional trust, in which the person in the loop develops a sense of comfort, security, and partnership with the robot. This is the most vulnerable type of trust: a single malfunction, an unexpected response (like a driver-assistance system braking hard without warning), or a case of the robot's "judgment failure" can severely damage trust built over time. A fighter who experiences a situation in which the autonomous drone they operate mistakenly strikes a civilian or a friendly force may develop deep apprehension about any future use of autonomous technology.

Another layer in building trust is the first impression, also called 'critical initial credibility.' First impressions are critical, starting with the design of the user interface and of the robot itself, its initial performance, how the robot "feels" to operate (like the ease of use of a particular smartphone), and the general sense of professionalism and reliability it projects. A robot that appears threatening or overly complex, with an unclear user interface or one that requires in-depth explanation, will struggle to gain trust even if its operational capabilities later prove excellent.

The Danger: Breach of Trust

If trust between human and machine is breached, the result on the battlefield could be fatal. A fighter who does not trust their autonomous tool may avoid using it in a critical situation where it could save lives. The opposite can also occur: over-trust, in which the fighter becomes a "rubber stamp" for the robot's decisions. Both extremes are equally dangerous.

One of the worst scenarios, and one frequently raised in discussions of lethal autonomous weapons, is a situation in which the human does not fully understand how the lethal tool operates, places excessive trust in it, and allows it to carry out the lethal mission on its own – like an autonomous vehicle driving alone, but armed.

And when an error occurs on the battlefield, the human who placed trust in the robot, whatever the cause, becomes primarily responsible for its mistakes – effectively a "moral crumple zone": a situation in which the human bears formal responsibility for the machine's decisions but in practice has no real understanding of, or control over, how it makes them. The human at the end of the chain becomes a victim of failures they could not have anticipated or prevented. This situation must be prevented, since the fighter in the field should not be the only party bearing responsibility.

The Required Israeli Approach: Transparency vs. Efficiency

Israel, as a leader in developing military technology and advanced weapon systems, must think ahead about these challenges. In my research, I deal, among other things, with a model of "value alignment," according to which the robot must be programmed to operate in accordance with our military and moral values, norms, and rules of conduct, and with what is defined in international humanitarian law, even at the cost of some limitation of its maximum technical capabilities.

The central requirement, which must be clear to everyone, is transparency toward the human operator. The robot must explain its decisions, communicate its intentions and plans, and enable the operator to understand when, why, and how it acts in a certain way. The robot's decisions cannot hide behind the "black box" of artificial intelligence, in which life-and-death decisions are made without explanation or the possibility of human intervention.
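One simple way to build this transparency into software is to make every recommendation carry its own human-readable rationale. The sketch below is a hypothetical illustration (the route names and scoring are invented), showing a decision object that the operator can inspect rather than a bare, unexplained output:

```python
from dataclasses import dataclass

@dataclass
class ExplainedDecision:
    """A decision that carries its own human-readable rationale,
    instead of emerging from an unexplained 'black box'."""
    action: str
    rationale: list  # ordered reasons the operator can read and audit

def recommend_route(routes: dict) -> ExplainedDecision:
    """Pick the route with the lowest (threat, fuel) cost and say why.
    `routes` maps route name -> (threat_score, fuel_cost); all names
    and values here are hypothetical."""
    best = min(routes, key=lambda r: routes[r])
    rationale = [
        f"compared {len(routes)} candidate routes",
        f"'{best}' has the lowest (threat, fuel) cost: {routes[best]}",
    ]
    return ExplainedDecision(best, rationale)

decision = recommend_route({"north": (0.7, 120), "coastal": (0.3, 150)})
print(decision.action)
for reason in decision.rationale:
    print("-", reason)
```

The design choice is that the rationale is produced at decision time, by the same code path that made the decision – not reconstructed afterwards – so what the operator reads is what actually drove the choice.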

This implies, among other things, developing clear user interfaces, efficient communication protocols, and "emergency brake" mechanisms that enable the operator to stop the mission or change the robot's behavior in real time, even when this comes at some cost in speed or operational efficiency.
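The "emergency brake" idea can also be sketched in code. Below, an autonomous mission loop checks a human-controlled abort flag before every step; all class and step names are hypothetical, and a real system would need far more safeguards than this minimal pattern:

```python
import threading

class SupervisedMission:
    """Minimal sketch of an 'emergency brake': the autonomous loop
    proceeds only while a human-controlled abort flag stays clear."""

    def __init__(self):
        self._abort = threading.Event()
        self.log = []  # transparency: every step is recorded for the operator

    def abort(self):
        """Called by the human operator at any time, in real time."""
        self._abort.set()

    def run(self, steps):
        for step in steps:
            if self._abort.is_set():           # brake checked before each step
                self.log.append(f"ABORTED before: {step}")
                return "aborted"
            self.log.append(f"executed: {step}")
        return "completed"

mission = SupervisedMission()
mission.abort()  # operator pulls the brake
result = mission.run(["navigate", "identify", "engage"])
print(result, mission.log)
```

Checking the flag at every step costs a little speed, exactly the trade-off described above: some efficiency is sacrificed so the human can always interrupt.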

From Robotic Tools to Battlefield Partners

The fundamental change required is one of basic perception: transitioning from robots as advanced tools to robots as full team members. This transition requires dedicated training, new protocols, and fresh thinking about the essence of military leadership in the robotic era.

Tomorrow's warrior will need not only to know how to fight and operate weapons, but also how to work with robotic partners: to understand their language and cues, to trust their judgment in appropriate situations, to be ready to intervene when required, and to maintain meaningful human involvement even as the technology grows increasingly sophisticated.

This also requires changes in recruitment and training processes, searching for warriors suited to working with autonomous systems, developing dedicated training courses and integrating the autonomous dimension into existing courses, and creating a military culture that views human-robot partnership as an advantage rather than a threat.

Summary – Whoever Builds the Best Team Will Win

We are only at the beginning of the robotic revolution.

This revolution will permanently change the face of modern warfare. The side that knows how to create the best, most precise, and secure cooperation between human fighters and autonomous robots will gain a significant tactical and strategic advantage. In an era of multi-domain warfare, evolving threats, and technology advancing at a dizzying pace, it is no longer a matter of who is the most technologically advanced, but who knows how to build the joint team of human and machine in the most advanced and appropriate way.

The winner will be the one who succeeds in building joint human-machine teams based on mutual trust, deep understanding, effective communication, and genuine cooperation. The future belongs not to robots or humans alone, but to the intelligent partnership between them.