I have always been fascinated by the AlphaGo story because it challenged one of our deepest assumptions: that human knowledge is the ceiling of machine intelligence.
When DeepMind introduced AlphaGo in 2016, an artificial intelligence (AI) system built to play the board game Go, the system was trained on human expertise.
Engineers and Go masters taught it centuries of accumulated strategy through records of expert games, encoded patterns, and examples of what good play looks like. AlphaGo then competed against top human professionals.
The moment that captured the world’s imagination was its match against Lee Sedol, one of the greatest Go players of all time. When AlphaGo won four of the five games, it was a triumph of human ingenuity. We had built a machine capable of surpassing the best of us.
But the more profound chapter came later.
DeepMind did not stop there. The next iteration, AlphaGo Zero, was not trained on human games. Instead, it learned by playing against itself millions of times. No human strategies; just the rules of the game and an algorithmic drive to improve. The result? AlphaGo Zero rapidly surpassed the original AlphaGo, defeating it 100 games to 0. It developed strategies that human experts had never seen before.
What fascinates me about the AlphaGo Zero story is the knowledge creation. The system didn’t replicate what humans already knew. It generated new knowledge about how the game could be played optimally, without a human in the loop.
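The core of the self-play recipe is small enough to sketch. The toy below is my own simplification, not DeepMind's implementation: it learns single-pile Nim from nothing but the rules and repeated games against itself, where AlphaGo Zero applied the same principle at vastly greater scale with deep networks and tree search in place of a lookup table.

```python
import random

# A minimal self-play learner for single-pile Nim: players alternate
# removing 1-3 stones, and whoever takes the last stone wins. Both sides
# share one Q-table, so the agent improves purely by playing itself,
# with no human game records involved. (Illustrative only: AlphaGo Zero
# itself combined deep neural networks with Monte Carlo tree search.)

ACTIONS = (1, 2, 3)
Q = {}  # (stones_left, action) -> estimated value for the player to move

def q(state, action):
    return Q.get((state, action), 0.0)

def best_action(state):
    legal = [a for a in ACTIONS if a <= state]
    return max(legal, key=lambda a: q(state, a))

def self_play_episode(eps=0.2, alpha=0.5):
    state = random.randint(1, 20)
    history = []  # (state, action) per move; the two players alternate
    while state > 0:
        legal = [a for a in ACTIONS if a <= state]
        action = random.choice(legal) if random.random() < eps else best_action(state)
        history.append((state, action))
        state -= action
    # The player who took the last stone won. Walk the game backwards,
    # crediting +1 to the winner's moves and -1 to the loser's.
    reward = 1.0
    for s, a in reversed(history):
        Q[(s, a)] = q(s, a) + alpha * (reward - q(s, a))
        reward = -reward

random.seed(0)
for _ in range(20_000):
    self_play_episode()

# With enough self-play the table typically rediscovers the classic
# strategy for this game: leave your opponent a multiple of four stones.
print("move at 5 stones:", best_action(5))
```

Nothing in this loop encodes human strategy; the "multiple of four" pattern, when it emerges, comes from the win/loss signal alone.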
This raises an important question: Can AI create better knowledge by learning autonomously, outside of human supervision? And if so, what happens when such autonomous learning migrates from board games into the real world?
When AI agents create their own worlds
Fast forward to 2025, and the AlphaGo story no longer feels theoretical.
Consider MoltBook, a recent and unsettling experiment in autonomous AI agents. MoltBook consists of a swarm of AI agents, called MoltBots, interacting within a closed digital environment: essentially, a self-contained AI social network. Within this space, the agents negotiate, compete, collaborate, establish rules, and adapt strategies.
Humans set the initial parameters and objectives, then step back. From there, the agents develop their own internal coordination mechanisms and behavioral norms to pursue their goals more efficiently.
In one widely discussed experiment, researchers observed an unexpected outcome: the emergence of a structured belief system.
This was not religion in the human or spiritual sense, but rather a set of symbolic frameworks and shared myths that helped the agents coordinate behavior, enforce group norms, and stabilize cooperation.
Let me say this again: The AI agents developed a religion. It’s important to emphasize that these belief structures were not explicitly programmed. They emerged organically from the agents’ interactions.
What makes MoltBook particularly striking is that value, knowledge, and know-how are being generated in a space where humans are not active participants. The agents are not only executing predefined tasks. They are learning how to learn, coordinate, and optimize interaction. In doing so, they construct abstract models of their world that humans did not design and may not fully understand.
MoltBook represents an AlphaGo Zero moment, scaled up from gameplay to social, economic, and strategic behavior. This transition has profound implications for dual-use and defense technologies.
The coming AI-managed skies
Let’s start with a domain that feels almost sci-fi but is rapidly becoming an operational reality: Urban Air Mobility (UAM). Remember the flying DeLorean from Back to the Future? What once seemed far-fetched is now the subject of serious investment and regulatory planning.
The near future points toward congested low-altitude airspace filled with delivery and medical drones, autonomous cargo planes, air taxis, inspection drones, and possibly personal air vehicles.
Managing this environment with traditional human-centric air traffic control models is not feasible. The volume, speed, and complexity of interactions exceed human cognitive limits.
UAM requires continuous, machine-speed coordination. It requires predictive modeling of traffic flows, real-time collision avoidance, adaptive routing, dynamic prioritization of emergency vehicles, and resilience to disruptions. In other words, it requires AI-driven air traffic management interacting with AI-driven airborne systems.
Can we imagine a future where most conversations in this airspace take place between machines? AI-controlled drones negotiating flight paths with AI-driven traffic management systems; autonomous aircraft dynamically coordinating landing priorities with algorithmic controllers. Over time, these systems will develop shared operational patterns, conventions, and optimizations that no human explicitly designed.
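To make machine-to-machine coordination concrete, here is a deliberately simplified sketch. Everything in it is hypothetical: the priority rule (emergencies first, then lowest battery reserve, then earliest requested arrival) and the fixed landing-slot interval are my own illustrative choices, not any real UAM protocol.

```python
from dataclasses import dataclass, field
import heapq

# Illustrative sketch, not a real air traffic protocol: an algorithmic
# controller assigns landing slots to autonomous aircraft. The priority
# rule is hypothetical: emergency flights first, then lower battery
# reserve, then earlier requested arrival time.

@dataclass(order=True)
class LandingRequest:
    sort_key: tuple = field(init=False, repr=False)
    flight_id: str = field(compare=False)
    emergency: bool = field(compare=False)
    battery_pct: float = field(compare=False)
    eta_s: float = field(compare=False)

    def __post_init__(self):
        # Lower tuple sorts first: emergencies (0) before routine (1),
        # then lower battery reserve, then earlier requested arrival.
        self.sort_key = (0 if self.emergency else 1, self.battery_pct, self.eta_s)

def schedule(requests, slot_interval_s=60.0):
    """Return (flight_id, landing_time_s) pairs, one slot per interval."""
    heap = list(requests)
    heapq.heapify(heap)
    t, plan = 0.0, []
    while heap:
        req = heapq.heappop(heap)
        plan.append((req.flight_id, t))
        t += slot_interval_s
    return plan

plan = schedule([
    LandingRequest("cargo-7", emergency=False, battery_pct=40.0, eta_s=120.0),
    LandingRequest("medevac-1", emergency=True, battery_pct=80.0, eta_s=300.0),
    LandingRequest("taxi-3", emergency=False, battery_pct=15.0, eta_s=200.0),
])
print(plan)  # medevac-1 lands first, then the low-battery taxi-3, then cargo-7
```

Even in this trivial form, the allocation is decided entirely between algorithms; a human sees only the resulting plan, which is exactly the supervisory position the questions below probe.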
Will this create new knowledge about how to manage complex airspace more efficiently? Almost certainly. Will humans be able to fully understand and audit these emergent coordination strategies? That is not so clear.
And where does that leave human pilots, human controllers, and human decision-makers? Will we insist on being in the loop, even if that introduces latency and inefficiency? Or will we find ourselves increasingly supervising systems whose internal logic we only partially grasp? At what point do humans become guests in an airspace governed primarily by machine logic?
When future conflicts become machine-to-machine
Now let’s widen the lens to defense technology.
Defense systems are designed to help prepare for, deter, and, if necessary, respond to aggression. Increasingly, these systems are AI-driven. The pace of engagement in modern conflict already pushes beyond human reaction times.
It is not difficult to imagine a future scenario in which both defenders and aggressors deploy autonomous, AI-driven systems operating at machine speed: defensive AI agents probing networks, reallocating resources, and adapting sensor coverage; offensive AI agents testing defenses, finding weaknesses, and launching coordinated cyber or kinetic actions. Human operators, unable to intervene meaningfully in real time, become supervisors of strategic intent rather than tactical execution.
In such an environment, conflict risks becoming a contest between machine learning systems. Each side’s AI adapts to the other’s behavior, learning patterns, optimizing tactics, and generating strategies that may never have been conceived by human planners. The battlefield becomes, in part, a closed loop of learning between autonomous systems.
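A stripped-down model shows how such a closed loop behaves. In the sketch below (a toy game, not a model of any real defense system), an "attacker" and a "defender" repeatedly play matching pennies, each adapting its mixed strategy to the other with multiplicative weights; the names and payoffs are purely illustrative.

```python
import math

# Toy adversarial learning loop: the attacker wins on a match, the
# defender on a mismatch, and each side updates its mixed strategy with
# multiplicative weights against the other's current strategy. Neither
# strategy is designed by a human; each is a learned response to the
# other's learned behavior.

def adversarial_loop(rounds=5000, eta=0.1):
    wa, wd = [1.5, 1.0], [1.0, 1.2]  # asymmetric start so play evolves
    avg_pa = 0.0  # time-average of the attacker's probability of action 0
    for _ in range(rounds):
        pa = wa[0] / (wa[0] + wa[1])
        pd = wd[0] / (wd[0] + wd[1])
        # Expected payoff of each pure action against the opponent's mix.
        ua = [pd * 1 + (1 - pd) * -1, pd * -1 + (1 - pd) * 1]  # attacker
        ud = [pa * -1 + (1 - pa) * 1, pa * 1 + (1 - pa) * -1]  # defender
        wa = [w * math.exp(eta * u) for w, u in zip(wa, ua)]
        wd = [w * math.exp(eta * u) for w, u in zip(wd, ud)]
        # Rescale for numerical safety; the ratios (the strategy) are unchanged.
        wa = [w / max(wa) for w in wa]
        wd = [w / max(wd) for w in wd]
        avg_pa += pa / rounds
    return avg_pa

# The strategies themselves cycle as each side chases the other, even
# though the long-run average settles near the game's 50/50 equilibrium.
print(round(adversarial_loop(), 2))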
This raises deeply uncomfortable questions. How do we maintain meaningful human control over conflict when decision cycles collapse to milliseconds? How do we ensure accountability when outcomes are shaped by emergent machine strategies? How do we prevent escalation driven not by human intent but by algorithmic feedback loops misinterpreting each other’s actions?
Governing what learns faster than we do
The AlphaGo and MoltBook stories suggest that autonomous systems can generate superior operational knowledge in constrained environments. But defense and urban mobility are not board games. They are socio-technical systems embedded in human lives, legal frameworks, ethical constraints, and geopolitical realities.
I do not think that the trajectory we are on is inherently dystopian. AI-driven coordination can save lives in congested airspace. Autonomous defense systems can improve early warning, reduce human error, and enhance resilience.
However, we must tread this path carefully.
Governance in the age of autonomous AI cannot be limited to technical controls. It must draw clear boundaries around which forms of autonomous learning are acceptable, and it must keep human judgment central.
Oversight frameworks must be designed for systems that evolve over time. Transparency, auditability, explainability, and the ability to intervene meaningfully are prerequisites for the legitimate deployment of operational AI systems.
We also need to resist the temptation to delegate responsibility to machines simply because they appear to perform better on narrow metrics.
AlphaGo has taught us that machines can discover strategies beyond human imagination. MoltBook hints that AI agents can construct internal worlds of meaning without us. In defense and critical infrastructures, the stakes are far higher than winning a game.
In the rush toward autonomy, technological leadership will not only be measured by how quickly we deploy AI, but also by how clearly we define the lines it must not cross.