In the age of Artificial Intelligence, victory belongs not to the one who strikes fastest, but to the one who understands best
The war in the wider Middle East region, now entering its fifth week, is characterized by many experts and analysts as the first military conflict where Artificial Intelligence (AI) is being utilized on such a massive scale. As reported, algorithms analyze millions of data points in seconds, selecting targets and accelerating decisions at speeds that no human thought can match, serving as a decisive factor in the evolution of warfare. And yet, the US and Israel—two countries with extremely powerful militaries and access to the best military technology worldwide—have failed over the past 35 days to prevail against the militarily and technologically "weaker" Iran. In fact, Iran manages not only to resist firmly and endure but also to deliver crushing blows, leading many to believe it has brought Trump’s powerful army to humiliation and strategic defeat. The question that naturally arises is: how is such a thing possible?
Tactical superiority: Speed, scale, and precision
Artificial Intelligence, particularly in the form of systems like Maven, is essentially designed to solve one problem: reducing time and increasing precision in the operational decision-making cycle. These systems can analyze vast volumes of data, from drone imagery to electronic signals, in a very short period, identifying patterns and suggesting actions. In the recent war, this capability allowed the United States and Israel to:
- Identify more targets in less time,
- Carry out strikes with greater precision,
- And minimize the interval between "target detection" and "action."
The limitations
To be more precise, Artificial Intelligence has transformed warfare at the tactical level into a nearly automated, high-speed process. This is what is referred to in military literature as the "compression of decision time." But this is exactly where the first fundamental limitation of AI appears: the inability to understand strategy. AI, even in its most advanced form, lacks what can be termed "strategic understanding." This weakness stems from several key characteristics:
- Dependence on past data: AI makes decisions based on historical patterns rather than an understanding of the future.
- Lack of understanding of intentions and will: AI cannot correctly analyze "political will" or "social psychology."
- Tendency for short-term optimization: AI often seeks immediate efficiency rather than the achievement of long-term goals.
Big questions
For this reason, as mentioned in a discussion at Carnegie, the use of Artificial Intelligence at the strategic level faces serious difficulties with questions such as:
- Is escalating attacks to our advantage, or will it lead to greater adversary cohesion?
- Will a specific military action act as a deterrent or, conversely, intensify the crisis?
- What level of intensity is optimal and manageable?

These are not questions to which AI can provide reliable answers.

Iran and "natural intelligence" at the strategic level
Conversely, what is observed in Iran's behavior in this war is a type of "natural intelligence" at the strategic level. This intelligence is based on a combination of:
- Historical experience,
- Deep knowledge of the regional environment,
- Understanding the opponent's psychology,
- And the ability to manage political and social complexities.

For example, actions such as:

- Managing the level of intensity so it does not spiral out of control,
- Creating economic pressure through the threat or control of the Strait of Hormuz,
- And avoiding the trap of uncontrolled escalation of the conflict

show that decision-making at the strategic level is not based merely on operational data but on a kind of "holistic judgment." This is precisely where the gap between artificial and human intelligence becomes apparent.

The paradox of modern warfare: Tactical victories, strategic deadlock
One of the most significant results of introducing Artificial Intelligence into warfare is the creation of a new paradox: the ability to significantly increase tactical effectiveness without guaranteeing strategic success. In reality, AI can:
- Increase the number of destroyed targets,
- Improve the accuracy of strikes,
- And even reduce operational costs.

But this does not necessarily mean victory in the war.
No substitute for strategy
Historical experience, from Vietnam to Iraq, has shown that firepower and operational precision do not substitute for strategy. If political and strategic goals have not been correctly defined, even the most successful military operations can lead to deadlock.
The danger of the "illusion of control" in AI wars
One of the serious risks posed in such wars is what can be called the "illusion of control." When commanders:
- Have access to more accurate data,
- Can make decisions in shorter timeframes,
- And can monitor operational results in real time,

they may believe they have full control over the battlefield. However, this control is often superficial and limited to the tactical level. At the deeper levels of politics, society, and the psychology of war, unpredictable factors continue to play a decisive role.

The most important lesson
The current war shows that Artificial Intelligence, despite all its capabilities, still operates at a level that can be characterized as "advanced tactical" rather than "strategic." What ultimately determines the course of the war remains in a domain that depends on human judgment, the understanding of complexity, and the ability to manage uncertainty. This is why the observation that "one side is successful at the tactical level but does not have the upper hand at the strategic level" is not a contradiction but an accurate reflection of the nature of war in the age of AI. Ultimately, perhaps the most important lesson of this war is this: in the age of AI, victory belongs not to the one who strikes fastest, but to the one who understands best.