Publications
INSS Insight No. 2116, March 19, 2026
In recent years, the integration of artificial intelligence into defense systems has evolved from a decision-support tool into a strategic infrastructure shaping the conduct of war. Within this trend, the Pentagon’s adoption of the AI-First doctrine marks a significant conceptual shift from the limited integration of artificial intelligence systems to a systemic approach in which AI becomes a foundational component in the chain of command, in intelligence collection and analysis, and in the planning of multi-theater operations. This article examines this new American doctrine and its implications for the nature of warfare in the algorithmic age. It then presents a recent case study—the use of AI systems during the war between the United States and Iran—which demonstrates how these concepts are being applied in practice on the battlefield. Finally, it presents policy implications for Israel, pointing to the need to move from an approach focused on developing discrete AI technologies to a systemic approach in which AI is integrated throughout the defense establishment. In addition, Israel needs to strengthen strategic cooperation with the United States in this field, alongside shaping governance frameworks and standards for the responsible use of AI in military systems.
The Pentagon’s AI-First Doctrine
The US Department of War has in recent months adopted a strategic concept aimed at making artificial intelligence a central pillar of military activity. The AI-First doctrine rests on the assumption that strategic advantage in future wars will derive in large part from states’ ability to integrate advanced algorithms into the core of military decision-making systems.
According to the Pentagon’s AI strategy, competition in this field is seen as part of the broader geostrategic competition among great powers. Within this framework, the United States seeks to preserve and even expand what the document defines as Military AI Dominance—military superiority grounded in the combination of technological innovation, operational data, and an advanced civilian AI industry. Accordingly, the strategy instructs the US defense branches to become an “AI-based warfighting force” by accelerating experimentation with advanced models, removing bureaucratic barriers to the integration of new technologies, and prioritizing asymmetric advantage in the areas of data and computing power.
The document emphasizes that the United States has unique structural advantages in this field, including a leading innovation ecosystem, an advanced technology industry, capital markets that support the development of breakthrough technologies, and operational data repositories accumulated over decades of military and intelligence activity. The integration of these advantages is intended to enable the United States to outpace its rivals in the algorithmic arms race.
A central component of this concept is the integration of AI into the operational decision-making process—from intelligence processing to the planning of complex combat operations. As part of the strategic roadmap, several leading “Pace-Setting Projects” were defined to demonstrate the new pace at which AI technology is being implemented. These projects include AI-based battle management systems, the development of capabilities for coordinating swarms of unmanned systems, and the extensive use of AI-based operational simulations for planning military campaigns.
Implementation of the concept is not limited to strategic declarations alone. The Pentagon has begun deploying dedicated platforms for the use of artificial intelligence within the defense establishment, such as GenAI.mil—a secure platform that enables the integration of generative models and analytics tools on both classified and unclassified networks. This move is intended to expand access to AI tools to millions of military personnel and government employees and to embed AI capabilities in the daily work processes of the defense system.
The doctrine reflects an understanding that the speed of information processing and the shortening of decision cycles—from sensor to commander—will become decisive factors in future conflicts. In this context, AI is seen as a force multiplier that makes it possible to cope with the growing information overload on the modern battlefield.
From Intelligence Support to Operational Acceleration
The integration of AI in defense systems initially took root mainly in predictive maintenance, intelligence analysis, and administrative support. Under the AI-First concept, however, the role of these systems is expanding, and they are becoming tools that accelerate operational processes. Advanced models can now synthesize vast quantities of data from a variety of sensors, intelligence systems, and open-source information and produce real-time insights from them. These capabilities allow commanders to prioritize targets, examine different operational scenarios, and conduct situation assessments significantly faster than traditional human analytical processes allow. This development changes the nature of military decision-making: instead of serving solely as analytical support, AI becomes an active component that enhances the planning and management of complex combat operations.
Case Study: The Use of AI in the Conflict with Iran
The confrontation between the United States and Iran provides a tangible example of the translation of the AI-First concept into operational activity. During strikes against Iranian targets, it was reported that the US military used AI systems—including Anthropic’s large language model, Claude—for intelligence analysis, target identification, and the running of operational simulations. According to reports in the American media, US Central Command (CENTCOM) integrated the model alongside conventional weapons systems, including Tomahawk missiles, stealth aircraft, and AI-based drones. The system helped process, in real time, large volumes of data received from various sensor systems, thereby shortening the time required for intelligence analysis and the generation of operational insights. AI was also used to run “what if” scenarios, enabling operation planners to examine different courses of action within a relatively short time. These capabilities underscore the potential of AI to accelerate decision-making in complex combat situations.
Between the Battlefield and Silicon Valley: The Ethical-Legal Dispute
The accelerated adoption of AI systems in the US defense establishment has been accompanied by significant disputes between the government and technology companies. Anthropic, which supplied the model used in combat operations, opposed some of the Pentagon’s demands to remove safety mechanisms related to uses such as autonomous weapons and large-scale surveillance systems—and in particular refused to remove all safety measures so that the model would be available to the military for any lawful use. The company argued that AI systems are not yet reliable enough to operate fully autonomous weapons and that using AI for mass surveillance of civilians is neither morally legitimate nor consistent with regulatory norms; it therefore drew a “red line” at these demands. The Pentagon, by contrast, issued an ultimatum demanding the removal of those restrictions and, in an especially unusual step, even threatened to designate Anthropic as a “supply chain risk” if it did not comply.
These disputes reflect broader tensions between national security considerations and ethical, legal, and governance-related issues in the field of AI. Within the technology industry itself, internal discussions have developed, along with opposition by employees to certain military uses of AI systems.
Strategic Implications: Toward Algorithmic Warfare
The use of AI systems in military conflicts marks a new stage in the development of modern warfare. Whereas AI systems once served primarily as support tools, they are now becoming force multipliers that enable information processing at a scale and speed impossible for human systems alone. At the same time, this integration also raises complex questions regarding responsibility, oversight, and legal frameworks. As AI systems increasingly influence operational decisions—including decisions concerning the use of force—adjustments will be required in international law, rules of engagement, and accountability mechanisms.
Policy and Security Implications for Israel
The American experience shows that the systematic integration of AI into the core of military activity changes the rules of the game on the battlefield. For Israel, which has a significant advantage in the field of defense innovation, this context carries several strategic implications.
First, there is a need to move from an approach centered on the development of discrete AI technologies to a systemic approach similar to the American AI-First doctrine, in which artificial intelligence is systematically integrated into the chain of command, intelligence processing, and the planning of multi-theater operations. Such a transition requires deeper integration among the defense community, the defense industries, and the civilian high-tech sector, alongside investment in data infrastructure and advanced computing power.
Second, in light of the global acceleration of the AI arms race, Israel must strengthen its strategic cooperation with the United States in this field. Such cooperation can include research and development, integration between operational AI systems, and a deepening of the strategic dialogue surrounding the responsible use of artificial intelligence in military systems.
Finally, alongside the operational advantages, the integration of AI systems into warfare also raises complex legal and ethical issues. Israel, which stands at the forefront of coping with both security and technological threats, can play a significant role in shaping governance frameworks and international standards for the responsible use of AI in defense systems while preserving its technological and operational edge.
Conclusion
The Pentagon’s AI-First doctrine reflects a profound change in the American concept of warfare. AI is no longer perceived as a supplementary technological tool but as a strategic infrastructure shaping the way militaries plan and conduct campaigns. The confrontation with Iran demonstrates how this concept is beginning to be realized in practice. As artificial intelligence is integrated more deeply into the chain of command, intelligence synthesis, and operational planning, a new reality is taking shape in which the boundary between human judgment and algorithmic decision support becomes more dynamic and flexible. For Israel, this represents both an opportunity and a challenge: it must not only develop discrete AI capabilities but also adopt a systemic approach, integrate intelligence, command, and combat systems, and build an advanced infrastructure of data and computing power. At the same time, Israel must strengthen strategic cooperation with the United States and advance its influence on international standards for the safe use of military AI. In this way, Israel will be able to preserve its technological and operational advantage while maintaining legitimacy and preparing for an era in which AI becomes a strategic force multiplier on the battlefield.
In any case, the way relations evolve among defense institutions, policymakers, and the technology industry will largely determine the rules of the game in the age of AI-based warfare.
