Copyright © The Diplomat

Robert Jervis’s offense-defense balance theory argues that a technology’s strategic impact depends on whether it makes attacking or defending easier relative to an opponent. For example, tanks and fast-moving blitzkrieg tactics once gave attackers the upper hand by overwhelming static defenses. Today, artificial intelligence (AI) raises similar questions. Many assume that AI automatically benefits aggressors, but its military impact depends on how states choose to develop, deploy, and interpret it.

There is ongoing debate over whether AI will revolutionize the character of warfare or merely augment existing capabilities. While definitive conclusions are difficult given rapid technological change and limited transparency, AI is clearly transforming the conduct of war. Faster decision cycles, expanded force projection, and new human-machine dynamics are already altering the character of conflict.

By integrating real-time data from drones, satellites, and cyber systems, AI compresses decision-making from minutes to seconds. U.S. tools like the Army’s FIRESTORM targeting system and the Air Force’s Advanced Battle Management System (ABMS) offer troops a speed advantage, but they also raise risks. Compressed timelines reduce the space for diplomacy or verification, increasing the chance that misperceptions – such as mistaking surveillance for aggression – could spark unintended conflict before human intervention is possible.

Beyond speed, AI-powered unmanned systems – whether in the air, on the surface, or underwater – are significantly enhancing military reach and firepower. When integrated with human forces, these autonomous platforms enable longer missions, greater coverage, and more effective coordination on the battlefield. This added scale and efficiency can provide a major edge in conflicts between evenly matched powers.
Importantly, as these technologies become more affordable and widely available, smaller or less advanced militaries may also adopt them, potentially reshaping global power dynamics and making high-tech warfare more accessible. Moreover, as AI systems assume more battlefield roles, militaries are relying less on human soldiers for dangerous missions, reducing casualties and accelerating a shift toward post-heroic warfare. Yet this growing dependence on machines raises serious ethical concerns about accountability and the morality of delegating life-and-death decisions to algorithms.

AI is reshaping military power, but its ultimate effect on the offense-defense balance remains unclear. Jervis’s framework suggests that the key question isn’t who has superior technology, but how easily each side can achieve its objectives – whether through attack or defense – relative to its opponent. AI could tip the balance either way: it may fuel escalation by enabling faster, cheaper strikes, or bolster stability by improving surveillance and defense.

China’s unmanned vessel, the Zhu Hai Yun, highlights the blurred line between offense and defense in AI-enabled warfare. Designed to deploy more than 50 autonomous aerial, surface, and underwater vehicles, the Zhu Hai Yun can strengthen maritime surveillance and anti-submarine defenses – but it could just as easily be used to launch coordinated drone swarms and project force into contested waters. This example shows that the strategic impact of such systems isn’t fixed. Whether the Zhu Hai Yun shifts the balance toward offense or defense depends not only on how China employs it, but also on how rivals adapt – through counter-drone technologies, stealth platforms, or cyber defenses. As Jervis argued, the offense-defense balance is not determined by capabilities alone, but by how those capabilities interact – making it a constantly evolving equation.

Another crucial factor is whether AI-enabled systems are optimally deployed.
Autonomous weapon systems can favor defense if they’re hardened against cyberattacks, built with trusted supply chains, and retain meaningful human oversight. But if left vulnerable – through compromised hardware, data poisoning, or poor cyber hygiene – they may expose weaknesses that adversaries can exploit. In such cases, the offensive side gains the upper hand by targeting flaws in AI systems rather than confronting their strengths. Both China and the United States are acutely aware of this. China is working toward technological self-sufficiency and expanding quantum-secured networks; the United States has banned Chinese-made drones and is investing in trusted microelectronics. These efforts reflect a shared understanding: AI’s strategic impact on the offense-defense balance depends not just on what it can do, but on how safely and reliably it can be fielded.

Perceptions also matter. Both Chinese and U.S. defense planners see AI not just as a technological step forward, but as a pathway to future military dominance. China has described AI as ushering in an “intelligentized” era of warfare, prompting doctrinal shifts akin to past responses to the nuclear and digital revolutions. U.S. officials similarly emphasize rapid dominance via autonomous systems. But this shared belief in AI’s offensive potential risks fueling an action-reaction cycle that could reduce crisis stability – especially if both sides come to believe they must strike first to prevail.

Historically, military technologies that favored offense did not always lead to more war. Sometimes, the fear of destabilization spurred diplomatic restraint. A similar pattern may be emerging today. Concerns about AI’s potential to lower the threshold for conflict have led to international efforts aimed at managing risk. Since 2014, the United Nations has hosted recurring discussions on lethal autonomous weapon systems. In 2023, the United States and China held high-level talks on AI safety.
Meanwhile, researchers and civil society leaders are building informal “Track 2” channels to explore cooperative approaches to AI governance. Still, no binding agreements exist. An intensifying arms race, especially between the U.S. and China, continues to push investment in increasingly advanced and potentially destabilizing systems.

This makes it essential to approach military AI innovation with caution. While AI’s strategic impact remains uncertain, taking time to build trust, strengthen oversight, and encourage cooperation could help prevent unintended escalation.