By Xu Weidi, Former Researcher at the Strategic Studies Institute, National Defense University
Recently, an American friend shared the article "Artificial Intelligence and Nuclear Stability," which inspired deep reflection. Below are some questions and thoughts for the authors, Michael Depp and Paul Scharre.
1. "Don’t Hand Nuclear Weapons to AI"—Is This Really the Case?
Recently, many in the U.S. and the West have insisted: "Don’t hand nuclear weapons to AI!" Yet, after reading the article, it seems the authors do not entirely reject AI in nuclear systems. They argue that proper AI integration could enhance stability, while misuse could undermine it.
The U.S. emphasizes "human-in-the-loop" controls for AI in nuclear operations. However, the authors acknowledge varying degrees of human involvement—active vs. passive—raising questions about the concept’s clarity. Can "human-in-the-loop" truly prevent AI errors?
Humans err, and AI, as a human-designed system, can err too. Why assume that human-AI collaboration (human × AI) is infallible? True intelligence lies in self-correction, a hallmark of advanced AI.
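To make the point concrete with purely illustrative numbers: if the AI errs with probability p_a and the human in the loop fails to catch that error with probability p_h, the combined system still produces an uncaught error with probability of roughly p_a × p_h. Taking, say, p_a = 1% and p_h = 10% gives one undetected failure per thousand decisions: smaller than either factor alone, but never zero.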
The authors suggest AI’s role in nuclear targeting, command, and communications is inevitable. If so, the Western stance against AI in nuclear systems may be disingenuous. Both the U.S. and Russia are integrating AI into nuclear weapons—just in different ways.
2. Safe vs. Unsafe AI in Nuclear Systems
Depp and Scharre argue that Russia's AI-nuclear integration (e.g., the "Perimeter"/"Dead Hand" system and the "Poseidon" nuclear torpedo) is destabilizing and should be banned. They endorse as responsible the U.S. approach of using AI for image recognition, targeting, and decision support.
While the U.S. leads in AI (machine learning, big data, generative AI), Russia has adapted based on its technological constraints. If the authors were Russian, might they argue: "Different paths can achieve security—why must all follow the U.S.?"
The authors express doubts about their own recommendations: How deep should "human-in-the-loop" go? Should all algorithms and data be human-reviewed? Is accepting AI-generated nuclear strategies equivalent to handing control to AI? They deem "Poseidon" riskier than U.S. systems like "Trident D-5," but such debates are likely unresolvable between adversarial experts.
3. Moving Beyond Nuclear Deterrence
Nuclear safety depends less on AI than on strategic relations. If two states view each other as existential threats, no amount of "correct" AI use will ensure stability. The real issue lies with policymakers—not whether humans are "in," "on," or "outside" the loop.
Depp, Scharre, and many in the U.S. and Russia remain trapped in a deterrence paradox: fearing nuclear war while relying on the threat of it. Cold War stability emerged from mutual restraint; today, NATO's expansion and Russia's nuclear threats heighten the risks. As one retired U.S. general privately asked: if Russia used a tactical nuclear weapon in Ukraine, would the U.S. retaliate against Moscow?
The solution lies in policy, not technology. If all nuclear states adopted China’s "No First Use" (NFU) policy, arsenals and alert levels could shrink, reducing accidental or unauthorized launches.
China's NFU means that retaliation follows any nuclear strike, whether ordered, accidental, or AI-caused. Past U.S. "mistakes," such as the 1999 bombing of the Chinese embassy in Belgrade, would not be tolerated in the nuclear realm. The best way to avoid catastrophic errors is to restrict nuclear use to retaliation only.
The views expressed are the author’s alone and do not represent any institution. Any errors are his own.