Henry Kissinger and Eric Schmidt on the Perils of Military Brinkmanship in the Age of AI

In ways large and small, artificial intelligence (AI) has become ubiquitous. Search engines, maps and social media are analyzing our histories to make predictions and tailor relevant responses. Text and email applications, which know the words and phrases we use most often, are trying to complete our sentences. AI programs like AlphaGo and AlphaZero are winning games—in their cases, Go and chess—by playing themselves and, in so doing, developing their own not-quite-human concepts of the games. At MIT, an AI program discovered a new antibiotic by identifying patterns in data that humans did not—or possibly never could. Struck by these and other breakthroughs, the all-star team of former Secretary of State Henry Kissinger, Schmidt Futures co-founder and former Google CEO Eric Schmidt and MIT Schwarzman College of Computing Dean Daniel Huttenlocher has come together to analyze AI—how it evolved; where it is now; and where, eventually, it will take us. In this excerpt from their book, The Age of AI: And Our Human Future (Little, Brown & Company), they consider the transformations AI will impose on the planning, preparation and conduct of war. They conclude that humans remain essential to the equation.


Throughout history, a nation's political influence has correlated with its military power—in other words, its ability to inflict damage on other societies. But equilibrium based on military power is not static. It relies on consensus—importantly, among all the members of the international system—about what constitutes power, what renders power legitimate and which members have both the capability and the intention to use their power to try to impose their will. When members diverge on the nature of the power that defines their equilibrium, they risk conflict, especially conflict born of miscalculation.

In recent years, the building of consensus has been complicated by the advent of cyber weapons. Because cyber weapons have civilian applications, even their status as weapons is ambiguous. In some cases, their contribution to the exercise or augmentation of power derives from the refusal of the nations that possess them to acknowledge their existence or, at the very least, their full range of capabilities. So traditional strategic verities—what constitutes conflict or its belligerents; what rivals can do or how quickly they can do it—do not translate directly to the digital world. Indeed, a central paradox of the age is that the more digital a society is, the more vulnerable it is. Communications networks, power plants, electricity grids, financial markets, universities, hospitals, airlines and public transit systems—even the mechanics of democratic politics—have come to rely on systems that are, to varying degrees, vulnerable to manipulation or attack. As a result, advanced economies—the greatest users of servers and cloud systems—have become richer sets of targets. Conversely, in the event of digital disruption, a low-tech state, a terrorist organization or even an individual attacker may assess that it has less to lose.

No major cyber actor, governmental or nongovernmental, has disclosed the full range of its capabilities or activities—not even to deter the actions of others. So strategy and doctrine have evolved in the shadows, even as new capabilities have emerged. Now, artificial intelligence (AI) is mapping onto already-complex cyber weaponry, further complicating the making of strategy and threatening to push it beyond human intention—or even human comprehension.

War has always been uncertain and contingent. But it has also been guided by a single logic, and by a single set of limitations: those of humans. AI, by contrast, is powered by algorithms that can identify patterns and make predictions beyond those humans can discern. Consequently, AI is able to solve problems or develop strategies that humans alone have not and perhaps cannot. In gaming, the achievements of AlphaGo and AlphaZero—Google DeepMind programs that mastered Go and chess by playing themselves, then defeated human experts by employing strategies that surprised, even befuddled, those experts—have proven this principle. In the security realm, it is possible, even probable, that the mapping of AI onto the planning for or simulation of war will yield similarly surprising results. If militaries and security services that train or partner with AI achieve insights and influence they did not expect, they will confront the question: should that which surprises also unsettle?

China's 19-year-old Go player Ke Jie prepares to make a move during the second match against Google's artificial intelligence program AlphaGo in Wuzhen, in eastern China's Zhejiang province, on May 25, 2017. STR/AFP/Getty

If AI consumes data, learns from it and responds to it by adapting and evolving, even the countries creating or wielding AI-designed or AI-operated weapons may not know exactly how—or how powerfully—those weapons will behave. How does one develop strategy—offensive or defensive—for something that perceives aspects of the environment humans may not perceive, or may not perceive as quickly, and that can learn and change through processes that, in some cases, exceed the pace or range of human thought?

AI's potential defensive functions operate on several levels and may soon prove indispensable. Already, AI-piloted fighter jets have shown a substantial ability to dominate human pilots in simulated dogfights. Using some of the same general principles that enabled AlphaZero's victories in chess and the discovery of the antibiotic halicin, AI may identify patterns of conduct that even an adversary did not plan or notice, then recommend methods to counteract them.

In a traditional conflict, the psychology of the adversary is a critical focal point at which strategic action is aimed. AI knows only its instructions and objectives, not morale or doubt. But because it adapts in response to the phenomena it encounters, if two AI weapons systems are deployed against each other, neither is likely to predict precisely the results their interaction will generate or their collateral effects. They may discern only imprecisely their respective capabilities and, relatedly, their mutual penalties for entering into conflict. For engineers and builders, these limitations may put a premium on speed, breadth of effects and endurance—attributes that may make conflicts more intense and widely felt, and, above all, more unpredictable.

But the most unpredictable effect may occur at the point where AI and human intelligence encounter each other. Historically, countries planning for battle have been able to understand, if imperfectly, their adversaries' doctrines, tactics and psychologies. This has permitted the development of adversarial strategies and tactics, as well as a symbolic language of demonstrative military actions, such as intercepting a plane on a border or a boat in a contested waterway. Yet where a military uses AI to plan or target—or even to assist dynamically during a patrol or conflict—familiar concepts and interactions may become newly strange as they come to involve communication with, and interpretation of, an "intelligence" that is unfamiliar in its methods and tactics.

Fundamentally, the shift to AI and AI-assisted weapons and defense systems involves a measure of reliance on—and, in extreme cases, delegation to—an intelligence of considerable analytic potential operating on a fundamentally different experiential paradigm. Such reliance will introduce unknown or poorly understood risks. For this reason, human operators must be involved in and positioned to monitor and control AI actions that have potentially lethal effects. Even if this does not prevent every error, it will at least ensure moral agency and accountability.

The deepest challenge, however, may be philosophical. If aspects of strategy come to operate in conceptual and analytical realms that are accessible to AI but not to human reason, they will become opaque—in their processes, reach and ultimate significance. If policy makers conclude that AI's assistance in scouring the deepest patterns of reality is necessary to understand the capabilities and intentions of adversaries (who may field their own AI) and to respond to them in a timely manner, the delegation of critical decisions to machines may become inevitable. Societies are likely to reach differing instinctive limits on what to delegate and what risks and consequences to accept. Major countries should not wait for a crisis to initiate a dialogue about the implications—strategic, doctrinal and moral—of these evolutions. If they wait, the impact of these evolutions is likely to be irreversible. An international attempt to limit these risks is imperative.

U.S. Marine Corps Lance Cpl. Skyler Stevens, an infantryman with 3rd Battalion, 4th Marine Regiment, 1st Marine Division, uses new night optics technology during Urban Advanced Naval Technology Exercise 2018 (ANTX-18) at Marine Corps Base... Lance Cpl. Rhita Daniel/U.S. Marine Corps

The quest for reassurance and restraint will have to contend with the dynamic nature of AI. Once they are released into the world, AI-facilitated cyber weapons may be able to adapt and learn well beyond their intended targets; indeed, the very capabilities of the weapons may change as AI reacts to circumstances. If weapons are able to change in ways different in scope or kind from what their creators anticipated or threatened, calculations of deterrence and escalation may become illusory. Because of this, the range of activities an AI is capable of undertaking, both in the initial design phase and in the deployment phase, may need to be limited so that a human retains the ability to monitor and to turn off or redirect a system that has begun to stray. To avoid unexpected and potentially catastrophic outcomes, such restraints must be reciprocal.

Limitations on AI and cyber capabilities will be challenging to define, and proliferation will be difficult to arrest. Capabilities developed and used by major powers have the potential to fall into the hands of terrorists and rogue actors. Likewise, smaller nations that do not possess nuclear weapons and have limited conventional weapons capability have the capacity to wield outsize influence by investing in leading-edge AI and cyber arsenals.

Inevitably, countries will delegate discrete, non-lethal tasks to AI algorithms (some operated by private entities), including the performance of defensive functions that detect and prevent intrusions in cyberspace. The "attack surface" of a digital, highly networked society will be too vast for human operators to defend manually. As many aspects of human life shift online, and as economies continue to digitize, a rogue actor could deploy an AI cyber weapon to disrupt whole sectors. Countries, companies and even individuals should invest in fail-safes to insulate themselves from such scenarios.

The most extreme form of such protection will involve severing network connections and taking systems off-line. For nations, disconnection could become the ultimate form of defense. Short of such extreme measures, only AI will be capable of performing certain vital cyber defense functions, in part because of the vast extent of cyberspace and the nearly infinite array of possible actions within it. The most significant defensive capabilities in this domain will therefore likely be beyond the reach of all but a few nations.

Beyond AI-enabled defense systems lies the most vexing category of capabilities—lethal autonomous weapons systems—generally understood to include systems that, once activated, can select and engage targets without further human intervention. The key issue in this domain is human oversight and the capability of timely human intervention.

An autonomous system may have a human "on the loop," monitoring its activities passively, or "in the loop," with human authorization required for certain actions. Unless restricted by mutual agreement that is observed and verifiable, the latter form of weapons system may eventually encompass entire strategies and objectives—such as defending a border or achieving a particular outcome against an adversary—and operate without substantial human involvement.

In these arenas, it is imperative to ensure an appropriate role for human judgment in overseeing and directing the use of force. Such limitations will have little meaning if they are adopted unilaterally—by one nation or a small group of nations. Governments of technologically advanced countries should explore the challenges of mutual restraint supported by enforceable verification.

AI increases the inherent risk of preemption and premature use escalating into conflict. A country fearing that its adversary is developing autonomous capabilities may seek to preempt it: if the attack "succeeds," there may be no way to know whether it was justified. To prevent unintended escalation, major powers should pursue their competition within a framework of verifiable limits. Negotiation should focus not only on moderating an arms race but also on making sure that both sides know, in general terms, what the other is doing. But both sides must expect (and plan accordingly) that the other will withhold its most security-sensitive secrets. There will never be complete trust. But as nuclear arms negotiations during the Cold War demonstrated, that does not mean no measure of understanding can be achieved.

For all their benefits, the arms control treaties (and accompanying mechanisms of communication, enforcement and verification) that came to define the nuclear age were not historical inevitabilities. Humans brought them about—by recognizing not only our mutual peril but also our mutual responsibility.



Excerpted from The Age of AI by Henry A. Kissinger, Eric Schmidt, and Daniel Huttenlocher. Copyright © 2021. Available from Little, Brown & Company, an imprint of Hachette Book Group, Inc.

About the writers

Henry A. Kissinger, Eric Schmidt and Daniel Huttenlocher