When Henry Kissinger published his essay “How the Enlightenment Ends” in the Atlantic in June 2018, many people were surprised that the elder statesman’s elder statesman had a view on the subject of artificial intelligence. Kissinger had just turned 95. AI was not yet the hot topic it would become after OpenAI released ChatGPT in late 2022.
As Kissinger’s biographer, however, I found it quite natural that the topic of AI gripped his attention. He had, after all, come to public prominence in 1957 with a book about a new and world-changing technology. Nuclear Weapons and Foreign Policy was a book so thoroughly researched that it won the approval even of Robert Oppenheimer, who described it as “extraordinarily well informed, and in this respect quite unprecedented in the field of nuclear armament … scrupulous in its regard for fact, and at once passionate and tough in argument.”
Although as a doctoral student Kissinger had immersed himself in the diplomatic history of early-nineteenth-century Europe, he was keenly aware throughout his career that the eternal patterns of great-power politics were subject to periodic disruption by technological change. Like so many members of his generation who served in World War II, he had seen for himself not only the mass death and destruction that could be inflicted by modern weapons, but also the dire consequences for his fellow Jews of what Churchill had memorably called the “perverted science” of Hitler’s Third Reich.
Contrary to his unwarranted reputation as a warmonger, Kissinger was strongly motivated throughout his adult life by the imperative to avoid World War III—the widely feared consequence if the Cold War between the United States and the Soviet Union turned hot. He understood only too well that the technology of nuclear fission would make another world war an even greater conflagration than World War II. Early in Nuclear Weapons and Foreign Policy, Kissinger estimated the destructive effects of a ten-megaton bomb dropped on New York and then extrapolated that an all-out Soviet attack on the fifty largest U.S. cities would kill between 15 and 20 million people and injure between 20 and 25 million. A further 5 to 10 million would die from the effects of radioactive fallout, while perhaps another 7 to 10 million would become sick. Those who survived would face “social disintegration.” Even after such an attack, he noted, the United States would still be able to inflict comparable devastation on the Soviet Union. The conclusion was obvious: “Henceforth the only outcome of an all-out war will be that both contenders must lose.” There could be no winner in such a conflict, Kissinger argued in his 1957 essay “Strategy and Organization,” “because even the weaker side may be able to inflict a degree of destruction which no society can support.”
Yet Kissinger’s youthful idealism did not make him a pacifist. In Nuclear Weapons and Foreign Policy, he was quite explicit that “the horrors of nuclear war [were] not likely to be avoided by a reduction of nuclear armaments” or, for that matter, by systems of weapons inspection. The question was not whether war could be avoided altogether, but whether it was “possible to imagine applications of power less catastrophic than all-out thermonuclear war.” For if it were not possible, then it would be very hard indeed for the United States and its allies to prevail in the Cold War. “The absence of any generally understood limits to war,” Kissinger warned in “Controls, Inspections, and Limited War,” an essay published in The Reporter, “undermines the psychological framework of resistance to Communist moves. Where war is considered tantamount to national suicide, surrender may appear the lesser of two evils.”
It was on this basis that Kissinger advanced his doctrine of limited nuclear war, as laid out in “Strategy and Organization”:
Against the ominous background of thermonuclear devastation, the goal of war can no longer be military victory as we have known it. Rather it should be the attainment of certain specific political conditions which are fully understood by the opponent. The purpose of limited war is to inflict losses or to pose risks for the enemy out of proportion to the objectives under dispute. The more moderate the objective, the less violent the war is likely to be.
This would necessitate understanding the other side’s psychology as well as its military capability.
At the time, many people recoiled from Kissinger’s seemingly cold-blooded contemplation of a limited nuclear war. Some scholars, such as Thomas Schelling, doubted that escalation, once begun, could be kept within limits; even Kissinger later distanced himself from his own argument. Yet both superpowers went on to build and deploy battlefield or tactical nuclear weapons, following precisely the logic that Kissinger had outlined in Nuclear Weapons and Foreign Policy. Limited nuclear war might not have worked in theory, but military planners on both sides behaved as if it might work in practice. (Indeed, such weapons exist to this day. The Russian government has threatened to use them on more than one occasion since its invasion of Ukraine became bogged down.) The young Kissinger was more right about nuclear weapons than even he knew.
Kissinger never ceased to ponder the implications of technological change in the political realm. In a long-forgotten paper that he wrote for Nelson Rockefeller in January 1968, Kissinger looked ahead to the ways in which computerization might help officials cope with the constantly increasing flow of information generated by U.S. government agencies. As he saw it, senior officials were in grave danger of drowning in data. “The top policy-maker,” he wrote, “has so much information at his disposal that in crisis situations he finds it impossible to cope with it.” Decision-makers needed to be “consistently briefed on likely trouble spots,” Kissinger argued, including potential trouble spots “even when they have not been assigned top priority.” They also needed to be furnished with “a set of action-options … outlin[ing] the major alternatives in response to foreseeable circumstances with an evaluation of the probable consequences, domestic and foreign of each such alternative.”
To achieve such comprehensive coverage, Kissinger acknowledged, would require major investments in programming, storage, retrieval, and graphics. Fortunately, the “hardware technology” now existed to perform all four of these functions:
[W]e can now store several hundred items of information on every individual in the United States on one 2,400-foot magnetic tape. … [T]hird-generation computers are now capable of performing basic machine operation in nanoseconds, i.e., billionths of a second. … [E]xperimental time-sharing systems have now demonstrated that multiple-access capability for large-scale digital computers is possible to allow for information input/output at both the executive and operator stations distributed around the world. … [And] very shortly color cathode ray tube display will be available for computer output.
Later, after his first year in the White House as Richard Nixon’s national security adviser, Kissinger attempted to obtain such a computer for his own use. The CIA denied the request, presumably because Kissinger without a computer was as much as the intelligence community could handle.
Henry Kissinger never retired. Nor did he ever stop worrying about the future of humanity. Such a man was hardly going to ignore one of the most consequential technological breakthroughs of his later life: the development and deployment of generative artificial intelligence. Indeed, the task of understanding the implications of this nascent technology consumed a significant portion of Kissinger’s final years.
Genesis, Kissinger’s final book, was co-authored with two eminent technologists, Craig Mundie and Eric Schmidt, and it bears the imprint of those innovators’ innate optimism. The authors look forward to the “evolution of Homo technicus—a human species that may, in this new age, live in symbiosis with machine technology.” AI, they argue, could soon be harnessed “to generate a new baseline of human wealth and wellbeing … [that] would at least ease if not eliminate the strains of labor, class, and conflict that previously have torn humanity apart.” The adoption of AI might even lead to “profound equalizations … across race, gender, nationality, place of birth, and family background.”
Nevertheless, the eldest author’s contribution is detectable in the series of warnings that are the book’s leitmotif. “The advent of artificial intelligence is,” the authors observe, “a question of human survival. … An improperly controlled AI could accumulate knowledge destructively. … The convulsions that will soon bend the collective reality of the planet … mark a fundamental break from the past.” Here, rephrased for Genesis but immediately recognizable, is Kissinger’s original question from his 2018 Atlantic essay “How the Enlightenment Ends”:
[AI’s] objective capacity to reach new and accurate conclusions about our world by inhuman methods not only disrupts our reliance on the scientific method as it has been pursued continuously for five centuries but also challenges the human claim to an exclusive or unique grasp of reality. What can this mean? Will the age of AI not only fail to propel humanity forward but instead catalyze a return to a premodern acceptance of unexplained authority? In short: are we, might we be, on the precipice of a great reversal in human cognition—a dark enlightenment?
In what struck this reader as the book’s most powerful section, the authors contemplate a deeply troubling AI arms race. “If … each human society wishes to maximize its unilateral position,” the authors write, “then the conditions would be set for a psychological death-match between rival military forces and intelligence agencies, the likes of which humanity has never faced before. Today, in the years, months, weeks, and days leading up to the arrival of the first superintelligence, a security dilemma of existential nature awaits.”
If we are already witnessing “a competition to reach a single, perfect, unquestionably dominant intelligence,” then what are the likely outcomes? The authors envision six scenarios, by my count, none of them enticing:
1. Humanity will lose control of an existential race between multiple actors trapped in a security dilemma.
2. Humanity will suffer the exercise of supreme hegemony by a victor unharnessed by the checks and balances traditionally needed to guarantee a minimum of security for others.
3. There will not be just one supreme AI but rather multiple instantiations of superior intelligence in the world.
4. The companies that own and develop AI may accrue totalizing social, economic, military, and political power.
5. AI might find the greatest relevance and most widespread and durable expression not in national structures but in religious ones.
6. Uncontrolled, open-source diffusion of the new technology could give rise to smaller gangs or tribes with substandard but still substantial AI capacity.
Kissinger was deeply concerned about scenarios such as these, and his effort to avoid them did not end with the writing of this book. It is no secret that the final effort of his life—which sapped his remaining strength in the months after his 100th birthday—was to initiate a process of AI arms limitation talks between the United States and China, precisely in the hope of averting such dystopian outcomes.
The conclusion of Genesis is unmistakably Kissingerian:
As AI accelerates the timeline of evolution beyond comprehension, humanity will become divided into warring factions. We foresee this struggle as one between the present and the future. There will be those who wish to keep humanity fixed at our current stage, and whose preference would be to use technology to progress but not to become beholden to it. As the dividing line appears to draw too near for some, individuals in this group could resort to sabotage or possibly terrorism in an attempt to spark a global exile to our simpler past. Another faction, perhaps overconfident in their creation’s capacities or their own, will uninhibitedly seek to accelerate us into an uncertain future. Crises unsurvivable without higher aid will engulf us.
The technologist’s habitual response to such forebodings is to remind us of the tangible benefits of AI, which are already very obvious in the realm of medical science. I do not disagree. In my view, AlphaFold—a neural-network-based model that predicts three-dimensional protein structures—was a far more important breakthrough than ChatGPT. Yet medical science made comparable advances in the twentieth century. The world wars and the Holocaust nevertheless occurred, even as antibiotics, new vaccines, and countless other therapeutics were discovered and made widely available.
The central problem of technological progress manifested itself in Henry Kissinger’s lifetime. Nuclear fission was discovered in Berlin by two German chemists, Otto Hahn and Fritz Strassmann, in 1938. It was explained theoretically (and named) by the Austrian-born physicists Lise Meitner and her nephew Otto Robert Frisch in 1939. The possibility of a nuclear chain reaction leading to “large-scale production of energy and radioactive elements, unfortunately also perhaps to atomic bombs” was the insight of the Hungarian physicist Leó Szilárd. The possibility that such a chain reaction might instead be harnessed in a nuclear reactor to generate heat was recognized at the same time. Yet it took little more than five years to build the first atomic bomb, whereas a reactor did not generate electricity until 1951, and the world’s first nuclear power station did not open until 1954.
Ask yourself: Which have human beings built more of in the past eighty years, nuclear warheads or nuclear power stations? Today there are approximately 12,500 nuclear warheads in the world, and the number is rising as China adds rapidly to its arsenal. By contrast, there are 436 nuclear reactors in operation. Nuclear electricity generation peaked in absolute terms in 2006, and nuclear power’s share of total world electricity production fell from 15.5% in 1996 to 8.6% in 2022, partly as a result of political overreactions to a small number of nuclear accidents whose impacts on human health and the environment were negligible compared with the effects of carbon dioxide emissions from fossil fuels.
The lesson of Henry Kissinger’s lifetime is clear. Technological advances can have both benign and malign consequences, depending on how we collectively decide to exploit them. Artificial intelligence is of course different from nuclear fission in a host of ways. But it would be a grave error to assume that we shall use this new technology more for productive than for potentially destructive purposes.
It was this kind of insight, born of historical as well as personal experience, that inspired Henry Kissinger to devote so much of his life to the study of world order, and the avoidance of world war. It was what made him react with such alacrity—and concern—to the recent breakthroughs in artificial intelligence. And it is why this posthumous publication is as important as anything he wrote in the course of his long and consequential life.
Oxford, July 2024