
This article appears in the September 8, 2023 issue of Executive Intelligence Review.

John von Neumann and Hiroshima

Don’t Let AI Play Games with Nuclear Bombs


I once asked him, when he knew he was close to his final hour and that it was tormenting him greatly: “You contemplate the elimination of millions of individuals without a qualm, but you can’t admit your own death.” He replied, “That’s not the point.”

John von Neumann (1903–1957). Credit: LANL

The recent release of Christopher Nolan’s film Oppenheimer, about the man who led the Los Alamos scientists of the Manhattan Project from 1942 to 1945 to build the atomic bomb, reminds us, probably not coincidentally, that humanity today once again faces the specter of open warfare between nuclear powers.

A seemingly unrelated event hit the headlines a few months ago: a father suffering from eco-anxiety committed suicide after a 6-week “dialogue” with ELIZA, an online generative artificial intelligence chatbot with whom he had allegedly fallen in love.

These two distinct cases undoubtedly reflect a mental illness afflicting our society. Have we lost our sense of what makes us human? Let’s not make the mistake of blaming this on the technologies that mankind has developed through science, and which it needs to ensure the sustainable existence of future generations, whether it be the use of nuclear energy or information technology. The fault lies not in our tools, nor even “in our stars,” but in ourselves. We are increasingly neglecting our most precious asset: our natural intelligence. How did this happen?

To help answer this question, let’s take a look at a man who was, by all appearances, exceptionally intelligent, but who is little known to the general public despite having had a profound impact on the history of both the bomb and the computer in the last century: John von Neumann (1903–1957). In doing so, we’ll also fill in a gap in the Oppenheimer film: von Neumann never appears in it, even though he played a leading role in the Manhattan Project.

A very interesting French-language biography by Ananyo Bhattacharya, John von Neumann, l’homme qui venait du futur (John von Neumann, the Man Who Came from the Future), published at the beginning of this year, will serve as a reference. While we don’t entirely share the author’s view of the man he calls “one of the greatest geniuses of the century,” we will show how von Neumann’s influence poses a problem for science and for our way of thinking.

Can Intelligence Be Artificial?

Shouldn’t we start by asking ourselves what this mysterious notion of intelligence is? Or, if that’s too difficult a question, what about stupidity? For reasons that should become clear in what follows, it is probably impossible to give a complete formal definition of intelligence. But much of what has contributed to the confusion surrounding this notion in the minds of most people stems from the discussions that have taken place over the last 80 years around what is known as “artificial intelligence.”

Basically, what does artificial intelligence do? It collects a large amount of data, according to a predefined protocol; it processes this data, according to another predefined protocol; and based on this, it performs a certain number of actions, according to a protocol just as predefined as the previous ones.
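
To make this concrete, here is a minimal sketch in Python of such a chain of predefined protocols. All the names and rules in it (collect, process, act, and the little table of responses) are invented for illustration; the point is simply that every step is fixed in advance, and nothing in the loop can question the rules it was given.

    # A fixed "collect, process, act" chain; every step follows a rule laid down in advance.
    RULES = {"cat": "label as cat", "dog": "label as dog"}  # predefined protocol

    def collect(source):
        # Step 1: gather data according to a predefined protocol
        return [item.strip().lower() for item in source]

    def process(items):
        # Step 2: process the data according to another predefined protocol
        return [RULES.get(item, "no action") for item in items]

    def act(decisions):
        # Step 3: perform the resulting actions, just as predefined
        for decision in decisions:
            print(decision)

    act(process(collect([" Cat", "dog ", "parrot"])))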

We all remember a number of spectacular technical achievements, such as the victory over a Go player by a machine that had “learned” to play by itself; the ability of a machine to recognize a cat when presented with a picture of one; or the production of texts or speeches by chatbots like ELIZA or ChatGPT that look almost exactly like those coming from real human beings.

But is all this really intelligent? The fact that a machine can outperform us in the mechanical tasks for which we built it is nothing new in the history of mankind: Otherwise, it’s hard to see what point there would have been in building machines in the first place.

Alan Turing (1912–1954). Passport photo.

Today, experts are obliged to admit that the famous “Turing test,” formulated in 1950 by Alan Turing, is insufficient to decide whether a machine is intelligent or not. The test consists of a blind dialogue between a human being and a machine: When the human being is no longer able to tell that his or her interlocutor is not a human being, then we could say that the machine’s intelligence has matched ours. But let’s face it: The tragic story of ELIZA’s user doesn’t show that the chatbot is intelligent, but rather that the man’s own ability to think had been led astray by his depression.

Nevertheless, the “philosophy” underlying the Turing test remains. A machine is said to be “intelligent” because, first and foremost, it is capable of “recognizing” something: an image, a speech, a text, etc. It recognizes that the data presented to it belong to a category already represented among the data stored in its memory. So, would intelligence consist in knowing how to classify new experiences among those of the past? That’s what the Turing test seems to suggest.
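
As a toy illustration of this “recognition” paradigm, the following Python sketch implements a nearest-neighbor classifier of the most rudimentary kind (the data points and labels are invented for the example). Whatever new input it is given, it can only ever assign it to a category that already exists in its memory.

    # Toy nearest-neighbor "recognizer": a new point is assigned to whichever
    # remembered category its closest past example belongs to. It can never
    # propose a category that is not already in its memory.
    memory = [
        ((0.9, 0.1), "cat"),
        ((0.8, 0.2), "cat"),
        ((0.1, 0.9), "speech"),
        ((0.2, 0.8), "speech"),
    ]

    def recognize(new_point):
        def distance(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        _, label = min(memory, key=lambda pair: distance(pair[0], new_point))
        return label

    print(recognize((0.85, 0.15)))  # -> cat: the new experience is filed among past ones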

To approach the notion of intelligence, we should probably start with common sense, which tells us that “only fools never change their minds.” We recognize an intelligent individual by his ability to put forward new ideas—which come as a surprise to those around him—and to abandon old ones when they seem wrong. This is obviously the case for those who make scientific discoveries, as they have to test many false hypotheses before finding a good one.

From there, it should be easy to show that an artificial intelligence cannot make a fundamental discovery, and that only a human being is capable of doing so.

Dmitri Mendeleev (1834–1907). Portrait by Ivan N. Kramskoy, 1878.

Let’s take the example of Dmitri Mendeleev (1834–1907), the man who discovered the periodic classification of the chemical elements. If he had simply processed the vast amount of empirical data available at the time, he would never have been able to show that a certain periodicity links the chemical properties of the elements to their atomic mass. The existence of atoms had not yet even been demonstrated! It was precisely Mendeleev’s discovery that made this possible. Moreover, Mendeleev’s hypothesis seemed to be contradicted by the data: The measured atomic masses made argon heavier than potassium, yet the periodicity he was trying to establish required argon to come before potassium.

Mendeleev therefore considered that his hypothesis was correct, and that the prevailing interpretation of the data was misleading; and the outcome proved him right. This decision paved the way for the development of atomic physics, without which we would have neither computers nor artificial intelligence.

No artificial intelligence or other mechanism, knowing only how to process data from the past—as if they were “objective realities”—could make such a discovery. An intelligent human being, on the other hand, is able to recognize the fundamentally subjective character of data, axioms and pre-established rules, and can always imagine something relevant that cannot be deduced from past knowledge.

Von Neumann’s Axiomatic-Deductive Approach

For von Neumann, everything hinged on his passion for mathematics. He was interested in the physical world only insofar as it gave him the opportunity to solve mathematical problems. Thanks to the war, for example, he was able to use his talents to calculate ballistic trajectories. Von Neumann’s principal contribution to the Manhattan Project was the design of the explosive charges or “lenses” that surrounded the core of the implosion-type atomic bomb and, by imploding it, triggered the nuclear explosion.

Von Neumann was also on the target selection committee which chose Hiroshima and Nagasaki as the first targets. He oversaw the computations related to the expected size of the bomb blasts; he calculated death tolls; he calculated the distance above the ground at which the bombs should be detonated for maximum destruction.

The young von Neumann greatly admired the man who reigned over the University of Göttingen’s prestigious mathematics faculty at the beginning of the 20th Century, David Hilbert, who saw the complete axiomatization of mathematics as the great project that would crown his career. Hilbert dreamed of a language built on a finite number of axioms, postulates and rules, from which any mathematical theorem could be demonstrated in a finite number of steps. In this way, he hoped to give mathematics an unshakeable foundation. Von Neumann lent welcome support to this work.

Kurt Gödel (1906–1978). Photo: Alfred Eisenstaedt/Life

Beginning his research program with the specific field of arithmetic, Hilbert announced that its success would depend on proving three propositions he considered fundamental. His dream was shattered in 1931 by the young Kurt Gödel, who showed that the first of these propositions was false (Gödel and Turing would later show that the other two were also false). More precisely, Hilbert’s proposition stated that it was possible to construct a complete set of axioms for arithmetic, from which any arithmetical theorem could be proved. But Gödel showed that it is always possible to produce at least one arithmetical proposition that cannot be deduced from the starting set of axioms, but is nonetheless true. Any such consistent set of axioms is therefore necessarily incomplete.
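
In modern logical notation, and stated a little more carefully than in the sketch above, Gödel’s first incompleteness theorem can be put as follows (a compact paraphrase, not Gödel’s 1931 wording):

    For any consistent, effectively axiomatizable theory $T$ that contains elementary
    arithmetic, there is a sentence $G_T$ such that
    \[
      T \nvdash G_T \qquad\text{and}\qquad T \nvdash \lnot G_T ,
    \]
    although $G_T$ is true in the standard model of arithmetic. Adding $G_T$ as a new
    axiom merely yields another theory of the same kind, to which the theorem applies
    again; no such system of axioms can ever be completed.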

Though he admired Gödel’s “incompleteness theorem,” von Neumann never abandoned the axiomatic-deductive approach in any of the many fields of science with which he was subsequently associated. Since Gödel had shown that mathematics cannot be confined within any axiomatic framework, it would have been legitimate to assume that the same applies, a fortiori, to the physical laws of the universe.

Any scientific theory is necessarily based on a certain number of axioms or presuppositions. For example, the idea of the black hole in astrophysics was deduced from the theory of general relativity, and was therefore predicted long before it was confirmed by observation. There are, however, fundamental discoveries that contradict existing theories—such as the discovery of relativity—and trigger scientific revolutions. It then becomes necessary to reject certain accepted axioms and replace them with more advanced ones. This requires recognizing the essentially provisional nature of axioms; it would be better to call them “hypotheses.”

Trying To Compute a Human Brain

Unfortunately, with the spectacular development of computers and then artificial intelligence, von Neumann convinced many scientists to accept arbitrary axioms in their disciplines, and this remains true to this day.

This applies in particular to the sciences of life, thought, and social relations. To put it only slightly caricaturally, scientific publications in these fields today are riddled with words and expressions borrowed from the vocabulary of computer science. The DNA molecule in the nucleus of our cells, for example, is said to contain our genetic “code.” Similarly, the brain sciences speak of neuronal “circuits,” “information storage,” “signal processing,” “coding” and so on.

This reductionist approach has led researchers down a number of blind alleys. Seeing DNA as a linear series of codes constituting a kind of computer program completely ignores its remarkable geometric double-helix shape. This shape suggests that the molecule should be seen as a whole within its organic environment, and not simply as a series of individual elements.

As far as the brain is concerned, we know—or should know—that biological neurons have little in common with artificial neurons, which receive logical signals (“0” or “1”) as input, and transmit binary responses to other neurons via passive connections. In reality, the activity of biological neurons is not simply electrical, but also chemical; their “signals” are not binary but analog; and dendrites are not passive cables between neurons, but are themselves active within the whole that we call the brain.
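
For comparison, here is roughly what the “artificial neuron” of the preceding paragraph amounts to, in a minimal McCulloch–Pitts-style sketch in Python (the weights and threshold are arbitrary illustrative values): a fixed weighted sum of binary inputs followed by a binary threshold, with nothing chemical, analog, or active in the “wires” between units.

    # A McCulloch-Pitts-style artificial "neuron": binary inputs, a fixed
    # weighted sum over passive connections, and a binary output.
    def artificial_neuron(inputs, weights, threshold):
        # inputs: list of 0/1 signals; each connection is a passive multiplier
        activation = sum(x * w for x, w in zip(inputs, weights))
        return 1 if activation >= threshold else 0

    # Arbitrary example values, chosen only for illustration
    print(artificial_neuron([1, 0, 1], weights=[0.5, 0.9, 0.6], threshold=1.0))  # -> 1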

This has been known for a long time, but it hasn’t stopped the launch of the absurd “Human Brain Project,” aimed at simulating a human brain with a supercomputer, at exorbitant cost.

War Is Not a Game

Von Neumann’s wartime badge photo at Los Alamos National Laboratory.

John von Neumann, whose cynicism is legendary, seemed to consider that relationships between human beings are necessarily based on conflict. Starting from the fundamental axiom that an individual is said to be “rational” if his social relations are reduced to a search for personal gain, von Neumann developed a “Game Theory” that he wanted to apply to economics as well as to geopolitics and the art of war.

In the economic sphere, this is a more modern version of the old theories of British “liberalism” inspired by Bernard Mandeville in the 18th century, which tried in various ways to pass off egoism as a virtue. In the following century, the economist Friedrich List attacked this system, showing that the economic success of the British Empire stemmed from the fact that it had imposed free trade rules on the rest of the world, while practicing the opposite at home.

Later still, the American System economist Lyndon H. LaRouche, Jr. showed that economists who had relied on von Neumann’s game theory had proved unable to see the systemic crises coming after the end of the Bretton Woods system on August 15, 1971. For LaRouche, the source of economic growth lies not in the algebraic sum of individual profits, but in the development of the creative capacities of the members of society.

Let’s look at the usual example through which game theory is presented, known as “the Prisoners’ Dilemma.” Two members of a gang have been arrested by the police and are locked in separate cells, unable to communicate with each other. Each of them is offered a deal. Each can either testify against his accomplice, or keep quiet. If both remain silent, they will each get one year in prison; if both speak, they will each get two years in prison; if one speaks and the other remains silent, the one who spoke will be released, while the other will get three years in prison.

The “solution” to this simple game-theoretic example is obvious: Since each of the prisoners is “rational,” he will denounce the other and get two years in prison, whereas their real common interest would have been for them both to change the basic axiom and keep quiet. Obviously, two thugs won’t easily change axioms, but this example gives us an idea of von Neumann’s conception of the human being.
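
The reasoning can be checked mechanically. The short Python sketch below simply transcribes the years of prison listed above (lower is better for the prisoner concerned) and verifies that, whatever the other does, each prisoner serves fewer years by testifying; that is why two “rational” prisoners end up with two years each instead of one.

    # Years in prison for (prisoner A, prisoner B), exactly as described above.
    years = {
        ("silent", "silent"):   (1, 1),
        ("silent", "testify"):  (3, 0),
        ("testify", "silent"):  (0, 3),
        ("testify", "testify"): (2, 2),
    }

    # Whatever B chooses, A serves fewer years by testifying (and symmetrically for B).
    for b_choice in ("silent", "testify"):
        a_if_silent = years[("silent", b_choice)][0]
        a_if_testify = years[("testify", b_choice)][0]
        print(f"B {b_choice}: A serves {a_if_silent} years if silent, {a_if_testify} if he testifies")

    # Both "rational" prisoners therefore testify and serve 2 years each,
    # although mutual silence would have cost them only 1 year apiece.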

With this in mind, it’s easy to understand why this great mathematician behaved so inhumanely when it came to war. Among other things, when brought into the Manhattan Project, von Neumann calculated the optimal altitude at which the Hiroshima bomb should detonate in order to produce the maximum number of casualties. Robert Oppenheimer realized afterwards that the two bombs had been militarily useless because Japan had already been defeated; he therefore campaigned against the development of the H-bomb and used his prestige to defend the idea of a common security agreement with the Soviets. As a result, he became the victim of a witch-hunt.

Von Neumann had no such problem. After the war he became a strong supporter of “preventive nuclear war” against the Soviet Union, seeking to apply game theory to the full-blown war he had decided was inevitable. According to a 2022 review of the new biography in The New Republic, von Neumann was sure that Soviet spies had obtained atomic secrets and that the Soviet Union would become a nuclear power; he promoted “preventing” this by launching a nuclear strike against Moscow. An often-quoted remark of his is this one:

With the Russians it is not a question of whether, but of when [to strike]. If you say why not bomb them tomorrow, I say, why not today? If you say today at 5 o’clock, I say, why not one o’clock?

This nuclear war plan, also pushed by Bertrand Russell, was never carried out, but their declared intentions gave the Soviet nuclear program a major boost.

Changing Axioms

Kennedy and Khrushchev didn’t launch the Cold War, but in the 1962 Cuban Missile Crisis they found themselves trapped at the head of two antagonistic nuclear powers. What’s more, as the film Thirteen Days shows, both were surrounded by advisors who thought their side could win a nuclear war, and seeking peace could just as easily have provoked a coup d’état at home. It was just the kind of situation in which von Neumann would have put his game theory into practice, and he had many disciples among the military staffs.

How then was disaster avoided? The Soviet and American leaders simply changed their axioms: They used an unofficial but direct channel of communication between themselves that bypassed the hawks on both sides. Each was able to assess his nation’s own true interests and those of the other, and they reached a compromise that saved face for everyone: the withdrawal of Soviet missiles from Cuba in exchange for the withdrawal of U.S. missiles from Turkey.

Emerging from this crisis, Kennedy gave a magnificent speech on June 10, 1963, calling for the end of the Cold War, in which he said:

[B]oth the United States and its allies, and the Soviet Union and its allies, have a mutually deep interest in a just and genuine peace and in halting the arms race. Agreements to this end are in the interests of the Soviet Union as well as ours—and even the most hostile nations can be relied upon to accept and keep those treaty obligations, and only those treaty obligations, which are in their own interest.

So, let us not be blind to our differences—but let us also direct attention to our common interests and to the means by which those differences can be resolved….

