More crucially, the study marked the beginning of a formal separation of strategic thinking from questions of national objectives. Bernard Brodie had observed that, given the enormously destructive power of the atomic bomb and especially the hydrogen bomb, a relatively small number of weapons could accomplish whatever strategic mission one might have in mind, and that beyond a certain point more weapons made less and less difference. Wohlstetter did not know of the feasibility of H-bombs when he began work on the overseas-base study, but both the United States and the Soviet Union had tested such weapons by the time it was published. The analysis by Plesset, Hitch and Brodie on the implications of the H-bomb had revealed that a mere fifty-five H-bombs could destroy the fifty largest Soviet cities and kill 35 million civilians.
In the worst case of Wohlstetter’s analysis, the Soviet Union was estimated to be able to destroy 85 percent of the SAC bomber force. But SAC would have, by 1956, 1,900 B-47 and B-36 bombers, and 15 percent of 1,900 is 285 bombers, each carrying from two to four bombs. Thus, after absorbing a Soviet first strike, and with no alterations in the SAC basing system at all, the United States could still have destroyed roughly 600 Soviet targets.
Wohlstetter said that he was more interested in deterring nuclear war than in figuring out how to fight one. Yet nowhere did he or any of his associates ask if, in the 1950s when the Soviets had very few strategic weapons, destroying 600 targets was enough to deter—or whether one really needed to destroy, as Wohlstetter’s preferred basing system would have allowed, 1,200 targets.
Albert Wohlstetter made much of the Pearl Harbor analogy. It inspired the scenario that made his discovery of SAC vulnerability so realistic. But he never did consider whether the Japanese would have attacked Pearl Harbor—no matter how temptingly vulnerable the fleet there might have appeared—if they thought that the United States could have retaliated with 600 or even 100 or even 10 Nagasaki-size, much less multimegaton, nuclear explosions.
The Wohlstetter study set the model of what strategic analysis should be. It was the study that, for years after, almost everyone in the quantitative quarters of RAND would instantly cite when asked to name the most impressive systems analysis they had produced. It placed a far higher premium than had previously existed on the notion that good strategic analysis meant quantitative analysis with elaborate calculations and “hard” data. And since most of this data came from Air Force Intelligence—which, curiously, this group of analytic skeptics accepted without much question—quantitative analysis tended to paint a very scary picture of the Soviet threat.
Aside from contributing to and intellectually justifying the Air Force Intelligence estimates (which were consistently more pessimistic than those of the CIA and the other military services), the study was among the first attempts to abstract the “nuclear exchange,” to place it, like one of Wohlstetter’s exercises in mathematical logic from earlier days, in a rarefied universe all its own, apart from the world of political leaders and their appreciation of horrible risk, apart from the broader issues of how nuclear weapons fit into an overall strategy and of how many weapons were needed before their further accumulation no longer much mattered.
More specifically, Wohlstetter made the issue of calculated vulnerability the central focus of strategic analysis. In the early 1950s, SAC’s overseas bases were bizarrely complicated, horrendously inefficient and vulnerable. Whether the vulnerability was truly critical, given the enormous size of the American arsenal, is another question; but there can be little dispute that it was a sound idea to rely much less on these bases, since the mere act of mobilization, of preparation for war, would put the SAC force in greater danger than it had been prior to mobilization. There is no question that the Wohlstetter study helped reduce this reliance.
But Wohlstetter then made his reputation on examining vulnerability and, in later years, he would continue to impose the concept on nearly everything he analyzed. As the theory trickled down not just through the corridors of RAND but also through Washington and other sectors of the “strategic community,” the concern about vulnerability grew into an infatuation, then an obsession and finally a fetish of sorts. Eventually, it would wend its way into the political realm and—apart from Wohlstetter’s original intentions or logic—become entangled with claims of a “missile gap”; it would sit at the center of grisly scenarios about Soviet first strikes and American weakness; it would provide the rationale for a host of new weapons that the military wanted to build; and it would serve as a powerful engine driving at least the American side of a nuclear arms race over the next quarter century and beyond.
7
THE HYDRA-HEADED MONSTER
IN THE SUMMER of 1953, while Albert Wohlstetter was away in Washington, pushing his overseas-base briefing on anyone in the Pentagon who would listen, some of the RAND physicists in Santa Monica were fiddling with calculations about the hydrogen bomb that would eventually alter the entire picture of nuclear weaponry and, on the face of things, lend Wohlstetter’s worries about the vulnerability of SAC a still greater sense of urgency.
From their sources at the Los Alamos weapons lab, Ernie Plesset and David Griggs were learning that there was something highly significant about the hydrogen bomb other than its tremendous explosive power. The physics of the weapon meant that a relatively small amount of fissile uranium or plutonium was all one would need to trigger the fusion of hydrogen isotopes that produced the bomb’s extremely high yield. Meanwhile, the weapons scientists were designing bombs of astonishing efficiency. They were getting to the point where they could pack a bomb with the explosive power of 500 kilotons into a device substantially smaller than the bulky atom bomb dropped over Nagasaki, which had set off an explosion only 4 percent as powerful.
Plesset and Griggs realized that the hydrogen bomb, along with the new weapons technology that accompanied it, could therefore be made small enough and light enough to fit on top of an intercontinental ballistic missile. The ICBM—that had been General Hap Arnold’s dream when he conceived of Project RAND; that was what RAND’s mission was all about.
In the fall of 1950, RAND had produced a series of studies for the Air Force, concluding that long-range ballistic missiles would be of military value. In January 1951, the Defense Department reactivated Project MX-774, which had been eliminated from the budget in 1949, and renamed it the Atlas Missile Project. It was still a low-priority program, the aim of which was to determine whether a large, 5,000-mile ballistic rocket was within the state of the art. Up to this point, the technical obstacles had been too great: the pure fission weapon was too heavy and too large to serve as a worthwhile payload on the nose cone of a rocket fired to some point halfway around the globe. But now the H-bomb—small and light yet massively destructive—might make Hap Arnold’s dream come true.
Yet there were still some major problems. The most troublesome involved meeting the requirements that the program managers at Convair Missile Division, the aerospace company that had won the Atlas contract, had imposed on the missile’s technical performance. They wanted Atlas to carry a 3,000-pound warhead that would re-enter the atmosphere at six times the speed of sound (without burning up) and that would land within one-quarter to one-half mile of the target. All of these requirements were clearly far beyond the “state of the art” in 1953. Even the Convair managers did not expect such a missile to be ready for operation until 1965.