The method can be improved upon, but all methods of factorization currently in use have this exponential-increase property. The largest number that has been factorized ‘in anger’, as it were — a number whose factors were secretly chosen by mathematicians in order to present a challenge to other mathematicians — had 129 digits. The factorization was achieved, after an appeal on the Internet, by a global cooperative effort involving thousands of computers. The computer scientist Donald Knuth has estimated that the factorization of a 250-digit number, using the most efficient known methods, would take over a million years on a network of a million computers. Such things are difficult to estimate, but even if Knuth is being too pessimistic one need only consider numbers with a few more digits and the task will be made many times harder. This is what we mean by saying that the factorization of large numbers is intractable. All this is a far cry from multiplication where, as we have seen, the task of multiplying a pair of 250-digit numbers is a triviality on anyone’s home computer. No one can even conceive of how one might factorize thousand-digit numbers, or million-digit numbers.
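To make the scaling concrete, here is a minimal Python sketch of the most naive factorization method, trial division. It is certainly not the method used on the 129-digit challenge, which was far more sophisticated, but it shows where the cost comes from: the loop below takes roughly 10^(d/2) steps for a d-digit number, so every two extra digits multiply the work by ten.

```python
# Trial division, the most naive factorization method. The loop runs
# up to sqrt(n), i.e. about 10**(d/2) steps for a d-digit number, so
# each pair of extra digits multiplies the running time by ten.
# (The methods used in practice are far faster, but their running time
# still grows faster than any power of the number of digits.)

def trial_division(n):
    """Return the prime factors of n, smallest first."""
    factors = []
    divisor = 2
    while divisor * divisor <= n:
        while n % divisor == 0:
            factors.append(divisor)
            n //= divisor
        divisor += 1
    if n > 1:
        factors.append(n)   # whatever remains is prime
    return factors

print(trial_division(2 ** 32 + 1))   # [641, 6700417], found instantly
# For a 250-digit product of two primes the same loop would need
# roughly 10**125 steps.
```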
At least, no one could conceive of it, until recently.
In 1982 the physicist Richard Feynman considered the computer simulation of quantum-mechanical objects. His starting-point was something that had already been known for some time without its significance being appreciated, namely that predicting the behaviour of quantum-mechanical systems (or, as we can describe it, rendering quantum-mechanical environments in virtual reality) is in general an intractable task. One reason why the significance of this had not been appreciated is that no one expected the computer prediction of interesting physical phenomena to be especially easy. Take weather forecasting or earthquake prediction, for instance. Although the relevant equations are known, the difficulty of applying them in realistic situations is notorious. This has recently been brought to public attention in popular books and articles on chaos and the ‘butterfly effect’. These effects are not responsible for the intractability that Feynman had in mind, for the simple reason that they occur only in classical physics — that is, not in reality, since reality is quantum-mechanical. Nevertheless, I want to make some remarks here about ‘chaotic’ classical motions, if only to highlight the quite different characters of classical and quantum unpredictability.
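One rough way to see the sort of intractability involved, sketched below in Python as an illustration rather than as Feynman's own argument: the joint state of n two-level quantum systems is specified by 2^n complex numbers, so the memory needed for a brute-force simulation doubles with every particle added.

```python
# Why brute-force simulation of quantum systems is intractable: the
# joint state of n two-level systems (qubits) is a vector of 2**n
# complex amplitudes, so memory and time grow exponentially with n.

BYTES_PER_AMPLITUDE = 16   # one double-precision complex number

for n in (10, 50, 300):
    amplitudes = 2 ** n
    print(f"{n:3d} particles -> {amplitudes:.3e} amplitudes, "
          f"{amplitudes * BYTES_PER_AMPLITUDE:.3e} bytes")
# At 300 particles the state vector already has more entries than
# there are atoms in the observable universe.
```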
Chaos theory is about limitations on predictability in classical physics, stemming from the fact that almost all classical systems are inherently unstable. The ‘instability’ in question has nothing to do with any tendency to behave violently or disintegrate. It is about an extreme sensitivity to initial conditions. Suppose that we know the present state of some physical system, such as a set of billiard balls rolling on a table. If the system obeyed classical physics, as it does to a good approximation, we should then be able to determine its future behaviour — say, whether a particular ball will go into a pocket or not — from the relevant laws of motion, just as we can predict an eclipse or a planetary conjunction from the same laws. But in practice we are never able to measure the initial positions and velocities perfectly. So the question arises, if we know them to some reasonable degree of accuracy, can we also predict to a reasonable degree of accuracy how they will behave in the future? And the answer is, usually, that we cannot. The difference between the real trajectory and the predicted trajectory, calculated from slightly inaccurate data, tends to grow exponentially and irregularly (‘chaotically’) with time, so that after a while the original, slightly imperfectly known state is no guide at all to what the system is doing. The implication for computer prediction is that planetary motions, the epitome of classical predictability, are untypical classical systems. In order to predict what a typical classical system will do after only a moderate period, one would have to determine its initial state to an impossibly high precision. Thus it is said that, in principle, the flap of a butterfly’s wing in one hemisphere of the planet could cause a hurricane in the other hemisphere. The infeasibility of weather forecasting and the like is then attributed to the impossibility of accounting for every butterfly on the planet.
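The exponential growth of small errors is easy to exhibit. The sketch below uses the logistic map x → 4x(1 − x), a standard textbook example of chaos rather than anything discussed above, and follows two trajectories whose starting points differ by one part in ten billion.

```python
# Sensitivity to initial conditions in the logistic map x -> 4x(1 - x),
# a standard toy chaotic system. Two trajectories starting 1e-10 apart
# separate exponentially until the initial data are no guide at all.

def logistic(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10   # the 'true' state and a slightly mismeasured one
for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: separation = {abs(x - y):.3e}")
# The separation roughly doubles at every step (Lyapunov exponent ln 2),
# so by step 40 or so the two trajectories are completely uncorrelated.
```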
However, real hurricanes and real butterflies obey quantum theory, not classical mechanics. The instability that would rapidly amplify slight mis-specifications of an initial classical state is simply not a feature of quantum-mechanical systems. In quantum mechanics, small deviations from a specified initial state tend to cause only small deviations from the predicted final state. Instead, accurate prediction is made difficult by quite a different effect.
The laws of quantum mechanics require an object that is initially at a given position (in all universes) to ‘spread out’ in the multiverse sense. For instance, a photon and its other-universe counterparts all start from the same point on a glowing filament, but then move in trillions of different directions. When we later make a measurement of what has happened, we too become differentiated as each copy of us sees what has happened in our particular universe. If the object in question is the Earth’s atmosphere, then a hurricane may have occurred in 30 per cent of universes, say, and not in the remaining 70 per cent. Subjectively we perceive this as a single, unpredictable or ‘random’ outcome, though from the multiverse point of view all the outcomes have actually happened. This parallel-universe multiplicity is the real reason for the unpredictability of the weather. Our inability to measure the initial conditions accurately is completely irrelevant. Even if we knew the initial conditions perfectly, the multiplicity, and therefore the unpredictability of the motion, would remain. And on the other hand, in contrast to the classical case, an imaginary multiverse with only slightly different initial conditions would not behave very differently from the real multiverse: it might suffer hurricanes in 30.000 001 per cent of its universes and not in the remaining 69.999 999 per cent.
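This stability follows from the unitarity of quantum evolution: a unitary transformation preserves the distance between any two state vectors, so states that start close together stay close together, however long the evolution runs. The Python sketch below illustrates the point with a randomly generated unitary matrix standing in for the dynamics; the matrix, the dimension and the size of the perturbation are arbitrary choices for illustration, not anything taken from the text.

```python
# Unitary evolution preserves the distance between states:
# ||U a - U b|| = ||a - b||. So a slightly perturbed initial state
# gives only slightly perturbed outcome probabilities, the opposite
# of classical chaos. A random unitary stands in for the dynamics.

import numpy as np

rng = np.random.default_rng(0)
dim = 64

# Build a random unitary via QR decomposition of a complex Gaussian matrix.
z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
u, _ = np.linalg.qr(z)

def normalize(v):
    return v / np.linalg.norm(v)

a = normalize(rng.normal(size=dim) + 1j * rng.normal(size=dim))
b = normalize(a + 1e-6 * rng.normal(size=dim))   # slightly perturbed copy

for _ in range(1000):              # apply the same evolution repeatedly
    a, b = u @ a, u @ b

print("distance between states:", np.linalg.norm(a - b))   # still ~1e-6
print("largest shift in outcome probability:",
      np.abs(np.abs(a) ** 2 - np.abs(b) ** 2).max())       # comparably tiny
```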
The flapping of butterflies’ wings does not, in reality, cause hurricanes because the classical phenomenon of chaos depends on perfect determinism, which does not hold in any single universe. Consider a group of identical universes at an instant at which, in all of them, a particular butterfly’s wings have flapped up. Consider a second group of universes which at the same instant are identical to the first group, except that in them the butterfly’s wings are down. Wait for a few hours. Quantum mechanics predicts that, unless there are exceptional circumstances (such as someone watching the butterfly and pressing a button to detonate a nuclear bomb if it flaps its wings), the two groups of universes, nearly identical at first, are still nearly identical. But each group, within itself, has become greatly differentiated. It includes universes with hurricanes, universes without hurricanes, and even a very tiny number of universes in which the butterfly has spontaneously changed its species through an accidental rearrangement of all its atoms, or the Sun has exploded because all its atoms bounced by chance towards the nuclear reaction at its core. Even so, the two groups still resemble each other very closely. In the universes in which the butterfly raised its wings and hurricanes occurred, those hurricanes were indeed unpredictable; but the butterfly was not causally responsible, for there were near-identical hurricanes in universes where everything else was the same but the wings were lowered.