
Let us start by imagining some parallel universes stacked like a pack of cards, with the pack as a whole representing the multiverse. (Such a model, in which the universes are arranged in a sequence, greatly understates the complexity of the multiverse, but it suffices to illustrate my point here.) Now let us alter the model to take account of the fact that the multiverse is not a discrete set of universes but a continuum, and that not all the universes are different. In fact, for each universe that is present there is also a continuum of identical universes present, comprising a certain tiny but non-zero proportion of the multiverse. In our model, this proportion may be represented by the thickness of a card, where each card now represents all the universes of a given type. However, unlike the thickness of a card, the proportion of each type of universe changes with time under quantum-mechanical laws of motion. Consequently, the proportion of universes having a given property also changes, and it changes continuously. In the case of a discrete variable changing from 0 to 1, suppose that the variable has the value 0 in all universes before the change begins, and that after the change, it has the value 1 in all universes. During the change, the proportion of universes in which the value is 0 falls smoothly from 100 per cent to zero, and the proportion in which the value is 1 rises correspondingly from zero to 100 per cent. Figure 9.4 shows a multiverse view of such a change.
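To make the continuity concrete, here is one simple illustration (a stock two-state model, assumed purely for the sake of the example). If the bit is driven smoothly from 0 to 1 over a period T, the proportions of the two types of universe might vary as

\[ p_0(t) = \cos^2\!\Bigl(\frac{\pi t}{2T}\Bigr), \qquad p_1(t) = \sin^2\!\Bigl(\frac{\pi t}{2T}\Bigr), \qquad 0 \le t \le T, \]

so that the 0-universes fall smoothly from 100 per cent to zero while the 1-universes rise correspondingly from zero to 100 per cent, which is precisely the behaviour depicted in Figure 9.4.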

It might seem from Figure 9.4 that, although the transition from 0 to 1 is objectively continuous from the multiverse perspective, it remains subjectively discontinuous from the point of view of any individual universe — as represented, say, by a horizontal line halfway up Figure 9.4. However, that is merely a limitation of the diagram, and not a real feature of what is happening. Although the diagram makes it seem that there is at each instant a particular universe that ‘has just changed’ from 0 to 1 because it has just ‘crossed the boundary’, that is not really so. It cannot be, because such a universe is strictly identical with every other universe in which the bit has value 1 at that time. So if the inhabitants of one of them were experiencing a discontinuous change, then so would the inhabitants of all the others. Therefore none of them can have such an experience. Note also that, as I shall explain in Chapter 11, the idea of anything moving across a diagram such as Figure 9.4, in which time is already represented, is simply a mistake. At each instant the bit has value 1 in a certain proportion of universes and 0 in another. All those universes, at all those times, are already shown in Figure 9.4. They are not moving anywhere!

FIGURE 9.4 Multiverse view of how a bit changes continuously from 0 to 1.

Another way in which quantum physics is implicit in classical computation is that all practical implementations of Turing-type computers rely on such things as solid matter or magnetized materials, which could not exist in the absence of quantum-mechanical effects. For example, any solid body consists of an array of atoms, which are themselves composed of electrically charged particles (electrons, and protons in the nuclei). But because of classical chaos, no array of charged particles could be stable under classical laws of motion. The positively and negatively charged particles would simply move out of position and crash into each other, and the structure would disintegrate. It is only the strong quantum interference between the various paths taken by charged particles in parallel universes that prevents such catastrophes and makes solid matter possible.

Building a universal quantum computer is well beyond present technology. As I have said, detecting an interference phenomenon always involves setting up an appropriate interaction between all the variables that have been different in the universes that contribute to the interference. The more interacting particles are involved, therefore, the harder it tends to be to engineer the interaction that would display the interference — that is, the result of the computation. Among the many technical difficulties of working at the level of a single atom or single electron, one of the most important is that of preventing the environment from being affected by the different interfering sub-computations. For if a group of atoms is undergoing an interference phenomenon, and they differentially affect other atoms in the environment, then the interference can no longer be detected by measurements of the original group alone, and the group is no longer performing any useful quantum computation. This is called decoherence. I must add that this problem is often presented the wrong way round: we are told that ‘quantum interference is a very delicate process, and must be shielded from all outside influences’. This is wrong. Outside influences could cause minor imperfections, but it is the effect of the quantum computation on the outside world that causes decoherence.
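To make the mechanism of decoherence concrete, here is a minimal numerical sketch, assuming just one interfering bit coupled to one environment bit (an illustrative toy model, not a description of any real device). The off-diagonal element of the bit's reduced state measures its remaining capacity to interfere, and it vanishes as soon as the environment has been differentially affected.

```python
import numpy as np

# Toy model of decoherence: one "computational" bit held in an equal
# superposition of 0 and 1, optionally imprinting its value on one
# environment bit.  (Illustrative sketch only.)

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)          # equal superposition of 0 and 1

def reduced_state(couple_to_environment: bool) -> np.ndarray:
    """Density matrix of the computational bit, with the environment traced out."""
    joint = np.kron(plus, ket0)            # environment starts in the state 0
    if couple_to_environment:
        # Controlled-NOT with the computational bit as control:
        # the environment records which value the bit has.
        cnot = np.array([[1, 0, 0, 0],
                         [0, 1, 0, 0],
                         [0, 0, 0, 1],
                         [0, 0, 1, 0]], dtype=float)
        joint = cnot @ joint
    rho = np.outer(joint, joint.conj()).reshape(2, 2, 2, 2)
    return np.trace(rho, axis1=1, axis2=3)  # partial trace over the environment

for coupled in (False, True):
    coherence = abs(reduced_state(coupled)[0, 1])  # off-diagonal term
    print(f"environment affected: {coupled}  remaining coherence: {coherence:.2f}")
# Prints 0.50, then 0.00: once the environment has been differentially affected,
# no measurement of the bit alone can reveal the interference.
```

Note that in this sketch nothing disturbs the computational bit from outside; it is the bit's effect on its surroundings that destroys the detectable interference, which is the point made above.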

Thus the race is on to engineer sub-microscopic systems in which information-carrying variables interact among themselves, but affect their environment as little as possible. Another novel simplification, unique to the quantum theory of computation, partly offsets the difficulties caused by decoherence. It turns out that, unlike classical computation, where one needs to engineer specific classical logic elements such as AND, OR and NOT, the precise form of the interactions hardly matters in the quantum case. Virtually any atomic-scale system of interacting bits, so long as it does not decohere, could be made to perform useful quantum computations.
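As a sketch of why the precise form of the interaction hardly matters, take the standard textbook example of an Ising-type coupling between two bits (an assumption chosen for illustration, not the only possibility). Allowed to act for the right length of time and combined with single-bit operations, it reproduces a conventional gate, the controlled-NOT, exactly.

```python
import numpy as np
from scipy.linalg import expm

# Sketch: an Ising-type interaction exp(i*t*Z@Z), together with single-bit
# operations, reproduces the standard controlled-NOT gate exactly.
# The Z@Z coupling is an illustrative choice, not the only one that works.

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard (single-bit)

ZZ = np.kron(Z, Z)                 # the two-bit interaction
Z1 = np.kron(Z, I2)                # single-bit phase corrections
Z2 = np.kron(I2, Z)

# Let the interaction act for the right 'time' (pi/4), then undo the stray
# single-bit phases; the result is the controlled-Z gate.
cz = (np.exp(1j * np.pi / 4)
      * expm(-1j * np.pi / 4 * Z1)
      @ expm(-1j * np.pi / 4 * Z2)
      @ expm(1j * np.pi / 4 * ZZ))

# A basis change on the target bit turns controlled-Z into controlled-NOT.
cnot = np.kron(I2, H) @ cz @ np.kron(I2, H)

target = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, 1, 0]], dtype=complex)
print(np.allclose(cnot, target))   # True
```

The same construction goes through for a wide class of two-bit couplings; only the 'time' for which the interaction acts and the single-bit corrections change, which is the sense in which almost any non-decohering system of interacting bits will do.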

Interference phenomena involving vast numbers of particles, such as superconductivity and superfluidity, are known, but it seems that none of them can be used to perform any interesting computations. At the time of writing, only single-bit quantum computations can be easily performed in the laboratory. Experimentalists are confident, however, that two- and higher-bit quantum gates (the quantum equivalent of the classical logical elements) will be constructed within the next few years. These are the basic components of quantum computers. Some physicists, notably Rolf Landauer of IBM Research, are pessimistic about the prospects for further advances after that. They believe that decoherence will never be reduced to the point where more than a few consecutive quantum-computational steps can be performed. Most researchers in the field are much more optimistic (though perhaps that is because only optimistic researchers choose to work on quantum computation!). Some special-purpose quantum computers have already been built (see below), and my own opinion is that more complex ones will appear in a matter of years rather than decades. As for the universal quantum computer, I expect that its construction too is only a matter of time, though I should not like to predict whether that time will be decades or centuries.

The fact that the repertoire of the universal quantum computer contains environments whose rendering is classically intractable implies that new classes of purely mathematical computations must have become tractable too. For the laws of physics are, as Galileo said, expressed in mathematical language, and rendering an environment is tantamount to evaluating certain mathematical functions. And indeed, many mathematical tasks have now been discovered which could be efficiently performed by quantum computation, for which all known classical methods are intractable. The most spectacular of these is the task of factorizing large numbers. The method, known as Shor’s algorithm, was discovered in 1994 by Peter Shor of Bell Laboratories. (While this book was in proof, further spectacular quantum algorithms have been discovered, including Grover’s algorithm for searching long lists very rapidly.)
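The division of labour in Shor’s algorithm can be sketched directly: everything is elementary classical arithmetic except the period-finding step, which is the part a quantum computer performs efficiently. The sketch below, for small odd composite numbers such as 15 or 21, finds the period by brute force purely to show where the quantum speed-up would slot in.

```python
import math
import random

def find_period(a: int, n: int) -> int:
    """Smallest r > 0 with a**r = 1 (mod n), found here by brute force.
    This is the one step that Shor's algorithm performs efficiently
    on a quantum computer."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor(n: int) -> int:
    """Classical wrapper of Shor's method: reduce factoring n (a small odd
    composite number, e.g. 15 or 21) to period-finding."""
    while True:
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:
            return g                      # lucky guess: a already shares a factor
        r = find_period(a, n)
        if r % 2 == 1:
            continue                      # need an even period; try another a
        y = pow(a, r // 2, n)
        if y == n - 1:
            continue                      # trivial square root of 1; try again
        return math.gcd(y - 1, n)         # a non-trivial factor of n

print(factor(15))   # prints 3 or 5
print(factor(21))   # prints 3 or 7
```

For numbers hundreds of digits long the brute-force period search above is hopelessly intractable, and so, as far as is known, is every classical substitute for it; that single step is where the quantum computation earns its keep.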