This is the intractability that was exercising Feynman. We see that it has nothing to do with unpredictability: on the contrary, it is most clearly manifested in quantum phenomena that are highly predictable. That is because in such phenomena the same, definite outcome occurs in all universes, but that outcome is the result of interference between vast numbers of universes that were different during the experiment. All this is in principle predictable from quantum theory and is not overly sensitive to the initial conditions. What makes it hard to predict that in such experiments the outcome will always be the same is that doing so requires inordinately large amounts of computation.

Intractability is in principle a greater impediment to universality than unpredictability could ever be. I have already said that a perfectly accurate rendering of a roulette wheel need not — indeed should not — give the same sequence of numbers as the real one. Similarly, we cannot prepare in advance a virtual-reality rendering of tomorrow’s weather. But we can (or shall, one day, be able to) make a rendering of weather which, though not the same as the real weather conditions prevailing on any historical day, is nevertheless so realistic in its behaviour that no user, however expert, will be able to distinguish it from genuine weather. The same is true of any environment that does not show the effects of quantum interference (which means most environments). Rendering such an environment in virtual reality is a tractable computational task. However, it would appear that no practical rendering is possible for environments that do show the effects of quantum interference. Without performing exponentially large amounts of computation, how can we be sure that in those cases our rendered environment will not do things which the real environment strictly never does because of some interference phenomenon?
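To make the scale of the problem concrete, here is a minimal illustrative sketch in Python (the code and numbers are purely illustrative and not part of the original argument): a classical rendering of interference among n two-state particles must track a state vector of 2^n complex amplitudes, and every operation touches all of them.

```python
import numpy as np

# Illustrative sketch only: classically rendering interference among n
# two-state particles means tracking one complex amplitude per joint
# configuration, i.e. a state vector of length 2**n.

def interfere(state, target, n):
    """Apply an interference-producing 2x2 transform (a Hadamard, chosen
    here purely as an example) to one particle of an n-particle state."""
    h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    psi = state.reshape([2] * n)                      # one axis per particle
    psi = np.tensordot(h, psi, axes=([1], [target]))  # act on the target axis
    psi = np.moveaxis(psi, 0, target)                 # restore axis order
    return psi.reshape(-1)

n = 10                                    # already 1,024 amplitudes
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                            # every particle starts in state 0
for particle in range(n):                 # one pass touches all 2**n entries
    state = interfere(state, particle, n)
print(f"{n} particles -> {len(state):,} amplitudes tracked")

# The cost doubles with every additional particle:
for m in (30, 40, 50):
    gib = 2 ** m * 16 / 2 ** 30           # complex128 = 16 bytes each
    print(f"{m} particles -> {2 ** m:,} amplitudes (~{gib:,.0f} GiB of memory)")
```

Each extra particle doubles the bookkeeping, which is exactly the exponential growth that makes such renderings intractable on any existing computer.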

It might seem natural to conclude that reality does not, after all, display genuine computational universality, because interference phenomena cannot be usefully rendered. Feynman, however, correctly drew the opposite conclusion! Instead of regarding the intractability of the task of rendering quantum phenomena as an obstacle, Feynman regarded it as an opportunity. If it requires so much computation to work out what will happen in an interference experiment, then the very act of setting up such an experiment and measuring its outcome is tantamount to performing a complex computation. Thus, Feynman reasoned, it might after all be possible to render quantum environments efficiently, provided the computer were allowed to perform experiments on a real quantum-mechanical object. The computer would choose what measurements to make on an auxiliary piece of quantum hardware as it went along, and would incorporate the results of the measurements into its computations.

The auxiliary quantum hardware would in effect be a computer too. For example, an interferometer could act as such a device and, like any other physical object, it can be thought of as a computer. We would nowadays call it a special-purpose quantum computer. We ‘program’ it by setting up the mirrors in a certain geometry, and then projecting a single photon at the first mirror. In a non-random interference experiment the photon will always emerge in one particular direction, determined by the settings of the mirrors, and we could interpret that direction as indicating the result of the computation. In a more complex experiment, with several interacting particles, such a computation could easily, as I have explained, become ‘intractable’. Yet since we could readily obtain its result just by performing this experiment, it is not really intractable after all. We must now be more careful with our terminology. Evidently there are computational tasks that are ‘intractable’ if we attempt to perform them using any existing computer, but which would be tractable if we were to use quantum-mechanical objects as special-purpose computers. (Notice that the fact that quantum phenomena can be used to perform computations in this way depends on their not being subject to chaos. If the outcome of computations were an inordinately sensitive function of the initial state, ‘programming’ the device by setting it in a suitable initial state would be an impossibly difficult task.)
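As a hedged illustration (assuming the simplest possible two-path interferometer, a simplification not spelled out in the text), the 'programming' can be made concrete: the phase difference between the two paths plays the role of the program, and the photon's exit port, fixed deterministically by interference, is the result of the computation.

```python
import numpy as np

# Illustrative sketch: a two-path interferometer as a special-purpose
# computer. The "program" is the phase difference between the arms; the
# exit port of a single photon is the deterministic result.

B = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # balanced beam splitter

def run_interferometer(phase):
    """Return the exit-port probabilities for a photon injected at port 0."""
    P = np.diag([1.0, np.exp(1j * phase)])     # phase shift in one arm
    photon_in = np.array([1.0, 0.0])           # photon definitely enters port 0
    photon_out = B @ P @ B @ photon_in         # split, shift, recombine
    return np.abs(photon_out) ** 2

for phase in (0.0, np.pi):
    probs = run_interferometer(phase)
    print(f"phase = {phase:.2f}: exit-port probabilities = {np.round(probs, 3)}")
# phase 0  -> the photon always leaves by port 0; phase pi -> always port 1.
# The definite outcome is the result of interference between both paths.
```

The output is not random: for a given setting the same port fires every time, which is why such a device can be read as computing an answer rather than generating noise.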

Using a quantum auxiliary device in this way might be considered cheating, since any environment is obviously much easier to render if one has access to a spare copy of it to measure during the rendering! However, Feynman conjectured that it would not be necessary to use a literal copy of the environment being rendered: that it would be possible to find a much more easily constructed auxiliary device whose interference properties were nevertheless analogous to those of the target environment. Then a normal computer could do the rest of the rendering, working through the analogy between the auxiliary device and the target environment. And, Feynman expected, that would be a tractable task. Furthermore, he conjectured, correctly as it turned out, that all the quantum-mechanical properties of any target environment could be simulated by auxiliary devices of a particular type that he specified (namely an array of spinning atoms, each interacting with its neighbours). He called the whole class of such devices a universal quantum simulator.

But it was not a single machine, as it would have to be in order to qualify as a universal computer. The interactions that the simulator’s atoms would have to undergo could not be fixed once and for all, as in a universal computer, but would have to be re-engineered for the simulation of each target environment. But the point of universality is that it should be possible to program a single machine, specified once and for all, to perform any possible computation, or render any physically possible environment. In 1985 I proved that under quantum physics there is a universal quantum computer. The proof was fairly straightforward. All I had to do was mimic Turing’s constructions, but using quantum theory to define the underlying physics instead of the classical mechanics that Turing had implicitly assumed. A universal quantum computer could perform any computation that any other quantum computer (or any Turing-type computer) could perform, and it could render any finite physically possible environment in virtual reality. Moreover, it has since been shown that the time and other resources that it would need to do these things would not increase exponentially with the size or detail of the environment being rendered, so the relevant computations would be tractable by the standards of complexity theory.

The classical theory of computation, which was the unchallenged foundation of computing for half a century, is now obsolete except, like the rest of classical physics, as an approximation scheme. The theory of computation is now the quantum theory of computation. I said that Turing had implicitly used ‘classical mechanics’ in his construction. But with the benefit of hindsight we can now see that even the classical theory of computation did not fully conform to classical physics, and contained strong adumbrations of quantum theory. It is no coincidence that the word bit, meaning the smallest possible amount of information that a computer can manipulate, means essentially the same as quantum, a discrete chunk. Discrete variables (variables that cannot take a continuous range of values) are alien to classical physics. For example, if a variable has only two possible values, say 0 and 1, how does it ever get from 0 to 1? (I asked this question in Chapter 2.) In classical physics it would have to jump discontinuously, which is incompatible with how forces and motions work in classical mechanics. In quantum physics, no discontinuous change is necessary — even though all measurable quantities are discrete. It works as follows.
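As a purely illustrative sketch (anticipating, in rough outline, the explanation that follows rather than reproducing it): a two-valued quantum variable can be carried continuously from 0 to 1, with only its measured values being discrete.

```python
import numpy as np

# Illustrative sketch: the state cos(t)|0> + sin(t)|1> changes smoothly
# with t, yet a measurement of the variable yields only 0 or 1.

for t in np.linspace(0, np.pi / 2, 5):
    amp0, amp1 = np.cos(t), np.sin(t)      # amplitudes vary continuously
    p1 = amp1 ** 2                         # probability that a measurement gives 1
    print(f"t = {t:.2f}: state = {amp0:.2f}|0> + {amp1:.2f}|1>, "
          f"P(measure 1) = {p1:.2f}")

# No discontinuous jump occurs in the state itself; only the measured
# value is discrete.
```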