
Thus the fact that there are complex organisms, and that there has been a succession of gradually improving inventions and scientific theories (such as Galilean mechanics, Newtonian mechanics, Einsteinian mechanics, quantum mechanics,…) tells us something more about what sort of computational universality exists in reality. It tells us that the actual laws of physics are, thus far at least, capable of being successively approximated by theories that give ever better explanations and predictions, and that the task of discovering each theory, given the previous one, has been computationally tractable, given the previously known laws and the previously available technology. The fabric of reality must be, as it were, layered, for easy self-access. Likewise, if we think of evolution itself as a computation, it tells us that there have been sufficiently many viable organisms, coded for by DNA, to allow better-adapted ones to be computed (i.e. to evolve) using the resources provided by their worse-adapted predecessors. So we can infer that the laws of physics, in addition to mandating their own comprehensibility through the Turing principle, ensure that the corresponding evolutionary processes, such as life and thought, are neither too time-consuming nor require too many resources of any other kind to occur in reality.

So, the laws of physics not only permit (or, as I have argued, require) the existence of life and thought, they require them to be, in some appropriate sense, efficient. To express this crucial property of reality, modern analyses of universality usually postulate computers that are universal in an even stronger sense than the Turing principle would, on the face of it, require: not only are universal virtual-reality generators possible, it is possible to build them so that they do not require impracticably large resources to render simple aspects of reality. From now on, when I refer to universality I shall mean it in this sense, unless otherwise stated.

Just how efficiently can given aspects of reality be rendered? What computations, in other words, are practicable in a given time and under a given budget? This is the basic question of computational complexity theory which, as I have said, is the study of the resources that are required to perform given computational tasks. Complexity theory has not yet been sufficiently well integrated with physics to give many quantitative answers. However, it has made a fair amount of headway in defining a useful, rough-and-ready distinction between tractable and intractable computational tasks. The general approach is best illustrated by an example. Consider the task of multiplying together two rather large numbers, say 4,220,851 and 2,594,209. Many of us remember the method we learned in childhood for performing such multiplications. It involves multiplying each digit of one number in turn by each digit of the other, while shifting and adding the results together in a standard way to give the final answer, in this case 10,949,769,651,859. Many might be loath to concede that this wearisome procedure makes multiplication ‘tractable’ in any ordinary sense of the word. (Actually there are more efficient methods for multiplying large numbers, but this one provides a good enough illustration.) But from the point of view of complexity theory, which deals in massive tasks carried out by computers that are not subject to boredom and almost never make mistakes, this method certainly does fall into the ‘tractable’ category.

What counts for ‘tractability’, according to the standard definitions, is not the actual time taken to multiply a particular pair of numbers, but the fact that the time does not increase too sharply when we apply the same method to ever larger numbers. Perhaps surprisingly, this rather indirect way of defining tractability works very well in practice for many (though not all) important classes of computational tasks. For example, with multiplication we can easily see that the standard method can be used to multiply numbers that are, say, about ten times as large, with very little extra work. Suppose, for the sake of argument, that each elementary multiplication of one digit by another takes a certain computer one microsecond (including the time taken to perform the additions, shifts and other operations that follow each elementary multiplication). When we are multiplying the seven-digit numbers 4,220,851 and 2,594,209, each of the seven digits in 4,220,851 has to be multiplied by each of the seven digits in 2,594,209. So the total time required for the multiplication (if the operations are performed sequentially) will be seven times seven, or 49 microseconds. For inputs roughly ten times as large as these, which would have eight digits each, the time required to multiply them would be 64 microseconds, an increase of only 31 per cent.
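The counting argument above can be made concrete in a few lines of code. What follows is a minimal illustrative sketch (the function name and structure are my own, not anything from the text): it carries out the schoolbook method digit by digit, tallying the elementary multiplications as it goes, so that seven-digit inputs cost 7 × 7 = 49 operations and eight-digit inputs cost 8 × 8 = 64.

```python
def long_multiply(a: int, b: int) -> tuple[int, int]:
    """Multiply a and b by the schoolbook (childhood) method,
    returning the product and the number of elementary
    digit-by-digit multiplications performed.

    Illustrative sketch only: real computers and libraries use
    faster algorithms for large numbers.
    """
    digits_a = [int(d) for d in str(a)][::-1]  # least-significant digit first
    digits_b = [int(d) for d in str(b)][::-1]
    product = 0
    ops = 0
    for i, da in enumerate(digits_a):
        for j, db in enumerate(digits_b):
            product += da * db * 10 ** (i + j)  # shift and add the partial result
            ops += 1
    return product, ops

product, ops = long_multiply(4220851, 2594209)
# product == 10949769651859, ops == 49 (seven digits times seven digits)
```

On the hypothetical one-microsecond-per-operation machine, the `ops` count read off from this sketch is exactly the running time in microseconds.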

Clearly, numbers over a huge range — certainly including any numbers that have ever been measured as the values of physical variables — can be multiplied in a tiny fraction of a second. So multiplication is indeed tractable for all purposes within physics (or, at least, within existing physics). Admittedly, practical reasons for multiplying much larger numbers can arise outside physics. For instance, products of prime numbers of 125 digits or so are of great interest to cryptographers. Our hypothetical machine could multiply two such prime numbers together, making a 250-digit product, in just over a hundredth of a second. In one second it could multiply two 1000-digit numbers, and real computers available today can easily improve upon those timings. Only a few researchers in esoteric branches of pure mathematics are interested in performing such incomprehensibly vast multiplications, yet we see that even they have no reason to regard multiplication as intractable.

By contrast, factorization, essentially the reverse of multiplication, seems much more difficult. One starts with a single number as input, say 10,949,769,651,859, and the task is to find two factors — smaller numbers which when multiplied together make 10,949,769,651,859. Since we have just multiplied them, we know that the answer in this case is 4,220,851 and 2,594,209 (and since those are both primes, it is the only correct answer). But without such inside knowledge, how would we have found the factors? You will search your childhood memories in vain for an easy method, for there isn’t one.

The most obvious method of factorization is to divide the input number by all possible factors, starting with 2 and continuing with every odd number, until one of them divides the input exactly. At least one of the factors (assuming the input is not a prime) can be no larger than the input’s square root, and that provides an estimate of how long the method might take. In the case we are considering, our computer would find the smaller of the two factors, 2,594,209, in just over a second. However, an input ten times as large would have a square root that was about three times as large, so factorizing it by this method would take up to three times as long. In other words, adding one digit to the input would now triple the running time. Adding another would triple it again, and so on. So the running time would increase in geometrical proportion, that is, exponentially, with the number of digits in the number we are factorizing. Factorizing a number with 25-digit factors by this method would occupy all the computers on Earth for centuries.
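The trial-division method described above can likewise be sketched in a few lines (the function name is mine; this is the naive method from the text, not a serious factorization algorithm). It tries 2 and then every odd number, counting the divisions, and on the example input it performs roughly 1.3 million of them before hitting the smaller factor, which at one microsecond per division matches the ‘just over a second’ figure:

```python
def smallest_factor(n: int) -> tuple[int, int]:
    """Find the smallest nontrivial factor of n by trial division,
    returning the factor and the number of divisions performed.
    If n is prime, n itself is returned.

    The running time grows with the square root of n, i.e.
    exponentially in the number of digits of n.
    """
    divisions = 0
    divisor = 2
    while divisor * divisor <= n:  # a factor must lie at or below sqrt(n)
        divisions += 1
        if n % divisor == 0:
            return divisor, divisions
        divisor = 3 if divisor == 2 else divisor + 2  # try 2, then only odd numbers
    return n, divisions  # no divisor found: n is prime

factor, divisions = smallest_factor(10949769651859)
# factor == 2594209, after roughly 1.3 million trial divisions
```

Each extra digit in the input multiplies the number of candidate divisors, and hence the running time, by about three, which is exactly the exponential blow-up that makes this method hopeless for inputs with 25-digit factors.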