On the other hand, if you find even one difference between the rendering and the intended environment, you can immediately certify that the rendering is inaccurate. Unless, that is, the rendered environment has some intentionally unpredictable features. For example, a roulette wheel is designed to be unpredictable. If we make a film of roulette being played in a casino, that film may be said to be accurate if the numbers that are shown coming up in the film are the same numbers that actually came up when the film was made. The film will show the same numbers every time it is played: it is totally predictable. So an accurate image of an unpredictable environment must be predictable. But what does it mean for a virtual-reality rendering of a roulette wheel to be accurate? As before, it means that a user should not find it perceptibly different from the original. But this implies that the rendering must not behave identically to the original: if it did, either it or the original could be used to predict the other’s behaviour, and then neither would be unpredictable. Nor must it behave in the same way every time it is run. A perfectly rendered roulette wheel must be just as usable for gambling as a real one. Therefore it must be just as unpredictable. Also, it must be just as fair; that is, all the numbers must come up purely randomly, with equal probabilities.
How do we recognize unpredictable environments, and how do we confirm that purportedly random numbers are distributed fairly? We check whether a rendering of a roulette wheel meets its specifications in the same way that we check whether the real thing does: by kicking (spinning) it, and seeing whether it responds as advertised. We make a large number of similar observations and perform statistical tests on the outcomes. Again, however many tests we carry out, we cannot certify that the rendering is accurate, or even that it is probably accurate. For however randomly the numbers seem to come up, they may nevertheless fall into a secret pattern that would allow a user in the know to predict them. Or perhaps if we had asked out loud the date of the battle of Waterloo, the next two numbers that came up would invariably show that date: 18, 15. On the other hand, if the sequence that comes up looks unfair, we cannot know for sure that it is, but we might be able to say that the rendering is probably inaccurate. For example, if zero came up on our rendered roulette wheel on ten consecutive spins, we should conclude that we probably do not have an accurate rendering of a fair roulette wheel.
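To make that last point quantitative, here is a minimal sketch in Python of the kind of statistical check just described. The simulated spins merely stand in for observations of the rendering under test, and the particular test, a chi-squared comparison against the uniform hypothesis, is only one of many that could be used:

```python
import random

# Illustrative sketch: checking whether a (here, simulated) roulette
# rendering behaves like a fair 37-pocket European wheel.

POCKETS = 37            # numbers 0..36
SPINS = 37_000          # expected count per pocket: 1,000

# Stand-in data; a real check would record spins of the rendering itself.
spins = [random.randrange(POCKETS) for _ in range(SPINS)]

# Chi-squared goodness-of-fit statistic against the uniform hypothesis.
counts = [0] * POCKETS
for n in spins:
    counts[n] += 1
expected = SPINS / POCKETS
chi2 = sum((c - expected) ** 2 / expected for c in counts)

# With 36 degrees of freedom, a statistic much above roughly 51 would
# arise by chance less than 5% of the time, suggesting an unfair wheel.
print(f"chi-squared statistic: {chi2:.1f}")

# And the probability that a fair wheel shows zero ten times running:
print(f"P(ten zeros in a row) = {(1 / POCKETS) ** 10:.1e}")   # about 2e-16
```

On a fair 37-pocket wheel the chance of ten consecutive zeros is about 2 × 10⁻¹⁶, which is why that observation would justify the conclusion that the rendering is probably not an accurate rendering of a fair wheel.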
When discussing image generators, I said that the accuracy of a rendered image depends on the sharpness and other attributes of the user’s senses. With virtual reality that is the least of our problems. Certainly, a virtual-reality generator that renders a given environment perfectly for humans will not do so for dolphins or extraterrestrials. To render a given environment for a user with given types of sense organs, a virtual-reality generator must be physically adapted to such sense organs and its computer must be programmed with their characteristics. However, the modifications that have to be made to accommodate a given species of user are finite, and need only be carried out once. They amount to what I have called constructing a new ‘connecting cable’. As we consider environments of ever greater complexity, the task of rendering environments for a given type of user becomes dominated by writing the programs for calculating what those environments will do; the species-specific part of the task, being of fixed complexity, becomes negligible by comparison. This discussion is about the ultimate limits of virtual reality, so we are considering arbitrarily accurate, long and complex renderings. That is why it makes sense to speak of ‘rendering a given environment’ without specifying who it is being rendered for.
We have seen that there is a well-defined notion of the accuracy of a virtual-reality rendering: accuracy is the closeness, as far as is perceptible, of the rendered environment to the intended one. But it must be close for every possible way in which the user might behave, and that is why, no matter how observant one is when experiencing a rendered environment, one cannot certify that it is accurate (or probably accurate). But experience can sometimes show that a rendering is inaccurate (or probably inaccurate).
This discussion of accuracy in virtual reality mirrors the relationship between theory and experiment in science. There too, it is possible to confirm experimentally that a general theory is false, but never that it is true. And there too, a short-sighted view of science is that it is all about predicting our sense-impressions. The correct view is that, while sense-impressions always play a role, what science is about is understanding the whole of reality, of which only an infinitesimal proportion is ever experienced.
The program in a virtual-reality generator embodies a general, predictive theory of the behaviour of the rendered environment. The other components deal with keeping track of what the user is doing and with the encoding and decoding of sensory data; these, as I have said, are relatively trivial functions. Thus if the environment is physically possible, rendering it is essentially equivalent to finding rules for predicting the outcome of every experiment that could be performed in that environment. Because of the way in which scientific knowledge is created, ever more accurate predictive rules can be discovered only through ever better explanatory theories. So accurately rendering a physically possible environment depends on understanding its physics.
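The division of labour described here can be sketched schematically. In the following Python fragment every name and interface is hypothetical; the point is only that the predictive theory carries essentially all of the complexity, while tracking the user and encoding sensations are fixed, comparatively trivial wrappers around it:

```python
from typing import Callable

# Schematic of the division of labour: the predictive theory does the
# real work; user-tracking and sensory encoding are thin, fixed wrappers.

State = dict        # whatever the theory needs to describe the environment
Action = str        # what the user did, as reported by tracking hardware
Sensation = str     # signals destined for the user's sense organs

def render(theory: Callable[[State, Action], State],
           encode: Callable[[State], Sensation],
           actions: list[Action],
           state: State) -> list[Sensation]:
    """Feed each user action to the predictive theory and encode the result."""
    sensations = []
    for action in actions:
        state = theory(state, action)        # the hard, physics-laden part
        sensations.append(encode(state))     # the fixed, species-specific part
    return sensations

# Toy 'theory' of a ball that the user may push left or right.
def toy_theory(state: State, action: Action) -> State:
    return {"position": state["position"] + (1 if action == "push right" else -1)}

def toy_encode(state: State) -> Sensation:
    return f"you see the ball at position {state['position']}"

print(render(toy_theory, toy_encode,
             ["push right", "push right", "push left"], {"position": 0}))
```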
The converse is also true: discovering the physics of an environment depends on creating a virtual-reality rendering of it. Normally one would say that scientific theories only describe and explain physical objects and processes, but do not render them. For example, an explanation of eclipses of the Sun can be printed in a book. A computer can be programmed with astronomical data and physical laws to predict an eclipse, and to print out a description of it. But rendering the eclipse in virtual reality would require both further programming and further hardware. However, those are already present in our brains! The words and numbers printed by the computer amount to ‘descriptions’ of an eclipse only because someone knows the meanings of those symbols. That is, the symbols evoke in the reader’s mind some sort of likeness of some predicted effect of the eclipse, against which the real appearance of that effect will be tested. Moreover, the ‘likeness’ that is evoked is interactive. One can observe an eclipse in many ways: with the naked eye, or by photography, or using various scientific instruments; from some positions on Earth one will see a total eclipse of the Sun, from other positions a partial eclipse, and from anywhere else no eclipse at all. In each case an observer will experience different images, any of which can be predicted by the theory. What the computer’s description evokes in a reader’s mind is not just a single image or sequence of images, but a general method of creating many different images, corresponding to the many ways in which the reader may contemplate making observations. In other words, it is a virtual-reality rendering. Thus, in a broad enough sense, taking into account the processes that must take place inside the scientist’s mind, science and the virtual-reality rendering of physically possible environments are two terms denoting the same activity.
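The sense in which one predictive rule underlies many possible observations can be caricatured in a few lines of code. The shadow geometry and the numbers below are invented purely for illustration (a real prediction would use ephemeris data and proper spherical geometry), but the structure is the relevant point: the same function answers whatever observational question the observer chooses to pose:

```python
import math

# Toy illustration: one predictive rule, many possible observations.
# The circular umbra and penumbra on a flat map are entirely made up.

UMBRA_CENTRE = (0.0, 0.0)     # hypothetical shadow-path coordinates, km
UMBRA_RADIUS = 100.0
PENUMBRA_RADIUS = 3000.0

def eclipse_seen_from(x: float, y: float) -> str:
    """Predict what an observer at (x, y) sees at mid-eclipse."""
    d = math.hypot(x - UMBRA_CENTRE[0], y - UMBRA_CENTRE[1])
    if d <= UMBRA_RADIUS:
        return "total eclipse"
    if d <= PENUMBRA_RADIUS:
        return "partial eclipse"
    return "no eclipse"

# The same rule serves naked-eye observers, photographers and instruments
# at whatever locations they care to choose:
for place in [(50.0, 0.0), (1500.0, 800.0), (8000.0, 0.0)]:
    print(place, "->", eclipse_seen_from(*place))
```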
Now, what about the rendering of environments that are not physically possible? On the face of it, there are two distinct types of virtual-reality rendering: a minority that depict physically possible environments, and a majority that depict physically impossible environments. But can this distinction survive closer examination? Consider a virtual-reality generator in the act of rendering a physically impossible environment. It might be a flight simulator, running a program that calculates the view from the cockpit of an aircraft that can fly faster than light. The flight simulator is rendering that environment. But in addition the flight simulator is itself the environment that the user is experiencing, in the sense that it is a physical object surrounding the user. Let us consider this environment. Clearly it is a physically possible environment. Is it a renderable environment? Of course. In fact it is exceptionally easy to render: one simply uses a second flight simulator of the same design, running the identical program. Under those circumstances the second flight simulator can be thought of as rendering either the physically impossible aircraft, or a physically possible environment, namely the first flight simulator. Similarly, the first flight simulator could be regarded as rendering a physically possible environment, namely the second flight simulator. If we assume that any virtual-reality generator that can in principle be built, can in principle be built again, then it follows that every virtual-reality generator, running any program in its repertoire, is rendering some physically possible environment. It may be rendering other things as well, including physically impossible environments, but in particular there is always some physically possible environment that it is rendering.
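The claim that a second machine of the same design renders the first rests on nothing more than the determinism of such a program: two instances started in the same state behave identically, so either is perceptibly indistinguishable from, and hence an accurate rendering of, the other. A toy sketch, with entirely hypothetical 'instrument readouts':

```python
import random

# Two instances of the same deterministic program, started identically,
# behave identically; so either simulator can serve as an accurate
# rendering of the other.

def simulator(seed: int, steps: int) -> list[int]:
    rng = random.Random(seed)
    return [rng.randrange(360) for _ in range(steps)]   # e.g. compass headings

first_machine = simulator(seed=42, steps=5)
second_machine = simulator(seed=42, steps=5)
assert first_machine == second_machine    # each predicts the other exactly
print(first_machine)
```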