As it turns out, you can simulate reality fast enough to do vivid VR, but getting signal into and out of the brain requires enormous bandwidth, and so, though a VR simulator could deliver enough signal to produce a vivid reality for one person or a few people, a single long-distance VR call would theoretically have required more bandwidth than the whole United States had used for radio, phone, and telegraph to get through the Second World War. Though there was an enormous market for VR worlds that could be shared as real, living, breathing experiences, supplying that market would have seemed to require building enormous facilities just to provide enough bandwidth for all that signals traffic.
There was another enormous demand for bandwidth lurking in the wings too. Self-piloting vehicles would work best if every car on the road could share information with every other car. You wanted a car that could think and look fast enough to figure out that a ball rolling into the street was apt to be pursued by a child, or to dodge around—and alert every other car to—a board with nails lying on the pavement. Once again, the bandwidth needed was just much, much bigger than the bandwidth that could even conceivably be made available.
Computer speed depends in part upon internal bandwidth—because the size of the chunks of data moving around inside the computer determines how fast the computer can rearrange information, and therefore “think.” Since VR communication had to move through the computer anyway, putting it through a faster system was highly desirable, and the fastest systems of all, by the mid-twenty-first century, were quantum computers—systems that took advantage of the peculiarity of quantum physics that a single object could behave as if it were in a distribution of several mutually exclusive states all at once. In effect, you could solve the problem of the dead cat and the problem of the live cat simultaneously, and each bit in the computer’s memory and each operation in its registers could be in parallel with itself; a single machine could be made to act like many thousands all at once, with a tremendous gain in speed.
But massively parallel processing had another use—it was exactly how computer engineers had been able to simulate many human brain functions. The ability to construct a face for recognition, or fill in the lacunae in a partial text, or smooth out a partially degraded hologram from fragments required the massive and fast parallelism that only the quantum computer could provide. The quantum computer, then, made real-time VR communication possible, for it made it possible to transmit a small fraction of the needed information for the simulation, and from that information to construct a full simulation at the other end.
But the uncertainty principle limited the user’s control of the information; you couldn’t know which state any of the quantum processors was “really” in without preventing the parallel processing you needed. If you bought into the Copenhagen Interpretation, this was no problem; you simply treated it as a computational trick that allowed you to get away with sending less information than the other side received. Likewise, in the Aphysical Interpretation, the problem was no problem—it was as if you had two ponds, with a stick in each one, and the two sticks connected by a string: a wave in one would make a wave in the other, and multiple and complex wave patterns went through because the stick could move in multiple and complex ways.
But in the Many Worlds Interpretation, what you were doing was solving the problem by using all of the neighboring worlds plus your own—and the uncertainty principle would not let you know which answer was going to which address. Those who thought about it at all, in those terms—my friends and I, in grad school, had often argued about it over beer—had always assumed that the solution must be that in a quantum computer network, there must be a great many “cosmic wrong numbers,” i.e., messages that went to the wrong universe. That always led us to argue that we had found a reason the Many Worlds Interpretation could not be the right one: the uncertainty principle, applied to the addressing problem, seemed to say that you couldn’t know whether or not a number was wrong, yet surely anyone who got a message from another universe would notice, and thus all those “wrong numbers” would violate it.
We had never considered that Nature might solve the problem by not allowing anyone on the receiving end of a message to know what universe he was in. And yet that solution now seemed exactly the sort of thing you might expect of Nature in her better moods. Whoever was receiving the message would exist in a suspended state, like Schrödinger’s cat, for as long as they were on the line; unlike the cat, they would not be half alive and half dead, but fractionally in many different worlds. Hanging up or briefly disconnecting—and the line-sharing protocols in any modern network guaranteed brief disconnects many times per second—was exactly the equivalent of opening the box and collapsing the probability distribution onto a single state—living or dead for the cat, some universe or other for you.
Once the wide-band quantum network had come into use for VR, everything else had been piggybacked onto it, because it had so much room to spare—transportation signals, fax, television, telephone, and all the rest. Whenever you went on-line in the quantum communication system, you oscillated through many of the possible system states, many times per second. This was true whether you were a person, a bale of hay, or an e-mail message.
“So,” I concluded, at the end of it all, four sandwiches, six cups of coffee, and too many arguments and diagrams to count later, “basically we’ve been reshuffling all the worlds at a faster and faster rate, and as each big family of event sequences gets VR and quantum computing, the number of worlds we can interchange with has increased manyfold. By now nobody is in the world they began in. Mostly the worlds are enough alike so that people adapt, though I’m sure there are more street crazies and mental patients than there used to be in most worlds, and many of them probably spend all of their time trying to tell anyone who will listen that something is terribly wrong.”
Iphwin nodded, and said, “And that brings me to who or what I am. As systems grow, as you know, we have to decentralize control more and more to keep them functioning. That ends up implying, among other things, that instead of a central administration governing everything, you get by with roving pieces of software that just look for whatever isn’t working as it should. That is, systems administration stops looking like a police and court system, and starts to look more and more like an immune system. Systems administration becomes a matter of operating a population of cyberphages—benign viruses that keep users from doing things that damage the system. It’s easier and cheaper than keeping everything tied to a central program that has to know everything.
“A few years ago, one cyberphage began to notice that there was a common problem in every one of the parallel universes, all at once. And that problem was the disappearance of the billions of nodes found in the United States, American Reich, Purified Christian Commonwealth—whatever you called that piece of land between San Diego and the St. Lawrence or Puget Sound and the Everglades. Once it noticed that there was no traffic at all, for several seconds, it began to track this—only to discover no traffic for periods of months or years, across all the event sequences to which it had any access—that is, across an enormous number of worlds.