Even those who don’t aspire to becoming the next C. B. DeMille or Lina Wertmuller will routinely include multi-media in the documents they construct every day. Someone might start by typing, handwriting, or speaking an electronic mail message: “Lunch in the park may not be such a great idea. Look at the forecast.” To make the message more informative, he could then point his cursor at an icon representing a local television weather forecast and drag it across his screen to move the icon inside his document. When his friends get the message, they will be able to look at the forecast right on their screens—a professional-looking communication.

Kids in school will be able to produce their own albums or movies and make them available to friends and family on the information highway. When I have time, I enjoy making special greeting cards and invitations. If I’m making a birthday card for my sister, for instance, to personalize it I sometimes add pictures reminding her of fun events of the past year. In the future I’ll be able to include movie clips that I’ve customized with only a few minutes’ work. It will be simple to create an interactive “album” of photographs, videos, or conversations. Businesses of all types and sizes will communicate using multi-media. Lovers will use special effects to blend some text, a video clip from an old movie, and a favorite song to create a personal valentine.

As the fidelity of visual and audio elements improves, reality in all its aspects will be more closely simulated. This “virtual reality,” or VR, will allow us to “go” places and “do” things we never would be able to otherwise.

Vehicle simulators for airplanes, race cars, and spacecraft already provide a taste of virtual reality. Some of the most popular rides at Disneyland are simulated voyages. Software vehicle simulators, such as Microsoft Flight Simulator, are among the most popular games ever created for PCs, but they force you to use your imagination. Multimillion-dollar flight simulators at companies such as Boeing give you a much better ride. Viewed from the outside, they’re boxy, stilt-legged mechanical creatures that would look at home in a Star Wars movie. Inside, the cockpit video displays offer sophisticated data. Flight and maintenance instruments are linked to a computer that simulates flight characteristics—including emergencies—with an accuracy pilots say is remarkable.

A couple of friends and I “flew” a 747 simulator a couple of years ago. You sit down at a control panel in a cockpit identical to one in a real plane. Outside the windows, you see computer-generated color video images. When you “take off” in the simulator, you see an identifiable airport and its surroundings. The simulation of Boeing Field, for instance, might show a fuel truck on the runway and Mount Rainier in the distance. You hear the rush of air around wings that aren’t there, the clunk of nonexistent landing gear retracting. Six hydraulic systems under the simulator tilt and shake the cockpit. It’s pretty convincing.

The main purpose of these simulators is to give pilots a chance to gain experience in handling emergencies. When I was using the simulator, my friends decided to give me a surprise by having a small plane fly by. While I sat in the pilot’s seat, the all-too-real-looking image of a Cessna flashed into view. I wasn’t prepared for the “emergency,” and I crashed into it.

A number of companies, from entertainment giants to small start-ups, are planning to put smaller-scale simulator rides into shopping malls and urban sites. As the price of technology comes down, entertainment simulators may become as common as movie theaters are today. And it won’t be too many years until you’ll be able to have a high-quality simulation in your own living room.

Want to explore the surface of Mars? It’s a lot safer to do it via VR. How about visiting somewhere humans never will be able to go? A cardiologist might be able to swim through the heart of a patient to examine it in a way she never would have been able to with conventional instrumentation. A surgeon could practice a tricky operation many times, including simulated catastrophes, before she ever touches a scalpel to a real patient. Or you could use VR to wander through a fantasy of your own design.

In order to work, VR needs two different sets of technology—software that creates the scene and makes it respond to new information, and devices that allow the computer to transmit the information to our senses. The software will have to figure out how to describe the look, sound, and feel of the artificial world down to the smallest detail. That might sound overwhelmingly difficult, but actually it’s the easy part. We could write the software for VR today, but we need a lot more computer power to make it truly believable. At the pace technology is moving, though, that power will be available soon. The really hard part about VR is getting the information to convince the user’s senses.

Hearing is the easiest sense to fool; all you have to do is wear headphones. In real life, your two ears hear slightly different things because of their location on your head and the directions they point. Subconsciously you use those differences to tell where a sound is coming from. Software can re-create this by calculating for a given sound what each ear would be hearing. This works amazingly well. You can put on a set of headphones connected to a computer and hear a whisper in your left ear or footsteps walking up behind you.
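The calculation the software performs can be sketched in a few lines. This sketch uses Woodworth’s classic spherical-head formula for the arrival-time gap between the ears (the formula, the head radius, and the function names are my additions, not from the text):

```python
import math

HEAD_RADIUS = 0.0875    # metres, an average adult head (assumption)
SPEED_OF_SOUND = 343.0  # metres per second in room-temperature air

def interaural_time_difference(azimuth_deg):
    """Woodworth's spherical-head estimate of how much later a distant
    sound arrives at the far ear, for a source at the given azimuth
    (0 = straight ahead, 90 = directly to one side), in seconds."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

def delay_in_samples(azimuth_deg, sample_rate=44100):
    """Convert that time gap into whole samples by which to shift
    one headphone channel relative to the other."""
    return round(interaural_time_difference(azimuth_deg) * sample_rate)

# A source 90 degrees to the right reaches the left ear about
# 0.66 ms late -- roughly 29 samples at CD quality.
```

Delaying and slightly attenuating one channel by these amounts is enough to make a sound seem to come from a particular direction; a full simulation would also filter each channel for the head’s acoustic shadow.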

Your eyes are harder to fool than your ears, but vision is still pretty straightforward to simulate. VR equipment almost always includes a special set of goggles with lenses that focus each eye on its own small computer display. A head-tracking sensor allows the computer to figure out which direction your head is facing, so the computer can synthesize what you would be seeing. Turn your head to the right, and the scene portrayed by the goggles is farther to the right. Lift your face, and the goggles show the ceiling or sky. Today’s VR goggles are too heavy, too expensive, and don’t have enough resolution. The computer systems that drive them are still a bit too slow. If you turn your head quickly, the scene lags somewhat behind. This is very disorienting and after a short period of time causes most people to get headaches. The good news is that size, speed, weight, and cost are precisely the kinds of things that technology following Moore’s Law will correct soon.
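The head-tracking step described above amounts to turning two sensor angles into a viewing direction for the renderer. A minimal sketch, assuming the sensor reports yaw (turn left/right) and pitch (tilt up/down) in degrees:

```python
import math

def view_direction(yaw_deg, pitch_deg):
    """Unit vector the goggles should render toward, given head yaw
    and pitch from a tracking sensor. Convention (an assumption):
    +x is right, +y is up, +z is straight ahead."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.cos(pitch) * math.sin(yaw),   # right component
            math.sin(pitch),                   # up component
            math.cos(pitch) * math.cos(yaw))   # forward component

# Turn your head 90 degrees right: the vector swings to +x.
# Lift your face 90 degrees: it points straight up (+y).
```

Each frame, the computer reads the sensor, recomputes this vector, and redraws the scene from the new viewpoint; the lag the text mentions is the time that read-compute-redraw loop takes.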

Other senses are much more difficult to fool, because there are no good ways of connecting a computer to your nose or tongue, or to the surface of your skin. In the case of touch, the prevailing idea is a full bodysuit lined with tiny sensors and force-feedback devices in contact with the whole surface of your skin. I don’t think bodysuits will be common, but they’ll be feasible.

There are between 72 and 120 tiny points of color (called pixels) per inch on a typical computer monitor, for a total of between 300,000 and 1 million. A full bodysuit would presumably be lined with little touch sensor points—each of which could poke one specific tiny spot. Let’s call these little touch elements “tactels.”
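The monitor figures above can be checked directly. The display modes below are my own examples of screens common at the time, chosen to show where the 300,000-to-1-million range comes from:

```python
# Total pixel counts for typical display modes of the era.
modes = {"VGA": (640, 480), "SVGA": (800, 600),
         "XGA": (1024, 768), "SXGA": (1280, 1024)}
totals = {name: w * h for name, (w, h) in modes.items()}
# VGA  ->   307,200      XGA  ->   786,432
# SVGA ->   480,000      SXGA -> 1,310,720
```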

If the suit had enough of these tactels, and if they were controlled finely enough, any touch sensation could be duplicated. If a large number of tactels poked all together at precisely the same depth, the resulting “surface” would feel smooth, as if a piece of polished metal were against your skin. If they pushed with a variety of randomly distributed depths, it would feel like a rough texture.

Between 1 million and 10 million tactels—depending on how many different levels of depth a tactel had to convey—would be needed for a VR bodysuit. Studies of the human skin show that a full bodysuit would have to have about 100 tactels per inch—a few more on the fingertips, lips, and a couple of other sensitive spots. Most skin actually has poor touch resolution. I’d guess that 256 levels of depth would be enough for the highest-quality simulation. That’s the same number of colors most computer displays use for each pixel.
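A rough consequence of these figures can be worked out in a few lines. With 256 depth levels, each tactel needs one byte per update; the 30-updates-per-second refresh rate is my own assumption (matching film and video), not a figure from the text:

```python
# Back-of-the-envelope data rate for a hypothetical VR bodysuit,
# using the chapter's figures: 1 to 10 million tactels, 256 depth
# levels apiece (one byte each), refreshed at an assumed 30 Hz.

BYTES_PER_TACTEL = 1   # 256 levels fit exactly in 8 bits
REFRESH_HZ = 30        # assumption: update as often as video

def suit_bandwidth(tactels, refresh_hz=REFRESH_HZ):
    """Bytes per second needed to reposition every tactel each frame."""
    return tactels * BYTES_PER_TACTEL * refresh_hz

low = suit_bandwidth(1_000_000)    #  30,000,000 bytes/s (~30 MB/s)
high = suit_bandwidth(10_000_000)  # 300,000,000 bytes/s (~300 MB/s)
```

So driving such a suit would mean moving tens to hundreds of megabytes of touch data every second, which helps explain why the text calls touch the hard case.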