One consequence of increased interaction between us and our artifacts is a celebration of an artifact’s embodiment. The more interactive it is, the more it should sound and feel beautiful. Since we might spend hours holding it, craftsmanship matters. Apple was the first to recognize that this appetite applies to interactive goods. The gold trim on the Apple Watch is meant to be felt. We end up caressing an iPad, stroking its magic surface, gazing into it for hours, days, weeks. The satin touch of a device’s surface, the liquidity of its flickers, the presence or lack of its warmth, the quality of its build, the temperature of its glow will come to mean a great deal to us.

What could be more intimate and interactive than wearing something that responds to us? Computers have been on a steady march toward us. At first computers were housed in distant air-conditioned basements, then they moved to nearby small rooms, then they crept closer to us perched on our desks, then they hopped onto our laps, and recently they snuck into our pockets. The next obvious step for computers is to lie against our skin. We call those wearables.

We can wear special spectacles that reveal an augmented reality. Wearing such a transparent computer (an early prototype was Google Glass) empowers us to see the invisible bits that overlay the physical world. We can inspect a cereal box in the grocery store and, as the young boy suggested, simply click it within our wearable to read its meta-information. Apple’s watch is a wearable computer, part health monitor, but mostly a handy portal to the cloud. The super-mega-processing power of the entire internet and World Wide Web is funneled through that little square on your wrist. But wearables can also mean smart clothes. Of course, itsy-bitsy chips can be woven into a shirt so that the shirt can alert a smart washing machine to its preferred washing cycles, but wearables are more about the wearer. Experimental smart fabrics such as those from Project Jacquard (funded by Google) have conductive threads and thin flexible sensors woven into them. They will be sewn into a shirt you interact with. You use the fingers of one hand to swipe the sleeve of your other arm the way you’d swipe an iPad, and for the same reason: to bring up something on a screen or in your spectacles. A smart shirt like the Squid, a prototype from Northeastern University, can feel—in fact measure—your posture, record it in a quantified way, and then actuate “muscles” in the shirt that contract precisely to hold you in the proper posture, much as a coach would. David Eagleman, a neuroscientist at Baylor College of Medicine in Texas, invented a supersmart wearable vest that translates one sense into another. The Sensory Substitution Vest takes audio from tiny microphones in the vest and translates those sound waves into a grid of vibrations that can be felt by a deaf person wearing it. Over a matter of months, the deaf person’s brain reconfigures itself to “hear” the vest vibrations as sound, so by wearing this interactive cloth, the deaf can hear.
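The vest’s exact engineering isn’t spelled out here, but the core idea—decompose incoming sound into frequency bands and drive one vibration motor per band—can be sketched in a few lines. In this minimal Python sketch, the 4×8 motor grid, the sample rate, and the frame size are all illustrative assumptions, not specifications of Eagleman’s actual device.

```python
import numpy as np

SAMPLE_RATE = 16_000           # assumed microphone sample rate (Hz)
FRAME_SIZE = 512               # samples per analysis frame (~32 ms)
GRID_ROWS, GRID_COLS = 4, 8    # hypothetical 32-motor vibration grid

def frame_to_vibrations(frame: np.ndarray) -> np.ndarray:
    """Map one audio frame to per-motor vibration intensities in [0, 1]."""
    # Frequency decomposition: magnitude spectrum of the windowed frame.
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    # Give each motor one slice of the spectrum and take its average energy.
    n_motors = GRID_ROWS * GRID_COLS
    bands = np.array_split(spectrum, n_motors)
    energy = np.array([band.mean() for band in bands])
    # Compress dynamic range (perceived loudness is roughly logarithmic),
    # then normalize so the strongest motor vibrates at full intensity.
    loudness = np.log1p(energy)
    peak = loudness.max()
    intensity = loudness / peak if peak > 0 else loudness
    return intensity.reshape(GRID_ROWS, GRID_COLS)

if __name__ == "__main__":
    # Demo: a 440 Hz tone should activate the low-frequency motors.
    t = np.arange(FRAME_SIZE) / SAMPLE_RATE
    tone = np.sin(2 * np.pi * 440 * t)
    print(np.round(frame_to_vibrations(tone), 2))
```

The skin can’t feel pitch directly, which is why a spatial layout matters: the brain learns that *where* the vest buzzes encodes frequency, and *how hard* it buzzes encodes loudness.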

You may have seen this coming, but the only way to get closer than wearables over our skin is to go under our skin. Jack into our heads. Directly connect the computer to the brain. Surgical brain implants really do work for the blind, the deaf, and the paralyzed, enabling the handicapped to interact with technology using only their minds. One experimental brain jack allowed a quadriplegic woman to use her mind to steer a robotic arm, picking up a coffee bottle and bringing it to her lips so she could drink from it. But these severely invasive procedures have not yet been tried as enhancement for healthy people. Noninvasive brain controllers have already been built for ordinary work and play, and they do work. I tried several lightweight brain-machine interfaces (BMIs), and I was able to control a personal computer simply by thinking about it. The apparatus generally consists of a hat of sensors, akin to a minimal bicycle helmet, with a long cable to the PC. You place it on your head and its many sensor pads sit on your scalp. The pads pick up brain waves, and with some biofeedback training you can generate signals at will. These signals can be programmed to perform operations such as “Open program,” “Move mouse,” and “Select this.” You can learn to “type.” It’s still crude, but the technology is improving every year.
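The software side of that loop—extract a feature from the brain-wave stream, compare it against thresholds calibrated during biofeedback training, dispatch a command—is simple to sketch. Everything in this Python sketch (the band-power features, the threshold values, the command names) is a hypothetical illustration, not the interface of any real BMI product.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class BandPower:
    """Normalized EEG band power from the scalp sensor pads (illustrative)."""
    alpha: float  # 8-12 Hz
    beta: float   # 13-30 Hz

def classify(features: BandPower) -> str:
    """Map a trained brain-wave pattern to a discrete command label."""
    # With biofeedback training, the user learns to push these features
    # past thresholds that were calibrated during a setup session.
    if features.beta > 0.8:
        return "select"
    if features.alpha > 0.7:
        return "move_mouse"
    return "idle"

COMMANDS: Dict[str, Callable[[], None]] = {
    "select": lambda: print("Select this"),
    "move_mouse": lambda: print("Move mouse"),
    "idle": lambda: None,
}

# Demo with one fake feature reading; a real system would stream these
# from the sensor cap many times per second.
COMMANDS[classify(BandPower(alpha=0.2, beta=0.9))]()
```

The crudeness the paragraph describes falls out of the design: a handful of thresholded features yields only a handful of reliable commands, which is why “typing” this way is slow and why each new distinguishable brain state must be trained one at a time.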

In the coming decades we’ll keep expanding what we interact with. The expansion follows three thrusts.

1. More senses

We will keep adding new sensors and senses to the things we make. Of course, everything will get eyes (vision is almost free), and hearing, but one by one we can add superhuman senses such as GPS location sensing, heat detection, X-ray vision, diverse molecule sensitivity, or smell. These permit our creations to respond to us, to interact with us, and to adapt themselves to our uses. Interactivity, by definition, is two-way, so this sensing elevates our interactions with technology.

2. More intimacy

The zone of interaction will continue to march closer to us. Technology will get closer to us than a watch or a pocketed phone. Interacting will be more intimate. It will always be on, everywhere. Intimate technology is a wide-open frontier. We think technology has saturated our private space, but we will look back in 20 years and realize it was still far away in 2016.

3. More immersion

Maximum interaction demands that we leap into the technology itself. That’s what VR allows us to do. Computation so close that we are inside it. From within a technologically created world, we interact with each other in new ways (virtual reality) or interact with the physical world in a new way (augmented reality). Technology becomes a second skin.

Recently I joined some drone hobbyists who meet in a nearby park on Sundays to race their small quadcopters. With flags and foam arches they map out a course over the grass for their drones to race around. The only way to fly drones at this speed is to get inside them. The hobbyists mount tiny eyes at the front of their drones and wear VR goggles to peer through them for what is called a first-person view (FPV). They are now the drone. As a visitor I don an extra set of goggles that piggyback on their camera signals, and so I find myself sitting in the same pilot’s seat, seeing what each pilot sees. The drones dart in, out, and around the course obstacles, chasing each other’s tails, bumping into other drones, in scenes reminiscent of a Star Wars pod race. One young guy who’s been flying radio-controlled model airplanes since he was a boy said that being able to immerse himself in the drone and fly from inside was the most sensual experience of his life. He said there was almost nothing more pleasurable than actually, really flying free. There was no virtuality. The flying experience was real.

 • • •

The convergence of maximum interaction plus maximum presence is found these days in free-range video games. For the past several years I’ve been watching my teenage son play console video games. I am not twitchy enough myself to survive more than four minutes in a game’s alterworld, but I find I can spend an hour just watching the big screen as my son encounters dangers, shoots at bad guys, or explores unknown territories and dark buildings. Like a lot of kids his age, he’s played the classic shooter games like Call of Duty, Halo, and Uncharted 2, which have scripted scenes of engagement. However, my favorite game as a voyeur is the now dated game Red Dead Redemption. This is set in the vast empty country of the cowboy West. Its virtual world is so huge that players spend a lot of time on their horses exploring the canyons and settlements, searching for clues, and wandering the land on vague errands. I’m happy to ride alongside as we pass through frontier towns in pursuit of his quests. It’s a movie you can roam in. The game’s open-ended architecture is similar to the very popular Grand Theft Auto, but it’s a lot less violent. Neither of us knows what will happen or how things will play out.