But here’s the even more surprising part: The advent of AI didn’t diminish the performance of purely human chess players. Quite the opposite. Cheap, supersmart chess programs inspired more people than ever to play chess, at more tournaments than ever, and the players got better than ever. There are more than twice as many grandmasters now as there were when Deep Blue first beat Kasparov. The top-ranked human chess player today, Magnus Carlsen, trained with AIs and has been deemed the most computerlike of all human chess players. He also has the highest human grandmaster rating of all time.
If AI can help humans become better chess players, it stands to reason that it can help us become better pilots, better doctors, better judges, better teachers.
Yet most of the commercial work done by AI will be handled by programs that are nothing like humans. The bulk of AI will be special-purpose software brains that can, for example, translate any language into any other language, but do little else. Drive a car, but not converse. Or recall every pixel of every video on YouTube, but not anticipate your work routines. In the next 10 years, 99 percent of the artificial intelligence that you will interact with, directly or indirectly, will be nerdily narrow, supersmart specialists.
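To see just how narrow such a specialist is, consider what a translation “brain” looks like in practice. Here is a minimal sketch, assuming Python and the Hugging Face transformers library; the pipeline’s default model is an illustrative assumption, not a claim about any particular product:

```python
# A minimal sketch of a special-purpose "software brain," assuming the
# Hugging Face `transformers` library. It translates English to French
# and can do nothing else; the default pipeline model is illustrative.
from transformers import pipeline

translate = pipeline("translation_en_to_fr")  # one narrow skill, nothing more

result = translate("A supersmart specialist can translate, but it cannot drive.")
print(result[0]["translation_text"])
# This same program cannot converse, drive a car, or anticipate your routines.
```

The entire program is one skill wrapped in a few lines; everything outside that skill is simply absent.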
In fact, robust intelligence may be a liability, especially if by “intelligence” we mean our peculiar self-awareness, all our frantic loops of introspection and messy currents of self-consciousness. We want our self-driving car to be inhumanly focused on the road, not obsessing over an argument it had with the garage. The synthetic Dr. Watson at our hospital should be maniacal in its work, never wondering whether it should have majored in finance instead. What we want instead of conscious intelligence is artificial smartness. As AIs develop, we might have to engineer ways to prevent consciousness in them. Our premium AI services will likely be advertised as consciousness-free.
Nonhuman intelligence is not a bug; it’s a feature. The most important thing to know about thinking machines is that they will think different.
Because of a quirk in our evolutionary history, we are cruising as the only self-conscious species on our planet, leaving us with the incorrect idea that human intelligence is singular. It is not. Our intelligence is a society of intelligences, and this suite occupies only a small corner of the many types of intelligences and consciousnesses that are possible in the universe. We like to call our human intelligence “general purpose,” because compared with other kinds of minds we have met, it can solve more types of problems, but as we build more and more synthetic minds we’ll come to realize that human thinking is not general at all. It is only one species of thinking.
The kind of thinking done by the emerging AIs today is not like human thinking. While they can accomplish tasks that we once believed only humans could do, such as playing chess, driving a car, or describing the contents of a photograph, they don’t do them in a humanlike fashion. I recently uploaded 130,000 of my personal snapshots, my entire archive, to Google Photos, and the new Google AI remembers all the objects in all the images from my life. When I ask it to show me any image with a bicycle in it, or a bridge, or my mother, it will instantly display them. Facebook has the ability to ramp up an AI that can view a photo portrait of any person on earth and correctly identify them out of some 3 billion people online. Human brains cannot scale to this degree, which makes this artificial ability very unhuman. We are notoriously bad at statistical thinking, so we are making intelligences with very good statistical skills, precisely so that they don’t think like us. One of the advantages of having AIs drive our cars is that they won’t drive like humans, with our easily distracted minds.
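For a sense of what that object memory amounts to mechanically, here is a minimal sketch of statistical photo tagging, assuming PyTorch and torchvision’s off-the-shelf pretrained ResNet-50. The photo folder, the tag index, and the exact label strings are illustrative assumptions; Google’s actual system is vastly larger and more sophisticated:

```python
# A minimal sketch of statistical photo tagging, assuming PyTorch and
# torchvision's pretrained ResNet-50. The folder name, index structure,
# and label strings are illustrative; real photo services go far beyond this.
from pathlib import Path

import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]  # the 1,000 ImageNet class names

def tag_photo(path: Path, top_k: int = 3) -> list[str]:
    """Return the classifier's best statistical guesses for a photo's contents."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(image).softmax(dim=1)[0]
    return [labels[i] for i in probs.topk(top_k).indices]

# Build a tag -> photos index, then "show me any image with a bicycle in it."
index: dict[str, list[Path]] = {}
for photo in Path("photos").glob("*.jpg"):
    for tag in tag_photo(photo):
        index.setdefault(tag, []).append(photo)

print(index.get("mountain bike", []))  # label depends on the model's class list
```

The point is not the particular model but the mechanism: pure statistical pattern matching over pixels, with no humanlike recollection anywhere in the loop.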
In a superconnected world, thinking different is the source of innovation and wealth. Just being smart is not enough. Commercial incentives will make industrial-strength AI ubiquitous, embedding cheap smartness into all that we make. But a bigger payoff will come when we start inventing new kinds of intelligences and entirely new ways of thinking, in the way a calculator is a genius at arithmetic: calculation is only one type of smartness. We don’t know what the full taxonomy of intelligence is right now. Some traits of human thinking will be common (as common as bilateral symmetry, segmentation, and tubular guts are in biology), but the possibility space of viable minds will likely contain traits far outside what we have evolved. It is not necessary that this type of thinking be faster than humans’, greater, or deeper. In some cases it will be simpler.
The variety of potential minds in the universe is vast. Recently we’ve begun to explore the species of animal minds on earth, and as we do we have discovered, with increasing respect, that we have already met many other kinds of intelligences. Whales and dolphins keep surprising us with their intricate and weirdly different intelligence. Precisely how a mind can be different from or superior to our minds is very difficult to imagine. One way to imagine what greater yet different intelligences might be like is to begin creating a taxonomy of the variety of minds. This matrix of minds would include animal minds, machine minds, and possible minds, particularly transhuman minds like the ones science fiction writers have come up with.
This fanciful exercise is worth doing because, while it is inevitable that we will manufacture intelligences in all that we make, it is not inevitable or obvious what their character will be. Their character will dictate their economic value and their roles in our culture. Outlining the possible ways that a machine might be smarter than us (even in theory) will assist us in both directing this advance and managing it. A few really smart people, like physicist Stephen Hawking and genius inventor Elon Musk, worry that making supersmart AIs could be our last invention before they replace us (though I don’t believe this), so exploring possible types is prudent.
Imagine we land on an alien planet. How would we measure the level of the intelligences we encounter there? This is an extremely difficult question because we have no real definition of our own intelligence, in part because until now we didn’t need one.
In the real world—even in the space of powerful minds—trade-offs rule. One mind cannot do all mindful things perfectly well. A particular species of mind will be better in certain dimensions, but at a cost of lesser abilities in other dimensions. The smartness that guides a self-driving truck will be a different species than the one that evaluates mortgages. The AI that will diagnose your illness will be significantly different from the artificial smartness that oversees your house. The superbrain that predicts the weather accurately will be in a completely different kingdom of mind from the intelligence woven into your clothes. The taxonomy of minds must reflect the different ways in which minds are engineered with these trade-offs. In the short list below I include only those kinds of minds that we might consider superior to us; I’ve omitted the thousands of species of mild machine smartness—like the brains in a calculator—that will cognify the bulk of the internet of things.
Some possible new minds:
A mind like a human mind, just faster in answering (the easiest AI mind to imagine).
A very slow mind, composed primarily of vast storage and memory.
A global supermind composed of millions of individual dumb minds in concert (a toy sketch of this idea follows the list).
A hive mind made of many very smart minds, but unaware that they form a hive.
A borg supermind composed of many smart minds that are very aware they form a unity.
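Of these, the supermind built from dumb parts already has a humble precedent in machine learning: ensembles of weak learners. A toy sketch in plain Python (the 55 percent voter accuracy and the vote counts are invented for illustration) shows how majority voting turns many barely competent minds into a nearly infallible one:

```python
# A toy sketch of a "supermind" built from dumb parts: many voters that are
# each barely better than a coin flip (55% accurate), combined by majority
# vote. The numbers are invented for illustration (the Condorcet jury theorem).
import random

random.seed(0)

def dumb_mind() -> bool:
    """One weak voter: answers correctly with probability 0.55."""
    return random.random() < 0.55

def supermind(n_voters: int) -> bool:
    """Majority vote over n independent weak voters."""
    correct_votes = sum(dumb_mind() for _ in range(n_voters))
    return correct_votes > n_voters / 2

trials = 2_000
for n in (1, 11, 1_001):
    accuracy = sum(supermind(n) for _ in range(trials)) / trials
    print(f"{n:>5} dumb minds -> {accuracy:.1%} correct")
# With enough independent weak voters, the collective is almost always right.
```

No single voter in that collective is smart, yet the concert of them is nearly always right; scale that intuition up and you have a glimpse of the supermind.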