Brains exist because the distribution of resources necessary for survival and the hazards that threaten survival vary in space and time.
The modern geography of the brain has a deliciously antiquated feel to it—rather like a medieval map with the known world encircled by terra incognita where monsters roam.
In mathematics you don’t understand things. You just get used to them.
Ever since the emergence of the computer in the middle of the twentieth century, there has been ongoing debate not only about the ultimate extent of its abilities but about whether the human brain itself could be considered a form of computer. As far as the latter question was concerned, the consensus has veered from viewing these two kinds of information-processing entities as being essentially the same to their being fundamentally different. So is the brain a computer?
When computers first became a popular topic in the 1940s, they were immediately regarded as thinking machines. The ENIAC, which was announced in 1946, was described in the press as a “giant brain.” As computers became commercially available in the following decade, ads routinely referred to them as brains capable of feats that ordinary biological brains could not match.
A 1957 ad showing the popular conception of a computer as a giant brain.
Computer programs quickly enabled the machines to live up to this billing. The General Problem Solver, created in 1959 by Herbert A. Simon, J. C. Shaw, and Allen Newell at Carnegie Mellon University, was able to devise a proof of a theorem that mathematicians Bertrand Russell (1872–1970) and Alfred North Whitehead (1861–1947) had been unable to solve in their famous 1913 work Principia Mathematica. What became apparent in the decades that followed was that computers could readily exceed unassisted human capability in such intellectual exercises as solving mathematical problems, diagnosing disease, and playing chess, but had difficulty controlling a robot tying shoelaces or understanding the commonsense language that a five-year-old child could comprehend. Computers are only now starting to master these sorts of skills. Ironically, the evolution of computer intelligence has proceeded in the opposite direction of human maturation.
The issue of whether or not the computer and the human brain are at some level equivalent remains controversial today. In the introduction I mentioned that there were millions of links for quotations on the complexity of the human brain. Similarly, a Google inquiry for “Quotations: the brain is not a computer” also returns millions of links. In my view, statements along these lines are akin to saying, “Applesauce is not an apple.” Technically that statement is true, but you can make applesauce from an apple. Perhaps more to the point, it is like saying, “Computers are not word processors.” It is true that a computer and a word processor exist at different conceptual levels, but a computer can become a word processor if it is running word processing software and not otherwise. Similarly, a computer can become a brain if it is running brain software. That is what researchers including myself are attempting to do.
The question, then, is whether or not we can find an algorithm that would turn a computer into an entity that is equivalent to a human brain. A computer, after all, can run any algorithm that we might define because of its innate universality (subject only to its capacity). The human brain, on the other hand, is running a specific set of algorithms. Its methods are clever in that they allow for significant plasticity and the restructuring of the brain's own connections based on its experience, but these functions can be emulated in software.
The universality of computation (the concept that a general-purpose computer can implement any algorithm)—and the power of this idea—emerged at the same time as the first actual machines. There are four key concepts that underlie the universality and feasibility of computation and its applicability to our thinking. They are worth reviewing here, because the brain itself makes use of them. The first is the ability to communicate, remember, and compute information reliably. Around 1940, if you used the word “computer,” people assumed you were talking about an analog computer, in which numbers were represented by different levels of voltage, and specialized components could perform arithmetic functions such as addition and multiplication. A big limitation of analog computers, however, was that they were plagued by accuracy issues. Numbers could only be represented with an accuracy of about one part in a hundred, and as voltage levels representing them were processed by increasing numbers of arithmetic operators, errors would accumulate. If you wanted to perform more than a handful of computations, the results would become so inaccurate as to be meaningless.
Anyone who can remember the days of recording music with analog tape machines will recall this effect. There was noticeable degradation on the first copy, as it was a little noisier than the original. (Remember that “noise” represents random inaccuracies.) A copy of the copy was noisier still, and by the tenth generation the copy was almost entirely noise. It was assumed that the same problem would plague the emerging world of digital computers. We can understand such concerns if we consider the communication of digital information through a channel. No channel is perfect and each one will have some inherent error rate. Suppose we have a channel that has a .9 probability of correctly transmitting each bit. If I send a message that is one bit long, the probability of accurately transmitting it through that channel will be .9. Suppose I send two bits? Now the accuracy is .9² = .81. How about if I send one byte (eight bits)? I have less than an even chance (.43 to be exact) of sending it correctly. The probability of accurately sending five bytes is about 1 percent.
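The arithmetic here can be checked in a few lines of Python. This is a sketch, not from the original text, and it assumes each bit is transmitted correctly with an independent probability of .9; the helper name `p_intact` is mine.

```python
def p_intact(p_bit: float, n_bits: int) -> float:
    """Probability that an n-bit message crosses the channel with no errors,
    assuming independent per-bit success probability p_bit."""
    return p_bit ** n_bits

print(p_intact(0.9, 1))    # 0.9          — one bit
print(p_intact(0.9, 2))    # 0.81         — two bits
print(p_intact(0.9, 8))    # ≈ 0.43       — one byte
print(p_intact(0.9, 40))   # ≈ 0.015      — five bytes, about 1 percent
```

The probabilities multiply because each bit is an independent trial, which is why accuracy collapses exponentially with message length.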
An obvious solution to circumvent this problem is to make the channel more accurate. Suppose the channel makes only one error in a million bits. If I send a file consisting of a half million bytes (about the size of a modest program or database), the probability of correctly transmitting it is less than 2 percent, despite the very high inherent accuracy of the channel. Given that a single-bit error can completely invalidate a computer program and other forms of digital data, that is not a satisfactory situation. Regardless of the accuracy of the channel, since the likelihood of an error in a transmission grows rapidly with the size of the message, this would seem to be an intractable barrier.
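The same multiplication shows why even a very accurate channel fails here. A minimal sketch, assuming one error per million bits and treating a half-million-byte file as 4,000,000 independent bit transmissions:

```python
# Per-bit success probability for a channel that errs once per million bits.
p_bit = 1 - 1e-6

# A half-million-byte file is 4,000,000 bits.
n_bits = 500_000 * 8

# Probability the entire file arrives with no bit errors.
p_file = p_bit ** n_bits
print(p_file)  # ≈ 0.018 — less than 2 percent, despite the channel's accuracy
```

The result is roughly e to the −4, since the expected number of errors in 4 million bits is 4; a single such error can invalidate the whole program or database.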
Analog computers sidestepped this problem through graceful degradation: users presented only problems in which small errors could be tolerated, and so long as they limited themselves to a constrained set of calculations, the computers proved somewhat useful. Digital computers, on the other hand, require continual communication, not just from one computer to another, but within the computer itself. There is communication from its memory to and from the central processing unit. Within the central processing unit, there is communication from one register to another and back and forth to the arithmetic unit, and so forth. Even within the arithmetic unit, there is communication from one bit register to another. Communication is pervasive at every level. If we consider that error rates escalate rapidly with increased communication and that a single-bit error can destroy the integrity of a process, digital computation was doomed—or so it seemed at the time.