I imagined nonsensical conversations around a future office watercooler: “How much information do you have?” “Switzerland is a great country because of all the information they have there!” “I hear the Information Price Index is going up!”

It sounds nonsensical because information isn’t as tangible or measurable as the materials that defined previous ages, but information has become increasingly important to us. The information revolution is just beginning. The cost of communications will drop as precipitously as the cost of computing already has. When it gets low enough and is combined with other advances in technology, “information highway” will no longer be just a phrase for eager executives and excited politicians. It will be as real and as far-reaching as “electricity.” To understand why information is going to be so central, it’s important to know how technology is changing the ways we handle information.

The majority of this chapter is devoted to such an explanation. The material that follows is intended to give readers without a background in computer principles and history enough information to enjoy the rest of the book. If you understand how digital computers work, you probably already know the material cold, so feel free to skip to chapter 3.

The most fundamental difference we’ll see in future information is that almost all of it will be digital. Whole printed libraries are already being scanned and stored as electronic data on disks and CD-ROMs. Newspapers and magazines are now often completely composed in electronic form and printed on paper as a convenience for distribution. The electronic information is stored permanently—or for as long as anyone wants it—in computer databases: giant banks of journalistic data accessible through on-line services. Photographs, films, and videos are all being converted into digital information. Every year, better methods are being devised to quantify information and distill it into quadrillions of atomistic packets of data. Once digital information is stored, anyone with access and a personal computer can instantaneously recall, compare, and refashion it. What characterizes this period in history is the completely new ways in which information can be changed and manipulated, and the increasing speeds at which we can handle it. The computer’s abilities to provide low-cost, high-speed processing and transmission of digital data will transform the conventional communication devices in homes and offices.

The idea of using an instrument to manipulate numbers isn’t new. The abacus had been in use in Asia for nearly 5,000 years by 1642, when the nineteen-year-old French scientist Blaise Pascal invented a mechanical calculator. It was a counting device. Three decades later, the German mathematician Gottfried von Leibniz improved on Pascal’s design. His “Stepped Reckoner” could multiply, divide, and calculate square roots. Reliable mechanical calculators, powered by rotating dials and gears, descendants of the Stepped Reckoner, were the mainstay of business until their electronic counterparts replaced them. When I was a boy, a cash register was essentially a mechanical calculator linked to a cash drawer.

More than a century and a half ago, a visionary British mathematician glimpsed the possibility of the computer and that glimpse made him famous even in his day. Charles Babbage was a professor of mathematics at Cambridge University who conceived the possibility of a mechanical device that would be able to perform a string of related calculations. As early as the 1830s, he was drawn to the idea that information could be manipulated by a machine if the information could be converted into numbers first. The steam-powered machine Babbage envisioned would use pegs, toothed wheels, cylinders, and other mechanical parts, the apparatus of the then-new Industrial Age. Babbage believed his “Analytical Engine” would be used to take the drudgery and inaccuracy out of calculating.

He lacked the terms we now use to refer to the parts of his machine. He called the central processor, or working guts of his machine, the “mill.” He referred to his machine’s memory as the “store.” Babbage imagined information being transformed the way cotton was—drawn from a store (warehouse) and milled into something new.

His Analytical Engine would be mechanical, but he foresaw how it would be able to follow changing sets of instructions and thus serve different functions. This is the essence of software. It is a comprehensive set of rules a machine can be given to “instruct” it how to perform particular tasks. Babbage realized that to create these instructions he would need an entirely new kind of language, and he devised one using numbers, letters, arrows, and other symbols. The language was designed to let Babbage “program” the Analytical Engine with a long series of conditional instructions, which would allow the machine to modify its actions in response to changing situations. He was the first to see that a single machine could serve a number of different purposes.
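
In today's terms, a "conditional instruction" of this kind might be sketched as follows (written in a modern language, Python, purely for illustration; Babbage's own notation of numbers, letters, and arrows looked nothing like this). The point is only that the same machine, following the same set of rules, acts differently depending on the situation it encounters.

```python
# A tiny set of instructions whose effect depends on the data it is given:
# the same "machine" produces different results in different situations.
def process(value, operation):
    if operation == "square":    # conditional instruction: do one thing...
        return value * value
    elif operation == "double":  # ...or another, depending on the instruction given
        return value + value
    else:                        # ...or simply pass the value through unchanged
        return value

print(process(7, "square"))  # 49
print(process(7, "double"))  # 14
```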

For the next century mathematicians worked with the ideas Babbage had outlined and finally, by the mid-1940s, an electronic computer was built based on the principles of his Analytical Engine. It is hard to sort out the paternity of the modern computer, because much of the thinking and work was done in the United States and Britain during World War II under the cloak of wartime secrecy. Three major contributors were Alan Turing, Claude Shannon, and John von Neumann.

In the mid-1930s, Alan Turing, like Babbage a superlative Cambridge-trained British mathematician, proposed what is known today as a Turing machine. It was his version of a completely general-purpose calculating machine that could be instructed to work with almost any kind of information.

In the late 1930s, when Claude Shannon was still a student, he demonstrated that a machine executing logical instructions could manipulate information. His insight, the subject of his master’s thesis, was about how computer circuits—closed for true and open for false—could perform logical operations, using the number 1 to represent “true” and 0 to represent “false.”
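
A minimal sketch of Shannon's insight, using a modern language (Python) rather than actual circuitry: treat 1 as a closed circuit ("true") and 0 as an open one ("false"), and logical operations fall out of simple arithmetic on those two values. The gate names below are today's conventional terms, not Shannon's.

```python
# A closed circuit carries current (1, "true"); an open circuit does not (0, "false").
CLOSED, OPEN = 1, 0

def and_gate(a, b):
    """Two switches in series: current flows only if both are closed."""
    return a & b

def or_gate(a, b):
    """Two switches in parallel: current flows if either one is closed."""
    return a | b

def not_gate(a):
    """An inverter: closed becomes open, open becomes closed."""
    return 1 - a

# The full truth tables emerge from nothing but 0s and 1s.
for a in (OPEN, CLOSED):
    for b in (OPEN, CLOSED):
        print(a, b, "AND:", and_gate(a, b), "OR:", or_gate(a, b), "NOT a:", not_gate(a))
```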

This is a binary system. It’s a code. Binary is the alphabet of electronic computers, the basis of the language into which all information is translated, stored, and used within a computer. It’s simple, but so vital to the understanding of the way computers work that it’s worth pausing here to explain it more fully.
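
As a rough illustration (the particular encodings shown are simply the standard ones a present-day personal computer uses, not part of the argument), here is how a number and a letter look once translated into that alphabet of 0s and 1s, and how translating back recovers them exactly:

```python
# The number 250 and the letter "A", translated into strings of binary digits.
print(format(250, "08b"))       # 11111010 -- the number 250 as eight binary digits
print(format(ord("A"), "08b"))  # 01000001 -- the letter "A" via its standard character code, 65

# Translating back recovers the original information exactly.
print(int("11111010", 2))       # 250
print(chr(int("01000001", 2)))  # A
```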

Imagine you have a room that you want illuminated with as much as 250 watts of electric lighting and you want the lighting to be adjustable, from 0 watts of illumination (total darkness) to the full wattage. One way to accomplish this is with a rotating dimmer switch hooked to a 250-watt bulb. To achieve complete darkness, turn the knob fully counterclockwise to Off for 0 watts of light. For maximum brightness, turn the knob fully clockwise for the entire 250 watts. For some illumination level in between, turn the knob to an intermediate position.

This system is easy to use but has limitations. If the knob is at an intermediate setting—if lighting is lowered for an intimate dinner, for example—you can only guess what the lighting level is. You don’t really know how many watts are in use, or how to describe the setting precisely. Your information is approximate, which makes it hard to store or reproduce.

What if you want to reproduce exactly the same level of lighting next week? You could make a mark on the switch plate so that you know how far to turn it, but this is hardly exact, and what happens when you want to reproduce a different setting? What if a friend wants to reproduce the same level of lighting? You can say, “Turn the knob about a fifth of the way clockwise,” or “Turn the knob until the arrow is at about two o’clock,” but your friend’s reproduction will only approximate your setting. What if your friend then passes the information on to another friend, who in turn passes it on again? Each time the information is handed on, the chances of its remaining accurate decrease.
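
A small sketch of that contrast, with made-up numbers purely for illustration: when an approximate analog reading is passed from friend to friend, a little error creeps in at every hand-off, while a setting recorded as an exact digital value survives any number of hand-offs unchanged.

```python
import random

random.seed(1)  # fixed seed so the example is repeatable

def pass_along_analog(setting, hops):
    """Each friend reproduces the knob position with a little error (+/- 5 watts)."""
    for _ in range(hops):
        setting += random.uniform(-5, 5)
    return setting

def pass_along_digital(setting, hops):
    """Each friend copies the stored number exactly; nothing drifts."""
    for _ in range(hops):
        setting = int(setting)  # copying a whole number is lossless
    return setting

original = 50  # watts, for that intimate dinner
print(round(pass_along_analog(original, 10), 1))  # has drifted away from 50
print(pass_along_digital(original, 10))           # still exactly 50
```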