Text is particularly easy to store because it is very compact in digital form. The old saying that a picture is worth a thousand words is more than true in the digital world. High-quality photographic images take more space than text, and video (which you can think of as a sequence of up to thirty new images appearing every second) takes even more. Nevertheless, the cost of distribution for these kinds of data is still quite low. A feature film takes up about 4 gigabytes (4,000 megabytes) in compressed digital format, which is about $1,600 worth of hard-disk space.
Sixteen hundred dollars to store a single film doesn’t sound low-cost. However, consider that the typical local video-rental store usually buys at least eight copies of a hot new movie for about $80 a copy. With these eight copies the store can supply only eight customers per day.
Once the disk and the computer that manages it are connected up to the highway, only one copy of the information will be necessary for everyone to have access. The most popular documents will have copies made on different servers to avoid delays when an unusual number of users want access. With one investment, roughly what a single shop today spends for a popular videotape title, a disk-based server will be able to serve thousands of customers simultaneously. The extra cost for each user is simply the cost of using the disk storage for a short period of time and the communications charge. And that is becoming extremely cheap. So the extra per-user cost will be nearly zero.
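To make that comparison concrete, here is a minimal back-of-the-envelope sketch in Python using only the figures above; the server’s daily customer count is an illustrative assumption, not a number from the text.

```python
# A minimal back-of-the-envelope sketch of the distribution arithmetic above.
# Dollar figures come from the passage (mid-1990s prices); the server's
# daily customer count is an assumption for illustration only.

FILM_SIZE_MB = 4_000          # a compressed feature film: about 4 gigabytes
DISK_COST_PER_FILM = 1_600    # roughly $1,600 of hard-disk space per copy
TAPE_COST = 80                # about $80 per videotape copy
TAPES_PER_STORE = 8           # copies a rental store buys of a hot new title

# Video store: eight tapes can serve at most eight customers per day.
store_outlay = TAPE_COST * TAPES_PER_STORE        # $640
store_customers_per_day = TAPES_PER_STORE         # 8

# Highway server: one stored copy can be sent to many customers at once.
server_outlay = DISK_COST_PER_FILM                # $1,600
server_customers_per_day = 1_000                  # assumed, for illustration

print(f"Disk storage: ${DISK_COST_PER_FILM / FILM_SIZE_MB:.2f} per megabyte")
print(f"Store:  ${store_outlay} up front, "
      f"${store_outlay / store_customers_per_day:.2f} of capital per customer-day")
print(f"Server: ${server_outlay} up front, "
      f"${server_outlay / server_customers_per_day:.2f} of capital per customer-day")
```

Even with this assumed figure of a thousand customers a day, the capital tied up per customer falls by a factor of fifty, and the marginal cost of each additional customer is essentially just a little disk time plus the communications charge.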
This doesn’t mean that information will be free, but the cost of distributing it will be very small. When you buy a paper book, a good portion of your money pays for the cost of producing and distributing it, rather than for the author’s work. Trees have to be cut down, ground into pulp, and turned into paper. The book must be printed and bound. Most publishers invest capital in a first printing that reflects the largest number of copies they think will sell right away, because the printing technology is efficient only if lots of books are made at once. The capital tied up in this inventory is a financial risk for the publishers: They may never sell all the copies, and even if they do, it will take a while to sell them all. Meanwhile, the publisher has to store the books and ship them to wholesalers and ultimately to retail bookstores. Those folks also invest capital in their inventory and expect a financial return from it.
By the time the consumer selects the book and the cash register rings, the profit for the author can be a pretty small piece of the pie compared to the money that goes to the physical business of delivering information on processed wood pulp. I like to call this the “friction” of distribution, because it holds back variety and diverts money away from the author to other people.
The information highway will be largely friction free, a theme I will explore further in chapter 8. This lack of friction in information distribution is incredibly important. It will empower more authors, because very little of the customer’s dollar will be used to pay for distribution.
Gutenberg’s invention of the printing press brought about the first real shift in distribution friction—it allowed information on any subject to be distributed quickly and relatively cheaply. The printing press created a mass medium because it offered low-friction duplication. The proliferation of books motivated the general public to read and write, but once people had the skills there were many other things that could be done with the written word. Businesses could keep track of inventory and write contracts. Lovers could exchange letters. Individuals could keep notes and diaries. By themselves these applications were not sufficiently compelling to get large numbers of people to make the effort to learn to read and write. Until there was a real reason to create an “installed base” of literate people, the written word wasn’t really useful as a means for storing information. Books gave literacy critical mass, so you can almost say that the printing press taught us to read.
The printing press made it easy to make lots of copies of a document, but what about something written for a few people? New technology was required for small-scale publishing. Carbon paper was fine if you wanted just one or two more copies. Mimeographs and other messy machines could make dozens, but to use any of these processes you had to have planned for them when you prepared your original document.
In the 1930s, Chester Carlson, frustrated by how difficult it was to prepare patent applications (which involved copying drawings and text by hand), set out to invent a better way to duplicate information in small quantities. What he came up with was a process he called “xerography” when he patented it in 1940. In 1959, the company he had hooked up with—later known as Xerox—released its first successful production-line copier. The 914 copier, by making it possible to reproduce modest numbers of documents easily and inexpensively, set off an explosion in the kinds and amount of information distributed to small groups. Market research had projected that Xerox would sell at most 3,000 units of its first copier model. It actually placed about 200,000. A year after the copier was introduced, 50 million copies a month were being made. By 1986, more than 200 billion copies were being made each month, and the number has been rising ever since. Most of these copies would never be made if the technology weren’t so cheap and easy.
The photocopier and its later cousin, the desktop laser printer—along with PC desktop publishing software—facilitated newsletters, memos, maps to parties, flyers, and other documents intended for modest-sized audiences. Carlson was another innovator who reduced the distribution friction of information. The wild success of his copier demonstrates that amazing things happen once you reduce distribution friction.
Of course, it’s easier to make copies of a document than it is to make it worth reading. There is no intrinsic limit to the number of books that can be published in a given year. A typical bookstore has 10,000 different titles, and some of the new superstores might carry 100,000. Only a small fraction, under 10 percent, of all trade books published make money for their publishers, but some succeed beyond anybody’s wildest expectations.
My favorite recent example is A Brief History of Time, by Stephen W. Hawking, a brilliant scientist who has amyotrophic lateral sclerosis (Lou Gehrig’s disease), which confines him to a wheelchair and allows him to communicate only with great difficulty. What are the odds that his treatise on the origins of the universe would have been published if there were only a handful of publishers and each of them could produce only a few books a year? Suppose an editor had one spot left on his list and had to choose between publishing Hawking’s book and Madonna’s Sex. The obvious bet would be Madonna’s book, because it would likely sell a million copies. It did. But Hawking’s book sold 5.5 million copies and is still selling.
Every now and then this sort of sleeper best-seller surprises everyone (but the author). A book I enjoyed greatly, The Bridges of Madison County, was the first published novel by a business-school teacher of communications. It wasn’t positioned by the publisher to be a best-seller, but nobody really knows what will appeal to the public’s taste. Like most exercises in central planning that try to outguess a market, predicting which books will catch on is fundamentally a losing proposition. There are almost always a couple of books on The New York Times best-seller list that have bubbled up from nowhere, because books cost so little to publish—compared to other media—that publishers can afford to give them a chance.