If one looks hard and honestly, even the supposed paragon of user-generated content—Wikipedia itself—is far from pure bottom-up. In fact, Wikipedia’s open-to-anyone process contains an elite in the back room. The more articles someone edits, the more likely their edits will endure and not be undone, which means that over time veteran editors find it easier to make edits that stick, which means that the process favors those few editors who devote lots of time over many years. These persistent old hands act as a type of management, supplying a thin layer of editorial judgment and continuity to this open ad hocracy. This relatively small group of self-appointed editors is why Wikipedia continues to work and grow into its third decade.

When a community cooperates to write an encyclopedia, as it does in Wikipedia, no one is held responsible if it fails to reach consensus on an article. That gap is simply an imperfection that may or may not get fixed in time. These failures don’t endanger the enterprise as a whole. The aim of a collective, on the other hand, is to engineer a system where self-directed peers take responsibility for critical processes and where difficult decisions, such as sorting out priorities, are decided by all participants. Throughout history, countless small-scale collectivist groups have tried this decentralized operating mode in which the executive function is not held at the top. The results have not been encouraging; very few communes have lasted longer than a few years.

Indeed, a close examination of the governing kernel of, say, Wikipedia, Linux, or OpenOffice shows that these efforts are a bit further from the collectivist nirvana than appears from the outside. While millions of writers contribute to Wikipedia, a smaller number of editors (around 1,500) are responsible for the majority of the editing. Ditto for collectives that write code. A vast army of contributors is managed by a much smaller group of coordinators. As Mitch Kapor, founding chair of the Mozilla open source code factory, observed, “Inside every working anarchy, there’s an old-boy network.”

This isn’t necessarily a bad thing. Some types of collectives benefit from a small degree of hierarchy while others are hurt by it. Platforms like the internet, Facebook, or democracy are intended to serve as arenas for producing goods and delivering services. These infrastructural courtyards benefit from being as nonhierarchical as possible, minimizing barriers to entry and distributing rights and responsibilities equally. When powerful actors dominate in these systems, the entire fabric suffers. On the other hand, organizations built to create products rather than platforms often need strong leaders and hierarchies arranged around timescales: lower-level work focuses on hourly needs; the next level on jobs that need to be done today; higher levels on weekly or monthly chores; and the levels above (often the CEO suite) must look ahead to the next five years. The dream of many companies is to graduate from making products to creating a platform. But when they do succeed (like Facebook), they are often not ready for the required transformation in their role; they have to act more like governments than companies, keeping opportunities “flat” and equitable, and hierarchy to a minimum.

In the past, constructing an organization that exploited hierarchy yet maximized collectivism was nearly impossible. The costs of managing so many transactions were too dear. Now digital networking provides the necessary peer-to-peer communication cheaply. The net enables a product-focused organization to function collectively by keeping its hierarchy from fully taking over. For instance, the organization behind MySQL, an open source database, is not without some hierarchy, but it is far more collectivist than, say, the giant database corporation Oracle. Likewise, Wikipedia is not exactly a bastion of equality, but it is vastly more collectivist than the Encyclopaedia Britannica. The new collectives are hybrid organizations, but leaning far more to the nonhierarchical side than most traditional enterprises.

It’s taken a while, but we’ve learned that while top-down control is needed, not much of it is needed. The brute dumbness of the hive mind is the raw food ingredient that smart design can chew on. Editorship and expertise are like vitamins for the food: you don’t need much of them, just a trace, even for a large body. Too much will be toxic, or simply flushed away. The proper dosage of hierarchy is just barely enough to vitalize a very large collective.

The exhilarating frontier today is the myriad ways in which we can mix large doses of out-of-controlness with small elements of top-down control. Until this era, technology was primarily all control, all top down. Now it can contain both control and messiness. Never before have we been able to make systems with as much messy quasi-control in them. We are rushing into an expanding possibility space of decentralization and sharing that was never accessible before because it was not technically possible. Before the internet there was simply no way to coordinate a million people in real time or to get a hundred thousand workers collaborating on one project for a week. Now we can, so we are quickly exploring all the ways in which we can combine control and the crowd in innumerable permutations.

However, a massively bottom-up effort will take us only partway to our preferred destination. In most aspects of life we want expertise. But we are unlikely to get the level of expertise we want with no experts at all.

That’s why it should be no surprise to learn that Wikipedia continues to evolve its process. Each year more structure is layered in. Controversial articles can be “frozen” by top editors so they can no longer be edited by any random person, only by designated editors. There are more rules about what is permissible to write, more required formatting, more approval needed. But the quality improves too. I would guess that in 50 years a significant portion of Wikipedia articles will have controlled edits, peer review, verification locks, authentication certificates, and so on. That’s all good for us readers. Each of these steps is a small amount of top-down smartness to offset the dumbness of a massively bottom-up system.

Yet if the hive mind is so dumb, why bother with it at all?

Because as dumb as it is, it is smart enough for a lot of work.

In two ways: First, the bottom-up hive mind will always take us much further than we imagine. Wikipedia, though not ideal, is far, far better than anyone believed it could be. It keeps surprising us in this regard. Netflix’s personal recommendations, derived from what millions of other people watch, succeeded beyond what most experts expected. In terms of range of reviews, depth, and reliability, they are more useful than the average human movie critic. eBay’s swap meet of virtual strangers was not supposed to work at all, but while not perfect, it is much better than most retailers believed was possible. Uber’s peer-to-peer on-demand taxi service works so well it surprised even some of its funders. Given enough time, decentralized connected dumb things can become smarter than we think.

Second, even though a purely decentralized power won’t take us all the way, it is almost always the best way to start. It’s fast, cheap, and out of control. The barriers to start a new crowd-powered service are low and getting lower. A hive mind scales up wonderfully smoothly. That is why there were 9,000 startups in 2015 trying to exploit the sharing power of decentralized peer-to-peer networks. It does not matter if they morph over time. Perhaps a hundred years from now these shared processes, such as Wikipedia, will be layered up with so much management that they’ll resemble the old-school centralized businesses. Even so, the bottom up was still the best way to start.