The terms “open” and “closed” code will suggest to many a critically important debate about how software should be developed. What most call the “open source software movement,” but which I, following Richard Stallman, call the “free software movement,” argues (in my view at least) that there are fundamental values of freedom that demand that software be developed as free software. The opposite of free software, in this sense, is proprietary software, where the developer hides the workings of the software by distributing only opaque digital objects that reveal nothing of the underlying design.
I will describe this debate more in the balance of this chapter. But importantly, the point I am making about “open” versus “closed” code is distinct from the point about how code gets created. I personally have very strong views about how code should be created. But whatever side you are on in the “free vs. proprietary software” debate in general, in at least the contexts I will identify here, you should be able to agree with me, first, that open code is a constraint on state power, and second, that in at least some cases, code must, in the relevant sense, be “open.”
To set the stage for this argument, I want to describe two contexts in which I will argue that we all should agree that the kind of code deployed matters. The balance of the chapter then makes that argument.
Bytes That Sniff
In Chapter 2, I described technology that at the time was a bit of science fiction. In the five years since, that fiction has become even less fictional. In 1997, the government announced a project called Carnivore. Carnivore was to be a technology that sifted through e-mail traffic and collected just those e-mails written by or to a particular and named individual. The FBI intended to use this technology, pursuant to court orders, to gather evidence while investigating crimes.
In principle, there’s lots to praise in the ideals of the Carnivore design. The protocols required a judge to approve this surveillance. The technology was intended to collect data only about the target of the investigation. No one else was to be burdened by the tool. No one else was to have their privacy compromised.
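Carnivore’s claimed behavior is, at least in outline, simple enough to sketch. What follows is a purely hypothetical illustration in Python, not Carnivore’s actual design, which was never published: the target address, the header-matching rule, and every name in it are mine. The sketch collects a message only when the named individual appears as sender or recipient:

    # Hypothetical sketch of a Carnivore-style e-mail filter. Carnivore's
    # real code was closed; this models only the *claimed* behavior.
    from email import message_from_string

    TARGET = "suspect@example.com"  # stand-in for the individual named in the court order

    def should_collect(raw_message: str) -> bool:
        """Collect a message only if the target appears as sender or recipient."""
        msg = message_from_string(raw_message)
        headers = " ".join(msg.get(h, "") for h in ("From", "To", "Cc", "Bcc"))
        return TARGET in headers.lower()

    def sift(traffic):
        """Yield messages written by or to the target; pass over everything else."""
        for raw in traffic:
            if should_collect(raw):
                yield raw

A real tap would filter raw packets rather than parsed messages, but the shape of the question is the same: the entire privacy promise lives in whatever predicate plays the role of should_collect.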
But whether the technology did what it was said to do depends upon its code. And that code was closed[2]. The contract the government let with the vendor that developed the Carnivore software did not require that the source for the software be made public. It instead permitted the vendor to keep the code secret.
Now it’s easy to understand why the vendor wanted its code kept secret. In general, inviting others to look at your code is much like inviting them to your house for dinner: there’s lots you need to do to make the place presentable. In this case in particular, the DOJ may have been concerned about security[3]. More substantively, the vendor might want to reuse components of the software in other software projects. If the code were public, the vendor might lose some advantage from that transparency. These advantages for the vendor mean that it would be more costly for the government to insist upon a technology delivered with its source code revealed. The question, then, is whether there’s something the government gains from having the source code revealed.
And here’s the obvious point: As the government quickly learned as it tried to sell the idea of Carnivore, the fact that its code was secret was costly. Much of the government’s efforts were devoted to trying to build trust around its claim that Carnivore did just what it said it did. But the argument “I’m from the government, so trust me” doesn’t have much weight. And thus, the efforts of the government to deploy this technology — again, a valuable technology if it did what it said it did — were hampered.
I don’t know of any study that tries to evaluate the cost the government faced because of the skepticism about Carnivore versus the cost of developing Carnivore in an open way[4]. I would be surprised if the government’s strategy made fiscal sense. But whether or not it was cheaper to develop closed rather than open code, it shouldn’t be controversial that the government has an independent obligation to make its procedures — at least in the context of ordinary criminal prosecution — transparent. I don’t mean that the investigator needs to reveal the things he thinks about when deciding which suspects to target. I mean instead the procedures for invading the privacy interests of ordinary citizens.
The only kind of code that can do that is “open code.” And the small point I want to insist upon just now is that where transparency of government action matters, so too should the kind of code it uses. This is not the claim that all government code should be public. I believe there are legitimate areas within which the government can act secretly. More particularly, where transparency would interfere with the function itself, there’s a good argument against transparency. But there were very limited ways in which a possible criminal suspect could more effectively evade Carnivore’s surveillance just because its code was open. And thus, again, open code should, in my view, have been the norm.
Machines That Count
Before November 7, 2000, there was very little discussion among national policy makers about the technology of voting machines. For most (and I was within this majority), the question of voting technology seemed trivial. Certainly, there could have been faster technologies for tallying a vote. And there could have been better technologies to check for errors. But the idea that anything important hung upon these details in technology was not an idea that made the front page of the New York Times.
The 2000 presidential election changed all that. More specifically, Florida in 2000 changed all that. Not only did the Florida experience demonstrate the imperfection in traditional mechanical devices for tabulating votes (exhibit 1, the hanging chad), it also demonstrated the extraordinary inequality that having different technologies in different parts of the state would produce. As Justice Stevens described in his dissent in Bush v. Gore, almost 4 percent of punch-card ballots were disqualified, while only 1.43 percent of optical scan ballots were disqualified[5]. And as one study estimated, changing a single vote on each machine would have changed the outcome of the election[6].
The 2004 election made things even worse. In the four years since the Florida debacle, a few companies had pushed to deploy new electronic voting machines. But these machines seemed to create more anxiety among voters, not less. While most voters are not techies, everyone has a sense of the obvious queasiness that a totally electronic voting machine produces. You stand before a terminal and press buttons to indicate your vote. The machine confirms your vote and then reports that the vote has been recorded. But how do you know? How could anyone know? And even if you’re not conspiracy-theory-oriented enough to believe that every voting machine is fixed, how can anyone know that when these machines check in with the central server, the server records their votes accurately? What’s to guarantee that the numbers won’t be fudged?
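The worry is easy to make concrete. Consider two toy tally routines, hypothetical and drawn from no real voting system: from the terminal, a voter could never tell which of them a machine was running.

    # Hypothetical illustration only; neither routine is drawn from any
    # real voting system, and the names are invented for this sketch.
    from collections import Counter

    def honest_tally(votes):
        """Count every ballot exactly as cast."""
        return Counter(votes)

    def skimming_tally(votes, favored="Candidate A", every_nth=100):
        """Count ballots, silently reassigning every 100th one to a favored candidate."""
        counts = Counter()
        for i, vote in enumerate(votes, start=1):
            counts[favored if i % every_nth == 0 else vote] += 1
        return counts

Both routines accept every ballot and report it recorded; they differ only in their source. If that source is closed, the difference is invisible to everyone but the vendor.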
The most extreme example of this anxiety was produced by the leading electronic voting company, Diebold. In 2003, Diebold had been caught fudging the numbers associated with tests of its voting technology. Memos leaked to the public showed that Diebold’s management knew the machines were flawed and intentionally chose to hide that fact. (The company then invoked copyright law against students who had published these memos, demanding that the documents be taken down. The students sued Diebold over those demands and won.)
Notes

2. Declan McCullagh, “It’s Time for the Carnivore to Spin.”

3. Ann Harrison, “Government Error Exposes Carnivore Investigators; ACLU Blasts Team for Close Ties to Administration.”

4. The Mitre Corporation did examine a related question for the military. See Carolyn A. Kenwood.

5. Bush v. Gore, 531 U.S. 98 (2000) (Stevens, J., dissenting).

6. Di Franco et al., “Small Vote Manipulations Can Swing Elections.”