In the early 2000s, following the September 11 terrorist attacks, something similar was proposed in the United States. The Defense Department set up the Information Awareness Office and green-lit the development of a program called Total Information Awareness (TIA). Pitched as the ultimate security apparatus to detect terrorist activity, TIA was designed and funded to aggregate all “transactional” data—including bank records, credit-card purchases and medical records—along with other bits of personal information to create a centralized and searchable index for law enforcement and counterterrorist agencies. Sophisticated data-mining technologies would be built to detect patterns and associations, and the “signatures” that dangerous people left behind would reveal them in time to prevent another attack.
As details of the TIA program leaked out to the public, a range of vocal critics emerged from both the right and the left, warning about the potential costs to civil liberties, privacy and long-term security. They zeroed in on the possibilities of abuse of such a massive information system, branding the program “Orwellian” in scope. Eventually, a congressional campaign to shut TIA down resulted in a provision to deny all funds for the program in the Senate’s 2004 defense appropriations bill. The Information Awareness Office was shuttered permanently, though some of its projects later found shelter in other intelligence agencies in the government’s sprawling homeland-security sector.
Fighting for privacy is going to be a long, important struggle. We may have won some early battles, but the war is far from over. The logic of security will generally trump privacy concerns: political hawks need only wait for a serious public incident to find the political will and support to push their demands through, steamrolling over the considerations voiced by the doves, after which the lack of privacy becomes normal. With integrated information platforms like these, adequate safeguards for citizens and civil liberties must be firmly in place from the outset, because once a serious security threat appears, it is far too easy to overstep. (The information is already there for the taking.) Governments operating surveillance platforms will eventually violate the restrictions placed on them by legislation or legal ruling, but in democratic states with properly functioning legal systems and active civil societies, those errors will be corrected, whether that means penalties for the perpetrators or new safeguards put in place.
Serious questions remain for responsible states. The potential for misuse of this power is terrifyingly high, to say nothing of the dangers introduced by human error, data-driven false positives and simple curiosity. Perhaps a fully integrated information system, with all manner of data inputs, software that can interpret and predict behavior and humans at the controls is simply too powerful for anyone to handle responsibly. Moreover, once built, such a system will never be dismantled. Even if a dire security situation were to improve, what government would willingly give up such a powerful law-enforcement tool? And the next government in charge might not exhibit the same caution or responsibility with its information as the preceding one. Such totally integrated information systems are in their infancy now, and to be sure they are hampered by various challenges (like consistent data-gathering) that impose limits on their effectiveness. But these platforms will improve, and there is an air of inevitability around their proliferation in the future. The only remedies for potential digital tyranny are to strengthen legal institutions and to encourage civil society to remain active and wise to potential abuses of this power.
A final note on digital content as we discuss its uses in the future: As online data proliferates and everyone becomes capable of producing, uploading and broadcasting an endless amount of unique content, verification will be the real challenge. In the past few years, major news broadcasters have shifted from using only professional video footage to including user-generated content, like videos posted to YouTube. These broadcasters typically add a disclaimer that the video cannot be independently verified, but the act of airing it is, in essence, an implicit verification of its content. Dissenting voices may claim that the video has been doctored or is somehow misleading, but such claims, when they are registered at all, receive a fraction of the attention and are often ignored. The trend toward trusting unverified content will eventually spur a movement toward more rigorous, technically sound verification.
Verification, in fact, will become more important in every aspect of life. We explored earlier how the need for verification will come to shape our online experiences, requiring better protections against identity theft, with biometric data changing the security landscape. Verification will also play an important role in determining which terrorist threats are actually valid. To avoid identification, most extremists will use multiple SIM cards, multiple online identities and a range of obfuscating tools to cover their tracks. The challenge for law enforcement will be finding ways to handle this information deluge without wasting man-hours on red herrings. Having “hidden people” registries in place will reduce this problem for authorities but will not solve it.
Because the general public will come to prefer, trust, depend on or insist on verified identities online, terrorists will make sure to use their own verified channels when making claims. And there will be many more ways to verify the videos, photos and phone calls that extremist groups use to communicate. Sharing a photograph of hostages holding fresh daily newspapers will become an antiquated practice—the photo itself is the proof of when it was taken. Through digital forensic techniques like checking digital watermarks and embedded metadata, IT experts can verify not only when, but where and how.
This emphasis on verified content, however, will require terrorists to make good on their threats. If a known terrorist does not do so, the subsequent loss of credibility will hurt his and his group’s reputation. If al-Qaeda were to release an audio recording proving that one of its commanders survived a drone attack, but forensic computer experts using voice-recognition software determined that someone else’s voice was on the tape, it would weaken al-Qaeda’s position and embolden its critics. Each verification challenge would chip away at the grandiose image that many extremist groups rely on to raise funds, recruit and instill fear in others. Verification can therefore be a tremendous tool in the fight against violent extremism.
The Battle for Hearts and Minds Comes Online
While it’s true that effective hackers and computer experts will enhance terror groups’ capabilities, the broad foundation of recruits will, like today, be basic foot soldiers. They’ll be young and undereducated, and they’ll have grievances that extremists exploit to their own advantage. We believe that the most pivotal shift in counterterrorism strategy in the future will not concern raids or mobile monitoring, but instead will focus on chipping away at the vulnerability of these at-risk populations through technological engagement.