States will have to work hard to maintain the security of their shores and borders from the growing threat of enemy UAVs, which, by design, are hard to detect. As autonomous navigation becomes possible, drones will become mini cruise missiles, which, once launched, cannot be stopped by jamming their communications. Enemy surveillance drones may be more palatable than drones carrying missiles, but both will be considered a threat, since it won’t be easy to tell the two apart. The most effective way to target an enemy drone might not be with brute force but electronically, by breaching the UAV’s cybersecurity defenses. Warfare then becomes, as Singer put it, a “battle of persuasion”: a fight to co-opt and persuade these machines to do something other than their mission.

In late 2011, Iran proudly displayed a downed but intact American drone, the RQ-170 Sentinel, which it claimed to have captured by hacking into its defenses after detecting it in Iranian airspace. (The United States, for its part, would say only that the drone had been “lost.”) An unnamed Iranian engineer told The Christian Science Monitor that he and his colleagues were able to make the drone “land on its own where we wanted it to, without having to crack the remote-control signals and communications” from the U.S. control center because of a known vulnerability in the plane’s GPS navigation. The technique of implanting new coordinates, known as spoofing, is not impossible, but it is incredibly difficult: the Iranians would have had to jam the drone’s communications channels and get past the military’s encryption before they could feed false signals to its GPS.
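To see why a navigation system that trusts its measured signals invites spoofing, consider a deliberately simplified, hypothetical sketch: a receiver that fixes its position purely from ranges to known beacons will compute whatever position those ranges imply. If an attacker can substitute counterfeit ranges, the receiver obediently “moves” to a point of the attacker’s choosing. (This toy 2-D model is an illustration only; real military GPS is encrypted and solves a four-dimensional pseudorange problem, which is exactly why the Iranian claim is so hard to credit.)

```python
import math

def trilaterate(beacons, ranges):
    """Solve a 2-D position fix from distances to three known beacons
    (a simplified stand-in for GPS pseudorange positioning)."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = ranges
    # Subtracting the circle equations pairwise yields a linear system.
    a, b = 2 * (x2 - x1), 2 * (y2 - y1)
    c = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d, e = 2 * (x3 - x2), 2 * (y3 - y2)
    f = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a * e - b * d
    return ((c * e - b * f) / det, (a * f - c * d) / det)

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]

# Genuine signals: ranges measured from the drone's true position (3, 4).
true_pos = (3.0, 4.0)
genuine = [math.hypot(true_pos[0] - bx, true_pos[1] - by) for bx, by in beacons]
print(trilaterate(beacons, genuine))   # ~ (3.0, 4.0)

# Spoofed signals: counterfeit ranges consistent with a position the
# attacker chooses, e.g. (8, 8); the receiver computes that fix instead.
fake_pos = (8.0, 8.0)
spoofed = [math.hypot(fake_pos[0] - bx, fake_pos[1] - by) for bx, by in beacons]
print(trilaterate(beacons, spoofed))   # ~ (8.0, 8.0)
```

The receiver has no way to distinguish the two cases from the ranges alone; only authentication of the signal itself (as in military GPS) closes the gap.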
Diplomatic solutions might involve good-faith treaties between states not to send surveillance drones into each other’s airspace, or implicit agreements that treat surveillance flights as a tolerable offense. It’s hard to say. International requirements might emerge that surveillance drones be easily distinguishable from bomber drones. Some states might join together in a sort of “drone shield,” not unlike the nuclear alliances of the Cold War, in which case we would see the world’s first drone-based no-fly zone. If a small and poor country cannot afford to build or buy its own bomber drones, yet it fears aerial attacks from an aggressive neighbor, it might seek an alliance with a superpower to guarantee some measure of protection. It seems unlikely, however, that states without drones will remain bereft for long: the Sentinel spy drone held by the Iranians cost only around $6 million to make.
The proliferation of robots and UAVs will increase conflict around the world—whenever states acquire them, they’ll be eager to test out their new tools—but it will decrease the likelihood of all-out war. There are a few reasons for this. For one, the phenomenon is still too new; the international treaties around weapons and warfare—the Nuclear Nonproliferation Treaty, the Anti-Ballistic Missile Treaty, and the Chemical Weapons Convention, to name a few—have not caught up to the age of drones. Boundaries will need to be drawn, legal frameworks developed, and politicians will have to learn how to use these tools responsibly and strategically. There are serious ethical considerations that will be aired in public discourse (as is taking place in the United States currently). These important issues will lead states to exhibit caution in the early years of drone proliferation.
We must also consider the possibility of a problem with loose drones, similar to what we see with nuclear weapons today. In a country such as Pakistan, for example, there are real concerns about the state’s capacity to safeguard its nuclear stockpile (estimated at roughly a hundred warheads) from theft. As states develop large fleets of drones, the risk grows that one of them could fall into the wrong hands and be used against a foreign embassy, military base or cultural center. Imagine a future 9/11 committed not by hijackers on commercial airliners but by drones that have fallen out of state hands. These fears may well be sufficient to spur future treaties focused on establishing requirements for drone protection and safeguarding.
States will have to determine, separately or together, what the rules around UAVs will be, including whether drones are subject to the same sovereign-airspace rules as conventional aircraft. States’ mutual fears will guard against a rapid escalation of drone warfare. Even when it was revealed that the American Sentinel drone had violated Iranian airspace, the reaction in Tehran was boasting and display, not retaliation.
The public will react favorably to the reduced lethality of drone warfare, and that will forestall outright war in the future. We already have a few years of drone-related news cycles in America from which to learn. Just months before the 2012 presidential election, government leaks resulted in detailed articles about President Obama’s secret drone operations. Judging by the reaction to drone strikes in both official combat theaters and unofficial ones like Somalia, Yemen and Pakistan, lethal missions conducted by drones are far more palatable to the American public than those carried out by troops, generating fewer questions and less outrage. Some of the people who advocate a reduced American footprint overseas even support the expansion of the drone program as a legitimate way to achieve that reduction.
We do not yet understand the consequences—political, cultural and psychological—of our newfound ability to exploit physical and emotional distance and truly “dehumanize” war to such a degree. More remote warfare is taking place today than at any other time in history, and it will only become a more prominent feature of conflict. Historically, remote warfare has been thought of mostly in terms of weapons delivered via missiles, but in the future it will be both commonplace and acceptable to further separate the actor from the scene of battle. Judging from current trends, we can assume that one effect of these changes will be less public involvement on the emotional and political levels. After all, casualties on the other side are rarely the driving factor behind foreign policy or public sentiment; if American troops are not seen to be in harm’s way, the public’s interest level drops dramatically. This, in turn, means a more muted population on matters of national security; both hawks and doves become quieter with a smaller threat to their own soldiers on the horizon. With more combat options that do not inflame public opinion, the government can pursue its security objectives without having to consider declaring war or committing troops, decreasing the possibility of outright war.
The prospect of fewer civilian casualties, less collateral damage and reduced risk of human injury is welcome, but the shift toward a more automated battlefield will introduce significant new vulnerabilities and challenges. Chief among them will be maintaining the cybersecurity of equipment and systems. The data flow between devices, ground robots and UAVs, and their human-directed command-and-control centers must be fast, secure and unimpeded by poor infrastructure, just like communications between troop units and their bases. This is why militaries set up their own communications networks instead of relying on local ones. Until robots in the field have autonomous artificial intelligence, an impeded or broken connection turns these machines into expensive dead weight, and possibly a liability, since capture of an enemy’s robot is akin to capturing proprietary technology. There is no end to the insights such a capture could yield, particularly if the robot is poorly designed: not only information about software and drone engineering, but even more sensitive data, like enemy locations gleaned from digital coordinates. (It’s also hard to imagine that countries won’t purposely crash-land or compromise a decoy UAV, filled with false information and misleading technical components, as part of a misinformation campaign.)

In wars where robotic elements are present, both sides will employ cyber attacks to interrupt enemy activity, whether by spoofing (impersonating a network identity) or by deploying decoys to disrupt enemy sensor grids and degrade enemy battle networks. Manufacturers will attempt to build in fail-safe mechanisms to limit the damage of these attacks, but it will be difficult to make anything technologically bulletproof.
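One basic fail-safe against the network-identity spoofing described above is message authentication on the command link: a robot accepts only commands bearing a valid cryptographic tag computed with a key shared before launch. A minimal sketch using Python’s standard HMAC facility (the key, message format, and command strings here are all hypothetical; real military links layer encryption, anti-jam waveforms, and replay protection on top of this idea):

```python
import hmac
import hashlib

# Hypothetical pre-shared key, loaded onto the drone before launch.
KEY = b"shared-secret-loaded-before-launch"

def sign(command: bytes) -> bytes:
    """Compute an authentication tag for a command."""
    return hmac.new(KEY, command, hashlib.sha256).digest()

def accept(command: bytes, tag: bytes) -> bool:
    """Accept a command only if its tag verifies; constant-time
    comparison resists timing attacks on the check itself."""
    return hmac.compare_digest(sign(command), tag)

cmd = b"WAYPOINT 12.34N 56.78E"          # hypothetical command format
tag = sign(cmd)
print(accept(cmd, tag))                   # True: genuine command
print(accept(b"LAND IMMEDIATELY", tag))   # False: spoofed command rejected
```

An attacker who can impersonate the network but does not hold the key cannot forge a valid tag, so spoofed commands are simply dropped; the remaining weakness, as the passage notes, is that no key-management scheme is technologically bulletproof.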