
* Limit repair on the battlefield. Injury provides another vulnerable situation. If robots are self-repairing, their function will be impaired. Capture low-performance agents preferentially. If the injured robots are being repaired by other agents, target the repair teams.

* Evolve predatory robots. If you or the enemy employ Pell’s Principle, you’ll need to be prepared to capture or destroy swarms. For starters, you’ll need to let your evolving predators have the capacity and capability of filter feeders like baleen whales. Consider behavioral adaptation first in your predators because the shorter generation time of the prey will limit opportunities for hardware evolution in the predators.
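The generation-time argument above can be made concrete with a toy simulation. This is only an illustrative sketch, not a model of any real system: the population sizes, mutation scale, and the ten-to-one generation ratio are all invented for the example, and "trait" stands in for whatever performance measure selection acts on.

```python
import random

random.seed(0)

def evolve(pop, mutate_sd, select_frac=0.5):
    """One generation: truncation selection on a single trait, then mutation."""
    pop.sort(reverse=True)
    survivors = pop[: int(len(pop) * select_frac)]
    # Each survivor leaves two mutated offspring, keeping population size constant.
    return [t + random.gauss(0, mutate_sd) for t in survivors for _ in range(2)]

prey = [0.0] * 100       # short generation time: a new generation every tick
predators = [0.0] * 100  # long generation time: a new generation every 10 ticks

for tick in range(100):
    prey = evolve(prey, mutate_sd=0.1)
    if tick % 10 == 9:
        predators = evolve(predators, mutate_sd=0.1)

print(f"mean prey trait:     {sum(prey) / len(prey):.2f}")
print(f"mean predator trait: {sum(predators) / len(predators):.2f}")
```

With identical selection and mutation, the prey simply get ten times as many rounds of evolution per unit of battlefield time, which is why behavioral adaptation, which can change within a predator's lifetime, is the better first bet for the slower-breeding predators.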

* Make complicated robots. That’s right. Want to control your own robots? Make them complicated. Because complicated usually means expensive, you are likely to have only a few of them. You will also hesitate to send them into harm’s way, as Chuck was predicting. Furthermore, complicated robots will never take over because the laws of probability virtually guarantee their failure. If every component has, let’s say, a 99 percent chance of not failing on a given mission, that sounds pretty good, right? But what if you have two such components in your robot? By the law of independent probabilities, we take the product of the two: 0.99 × 0.99 ≈ 0.98. Not bad. A 98 percent chance of the system, composed of those two components, not failing. Now give your system two thousand components, not unreasonable for some of the more sophisticated fish robots I’ve seen. That’s 0.99^2000 ≈ 0.000000002. You’ve got no chance—your robot will fail! The way we keep complicated machines like airliners and space shuttles in business is by building components with much higher reliability (0.99999 per component), engineering redundancy into the machine’s critical systems, and inspecting and replacing parts before they fail. Bottom line: to ensure control of your Evolvabots, make them out of many crappy components.
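The serial-reliability arithmetic above is easy to check. A minimal sketch, using the chapter's illustrative figures (99 percent and 99.999 percent component reliability, two thousand components), not real engineering data:

```python
def system_reliability(p_component: float, n_components: int) -> float:
    """A serial system works only if every independent component works,
    so the probabilities multiply."""
    return p_component ** n_components

print(system_reliability(0.99, 2))        # two components: about 0.98
print(system_reliability(0.99, 2000))     # two thousand components: about 2e-09
print(system_reliability(0.99999, 2000))  # better components: back to about 0.98
```

The last line shows why the airliner strategy works: raising each component from 0.99 to 0.99999 reliability takes a two-thousand-component system from near-certain failure back to a 98 percent mission success rate.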

COMMAND AND CONTROL

My Hollywood alarmism about evolving robots on the battlefield may have you thinking that all of this warfare stuff is just fantasy nonsense. Maybe it is. But let’s pretend that you are an admiral and you have to make that call: are military Evolvabots nonsense or good sense? Will some other military surprise us with Evolvabots in battle? This matters because you, as admiral, need to make practical decisions that have long-term consequences. Do you put your limited resources into an offensive Evolvabot development project? Or do you put resources into Evolvabot defensive countermeasures? Keep in mind that if you do use resources on Evolvabots, you have to cut the budget of other tactical systems. How can you be sure that Evolvabots will ever be a serious risk and worth the cost of development or countermeasures?

You can take DARPA’s tack and examine feasibility. Presented with the idea of commanding a fleet of evolving robotic fish, for example, you might want to assess one of the most important aspects of any battle: communication. If no one can figure out how to communicate with a swarm of underwater robots and adjust plans as the battle commences, then you probably don’t have much to worry about.

Communication before and during battle is paramount for the simple reason that, in the words of Helmuth von Moltke the Elder, chief of staff of the Prussian Army for thirty years, “No plan survives contact with the enemy.” Any battlefield is chaotic swarm intelligence in action. For a battle plan to adapt, each agent has to know the purpose of the mission, understand their part in it, have the ability to communicate on the battlefield to update their tactical knowledge of the enemy, coordinate with other agents and adjacent units about their positions and disposition, and make decisions quickly as information deteriorates and change accelerates.

The first element of communication and decision making on the battlefield starts prior to engagement, and it’s called the commander’s intent:

The commander’s intent describes the desired endstate. It is a concise statement of the purpose of the operation and must be understood two levels below the level of the issuing commander. It must clearly state the purpose of the mission. It is the single unifying focus for all subordinate elements. It is not a summary of the concept of the operation. Its purpose is to focus subordinates on what has to be accomplished in order to achieve success, even when the plan and concept no longer apply, and to discipline their efforts toward that end.[213]

In the best case the commander’s intent is known and understood by all sailors or soldiers so that as the plan deteriorates in battle, individuals can use adaptive behavior to advance the mission. “The commander’s intent,” suggests Chuck, “should be embodied, embedded, in the warriors.” That means that design of the military Evolvabots has to involve Command and Control during design because intelligence and intent, as we’ve seen throughout this book, are part of not just the programmable nervous system but also the type, arrangement, and quality of sensors, motors, and chassis.

After talking to Chuck I was wondering if the commander’s intent (CI) itself could serve as the fitness function for military Evolvabots. Whatever the CI—cause maximum damage to target X or guard squadron Y or rescue fleet Z—the ongoing performance of individual Evolvabots can be judged relative to it. Because the performance of each individual is compared to that of others in its population, the feedback about what works is relevant automatically. It turns out that engineers working for the US Army have already tried, in digital simulation, the idea of using the CI as the fitness function: “Evolution continues until the system produces a final population of high-performance plans which achieve the commander’s intent for the mission.”[214] Did you get that? Tactical plans, which are extremely complicated themselves, can be evolved using genetic algorithms that use the CI as the fitness function.
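The idea of using the commander's intent as a fitness function can be sketched as a toy genetic algorithm. To be clear, this is not the Kewley and Embrechts system; it is a minimal illustration in which a "plan" is just a short sequence of movement steps and the CI is reduced to a single invented objective, "end the mission at the target":

```python
import random

random.seed(1)

# Hypothetical commander's intent, reduced to a scalar score.
TARGET = (10.0, 4.0)

def ci_fitness(plan):
    """Higher is better: negative squared distance of the plan's endpoint
    from the target named in the commander's intent."""
    x = sum(step[0] for step in plan)
    y = sum(step[1] for step in plan)
    return -((x - TARGET[0]) ** 2 + (y - TARGET[1]) ** 2)

def mutate(plan, sd=0.3):
    return [(dx + random.gauss(0, sd), dy + random.gauss(0, sd))
            for dx, dy in plan]

# A population of random five-step plans.
pop = [[(random.uniform(-1, 3), random.uniform(-1, 3)) for _ in range(5)]
       for _ in range(50)]

for generation in range(200):
    pop.sort(key=ci_fitness, reverse=True)
    parents = pop[:10]  # truncation selection: keep the plans that best serve the CI
    pop = [mutate(random.choice(parents)) for _ in range(50)]

best = max(pop, key=ci_fitness)
print(f"best CI fitness: {ci_fitness(best):.3f}")
```

Notice that the CI never dictates the plan; it only scores outcomes. Selection does the rest, which is exactly the property that makes the CI attractive as a fitness function when the original plan no longer applies.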

The fundamental communication issue on the battlefield is that small groups and isolated individuals have to make decisions without checking with their commanders. As Lieutenant Colonel Lawrence G. Shattuck, professor of Behavioral Sciences and Leadership at the US Military Academy, West Point, has written, the pace of events on the battlefield often precludes direct contact with superiors even if communication channels are open.[215] Once communication ceases, for whatever reason, soldiers need to know the CI to help frame their decisions, getting inside the commander’s head to know how she would be making the decision, according to Shattuck.

EVOLVABOTS GET A CONSCIENCE

The central technical challenge is this: get autonomous robots working, communicating, self-repairing, reproducing, and evolving in the wild, without help from humans. Proximal challenges, in addition to the ones already discussed in this chapter, include the following:

* How do we embed the fitness function in a population of freely roaming Evolvabots?

* How do we make that fitness function, which is imposed by humans, an automatic part of the world in which the Evolvabots are working?

* Or do we let the fitness function be unspecified but emerge from the survival of the robots in the world?

* In any of these scenarios, how do we monitor and control Evolvabots in the wild?

If these issues can be solved, then everything in this chapter is feasible.
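One way to picture the first and last of these challenges together is a sketch in which fitness is embedded onboard each robot and humans only monitor, never referee. Everything here is hypothetical: the energy budget, the survival threshold, and the idea of reducing monitoring to a single telemetry scalar per robot are assumptions for illustration.

```python
import random

random.seed(2)

class Evolvabot:
    """Toy agent whose fitness is computed onboard, not by a central referee."""

    def __init__(self, ident):
        self.ident = ident
        self.energy = 1.0

    def step(self):
        # Stand-in for one timestep in the wild: foraging gains vs. movement costs.
        self.energy += random.uniform(-0.1, 0.12)

    def local_fitness(self):
        # Embedded fitness: a survival proxy the robot can measure itself.
        return self.energy

swarm = [Evolvabot(i) for i in range(20)]
for _ in range(100):
    for bot in swarm:
        bot.step()

# Monitoring without control: each robot reports one scalar telemetry value,
# and the human side only aggregates.
telemetry = {bot.ident: round(bot.local_fitness(), 3) for bot in swarm}
survivors = [i for i, f in telemetry.items() if f > 0]
print(f"{len(survivors)}/{len(swarm)} robots above survival threshold")
```

In this framing the fitness function is "automatic" because it is just an energy budget the world itself enforces; the open question the chapter poses is whether such a proxy can be trusted to track the mission humans actually intend.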

213. US Army Field Manual (FM) 100–105, Operations (Washington, DC: Government Printing Office [GPO], 1993), 6.

214. R. H. Kewley and M. J. Embrechts, “Computational Military Tactical Planning System,” IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews 32, no. 2 (2002): 161–171.

215. L. G. Shattuck, “Communicating Intent and Imparting Presence,” Military Review 80, pt. 2 (March–April 2000): 66–72.