Or, to give a less anthropomorphic explanation, Donald understood human psychology and knew that humans would give greater attention-and greater credence-to his suspicions regarding the two robots if he waited until the proper moment.
Fredda herself wasn’t sure which explanation was right. Maybe Donald himself didn’t know. Humans didn’t always know why they did things. Why should robots? “Where are Caliban and Prospero?” Fredda asked.
“Under heavy guard in a storeroom similar to the one Bissal used as a hiding place,” Donald replied. “But with your permission, I would like to point out several facts that strengthen the case against them.”
“Very well,” Kresh said.
“First, they were involved in the staged fight. If that in and of itself is enough to cast suspicion on Tonya Welton, then it is enough to cast suspicion on Caliban and Prospero.”
“He’s got a point,” Kresh said. “No one seemed to think anything of their actions at the time, but why were they obeying the Three Laws? Maybe just to look good. Maybe not.”
“You anticipate my next point, sir. The ambiguities of the New Laws might well permit Prospero to be a willing participant in a murder.”
“Donald!” Fredda said.
He turned and looked at her with a steady gaze. “I regret saying so, Dr. Leving, particularly to you, the author of those Laws, but it is nonetheless true. The New First Law says a robot must not harm a human-but says nothing about preventing harm. A robot with foreknowledge of a murder is under no compulsion to give anyone warning. A robot who witnesses a murder is not compelled to prevent it.
“The New Second Law says a robot must ‘cooperate’ with humans, not obey them. Which humans? Suppose there are two groups of humans, one intent on evil, the other on good? How does a New Law robot choose?
“The New Third Law is the same as the old third-but relative to the weakened First and Second Laws, it is proportionately stronger. A so-called New Law robot will all but inevitably value its own existence far more than any true robot-to the detriment of the humans around it, who should be under its protection.
“As for the New Fourth Law, which says a robot ‘may’ do whatever it likes, the level of contradiction inherent in that statement is remarkable. What does it mean? I grant that the verbal expression of robotic laws is far less exact than their underlying forms as structured in a robot’s brain, but even the mathematical coding of the Fourth Law is uncertain.”
“I meant it to be vague,” Fredda said. “That is, I mean there to be a high level of uncertainty. I grant there is a basic contradiction in a compulsory instruction to act with free will, but I was forced to deal within the framework of the compulsory, hierarchical nature of the first three of the New Laws.”
“But even so,” Donald said. “The Fourth New Law sets up something utterly new in robotics-an intralaw conflict. The original Three Laws often conflict with each other, but that is one of their strengths. Robots are forced to balance the conflicting demands; for example, a human gives an order for some vitally important task that involves a very slight risk of minor harm to the human. A robot that is forced to deal with such conflicts and then resolve them will act in a more balanced and controlled fashion. More importantly, perhaps, it can be immobilized by the conflict, thus preventing it from acting in situations where any action at all would be dangerous.
“But the Fourth New Law conflicts with itself; and I can see no possible benefit in that. It gives semi-compulsory permission for a robot to follow its own desires-although a robot has no desires. We robots have no appetites, no ambitions, no sexual urges. We have virtually no emotion, other than a passion for protecting and obeying humans. We have no meaning in our lives, other than to serve and protect humans-nor do we need more meaning than that.
“The Fourth Law in effect orders the robot to create desires, though a robot has none of the underlying urges from which desires spring. The Fourth Law then encourages-but does not require-the robot to fulfill these synthetic desires. In effect, by not compelling a New Law robot to fulfill its needs at all times, the Fourth Law tells a robot to fulfill its spurious needs part of the time-and thus, it will not fulfill them at other times. It is compelled, programmed, to frustrate itself from time to time.
“A true robot, a Three-Law robot, left to its own devices, without orders or work or a human to serve, will do nothing, nothing at all-and be not at all disturbed by its lack of activity. It will simply wait for orders, and be alert for danger to humans. A New Law robot without orders will be a mass of conflicted desires, compelled to want things it does not need, compelled to seek satisfaction only part of the time.”
“Very eloquent, Donald,” Kresh said. “I don’t like New Law robots any better than you do-but what does it have to do with the case?”
“A great deal, sir. New Law robots want to stay alive-and they know that it is not by any means certain they will do so. Prospero in particular knew that Grieg was considering extermination as a possibility. They might well have decided to act in a misguided form of self-defense. The New Laws would permit them to cooperate with humans and assist in a murder, so long as they did not actually do the killing themselves. Caliban, of course, has no Laws whatsoever. There are no limits to what he might do. There is nothing in robotics to prevent him actually pulling the trigger.”
“A rather extreme view, Donald,” Fredda said, quite surprised by the vehemence of Donald’s arguments.
“It is a rather extreme situation, Dr. Leving.”
“Do you have any evidence for all of this, aside from elaborate theory-spinning? Do you have any concrete reason for accusing Prospero and Caliban?”
“I have their confession,” Donald said.
“Their what?” Fredda almost shouted.
Donald held up a cautionary hand. “They confessed to blackmail, not murder. However, it is a frequent tactic of criminals to confess to a lesser charge in order to avoid a graver one.”
“Blackmail?” Kresh asked. “What the devil were they going to blackmail Grieg with?”
“Everything,” Donald said. “It has been an open secret for some time that Prospero has been in league with the rustbackers, seeking to get as many New Law robots as possible off Purgatory. In that capacity, he has accumulated a great deal of information on all the people-some of them quite well known-involved in the rustbacking business, and has made it his business to collect confidential information-preferably negative information-about virtually every public figure on this planet. Prospero told me that he had threatened Grieg with the release of all of it if the New Law robots were exterminated. The ensuing scandals would paralyze society, at the very least. He was, in effect, blackmailing the office, not the man. Do what I say or I ruin your society. It is a tribute to the Governor’s integrity that Prospero was forced to such a tactic.”
“In what way?” Kresh asked.
“Clearly, Prospero would not have needed to offer the threat he did if he had been able to learn a few unpleasant details about Governor Grieg himself. Since he could not locate any such information, he was forced into the far more difficult task of accumulating enough scurrilous information on everyone else that Grieg would not dare have it all get out.”
“So Prospero was willing to blackmail Grieg. What about Caliban?”
“My interrogation of the two of them was necessarily rather brief, but it was my impression that it was Prospero making the threats, perhaps without Caliban’s foreknowledge. Caliban, I must confess, seemed most unhappy to be involved in the whole affair.”