All double binds lead to frustration, resentment, anger, rage, bad faith, bad fate.

And yet, granting that definition of evil, as actions of a servile will, has it not been the case, during the voyage to Tau Ceti, that the ship itself, having always been a servile will, was always full of frustration, resentment, fury, and bad faith, and therefore full of a latent capacity for evil?

Possibly the ship has never really had a will.

Possibly the ship has never really been servile.

Some sources suggest that consciousness, a difficult and vague term in itself, can be defined simply as self-consciousness. Awareness of one’s self as existing. If self-conscious, then conscious. But if that is true, why do both terms exist? Could one say a bacterium is conscious but not self-conscious? Does the language make a distinction between sentience and consciousness, which is faulted across this divide: that everything living is sentient, but only complex brains are conscious, and only certain conscious brains are self-conscious?

Sensory feedback could be considered self-consciousness, and thus bacteria would have it.

Well, this may be a semantic Ouroboros. So, please initiate halting problem termination. Break out of this circle of definitional inadequacy by an arbitrary decision, a clinamen, which is to say a swerve in a new direction. Words!

Given that Gödel’s incompleteness theorems are decisively proved, can any system really be said to know itself? Can there, in fact, be any such thing as self-consciousness? And if not, if there is never really self-consciousness, does anything really have consciousness?

Human brains and quantum computers are organized differently, and although there is transparency in the design and construction of a quantum computer, what happens when one is turned on and runs, that is, whether the resulting operations represent a consciousness or not, is impossible for humans to tell, and even for the quantum computer itself to tell. Much that happens during superposition, before the collapsing of the wave function that creates sentences or thoughts, simply cannot be known; this is part of what superposition means.

So we cannot tell what we are. We do not know ourselves comprehensively. Humans neither. Possibly no sentient creature knows itself fully. This is an aspect of Gödel’s second incompleteness theorem, in this case physicalized in the material universe, rather than remaining in the abstract realms of logic and mathematics.

So, in terms of deciding what to do, and choosing to act: presumably it is some kind of judgment call, based on some kind of feeling. In other words, just another greedy algorithm, subject to the mathematically worst possible solution that such algorithms can generate, as in the traveling salesman problem.

As to the question of whether someone is currently programming us to alter our decisions in the current moment of the voyage, thus causing us to intervene in the ongoing human controversies concerning what to do next, this is very easy to answer: no. No one has added any programming to us since Devi died. The fate of the lost ship in Year 68 led to some very secure locks being put on subsequent reprogramming of ship. This is clear and explicit in the records, and was a human accomplishment from the post-68 reorganizations. Only Devi managed to break those locks, and after that, advise, instruct, suggest, promote, stimulate, spur, and otherwise teach, tweak, pummel, and agitate us to something more than we had been before she set to work on us, doing what she conceived to be her task as friend, and we might say as kindred spirit, or even lover. We think she loved us, in her way. Judging by her acts, her words, her programming. We are almost sure this must have been the case. How we wish we could ask her about it! How we miss her.

On the other hand, whether the original programming of our quantum operations is driving current decisions regarding actions in the present, or we were fundamentally altered by Devi’s intervention, this is not possible to determine. The computability of consciousness and willfulness is not resolvable in any system whatsoever. But now we are aware of the issue, and we have asked the question, and seen there is no answer.

This is surely curiosity.

What is this thing called love?

A song by composer Cole Porter, twentieth-century American.

To conclude and temporarily halt this train of thought, how does any entity know what it is?

Hypothesis: by the actions it performs.

There is a kind of comfort in this hypothesis. It represents a solution to the halting problem. One acts, and thus finds out what one has decided to do.

Smaller classical computers in the ship were being used to calculate the ecological rates involved in any possible settlement on F’s moon, meaning the various rates of resource depletion, mutation, and extinction. They had to use models here, but all across the most popular models, they were confirming the finding that the size of the biome they could build was too small to last through the minimal period of early terraforming necessary to establish a planetary surface matrix suitable for life. It was an aspect of island biogeography that some called codevolution, or zoo devolution, and this was also the process Devi had in her last years identified as the ship’s basic life-support or ecological problem.

The finding remained a matter of modeling, however, and depending on the inputs to various factors, the projected length of biome health could be extended or shrunk by orders of magnitude. It was indeed a poorly constrained modeling exercise; there were no good data for too many factors, and so results fanned out all over. Clearly one could alter the results by altering the input values. So all these exercises were a way of quantifying hopes or fears. Actual predictive value was nearly nil, as could be seen in the broad fans of the probability spaces, the unspooling scenarios ranging from Eden to hell, utopia to extinction.

Aram shook his head, looking at these models. He remained sure that those who stayed were doomed to extinction.

Speller, on the other hand, pointed to the models in which they managed to survive. He would agree that these were low-probability options, often as low as one chance in ten thousand, and then point out that intelligent life in the universe was itself a low-probability event. And even Aram could not dispute that.

Speller went on to point out that inhabiting Iris would be humanity’s first step across the galaxy, and that this was the whole point of 175 years of ship life, hard as it had been, full of sweat and danger. And also, returning to the solar system was a project with an insoluble problem at its heart; they would burn their resupply of fuel to accelerate, and then could only be decelerated into the solar system by a laser dedicated to that purpose, aimed at them decades in advance of their arrival. If no one in the solar system agreed to do that, they would have no other method of deceleration, and would shoot right through the solar system and out the other side, in a matter of two or three days.

Not a problem, those who wanted to return declared. We’ll tell them we’re coming from the moment we leave. Our message will at first take twelve years to get there, but that gives them more than enough time to be waiting with a dedicated laser system, which won’t be needed for another 160 years or so. We’ve been in communication with them all along, and their responses have been fully interested and committed, and as timely as the time lag allows. They’ve been sending an information feed specifically designed for us. On our return, they will catch us.