“You can stay,” she says neutrally. “I’ll take the futon.” She pulls her police specs on again, then pauses, one finger hovering over the power button. “I still love you, you know. I just wish things weren’t so messy.”

Then she pushes the button.

LIZ: Project ATHENA

“People laugh when they hear the phrase ‘artificial intelligence’ these days.” MacDonald is on a roll. “But it’s not funny; we’ve come a long way since the 1950s. There’s a joke in the field: If we know how to do it, it’s not intelligence. Playing checkers, or chess, or solving mathematical theorems. Image recognition, speech recognition, handwriting recognition. Diagnosing an illness, driving a car through traffic, operating an organic-chemistry lab to synthesize new compounds. These were all thought to be aspects of intelligence, back in the day, but now they’re things you can buy through an app store or on lease-purchase from Toyota.

“What people think of when you say ‘artificial intelligence’ is basically stuff they’ve glommed onto via the media. HAL 9000 or Neuromancer—artificial consciousness. But consciousness—we know how that shit works these days, via analytical cognitive neurobiology and synthetic neurocomputing. And it’s not very interesting. We can’t do stuff with it. Worst case—suppose I were to sit down with my colleagues and we come up with a traditional brain-in-a-box-type AI, shades of HAL 9000. What then? Firstly, it opens a huge can of ethical worms—once you turn it on, does turning it off again qualify as murder? What about software updates? Bug fixes, even? Secondly, it’s not very useful. Even if you cut the Gordian knot and declare that because it’s a machine, it’s a slave, you can’t make it do anything useful. Not unless you’ve built in some way of punishing it, in which case we’re off into the ethical mine-field on a pogo-stick tour. Human consciousness isn’t optimized for anything, except maybe helping feral hominids survive in the wild.

“So we’re not very interested in reinventing human consciousness in a box. What gets the research grants flowing is applications—and that’s what ATHENA is all about.”

You’re listening to his lecture in slack-jawed near comprehension because of the sheer novelty of it all. One of the crushing burdens of police work is how inanely stupid most of the shit you get to deal with is: idiot children who think ‘the dog ate my homework’ is a decent excuse even though they knew you were watching when they stuck it down the back of their trousers. MacDonald is… well, he’s not waiting while you take notes, for sure. Luckily, your specs are lifelogging everything to the evidence servers back at HQ, and Kemal’s also on the ball. But even so, MacDonald’s whistle-stop tour of the frontiers of science is close to doing your head in. Then the aforementioned Eurocop speaks up.

“That is very interesting, Doctor. But can I ask you for a moment”—Kemal leans forward—“what do you think of the Singularity?”

MacDonald stares at him for a moment, as if he can’t believe what he’s being asked. “The what—” you begin to say, just as his shoulders begin to shake. It takes you a second to realize he’s laughing.

“You’ll have to excuse me,” he says wheezily, wiping the back of his hand across his eyes: “I haven’t been asked that one in years.” Your sidelong glance at Kemal doesn’t illuminate this remark: Kemal looks as baffled as you feel. “I, for one, welcome our new superintelligent AI overlords,” MacDonald declaims, and then he’s off again.

“What’s so funny?” you ask.

“Oh—hell—” MacDonald waves a hand in the air, and a tag pops up in your specs: “Let me give you the dog and pony show.” You accept it. His office dissolves into classic cyberspace noir, all black leather and decaying corrugated asbestos roofing, with a steady drip-drip-drip of condensation. Blade Runner city, Matrixville. “Remember when you used to change your computer every year or so, and the new one was cheaper and much faster than the old one?” A graph appears in the moisture-bleeding wall behind him, pastel arcs zooming upward in an exponential plot of MIPS/dollar against time—the curve suddenly flattening out a few years before the present. “Folks back then”—he points to the steepest part of the upward curve—“extrapolated a little too far. Firstly, they grabbed the AI bull by the horns and assumed that if heavier-than-air flight was possible at all, then the artificial sea-gull would ipso facto resemble a biological one, behaviourally… then they assumed it could bootstrap itself onto progressively faster hardware or better-optimized software, refining itself.”

A window appears in the wall beside you; turning, you see a nightmare cityscape, wrecked buildings festering beneath a ceiling of churning fulvous clouds: Insectile robots pick their way across grey rubble spills. Another graph slides across the end-times diorama, this one speculative: intelligence in human-equivalents, against time. Like the first graph, it’s an exponential.

“Doesn’t work, of course. There isn’t enough headroom left for exponential amplification, and in any case, nobody needs it. Religious fervour about the rapture of the nerds aside, there are no short-cuts. Actual artificial-intelligence applications resemble us about the way an Airbus resembles a sea-gull. And just like airliners have flaps and rudders and sea-gulls don’t, one of the standard features of general cognitive engines is that they’re all hard-wired for mirrored self-misidentification. That is, they all project the seat of their identity onto you, or some other human being, and identify your desires as their own impulses; that’s standard operating precaution number one. Nobody wants to be confronted by a psychotic brain in a box—what we really want is identity amplification. Secondly—”

Kemal interrupts again. You do a double-take: In this corner of the academic metaverse he’s come over all sinister, in a black-and-silver suit with peaked forage cap, mirrored aviator shades. “Stop right there, please. You’re implying that this, this field is mature? That is, that you routinely do this sort of thing?”

MacDonald blinks rapidly. “Didn’t you know?”

You take a deep breath. “We’re just cops: Nobody tells us anything. Humour us. Um. What sort of, uh, general cognitive engines are we talking about? Project ATHENA, is that one?”

“Loosely, yes.” He rubs at his face, an expression of profound bafflement wrinkling his brows. “ATHENA is one of a family of research-oriented identity-amplification engines that have been developed over the past few years. It’s not all academic; for example TR/Mithras. Junkbot.D and Worm/NerveBurn.10143 are out there now. They’re malware AI engines; the Junkbot family are distributed identity simulators used for harvesting trust, while NerveBurn… we’re not entirely sure, but it seems to be a sand-boxed virtual brain simulator running on a botnet, possibly a botched attempt at premature mind uploading…” He rubs his face again. “ATHENA is a bit different. We’re an authorized botnet—that is, we’re legal; students at participating institutions are required to sign an EULA that permits us to run a VM instance on their pad or laptop, strictly for research in distributed computing. There’s also a distributed screen-saver project for volunteers. ATHENA’s our research platform in moral metacognition.”

“Metacognition?”

“Loosely, it means we’re in consciousness studies—more prosaically, we’re in the business of telling spam from ham.” He shrugs apologetically. “Big contracts from telcos who want to cut down on the junk traffic: It pays our grants. The spambots have been getting disturbingly convincing—last month there was a report of a spearphishing worm that was hiring call girls to role-play the pick-ups the worm had primed its targets to expect. Some of them are getting very sophisticated—using multiple contact probes to simulate an entire social network—big ones, hundreds or thousands of members, with convincing interactions—e-commerce, fake phone conversations, the whole lot—in front of the victim. Bluntly, we’re only human; we can’t tell the difference between a spambot and a real human being anymore without face-to-face contact. So we need identity amplification to keep up.