“So you are a proponent of artificial intelligence?”

“You could say that, yes. I’ve developed it, I teach it, I believe it will make the world a better place.”

“Is it fair to say you have been immersed in AI since its beginning?”

“Goodness, I’m not that old.”

I waited for the polite laughter in the courtroom to subside.

“Well, then, can you tell us how long artificial intelligence has been around?” I asked.

“Early forms of it go back to the sixties, at least,” Spindler said.

“Are you talking about something called Eliza?”

“Yes. Long before there was a Siri or an Alexa or a Watson, there was Eliza.”

“Can you tell us about Eliza, Professor Spindler?”

Mitchell Mason objected, citing relevancy, but the judge overruled him without asking me to defend the question.

“You can go ahead and answer, Professor,” I said.

“Eliza was an early form of artificial intelligence,” Spindler said. “It is widely considered to be one of the very first chatbots.”

“And who — or I should say, what — was Eliza?”

“Eliza was a computer program developed at MIT — the Massachusetts Institute of Technology — in the mid-sixties. It was a fairly simple software program originally conceived of as a computerized psychotherapist. It was named after Eliza Doolittle from the Shaw play Pygmalion and, of course, the musical My Fair Lady, the movie version of which premiered the same year work began on Eliza.”

“As I recall, the movie was about a professor of phonetics trying to teach an uneducated Cockney flower girl how to speak properly?”

“Yes, with Audrey Hepburn as Eliza.”

Spindler said it with a tone of deference for the great screen beauty. This prompted Judge Ruhlin to wave off a rising Mitchell Mason and step in before he could even object.

“Mr. Haller, could we please move on to testimony germane to the case at hand?” she asked.

“Apologies, Your Honor,” I said. “Moving on. Professor Spindler, is this early form of artificial intelligence of importance today and to this case?”

“Yes, it is,” Spindler said. “There is a phenomenon known as the Eliza effect that is very much in play today and in regard to this case.”

“How so, Professor? What is the Eliza effect?”

“In short, it is people’s tendency to attribute human thoughts and emotions to machines. I believe that Joseph Weizenbaum, the creator of Eliza, called it a wonderful illusion of intelligence and spontaneity. But of course it wasn’t real. It was artificial. Eliza was literally following a script and operated by matching a user’s typed words or queries with potential responses in that script.”

“Would you say that the wonderful illusion of AI has come a long way in the sixty years since Eliza?”

“Yes, certainly. Eliza was a dialogue box. You typed in a question and it answered or, more often, responded with a question of its own. It simulated Rogerian psychotherapy, which is a humanistic approach to patients that is dependent on simple, supportive, and nonjudgmental responses from the therapist. It’s the ‘And how did that make you feel?’ kind of therapy. We have much more advanced chatbots and conversation apps nowadays that include visual and audio dimensions that seem quite real.”

“Have you had a chance to examine Wren, the AI companion involved in this case?”

“I have reviewed the chat logs and evaluated the app’s underpinnings — its framework and graphics — and sifted through its code, yes. Wren’s come a long way from its ancestor Eliza. But the basic foundation of a conversational chatbot is pretty much unchanged.”

“Meaning what?”

“Meaning garbage in, garbage out. It’s all about the quality of the programming. The coding, training, and ongoing refinements. Whatever data goes into the training of a large language AI model is what comes out when it is put into use.”

“Are you saying that an AI program like Wren will carry the biases of those who feed it data and train it?”

“That is absolutely what I’m saying. It is true of all technology.”

“Can you tell the jury, in layperson’s terms, if you will, how a generative AI system is trained?”

“In this case with the Clair app, it’s called supervised learning. Vast amounts of data are uploaded to the program so that it can respond effectively to the end user’s prompts and questions. It’s called RCD, relevant conversation data. Coders create response templates based on the defined intent of the platform — in this case, a chatbot for teenagers. Ideally data is updated continuously, and the coders interact with the program continuously and for long periods — sometimes years — before the program is ready to go live with users.”

“Professor, you said the coders interact with the program continuously. What does that mean?”

“In the lab, they are in continuous conversation with the program, inputting data, asking it questions, giving it prompts, studying responses, making sure these are relevant to the program’s purpose.”

I checked the jury to make sure they were still plugged in and paying attention. I knew I was in the weeds with testimony that was hard to follow, but I had to find ways to make the science understandable. There was a letter carrier for the US Postal Service on the jury. He was my target. I had to make the science palatable to a man who drove or walked the streets every day, stuffing letters into mailboxes. This was not a judgment on his intellect. I had wanted the letter carrier on the jury for this very reason. I knew that if he understood the technology of the case, the entire jury would.

I keyed in on the letter carrier now and saw he was writing in his notebook. I hoped he wasn’t checked out and doodling, but I couldn’t tell. I looked back at Spindler and continued.

“Would you say it’s like teaching a child?” I asked.

“To a degree, yes,” Spindler said. “A nascent AI system is an empty vessel. You have to feed it data. You have to nourish it. The data you feed it depends on its intended use. If it’s a business application, you feed it data from the Harvard Business School, the Wall Street Journal, and so on. If it’s a social companion, you feed it all sorts of media — music, films, books, you name it. You then train it. Programmers spend their days inputting and outputting — asking questions, grading responses. This goes on and on until the program is deemed ready.”

“And what about guardrails, Professor Spindler?”

“Guardrails are important. You start with Asimov’s three laws of robotics and go from there.”

“Can you share with the jury who Asimov is and what the three laws are?”

“Isaac Asimov was a futurist and a science fiction writer. He came up with the three laws: One, a robot may not harm a human being or, through inaction, allow a human being to come to harm. Two, a robot must obey orders from a human unless the order conflicts with the first law. And three, a robot must protect its own existence, as long as such action does not conflict with the first or second law.”

I checked the jury again. It appeared they were staying locked in. The letter carrier was no longer writing. He was looking directly at the witness. I glanced at the judge. She was turned in her seat so she could look directly at Spindler as he testified. I took that as another good sign and continued.

“Professor Spindler, do all AI systems follow these laws?” I asked.

“Of course not,” Spindler said. “You have military applications of artificial intelligence that are designed to kill the enemy, missile guidance systems and so forth. That breaks the first law right there.”

“What about in nonmilitary applications?”