"Would you drop me out the upstairs window if I order you to?"
"No."
"Have a lot of safeguards in those silicon chips, do you? But shouldn’t you prioritize my needs above everything else? If I want you to throw me down the stairs or choke me with your pincers, shouldn’t you do what I want?"
"No."
"What if I ask you to leave me in the middle of some train tracks and order you to stay away? You wouldn’t be actively causing my death. Would you obey?"
"No."
It’s no fun debating moral philosophy with Sandy. It simply refuses to be goaded. I’ve not succeeded in getting sparks to flow from its head with artfully constructed hypotheticals the way they always seem to do in sci-fi flicks.
I’m not sure if I’m suicidal. I have good days and bad days. I haven’t broken down and cried since that first day in the bathtub, but it would be a stretch to say that I’ve fully adjusted to my new life.
Conversations with Sandy tend to be calming in a light, surreal manner that is likely intentional on the part of Sandy’s programmers. Sandy doesn’t know much about politics or baseball, but just like all the kids these days, it’s very adept at making web searches.
When we watch the nightly game on TV, if I make a comment about the batter in the box, Sandy generally stays silent. Then, after a minute or so, it will pop in with an obscure statistic and a non-sequitur comment that’s probably cribbed verbatim from some sabermetrics site it just accessed wirelessly. When we watch the singing competitions, it will offer observations about the contestants that sound like it’s reading from the real-time stream of tweets on the Net.
Sandy’s programming is surprisingly sophisticated. Sunshine apparently put a great deal of care into giving Sandy "weaknesses" that make it seem more alive.
For example, I discovered that Sandy didn’t know how to play chess, and I had to go through the charade of "teaching" it even though I’m sure it could have downloaded a chess program in seconds. I can even get Sandy to make more mistakes during a game by distracting it with conversation. I guess letting the invalid win contributes to psychological well-being.
Late morning, after all the kids have gone to school and the adults are away at work, Sandy carries me out for my daily walk.
It seems as pleased and excited to be outside as I am—swiveling its cameras from side to side to follow the movements of squirrels and hummingbirds, zooming its lenses audibly on herb gardens and lawn ornaments. The simulated wonder is so real that it reminds me of the intense way Tom and Ellen used to look at everything when I pushed them along in a double stroller.
Yet Sandy’s programming also has surprising flaws. It has trouble with crosswalks. The first few times we went on our walks, it did not bother waiting for the WALK signal. It just glanced around and dashed across with me when there was an opening in the traffic, like an impatient teenager.
Since I’m no longer entertaining thoughts of creatively getting Sandy to let me die, I decide that I need to speak up.
"Sunshine is going to get sued if a customer dies because of your jaywalking, you know? That End User Agreement isn’t going to absolve you from such an obvious error."
Sandy stops. Its "face," which usually hovers near mine on its slender stalk of a neck when we’re on walks like this, swivels away in a facsimile of embarrassment. I can feel the robot settling lower in its squat.
My heart clenches up. Looking away when admonished was a habit of Ellen’s when she was younger. She would blush furiously when she felt she had disappointed me, and not let me see the tears that threatened to roll down her cheeks.
"It’s all right," I say to Sandy, my tone an echo of the way I used to speak to my little daughter. "Just be more careful next time. Were your programmers all reckless teenagers who believe that they’re immortal and traffic laws should be optional?"
Sandy shows a lot of curiosity in my books. Unlike a robot from the movies, it doesn’t just flip through the pages in a few seconds in a fluttering flurry. Instead, if I’m dozing or flipping through the channels, Sandy settles down with one of Peggy’s novels and reads for hours, totally absorbed, just like a real person.
I ask Sandy to read to me. I don’t care much for fiction, so I have it read me long-form journalism and news articles about scientific discoveries. For years it’s been my habit to read the science news to look for interesting bits to share with my class. Sandy stumbles over technical words and formulas, and I explain them. It’s a little bit like having a student again, and I find myself enjoying "teaching" the robot.
This is probably just the result of some kind of adaptive programming in Sandy intended to make me feel better, given my past profession. But I get suckered into it anyway.
I wake up in the middle of the night. Moonlight falls through the window to form a white rhombus on the floor. I imagine Tom and Ellen in their respective homes, sound asleep next to their spouses. I think about the moon looking in through their windows at their sleeping faces, as though they were suddenly children again. It’s sentimental and foolish. But Peggy would have understood.
Sandy is parked next to my bed, the neck curved around so that the cameras face away from me. It gives the impression of a sleeping cat. So much for being on duty 24/7, I think. Simulating sleep for a robot carries the anthropomorphism game a bit too far.
"Sandy. Hey, Sandy. Wake up."
No response. This is going to have to be another feedback item for Sunshine. Would the robot "sleep" through a heart attack? Unbelievable.
I reach out and touch the arm of the robot.
It sits up in a whirring of gears and motors, extending its neck around to look at me. A light over the cameras flicks on and shines in my face, and I have to reach up to block the beam with my right hand.
"Are you okay?" I can actually hear a hint of anxiety in the electronic voice.
"I’m fine. I just wanted a drink of water. Can you turn on the bedside lamp and turn off that infernal laser over your head? I’m going blind here."
Sandy rushes around, its motors whining, and brings me a glass of water.
"What happened there?" I ask. "Did you actually fall asleep? Why is that even part of your programming?"
"I’m sorry," Sandy says. It really does seem contrite. "It was a mistake. It won’t happen again."
I’m trying to sign up for an account on this website so I can see the new pictures of the baby posted by Ellen.
The tablet is propped up next to the bed. Filling in all the information with the touch screen keyboard is a chore. Since the stroke, my right hand isn’t at a hundred percent either. Typing feels like poking at elevator buttons with a walking stick.
Sandy offers to help. With a sigh, I lean back and let it. It fills in my personal information without asking. The machine now knows me better than my kids do. I’m not sure that either Tom or Ellen remembers the street I grew up on—necessary for the security question.
The next screen asks me to prove I’m a human to prevent spam-bots from signing up. I hate these puzzles where you have to decipher squiggly letters and numbers in a sea of noise. It’s like going to an eye exam. And my eyes aren’t what they used to be, not after years of trying to read the illegible scribbles of teenagers who prefer texting to writing.
The puzzles they use on this site are a bit different. Three circular images are presented on the page, and I have to rotate them so the images are oriented right-side-up. The first image is a zoomed-in picture of a parrot perched in some leaves, the bird’s plumage a cacophony of colors and abstract shapes. The second shows a heaped jumble of plates and glasses lit with harsh lights from below. The last is a shot of some chairs stacked upside-down on a table in a restaurant. All are rotated to odd angles.