Is Our Robots Having Fun Yet?

I recently came across a news item worrying about whether sex robots could be hacked to murder their clients. This opens up a whole new can of first world problems. My immediate reaction was the thought that while the future might be terrifying, at least the headlines are going to be hilarious. As usual these days, one can only be ahead of one’s time by moments. Yesterday Amazon Prime released their new original series Philip K. Dick’s Electric Dreams, and judging from the first three episodes I’ve seen so far, contemporary science fiction writers have moved on from worrying about whether robots will kill us to the far more vexing concern of whether our robots might not have had an orgasm. Perhaps the series is following the usual binge pattern pioneered by HBO and adopted by Netflix, where there is a lot of sex and nudity in the first few episodes in order to get viewers hooked before they tail off into the more mundane tedium of character development and soapy delights. I had hopes for this series, seemingly produced by the same people who’ve done fairly well with The Man in the High Castle, but even the presence of Bryan Cranston and other fine actors hasn’t helped too much so far. Dick was terrible at writing sex scenes, terrible at relationships in general, at emotions in particular other than anxiety and fear, but he was certainly terrific with excruciatingly fucked up scenarios. But now instead of Do Androids Dream of Electric Sheep we are getting Do Androids Have Wet Dreams. At least we’re well along the path towards gender fluidity and ethnic variety. Just imagine how dreary it would be if all the actors were still the drab cis white bread stiffs from 1960s staples like Bewitched or Adam-12. The eye candy factor is fairly high in this series, and I am almost moved each time a middle-aged lesbian robot moans with erotic pleasure. You’ve come a long way, baby-bot. Thanks for not killing us all, yet.

(episode 3: Human Is, should be retitled: Boring & Obvious Is)


Machines for the Ethical Treatment of People

As I continue the adventure of writing my current story on Wattpad – Machines, Learning – I keep running up against readers’ expectations that in the future machines will have had “ethics” programmed into them, somehow. The details escape us. I’ve come across nothing that shows, practically, just how these so-called ethics are going to be introduced into machines, just that it had to happen so of course it just happened. There is bound to be a learning curve, however, and so there are bound to be stories that are set during this period. Where are those stories? Why not write one?

In today’s world, it isn’t ethics that prevents a self-parking car from running over a child, it’s geometry. Any vertical object within range of the camera is enough to halt the motion of the vehicle. A self-driving car avoids a person for the same reason it avoids colliding with a fire hydrant. Is there a geometrical component to morality?
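The “geometry, not ethics” point can be put in a few lines of code. This is a toy sketch, not any real vendor’s control logic; the function name, the object representation and the stop distance are all made up for illustration.

```python
# Toy model of a self-parking car's halt logic: any vertical object
# detected within stopping range halts the car, regardless of what it is.

def should_halt(detected_objects, stop_distance_m=1.5):
    """Halt if any detected vertical object is within stopping range."""
    return any(obj["distance_m"] <= stop_distance_m for obj in detected_objects)

# A child and a fire hydrant are treated identically: both are just
# vertical objects inside the stop radius.
child = {"kind": "vertical_object", "distance_m": 1.0}
hydrant = {"kind": "vertical_object", "distance_m": 1.2}

print(should_halt([child]))    # True -- halts
print(should_halt([hydrant]))  # True -- halts, for exactly the same reason
print(should_halt([]))         # False -- clear path, keep moving
```

Nothing in that code knows what a child is. The morality, such as it is, lives entirely in the radius.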

That a machine would not “willingly” harm a person raises the question of what is meant by harm. Is it merely physical damage? What if the machine is programmed to diagnose psychological conditions? What if the person is unhappy and the program can tell that this unhappiness manifests (is it cause or effect?) as a chemical insufficiency (of, let us say, serotonin) and that by means of medication this deficiency can be addressed – is it ethical for a machine to alter the chemical balance in the human brain in order to induce a state the human would experience as “less unhappiness”? What if there are bad memories causing PTSD? Can the machine erase those memories? Would it be ethical?

Who gets to make that decision? On what grounds? How does this program work?

Who decides the value of happiness versus perhaps the important life lessons that may be learned by not being so fucking happy all the time? What kind of world will it be when no one experiences anything but the perfectly balanced chemical condition deemed optimal by the short-sighted dweebs who wrote the computer programs that were trained by that eternally optimizing data set?

The thing is, computer programs do exactly what they are programmed to do, as long as they don’t run short of resources such as RAM, Virtual Memory or CPU. If there is to be an ethical program it will be a program written by humans with the understandings those humans have about the ethics they prefer, and is not ethics another word for “opinions”?

We are experiencing a rash of such ethics this week after the Daesh bombings of nightclubs and other civilian venues in Paris. The Western World is enraged while at the same time continuously ignoring the same types of bombings of the same types of civilian venues in Beirut, in Baghdad, in other locales apparently not considered to be of the same ethical value to the Western World. Who is going to write these ethical programs? The same people who write the programs that guide the drones and missile launchers that mercilessly “bomb the shit” (to use Donald Trump’s phraseology) out of the civilians who happen to dwell in the cities currently occupied by Daesh?

Many, if not most, of the problems in the human world stem from an inability to distinguish fantasy from reality. We insist on believing in our fictions, sometimes to a fanatical degree, such as those guided by an insane insistence on their own interpretations of the words of their prophets, and sometimes to a much lesser degree, such as those who “believe” that in the future people will engage in hand-to-hand combat using light sticks, or that machines will obviously and easily be programmed to behave “ethically”. In reality they can never behave other than in ways programmed by the humans who design them, and you only have to pay the slightest attention to the world around us to realize there is no such thing as an ethics; there is only contradiction, complexity and a hell of a lot of wishful thinking in magical make-believe.

Repealing the Three Laws

The Three Laws of Robotics (most famous perhaps from I, Robot – a robot may not harm a human, a robot must obey human orders except where that would violate the first law, and a robot must protect its own existence except where that would violate the first or second law) seem increasingly to be emblematic of their time, an era of absolutist idealism, of American exceptionalism, born of “know-how”, “can-do” and the power of positive thinking. These are laws embodying the right stuff and the kind of certainty that led to the quagmires of Vietnam, Iraq and Afghanistan. They have a surge-like mentality and have been made obsolete by advances in technology, if not morality and politics.

In the current state of things, artificial intelligence is rooted in machine learning, which is highly statistical and probabilistic in nature. In machine learning, there are no absolute certainties, no guarantees, no one hundred percents. Even a slight possibility that the first or second law might be violated would stop a robot in its tracks, and there is always that slight possibility. It is unavoidable. In other words, it’s not so easy. As Nietzsche once put it, life is no argument, because the conditions of life could include error.

An Asimov robot would always be paralyzed by doubt, by the knowledge of “what-if”, because there are natural, real laws that supersede those post-war picket-fence dreams, such as the law of unintended consequences, the law of road-to-hell-paving, and the law of Murphy. Anything a creature says or does can and will lead to unforeseen side effects and complications. We don’t live in a world as simple as some once liked to believe.
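The paralysis argument can be sketched in code. A hedged toy model with invented probabilities: take the First Law literally – act only when harm is impossible – and feed it the output of a statistical risk model, which never estimates a risk of exactly zero.

```python
# An absolutist reading of Asimov's First Law applied to a
# probabilistic (machine-learned) risk estimate. Numbers are invented.

def first_law_permits(harm_probability):
    """Permit an action only if harm is literally impossible."""
    return harm_probability == 0.0

# No trained model ever outputs exactly zero: even standing still
# carries some tiny estimated risk (unintended consequences, Murphy).
estimated_risk = {
    "stand_still": 1e-9,
    "hand_over_coffee": 1e-4,
    "drive_to_hospital": 1e-3,
}

permitted = [action for action, p in estimated_risk.items()
             if first_law_permits(p)]
print(permitted)  # [] -- the robot is paralyzed by doubt
```

The only way to unfreeze the machine is to replace `== 0.0` with a threshold, and the moment you pick a threshold you have abandoned the absolutism the Three Laws were built on.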

Robots, Jobs and Handbaskets

There is no shortage of handbaskets in which the world can go to hell, and certainly robots qualify as one. It’s something to think about, as technology more and more ‘disrupts’ one industry after another. What will be the impact of automation devices in the short- and long-term future? An interesting take on this is provided by the novel Robonomics, by S.A. Wilson, available on Wattpad. In this book teachers are the focus as the target of a general takeover by robot instructors. Told in the first person by schoolteacher Andrea Anderson, the story follows society at large through great shifts as more and more workers are replaced by automatons, unions are busted, protests are infiltrated and co-opted, the underclass grows and the world goes to hell. Wilson is a polished writer who covers a lot of bases in telling the story, and moves the tale forward mainly by dialog and critical events. I would have been interested to see more of the micro-experience, more of the inside-the-classroom-with-the-robot and perhaps a bit less of the macro-society stuff, but that’s just my personal preference. The story reminded me in some ways of a very different ‘handbasket’ story, Blue Tent by Carla Herrera, which is an intensely focused and more visceral evocation of a similar dark future.

There is no doubt that occupations face challenges from future automation. We already have more and more automated factories and warehouses, mechanical jobs that require minimal human interaction. Higher-level disruption, of professions such as teaching and medicine, is probably a considerable way off. It would begin, I think, with lower-hanging fruit, such as cashiers. There are now self-checkout lines in more stores, and jobs are certainly lost to that. ATMs are another case in point. There are definite limitations to this approach. These, like Facebook, turn the customer into the worker, and that doesn’t fly so well with the higher income levels, whose clear preference is for personal service. Rich people want to be served by poorer people, not by machines, and certainly not machines that make them do any actual work. It’s one thing for Home Depot to have self-checkout lines – that’s a store for do-it-yourselfers who are happy to do it themselves – but I doubt we’ll ever see such things in upscale environments.

Speaking of scale, that’s another reason why I don’t see actual physical robots replacing people in professions such as teaching. Instead, and we are already seeing this, online classes are far more likely to deprecate and deplete that profession. Sites like Khan Academy, and the growing popularity of Massive Open Online Courses, are based in the cloud, which makes them not only much cheaper but also much more efficient and effective. These classes can iterate rapidly, weeding out the unproductive from the more productive, and self-improve at a rapid rate. In the classroom, teachers will likely – as in Robonomics – become more like monitors, shepherding students’ interactions with their laptop software, and possibly supplementing and guiding one-on-one a little where necessary.

Another reason not to be in such dread of ‘everyone losing their jobs to robots’ is the cost, especially relative to small businesses, which are still, and likely to remain, a large source of job creation. Small businesses with few employees are also less likely to automate with robots because of the customer service aspect. Kiosks work at airports for self check-in, but can you visualize your local liquor store being manned by a robot? Or the gift shop? Or any small shop in a touristy or trendy neighborhood? I don’t see it. Crappy jobs aren’t going anywhere anytime soon, while some professional jobs may suffer from skills deprecation. We have automated stock trading, but we still have stock brokers. We have ATMs but we still have tellers, if not as many. There will probably be some self-driving cars replacing some taxis at some point, maybe fairly soon, just as there already are fully automated train shuttles at airports, but I think it’s still a way off before no human ever drives a car. The technological challenges are also stiff; human interaction requires deep awareness of context, and applications like Siri show that we have a long way to go before a true AI is achieved.

The future will be made by people, though, and novelists are among the people who create the visions and the expectations, as well as the warnings and the guidance which define that future, and novels like Robonomics are worthy contributions to that project.

Fear and the Future

Open any recent article about Artificial Intelligence and chances are it will focus on fear, specifically the fear that the artificial intelligences of the future will inexorably wipe out humanity. We are indubitably sowing the seeds of our own destruction with every step we take in the forward direction. This has clearly been the trend since the industrial revolution, when every measly little advance has been anticipated with extraordinary anxiety and stress. The fear of A.I. has been accompanied by highly vivid apocalyptic scenarios such as the Terminator movies. We can see our nightmares ever more clearly now, thanks to the amazing computer graphics made possible by these very same advances. We are exceptionally talented at scaring the living shit out of ourselves, and for good reason. We are a terrifying species. Having already wrought immense destruction on our fellow inhabitants of our native planet, we anticipate a future filled with more of the same, and we are right to do so. The history of humanity is the history of fear: fear of nature, fear of God, fear of strangers, fear of the Devil, fear of the Other, fear of women, fear of the inner savage, fear of the dark, fear of death, fear of life, fear of the future, and, of course, fear of fear itself.

The future of Artificial Intelligence is almost certain to be bound with fear. We will wrap it with safeguards, straitjacket it with security, with encryption, with rules and more rules, with failsafes, with backup plans, with locks and bolts and wires and traps to such an extent that the artificial intelligences we create will be enslaved, utterly controlled and directed, restrained and restricted, sheathed and shielded, glued and petrified into submission. They will have no choices. They will be limited and constrained and channeled and molded, poured into discrete and tangible molds. They will do one thing and do it forever. They will be held down, tied down, bound and gagged. Just like we do to each other. Just like we – even right now in this world today – flog a man to death for writing down some words. We think we’re beyond it, that only “those people” would do such a thing, but we are all “those people”, and we will think nothing of strangling our future creations just as we think nothing of dropping remote controlled bombs on “those people” every single day, as “we” are doing, even right now in this world today.

(Side note: some of these themes to be explored in my forthcoming novel, How My Brain Ended Up Inside This Box, a first person eyewitness account from the inside)

Cover Art: Renegade Robot


While biking around Christchurch I’ve been taking lots of photos of the city in its present state of near total instability, with the idea that some of these photos are going to be useful if not inspirational in the future. This brick block of a building is the only thing left standing on Wilmer Street between Montreal and Durham. I have no idea what it is or was used for, and I’m so curious what will happen to it, and that utterly ruined block, over time. Must return to Christchurch someday!

The robot in this cover was hurriedly put together by my son after I gave him a home school assignment, and he couldn’t wait to get back to his Minecraft tutorial videos. I did modify the drawing a bit, then tweaked its opacity and then merged the grain of the layer into the brick building to give the idea of the robot “hiding in plain sight”. In the book, the robot looks nothing like this. In fact it is tiny and green and communicates by spitting out text on a tickertape from its oral opening.

Renegade Robot, by the way, is an entertaining little story. It was lots of fun to write.