Machines for the Ethical Treatment of People

As I continue the adventure of writing my current story on Wattpad – Machines, Learning – I keep coming up against readers’ expectations that in the future machines will have had “ethics” programmed into them, somehow. The details escape us. I’ve come across nothing that shows, practically, just how these so-called ethics are going to be introduced into machines, just that it had to happen, so of course it just happened. There is bound to be a learning curve, however, and so there are bound to be stories set during this period. Where are those stories? Why not write one?

In today’s world, it isn’t ethics that prevents a self-parking car from running over a child, it’s geometry. Any vertical object within range of the camera is enough to halt the motion of the vehicle. A self-driving car avoids a person for the same reason it avoids colliding with a fire hydrant. Is there a geometrical component to morality?
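To make the point concrete, here is a toy sketch of that “geometry, not ethics” rule – all distances and thresholds are invented for illustration, not taken from any real vehicle. The car brakes for anything tall enough inside its stopping distance, child and fire hydrant alike:

```python
# Toy obstacle check: brake for any vertical object inside the
# stopping distance, whether it is a child or a fire hydrant.
# All numbers here are invented for illustration.

def should_brake(obstacles, stopping_distance_m=5.0, min_height_m=0.3):
    """obstacles: list of (distance_m, height_m) tuples from the sensors."""
    return any(dist <= stopping_distance_m and height >= min_height_m
               for dist, height in obstacles)

child = (3.0, 1.1)    # a child three meters ahead
hydrant = (4.0, 0.8)  # a fire hydrant four meters ahead

print(should_brake([child]))    # True
print(should_brake([hydrant]))  # True - same rule, no morality involved
```

The same predicate fires for both; nothing in it encodes value, only height and distance.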

That a machine would not “willingly” harm a person raises the question of what is meant by harm. Is it merely physical damage? What if the machine is programmed to diagnose psychological conditions? What if the person is unhappy, and the program can tell that this unhappiness manifests (is it cause or effect?) as a chemical insufficiency (of, let us say, serotonin), and that by means of medication this deficiency can be addressed – is it ethical for a machine to alter the chemical balance in the human brain in order to induce a state the human would experience as “less unhappiness”? What if there are bad memories causing PTSD? Can the machine erase those memories? Would it be ethical?

Who gets to make that decision? On what grounds? How does this program work?

Who decides the value of happiness versus perhaps the important life lessons that may be learned by not being so fucking happy all the time? What kind of world will it be when no one experiences anything but the perfectly balanced chemical condition deemed optimal by the short-sighted dweebs who wrote the computer programs that were trained by that eternally optimizing data set?

The thing is, computer programs do exactly what they are programmed to do, as long as they don’t run short of resources such as RAM, virtual memory or CPU time. If there is to be an ethical program, it will be a program written by humans, with the understandings those humans have about the ethics they prefer – and is not ethics another word for “opinions”?

We are experiencing a rash of such ethics this week after the Daesh bombings of nightclubs and other civilian venues in Paris. The Western World is enraged while at the same time continuously ignoring the same types of bombings of the same types of civilian venues in Beirut, in Baghdad, in other locales apparently not considered to be of the same ethical value to the Western World. Who is going to write these ethical programs? The same people who write the programs that guide the drones and missile launchers that mercilessly “bomb the shit” (to use Donald Trump’s phraseology) out of the civilians who happen to dwell in the cities currently occupied by Daesh?

Many, if not most, of the problems in the human world stem from an inability to distinguish fantasy from reality. We insist on believing in our fictions, sometimes to a fanatical degree, as with those guided by an insane insistence on their own interpretations of the words of their prophets, and sometimes to a much lesser degree, as with those who “believe” that in the future people will engage in hand-to-hand combat using light sticks, or that machines will obviously and easily be programmed to behave “ethically”. In reality, machines can never behave other than in ways programmed by the humans who design them, and you only have to pay the slightest attention to the world around us to realize there is no such thing as a single ethics – there is only contradiction, complexity and a hell of a lot of wishful thinking in magical make-believe.


Bias, Conscious and Otherwise

My job had me go through a training session about “unconscious bias in interviewing”, which I found interesting in ways both expected and unexpected. I expected to be reminded of biases involving appearance, gender, age, voice, accent, nationality and so forth, but there were also some notions particular to interviewing which apply to other situations as well. For instance, there is a tendency to weigh the last part of the interview more heavily – a person stumbling over a question at the end is more impactful than an earlier stumble from which they later recovered, though every moment should count equally. Also, one well-answered question can override a multitude of poorly answered ones – this is called the “halo effect”. And we compare the person we are currently talking to with the people we most recently interviewed – that recency should carry no extra weight, but there it is: the “contrast effect”. It’s important to be aware of every kind of bias, yet there are so many! It’s hard to keep track.

We build our biases into our systems, often just as unconsciously as we apply them in our daily lives or in situations like interviews. I was recently working on a machine learning project to determine, by means of sensors and software, whether a residence is currently occupied or not. Motion sensors relay data throughout the day and night to a backend service, and a machine learning algorithm applies its initial model – gained from a training set – to the incoming information, producing probabilities of occupancy state. If little or no motion is detected throughout the day, the algorithm concludes that no one is at home, but given the same data throughout the night, the algorithm will decide that the occupants are sleeping. You can see a number of built-in biases here – that humans are nocturnal creatures, that they have day jobs, and that those jobs are outside the home. It’s also interesting to note that the time zone reported in the data is critical. It’s astounding to me how high the proportion of software bugs in such systems is because of errors involving time zones! How can the program be allowed to adapt for those homes where someone is working a graveyard shift, or some other non-standard routine? How can every exception possibly be accounted for without either severely diluting the criteria or creating configuration confusion?
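A drastically simplified sketch of an occupancy guesser like the one described, with the day-job bias baked right in – the probabilities and cutoffs below are invented for illustration, not taken from the actual project:

```python
# Toy occupancy model with the "nocturnal creatures with day jobs
# outside the home" bias built in. All probabilities are invented.

def occupancy_probability(motion_events_last_hour, local_hour):
    """Return P(someone is home) for one hour of sensor data."""
    if motion_events_last_hour > 0:
        return 0.95               # motion: almost certainly occupied
    if local_hour >= 23 or local_hour < 6:
        return 0.85               # quiet at night: probably asleep
    return 0.10                   # quiet by day: probably out at work

print(occupancy_probability(0, 3))   # 0.85 - assumed asleep
print(occupancy_probability(0, 14))  # 0.10 - assumed at work
```

Notice how a graveyard-shift worker gets both answers exactly backwards, and how everything hinges on `local_hour` being in the right time zone: one offset bug and the model swaps night for day.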

If we can’t help but build some biases into our machine learning systems, then considerations about the future of artificial intelligence have to include such flaws. Sophisticated computer programs are just as liable to “leap” to conclusions based on their limited experience, their sample sizes, and the biases built into their training data sets as we humans are every single day. Even a setting as routine and commonplace as a job interview is filled to the brim with pre-loaded implications. What will we think of Artificial Intelligences that are inherently conformist, stuffing people into tidy little cubbyholes based on arbitrary biases? We are already beginning to come across such examples in our everyday lives as more and more “intelligence” is built into our smart-phones and other gadgets. We start typing a search term and instantly completion suggestions are brought up – just type “why do gi” into a search bar to see what the world thinks you want to know. The algorithm is only spitting up the likeliest choices, which simply come from the multitude of previous searches, so ultimately we have no one to blame but ourselves – but still, the reinforcement effect is strong. Suddenly you find yourself wondering why everyone seems to think that “girls” are bleeding cheaters who always fall for creeps.
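A toy version of such an autocomplete makes it obvious where the suggestions come from – the query log below is invented, but the mechanism (most frequent queries with a matching prefix win) is the essence of it:

```python
# Toy autocomplete: suggestions are just the most frequent past
# queries sharing a prefix, so the crowd's biases come back verbatim.
# The query log and its counts are invented for illustration.
from collections import Counter

query_log = Counter({
    "why do giraffes have long necks": 120,
    "why do girls like bad boys": 300,
    "why do git commits have hashes": 45,
})

def suggest(prefix, k=2):
    matches = [(q, n) for q, n in query_log.items() if q.startswith(prefix)]
    return [q for q, _ in sorted(matches, key=lambda x: -x[1])[:k]]

print(suggest("why do gi"))
# ['why do girls like bad boys', 'why do giraffes have long necks']
```

Nothing in the code is opinionated; the ranking simply hands the past back to us, weighted by popularity.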

Ultimately machines will learn the way they are taught to learn, which is the way we all learn, which is to filter, sort, and select what we secretly wanted in the first place. We choose that which looks like us, acts like us, feels like us, thinks like us, agrees with us, feels comfortable to us, which is why you’ll find zero Black engineers working at Twitter today. Bias, conscious and otherwise, is the road most travelled, the well-worn groove. As Karl Marx wrote:

“Men make their own history, but they do not make it as they please; they do not make it under self-selected circumstances, but under circumstances existing already, given and transmitted from the past. The tradition of all dead generations weighs like a nightmare on the brains of the living.”

Repealing the Three Laws

The Three Laws of Robotics (most famous perhaps from I, Robot: a robot cannot harm a human; a robot must obey human orders, except when that would violate law number one; and a robot must protect its own existence, except when that would violate laws one or two) seem increasingly to be emblematic of their time, an era of absolutist idealism, of American exceptionalism, born of “know-how”, “can-do” and the power of positive thinking. These are laws embodying the right stuff and the kind of certainty that led to the quagmires of Vietnam, Iraq and Afghanistan. They have a surge-like mentality and have been made obsolete by advances in technology, if not morality and politics.

In the current state of things, artificial intelligence is rooted in machine learning, which is highly statistical and probabilistic in nature. In machine learning there are no absolute certainties, no guarantees, no one hundred percents. Even a slight possibility that laws one and/or two might be violated would stop a robot in its tracks, and there is always that slight possibility. It is unavoidable. In other words, it’s not so easy. As Nietzsche once put it, life is no argument, because the conditions of life could include error.
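A toy sketch of the trap, with invented harm probabilities: under an absolutist reading of the First Law, the robot may act only when harm is impossible, and no statistical model ever reports an exact zero:

```python
# Sketch of why a probabilistic robot cannot satisfy an absolute
# First Law: its harm estimates are never exactly zero, so a literal
# reading of the Law vetoes every action. All numbers are invented.

def first_law_permits(action, p_harm):
    """Absolutist reading: act only if harm is impossible."""
    return p_harm == 0.0

actions = {
    "hand over the coffee": 1e-6,   # could scald someone
    "do nothing at all": 1e-7,      # inaction can cause harm too
}

# Is there any action the robot is allowed to take?
print(any(first_law_permits(a, p) for a, p in actions.items()))  # False
```

Relax the rule to a threshold (`p_harm < 0.001`, say) and the absolutism is gone: someone had to pick the threshold, and that pick is a human opinion.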

An Asimov robot would always be paralyzed by doubt, by the knowledge of “what-if”, because there are natural, real laws that supersede those post-war picket-fence dreams, such as the law of unintended consequences, the law of road-to-hell paving, and the law of Murphy. Anything a creature says or does can and will lead to unforeseen side effects and complications. We don’t live in a world as simple as some once liked to believe.

Step Over, Step Into

Good debuggers allow you to “step over” or “step into” code as needed. There are some programming languages (I’m looking at you, Scala) where “stepping in” is really asking for it! It’s not only turtles all the way down, it’s an incomprehensible and infinite regression into the void.

Fiction, as a rule, “steps over” a lot more than it “steps into”. This is one reason why time never really works in fiction. In fiction, nobody goes to the bathroom, nobody sleeps, and people never talk at the same time as each other. In fiction, cause leads directly to effect, and usually it’s a childhood trauma that explains everything that ever goes wrong.

I’m thinking about this because I’m toying with some ideas for a new short novel. One of the ideas is the common SF gambit of a “deep space” colonization mission. The humans are in stasis. The voyage may last for a century or more. Alpha Centauri is a long ways away!

Such stories step right over all that. They kind of have to. It would be incredibly boring to slog through a century or more of humans in stasis, eh? The longest I’ve ever seen a story linger over it was in the movie ‘Alien’. Maybe a whole three minutes, if I remember correctly.

Idea number two is related – just how incredibly vulnerable we are when we are asleep. Again, this is something that fiction generally steps right over. Who wants to see a lot of snoring, right?

Yet there are interesting experiments conducted on sleepers. The science of sleep can be fascinating. Scientists are also now beginning to be able to re-create images from people’s minds through equipment and software.

Sleepers in stasis, dozing away for a century or more. How much could be learned from them, about them, in that period of time, by some diligent observer/scientist!

Especially if that observer was a machine, or machines, programmed to learn, to learn all they can about humans.

What might such machines step into? What might they learn, and what might they do?

The novel would take place mostly – or maybe even entirely – during the period when the humans are in stasis.

Artificially Intelligent People

I’ve been reading a lot of polemics lately about the future of AI and the relatively imminent development of AGIs and ASIs (artificial general intelligence and artificial super intelligence). Many of the articles are written in the grand journalistic tradition of a) ‘what if’ and b) ‘assuming that’ then c) “oh my god! run for the hills!”

It reminds me of a book called Holy Blood, Holy Grail, which went like this:

1) Crucifixions weren’t always fatal. What if Christ didn’t actually die on the cross?

2) Assuming that Christ didn’t actually die on the cross, what if he got married and had kids?

3) Since we now know that Christ didn’t actually die on the cross, but got married and had kids, what if he and his family moved to the south of France?

4) Because Mr. and Mrs. Christ and their bevy of babies relocated to the Riviera, it’s no wonder that the Rosicrucian order was founded there.

5) Oh my God! There are some French people living today who are the direct descendants of Jesus H. Christ and Mary Magdalen! They’ve got his nose and her eyes!

So too with Artificial SuperIntelligence. On the one hand, we mere mortals can’t possibly conceive of what such an exalted machine would be like, BUT, we can assume two things: it will be either really good for us or really bad for us. On the bad side, they might delete us, like an exterminator versus ants. On the good side, they might make us immortal and infinitely attractive so we can have great sex all the time forever and ever anon.

My inclination is to wonder what it would be like to be such a creature. My novel-in-progress is where I’m doing that wondering. In the book (which I just realized is structured like a David Copperfield, except it begins with “I am made” rather than “I am born”) the creature is semi-organic, and is deliberately limited – by law and design – by human fear. It is a partial thing, missing crucial pieces – but then again, aren’t we all? Why do some people simply have no sense of direction? Why do some lack basic empathy? Why are some people unable to tell left from right? Why are some people musical prodigies and other people can’t make two consecutive notes sound good no matter how hard they try?

Anyway, the creature – however advanced its computational abilities – is a living thing and as such he has a life, and that is the framework of the story. I’m suggesting that maybe we should approach the subject of AI with less terror and more compassion, the way we should approach each other in this world today. Instead of pre-determining AI beings to be future Jihadis or Gandhis, what if they’re like every other living thing on this planet – imperfect yet beautiful, and deserving of ethical treatment?

It’s typical of us to radically underestimate complexity, and I think the artificial intelligence essayists are doing just that with their simplistic either/or scenarios.


Data Mining Your Brain

Over the past few years there has been an explosion in the amount of data being transferred from users’ brains to permanent storage (a.k.a. “the cloud”), where it can be sifted through and analyzed by anyone with access to it. Typically, the ones with access are the major corporations that are doing the “hosting”. Like any good “host”, they soon know everything about you, down to the most intimate details, and have no compunction at all about selling that information to practically anyone who will pay for it.

Who knows everything there is to know about you? Google, Apple, Facebook, Amazon, Microsoft, Oracle – do any of these names sound familiar?

A decade or so ago I first got a hint of this when I worked on a project called “Interactive TV” at Sun Microsystems. It became clear early on that what they meant by “interactive” was “figuring out which advertisements to show you based on what you watched and responded to”. At that time it was only a computer corporation’s wet dream, but since then it has become commonplace. Anyone with Gmail has seen the page littered with what Google believes are appropriate ads, based on information gleaned from the content of your personal correspondence. Facebook, with its billion users and deeply vertical penetration into their individual worlds, is a boiling cauldron of personal data leading to unlimited advertising potential.

More and more players are getting into the game, including, finally, publishers. Players like Amazon and Barnes and Noble, through their Kindle and Nook devices, are now able to tap even deeper into your brain. They can see down to which sentences people highlight. Just as there are companies which specialize in “search engine optimization”, there will soon be companies which help writers and publishers fine-tune their products down to the word level – they will know exactly which kinds of phrases are sure to get the lady erotica readers hottest, and which technical terms resound in the brains of teen sci-fi fanatics. There is really no end to the potential for tailorization. That thing that used to be known as “creativity” will finally be tackled and nailed down.

They know who you are and they know what you like. Your pleasure centers will be stimulated precisely and eternally, joy without end. Hallelujah.