Machines for the Ethical Treatment of People

As I continue the adventure of writing my current story on Wattpad – Machine, Learning – I keep coming up against readers’ expectations that in the future machines will have had “ethics” programmed into them, somehow. The details escape us. I’ve come across nothing that shows, practically, just how these so-called ethics are going to be introduced into machines – just that it had to happen, so of course it just happened. There is bound to be a learning curve, however, and so there are bound to be stories set during this period. Where are those stories? Why not write one?

In today’s world, it isn’t ethics that prevents a self-parking car from running over a child, it’s geometry. Any vertical object within range of the camera is enough to halt the motion of the vehicle. A self-driving car avoids a person for the same reason it avoids colliding with a fire hydrant. Is there a geometrical component to morality?
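To make the point concrete, here is a minimal sketch – in Python, with invented names like Obstacle and stopping_distance_m, not any real vehicle API – of what such a geometric halt check amounts to:

```python
# A minimal sketch of "geometry, not ethics": halt for ANY vertical object
# within stopping range. All names and thresholds here are illustrative.
from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float  # distance ahead of the bumper, in meters
    height_m: float    # apparent height above the road surface

def should_halt(obstacles: list[Obstacle],
                stopping_distance_m: float = 3.0,
                min_height_m: float = 0.1) -> bool:
    """Halt if any sufficiently tall object is within stopping range.

    Note what is absent: the check never asks what the object IS.
    A child and a fire hydrant are both just geometry here.
    """
    return any(o.distance_m <= stopping_distance_m and o.height_m >= min_height_m
               for o in obstacles)

# A 1.1 m object two meters ahead halts the car; its moral status never comes up.
print(should_halt([Obstacle(distance_m=2.0, height_m=1.1)]))  # True
```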

That a machine would not “willingly” harm a person raises the question of what is meant by harm. Is it merely physical damage? What if the machine is programmed to diagnose psychological conditions? What if the person is unhappy, and the program can tell that this unhappiness manifests (is it cause or effect?) as a chemical insufficiency (of, let us say, serotonin), and that the deficiency can be addressed by medication – is it ethical for a machine to alter the chemical balance in a human brain in order to induce a state the human would experience as “less unhappiness”? What if there are bad memories causing PTSD? Can the machine erase those memories? Would it be ethical?

Who gets to make that decision? On what grounds? How does this program work?

Who decides the value of happiness versus, perhaps, the important life lessons that may be learned by not being so fucking happy all the time? What kind of world will it be when no one experiences anything but the perfectly balanced chemical condition deemed optimal by the short-sighted dweebs who wrote the computer programs, trained on that eternally optimizing data set?

The thing is, computer programs do exactly what they are programmed to do, as long as they don’t run short of resources such as RAM, virtual memory, or CPU time. If there is to be an ethical program, it will be a program written by humans, with whatever understandings those humans have about the ethics they prefer – and is not ethics just another word for “opinions”?

We are experiencing a rash of such ethics this week, after the Daesh bombings of nightclubs and other civilian venues in Paris. The Western world is enraged while continuing to ignore the same kinds of bombings of the same kinds of civilian venues in Beirut, in Baghdad, and in other locales apparently not considered to be of the same ethical value to the Western world. Who is going to write these ethical programs? The same people who write the programs that guide the drones and missile launchers that mercilessly “bomb the shit” (to use Donald Trump’s phraseology) out of the civilians who happen to dwell in the cities currently occupied by Daesh?

Many, if not most, of the problems in the human world stem from an inability to distinguish fantasy from reality. We insist on believing in our fictions, sometimes to a fanatical degree, as with those guided by an insane insistence on their own interpretations of the words of their prophets, and sometimes to a much lesser degree, as with those who “believe” that in the future people will engage in hand-to-hand combat using light sticks, or that machines will obviously and easily be programmed to behave “ethically”. In reality, machines can never behave other than in the ways programmed by the humans who design them, and you have only to pay the slightest attention to the world around us to realize there is no such thing as an ethics – there is only contradiction, complexity, and a hell of a lot of wishful thinking in magical make-believe.

Bias, Conscious and Otherwise

My job had me go through a training session about “unconscious bias in interviewing”, which I found interesting in ways both expected and unexpected. I expected to be reminded of biases involving appearance, gender, age, voice, accent, nationality and so forth, but there were also some notions particular to interviewing that apply to other situations as well. For instance, there is a tendency to weigh the last part of the interview more heavily – a stumble over a question at the end is more impactful than an earlier stumble from which the candidate later recovered, though every moment should count equally. Also, one well-answered question can override a multitude of poorly answered ones – this is called the “halo effect”. And we compare the person we are currently talking to with the people we most recently interviewed – that recency should carry no extra weight, but there it is, the “contrast effect”. It’s important to be aware of every kind of bias, yet there are so many! It’s hard to keep track.

We build our biases into our systems, often just as unconsciously as we apply them in our daily lives or in situations like interviews. I was recently working on a machine learning project to determine, by means of sensors and software, whether a residence is currently occupied. Motion sensors relay data throughout the day and night to a backend service, and a machine learning algorithm applies its initial model – gained from a training set – to the incoming information, producing probabilities for each occupancy state. If little or no motion is detected throughout the day, the algorithm concludes that no one is at home; given the same data throughout the night, it decides that the occupants are sleeping. You can see a number of built-in biases here – that humans are nocturnal creatures, that they have day jobs, and that those jobs are outside the home. It’s also worth noting that the time zone reported in the data is critical. It’s astounding to me how many of the bugs in such systems come down to errors involving time zones! How can the program adapt to homes where someone works a graveyard shift, or some other non-standard routine? How can every exception possibly be accounted for without either severely diluting the criteria or creating configuration confusion?
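To show what I mean, here is a much-simplified sketch of that day/night bias, in Python. The function names, thresholds, and default time zone are all invented for illustration – the real project used a trained model, not hand-written rules:

```python
# A toy version of the occupancy logic described above, with its biases
# (and its time-zone pitfall) made explicit. Not the production system.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def occupancy_guess(motion_events_last_hour: int,
                    utc_timestamp: float,
                    home_tz: str = "America/Los_Angeles") -> str:
    # This conversion is where many real-world bugs live: the timestamp
    # must be interpreted in the HOME's local time, not the server's.
    local = datetime.fromtimestamp(utc_timestamp, tz=timezone.utc) \
        .astimezone(ZoneInfo(home_tz))
    is_night = local.hour >= 22 or local.hour < 6

    if motion_events_last_hour > 0:
        return "occupied"
    # The built-in bias: identical silence means two different things
    # depending on the clock, because the model assumes occupants sleep
    # at night and hold daytime jobs outside the home.
    return "occupied (sleeping)" if is_night else "away"
```

Feed it the graveyard-shift household and it gets things exactly backwards, which is the point.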

If we can’t help but build some biases into our machine learning systems, then considerations about the future of artificial intelligence have to include such flaws. Sophisticated computer programs are just as liable to “leap” to conclusions – based on their limited experience, their sample sizes, and the biases built into their training data sets – as we humans are every single day. Even a setting as routine and commonplace as a job interview is filled to the brim with pre-loaded implications. What will we think of Artificial Intelligences that are inherently conformist, stuffing people into tidy little cubbyholes based on arbitrary biases? We are already beginning to come across such examples in our everyday lives as more and more “intelligence” is built into our smartphones and other gadgets. We start typing a search term and completion suggestions instantly appear – just type “why do gi” into a search bar to see what the world thinks you want to know. The algorithm is only spitting up the likeliest choices, which simply come from the multitude of previous searches, so that ultimately we have no one to blame but ourselves – but still, the reinforcement effect is strong. Suddenly you find yourself wondering why everyone seems to think that “girls” are bleeding cheaters who always fall for creeps.
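A toy version of that reinforcement loop fits in a few lines. The query strings and counts below are invented, but the mechanism – popularity in, popularity out – is the whole trick:

```python
# Suggestions ranked purely by how often past users finished the same
# prefix. Nothing here "thinks"; it only echoes the crowd back at you.
from collections import Counter

past_queries = Counter({               # invented counts, for illustration
    "why do girls like bad boys": 1200,
    "why do giraffes have long necks": 900,
    "why do girls always fall for creeps": 700,
})

def suggest(prefix: str, k: int = 3) -> list[str]:
    matches = [(q, n) for q, n in past_queries.items() if q.startswith(prefix)]
    return [q for q, _ in sorted(matches, key=lambda x: -x[1])[:k]]

print(suggest("why do gi"))  # the crowd's questions become your questions
```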

Ultimately machines will learn the way they are taught to learn, which is the way we all learn, which is to filter, sort, and select what we secretly wanted in the first place. We choose that which looks like us, acts like us, feels like us, thinks like us, agrees with us, feels comfortable to us, which is why you’ll find zero Black engineers working at Twitter today. Bias, conscious and otherwise, is the road most travelled, the well-worn groove. As Karl Marx wrote:

“Men make their own history, but they do not make it as they please; they do not make it under self-selected circumstances, but under circumstances existing already, given and transmitted from the past. The tradition of all dead generations weighs like a nightmare on the brains of the living.”

Repealing the Three Laws

The Three Laws of Robotics (most famous perhaps from I, Robot – a robot cannot harm a human; a robot must obey human orders, except when that would violate law number one; and a robot must protect its own existence, except when that would violate laws one or two) seem increasingly emblematic of their time, an era of absolutist idealism and American exceptionalism, born of “know-how”, “can-do” and the power of positive thinking. These are laws embodying the right stuff and the kind of certainty that led to the quagmires of Vietnam, Iraq and Afghanistan. They have a surge-like mentality, and they have been made obsolete by advances in technology, if not in morality and politics.

In the current state of things, artificial intelligence is rooted in machine learning, which is highly statistical and probabilistic in nature. In machine learning there are no absolute certainties, no guarantees, no one hundred percents. Even a slight possibility that laws one or two might be violated would stop a robot in its tracks, and there is always that slight possibility. It is unavoidable. In other words, it’s not so easy. As Nietzsche once put it, life is no argument, for the conditions of life might include error.
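Here is a sketch of the problem, assuming a stand-in harm model (the function harm_probability below is invented; the only thing it shares with any real model is that its output is never exactly zero):

```python
# Why a probabilistic robot freezes under an absolute First Law:
# a learned model estimates risk somewhere in (0, 1), and an absolute
# law tolerates only zero -- which never arrives.
import random

def harm_probability(action: str) -> float:
    # Illustrative stand-in for whatever model the robot consults.
    return max(1e-9, random.random() * 0.01)

def permitted_under_first_law(action: str) -> bool:
    return harm_probability(action) == 0.0  # absolute means exactly zero

for action in ["open the door", "hand over the coffee", "do nothing at all"]:
    print(action, "->", permitted_under_first_law(action))
# Every line prints False -- even inaction carries risk under law one.
```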

An Asimov robot would always be paralyzed by doubt, by the knowledge of “what-if”, because there are natural, real laws that supersede those post-war picket-fence dreams: the law of unintended consequences, the law of road-to-hell-paving, and the law of Murphy. Anything a creature says or does can and will lead to unforeseen side effects and complications. We don’t live in a world as simple as some once liked to believe.

Machine, Learning

For the past 25 years or so I’ve been a human learning to program computers. It’s been my day job for much of that time. My night jobs have included writing fictions of various stripes. I’m currently working on one about computers learning to program humans.
It’s a work in progress on Wattpad under the working title “Machine, Learning”, and so far it consists of log file entries. The computer is controlling a star seed spaceship carrying colonists to a distant planet decades away. While the humans lie in stasis in capsules, encased in a minty fresh goo, two programs – a main and a backup operating system – set out to try to understand their cargo. It’s an adventure for them, but also for me, as I try to apply my experiences in learning about an alien form of being. It’s an experiment that could easily fall flat, but then that’s true of all attempted art.

Step Over, Step Into

Good debuggers allow you to “step over” or “step into” code as needed. There are some programming languages (I’m looking at you, Scala) where “stepping in” is really asking for it! It’s not only turtles all the way down, it’s an incomprehensible and infinite regression into the void.
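For readers who don’t live in a debugger, here is the distinction in a trivial Python example (the functions are made up; the two moves are real):

```python
# Paused at the call below, "step over" executes total_price() as one hop;
# "step into" descends into its body -- and from there into anything IT
# calls, turtles all the way down.
def tax(amount: float) -> float:
    return amount * 0.08

def total_price(amount: float) -> float:
    return amount + tax(amount)  # stepping into here lands you in tax()

price = total_price(100.0)       # step over, or step into?
print(price)
```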

Fiction, as a rule, “steps over” a lot more than it “steps into”. This is one reason why time never really works in fiction. In fiction, nobody goes to the bathroom, nobody sleeps, and people never talk over each other. In fiction, cause leads directly to effect, and usually it’s a childhood trauma that explains everything that ever goes wrong.

I’m thinking about this because I’m toying with some ideas for a new short novel. One of the ideas is the common SF gambit of a “deep space” colonization mission. The humans are in stasis. The voyage may last for a century or more. Alpha Centauri is a long ways away!

Such stories step right over all that. They kind of have to. It would be incredibly boring to slog through a century or more of humans in stasis, eh? The longest I’ve ever seen a story linger over it was in the movie ‘Alien’ – maybe a whole three minutes, if I remember correctly.

Idea number two is related – just how incredibly vulnerable we are when we are asleep. Again, this is something that fiction generally steps right over. Who wants to see a lot of snoring, right?

Yet there are interesting experiments conducted on sleepers. The science of sleep can be fascinating. Scientists are now also beginning to be able to re-create images from people’s minds through equipment and software.

Sleepers in stasis, dozing away for a century or more. How much could be learned from them, about them, in that period of time, by some diligent observer/scientist!

Especially if that observer was a machine, or machines, programmed to learn, to learn all they can about humans.

What might such machines step into? What might they learn, and what might they do?

The novel would take place mostly – or maybe even entirely – during the period when the humans are in stasis.

BeRated is not Peeple

For one thing, we haven’t raised millions of dollars! Peeple is a new service where people rate other people. BeRated is superficially similar, although on BeRated you can rate anything whatsoever, including people. Differences include:

On BeRated, everyone is anonymous

On BeRated, there are no comments allowed, simply ratings

On BeRated, you are not the product. Your data is not collected, stored or sold. BeRated does not know who you are and does not care

Already there are concerns about bullying, especially on Peeple, where identities are confirmed, and comments are allowed. BeRated is 100% total bullshit (it may even say so on the front page) so there’s no need to feel bullied. People who give other people low ratings are assholes. Simple as that.

But it’s like I always say: nowadays you can still be ahead of your time, but only by a matter of minutes.

A General Drama of Pain

Historical Fiction has a lot in common with Science Fiction, especially now, as our present reality becomes ever more remote from the past. I am continually reminded of William Gibson’s declaration that “the past is more difficult to imagine than the future”.

I just finished reading The Mayor of Casterbridge by Thomas Hardy, and it might as well have been science fiction. The world of grain merchants in early 19th-century England is as foreign to me as any made-up world. Hardy’s language is full of slang and terminology utterly meaningless to me as a 21st-century urban American, and yet the conclusion is as familiar as any TV show, as when he sums up the novel by saying that “happiness [is] but the occasional episode in a general drama of pain”.

It’s a fine soap opera, as good as anything currently acclaimed in this “golden age of television” – lots of intrigue, surprises, coincidences, shocks, cruelty both intentional and not, good things happening to bad people and bad things happening to good. It could win an Emmy award for “general drama”.

He has a passage describing two bridges in the town, one nearer the center than the other, to which different kinds of unhappy people go to contemplate suicide – even this part of life being divided by class and circumstance – and how their fantasies correlate with their realities:

“There and thus they would muse; if their grief were the grief of oppression they would wish themselves kings; if their grief were poverty, wish themselves millionaires; if sin, they would wish they were saints or angels; if despised love, that they were some much-courted Adonis of county fame.”

I once had an odd encounter that has stuck with me for decades. I was walking to work one day, hating my job, when a homeless madman stopped me on the street, blocking my way, insisting on showing me what he had in his brown paper bag. He then told me that when I got to work I should get a Bible and turn to a passage in Ecclesiastes. I wondered how many people’s jobs have Bibles handy – and how he knew, if he did know, that I worked in a bookstore. (I later considered that, as a homeless madman wandering the streets with nothing else to do, he had probably seen me at work at some point, so it wasn’t just a random stoppage on the street: he knew who I was and had planned this intervention.) The passage reads: “Of the making of many books there is no end, and much study is a weariness of the flesh.”

Ain’t it the truth. Every day there are more and more books coming out; the hits don’t stop; it never stops … In my RSS feed I’m continually informed about “the five best songs we heard this week” or “the twelve books you absolutely have to read now” or “the ten best places in the world to visit this year” or some such onslaught of newer, better and best.

It was rather comforting to read The Mayor of Casterbridge, a book not newer, not constantly touted, not celebrated in any list, but just a d—– fine novel (as he would have put it).