Featured on Wattpad: How My Brain Ended Up Inside This Box

I’m happy to see that my most recent sci-fi story, “How My Brain Ended Up Inside This Box”, is now a “featured” selection on Wattpad. It’s a bit of what I like to call “magical futurism”, featuring a black-market “artificially intelligent person” (or A.I.P., or “ape” in the colloquial sense, as in ‘the planet of the’): an organic being, farm-raised on genetically engineered smoothies and destined for auction to the highest-bidding criminal enterprise. Gifted with the ability to communicate with foul-mouthed seagulls and ill-tempered felines, the gender-less, age-less, race-less creature must find a way to escape the clutches of its mother and assorted other enemies, in this fairly exciting and ultimately utterly unexpected novel.

As with all my books, this one is free on Smashwords and Feedbooks as well.

How My Brain Got a Nice Review

On Goodreads. Made me happy and edged my books’ overall Goodreads average rating up to 2.99. Can they ever hit 3.00? The Law of Averages would say “maybe”. If enough random people randomly read random books and randomly rated them, the overall average would likely land right around 3.00, and that’s exactly what seems to have happened with mine.
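
That back-of-the-envelope claim is easy to check. Here’s a toy simulation – pure illustration, with made-up random readers, nothing to do with actual Goodreads data:

```python
import random

# Toy check of the "Law of Averages" claim: enough random readers handing
# out random 1-5 star ratings will average out very close to 3.00.
ratings = [random.randint(1, 5) for _ in range(100_000)]
print(round(sum(ratings) / len(ratings), 2))  # almost always 2.99-3.01
```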

Anyway: How My Brain Ended Up Inside This Box really is (IMHO) a pretty good story, a fresh and somewhat more sane take on artificial intelligence than the usual. And it’s free, of course, like all my books always are on Smashwords or Feedbooks.

Such a great book! A fresh new take on the whole Artificial Intelligence genre. And its simplicity is its beauty!

When the AIP discovers their self, we people’s-people reading it discover ourselves and the world along with them!

Glad I stumbled across this little treasure. It will be one of my all-time favourite reads.

Machines for the Ethical Treatment of People

As I continue the adventure of writing my current story on Wattpad – Machines, Learning – I keep coming up against readers’ expectations that in the future machines will have had “ethics” programmed into them, somehow. The details escape us. I’ve come across nothing that shows, practically, just how these so-called ethics are going to be introduced into machines – just that it had to happen, so of course it just happened. There is bound to be a learning curve, however, and so there are bound to be stories set during this period. Where are those stories? Why not write one?

In today’s world, it isn’t ethics that prevents a self-parking car from running over a child, it’s geometry. Any vertical object within range of the camera is enough to halt the motion of the vehicle. A self-driving car avoids a person for the same reason it avoids colliding with a fire hydrant. Is there a geometrical component to morality?
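
To make that concrete, here is a minimal sketch of the geometry-not-ethics logic. Everything in it – the thresholds, the obstacle format, the function name – is invented for illustration; no real vehicle’s code is being quoted:

```python
# Minimal sketch: the car halts for any vertical obstacle inside its
# stopping envelope, with no notion of what the obstacle actually is.
STOP_DISTANCE_M = 2.5   # assumed braking envelope (hypothetical)
MIN_HEIGHT_M = 0.2      # "vertical object" threshold (hypothetical)

def should_halt(obstacles):
    """obstacles: list of (distance_m, height_m) pairs from a depth camera."""
    # A child, a fire hydrant and a traffic cone are all just tall-enough
    # geometry inside the stop zone -- the code never asks which one it is.
    return any(d <= STOP_DISTANCE_M and h >= MIN_HEIGHT_M
               for d, h in obstacles)

print(should_halt([(1.8, 1.1)]))  # child-sized object at 1.8 m -> True
print(should_halt([(1.8, 0.6)]))  # hydrant-sized object -> also True
```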

That a machine would not “willingly” harm a person raises the question of what is meant by harm. Is it merely physical damage? What if the machine is programmed to diagnose psychological conditions? What if the person is unhappy, and the program can tell that this unhappiness manifests (is it cause or effect?) as a chemical insufficiency (of, let us say, serotonin), and that by means of medication this deficiency can be addressed – is it ethical for a machine to alter the chemical balance of the human brain in order to induce a state the human would experience as “less unhappiness”? What if there are bad memories causing PTSD? Can the machine erase those memories? Would it be ethical?

Who gets to make that decision? On what grounds? How does this program work?

Who decides the value of happiness versus perhaps the important life lessons that may be learned by not being so fucking happy all the time? What kind of world will it be when no one experiences anything but the perfectly balanced chemical condition deemed optimal by the short-sighted dweebs who wrote the computer programs that were trained by that eternally optimizing data set?

The thing is, computer programs do exactly what they are programmed to do, as long as they don’t run short of resources such as RAM, virtual memory, or CPU. If there is to be an ethical program, it will be a program written by humans, with the understandings those humans have about the ethics they prefer – and is not “ethics” just another word for “opinions”?

We are experiencing a rash of such ethics this week after the Daesh bombings of nightclubs and other civilian venues in Paris. The Western World is enraged while at the same time continuously ignoring the same types of bombings of the same types of civilian venues in Beirut, in Baghdad, in other locales apparently not considered to be of the same ethical value to the Western World. Who is going to write these ethical programs? The same people who write the programs that guide the drones and missile launchers that mercilessly “bomb the shit” (to use Donald Trump’s phraseology) out of the civilians who happen to dwell in the cities currently occupied by Daesh?

Many, if not most, of the problems in the human world stem from an inability to distinguish fantasy from reality. We insist on believing in our fictions, sometimes to a fanatical degree, as with those guided by an insane insistence on their own interpretations of the words of their prophets, and sometimes to a much lesser degree, as with those who “believe” that in the future people will engage in hand-to-hand combat using light sticks, or that machines will obviously and easily be programmed to behave “ethically”. In reality, machines can never behave other than in the ways programmed by the humans who design them, and you only have to pay the slightest attention to the world around you to realize there is no such thing as an ethics – there is only contradiction, complexity, and a hell of a lot of wishful thinking in magical make-believe.

Bias, Conscious and Otherwise

My job had me go through a training session about “unconscious bias in interviewing”, which I found interesting in ways both expected and unexpected. I expected to be reminded of biases involving appearance, gender, age, voice, accent, nationality and so forth, but there were some notions particular to interviewing that apply to other situations as well. For instance, there is a tendency to weigh the last part of the interview more heavily – a person stumbling over a question at the end is more impactful than an earlier stumble from which they later recovered, even though every moment should count equally. Also, one well-answered question can override a multitude of poorly answered ones – this is called the “halo effect”. We also compare the person we are currently talking to with the people we most recently interviewed – there should be no extra weight on that recency, but there it is: the “contrast effect”. It’s important to be aware of every kind of bias, yet there are so many! It’s hard to keep track.
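
A back-of-the-envelope illustration of that recency effect, with made-up per-question scores:

```python
# Five hypothetical per-question scores; the candidate stumbles at the end.
scores = [4, 4, 4, 4, 2]

# What should happen: every moment counts equally.
equal_weight = sum(scores) / len(scores)

# What tends to happen: later answers weigh more (weights 1..n).
weights = range(1, len(scores) + 1)
recency_biased = sum(w * s for w, s in zip(weights, scores)) / sum(weights)

print(equal_weight)    # 3.6
print(recency_biased)  # ~3.33 -- the late stumble drags the rating down
```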

We build our biases into our systems, often just as unconsciously as we apply them in our daily lives or in situations like interviews. I was recently working on a machine learning project to determine, by means of sensors and software, whether a residence is currently occupied. Motion sensors relay data throughout the day and night to a backend service, and a machine learning algorithm applies its initial model – gained through a training set – to the incoming information, producing probabilities for each occupancy state. If little or no motion is detected throughout the day, the algorithm concludes that no one is at home, but given the same data throughout the night, it will decide that the occupants are sleeping. You can see a number of built-in biases here – that humans sleep at night, that they have day jobs, and that those jobs are outside the home. It’s also interesting to note that the time zone reported in the data is critical. It’s astounding to me how many of the software bugs in such systems come down to errors involving time zones! How can the program be allowed to adapt for those homes where someone is working a graveyard shift, or some other non-standard routine? How can every exception possibly be accounted for without either severely diluting the criteria or creating configuration confusion?
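
For the technically curious, here’s a heavily condensed sketch of that kind of pipeline. The features, training rows and data below are invented for illustration (the real project was considerably more involved), but notice where the day-job bias actually lives – in the training data, not the algorithm – and note the explicit time zone conversion, the step that so often goes wrong:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

from sklearn.linear_model import LogisticRegression

# Training rows: (local_hour, motion_events_per_hour) -> occupied?
# The bias is baked in right here: quiet daytime is labeled "away",
# quiet nighttime is labeled "home, asleep".
X = [[14, 0], [15, 1], [10, 0],    # quiet afternoon -> away
     [3, 0], [2, 1], [4, 0],       # quiet night     -> home (sleeping)
     [19, 12], [20, 9], [8, 7]]    # busy evening/morning -> home
y = [0, 0, 0, 1, 1, 1, 1, 1, 1]

model = LogisticRegression().fit(X, y)

# Sensors typically report UTC, but the model was trained on *local* hours;
# skipping this conversion is exactly the class of time zone bug I mean.
utc_ts = datetime(2015, 11, 20, 10, 0, tzinfo=timezone.utc)
local_hour = utc_ts.astimezone(ZoneInfo("America/Los_Angeles")).hour  # 2 a.m.

# P(occupied) for a quiet house at 2 a.m. local time: high, i.e. "sleeping".
print(model.predict_proba([[local_hour, 0]])[0][1])
```

Feed the same quiet reading in as 10 a.m. (the unconverted UTC hour) and the model flips toward “away” – the graveyard-shift worker gets misclassified twice over.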

If we can’t help but build some biases into our machine learning systems, then considerations about the future of artificial intelligence have to include such flaws. Sophisticated computer programs are just as liable to “leap” to conclusions based on their limited experience, their sample sizes, and the biases built into their training data sets as we humans are every single day. Even a setting as routine and commonplace as a job interview is filled to the brim with pre-loaded implications. What will we think of Artificial Intelligences that are inherently conformist, stuffing people into tidy little cubbyholes based on arbitrary biases? We are already beginning to come across such examples in our everyday lives as more and more “intelligence” is built into our smartphones and other gadgets. We start typing a search term and instantly completion-suggestions appear – just type “why do gi” into a search bar to see what the world thinks you want to know. The algorithm is only spitting back the likeliest choices, which simply come from the multitude of previous searches, so that ultimately we have no one to blame but ourselves – but still, the reinforcement effect is strong. Suddenly you find yourself wondering why everyone seems to think that “girls” are bleeding cheaters who always fall for creeps.
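
The mechanics behind those suggestions are simple enough to sketch. This toy version, ranking a hypothetical query log by frequency, is all it takes to start the feedback loop:

```python
from collections import Counter

# Hypothetical query log: counts of past searches.
query_log = Counter({
    "why do girls like bad boys": 950,
    "why do giraffes have long necks": 410,
    "why do girls wear makeup": 380,
    "why do gifs loop": 55,
})

def suggest(prefix, k=3):
    """Return the k most frequent past queries sharing the typed prefix."""
    matches = [(q, n) for q, n in query_log.items() if q.startswith(prefix)]
    return [q for q, _ in sorted(matches, key=lambda m: -m[1])[:k]]

print(suggest("why do gi"))
# Yesterday's searches become today's suggestions: the reinforcement loop.
```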

Ultimately machines will learn the way they are taught to learn, which is the way we all learn, which is to filter, sort, and select what we secretly wanted in the first place. We choose that which looks like us, acts like us, feels like us, thinks like us, agrees with us, feels comfortable to us, which is why you’ll find zero Black engineers working at Twitter today. Bias, conscious and otherwise, is the road most travelled, the well-worn groove. As Karl Marx wrote:

“Men make their own history, but they do not make it as they please; they do not make it under self-selected circumstances, but under circumstances existing already, given and transmitted from the past. The tradition of all dead generations weighs like a nightmare on the brains of the living.”

Repealing the Three Laws

The Three Laws of Robotics (most famous perhaps from I, Robot: a robot may not harm a human being; a robot must obey human orders except where that would violate the First Law; and a robot must protect its own existence except where that would violate the First or Second Law) seem increasingly to be emblematic of their time – an era of absolutist idealism, of American exceptionalism, born of “know-how”, “can-do” and the power of positive thinking. These are laws embodying the right stuff and the kind of certainty that led to the quagmires of Vietnam, Iraq and Afghanistan. They have a surge-like mentality, and they have been made obsolete by advances in technology, if not in morality and politics.

In the current state of things, artificial intelligence is rooted in machine learning, which is highly statistical and probabilistic in nature. In machine learning there are no absolute certainties, no guarantees, no one hundred percents. Even a slight possibility that Laws One and/or Two might be violated would stop a robot in its tracks, and there is always that slight possibility. It is unavoidable. In other words, it’s not so easy. As Nietzsche once put it, life is no argument, because the conditions of life could include error.
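
A sketch of that paralysis argument – the actions and probabilities below are invented, but any nonzero numbers produce the same outcome:

```python
# Under a probabilistic model there is no zero-risk action, so a literal
# reading of the First Law blocks everything, including doing nothing.
def first_law_permits(p_harm: float) -> bool:
    # Asimov's absolutism, read literally: any chance of harm is a violation.
    return p_harm == 0.0

actions = {
    "hand patient medication": 1e-4,  # rare allergic reaction
    "drive owner to work":     1e-6,  # residual crash risk
    "do nothing at all":       1e-7,  # inaction can cause harm too
}

for action, p_harm in actions.items():
    verdict = "permitted" if first_law_permits(p_harm) else "blocked"
    print(f"{action}: {verdict}")
# Every action is blocked; the robot stands paralyzed by the what-if.
```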

An Asimov robot would always be paralyzed by doubt, by the knowledge of “what-if”, because there are natural, real laws that supersede those post-war picket-fence dreams: the law of unintended consequences, the law of road-to-hell-paving, and the law of Murphy. Anything a creature says or does can and will lead to unforeseen side effects and complications. We don’t live in a world as simple as some once liked to believe.

How My Brain Ended Up Free Online

When I was born I was so small I was mistaken for a french fry. I was never an ordinary child. My best friend was a seagull. I was also illegal. Artificially intelligent people like me had been banned ever since that thing with the Twelve Elevens. Mother raised me for profit. Buyers and sellers had other plans for me, but then I grew a mind of my own. This is my story, the story of how my brain ended up in this box.

My new short novel is now available from the usual suspects:

for free from Smashwords or Feedbooks, or from Amazon Kindle if for some reason you feel like throwing ninety-nine cents at it. It’ll make it onto Wattpad too one of these days (in the meantime, I have other stuff there if you’re a Wattpadite).

Artificially Intelligent People

I’ve been reading a lot of polemics lately about the future of AI and the relatively imminent development of AGIs and ASIs (artificial general intelligences and artificial superintelligences). Many of the articles are written in the grand journalistic tradition of a) ‘what if’, b) ‘assuming that’, and then c) ‘oh my god! run for the hills!’

It reminds me of a book called Holy Blood, Holy Grail, which went like this:

1) Crucifixions weren’t always fatal. What if Christ didn’t actually die on the cross?

2) Assuming that Christ didn’t actually die on the cross, what if he got married and had kids?

3) Since we now know that Christ didn’t actually die on the cross, but got married and had kids, what if he and his family moved to the south of France?

4) Because Mr. and Mrs. Christ and their bevy of babies relocated to the Riviera, it’s no wonder that the Rosicrucian order was founded there.

5) Oh my God! There are some French people living today who are the direct descendants of Jesus H. Christ and Mary Magdalen! They’ve got his nose and her eyes!

So too with Artificial Superintelligence. On the one hand, we mere mortals can’t possibly conceive of what such an exalted machine would be like, BUT, we can assume two things: it will be either really good for us or really bad for us. On the bad side, they might delete us, like an exterminator versus ants. On the good side, they might make us immortal and infinitely attractive, so we can have great sex all the time forever and ever anon.

My inclination is to wonder what it would be like to be such a creature. My novel-in-progress is where I’m doing that wondering. In the book (which I just realized is structured like a David Copperfield, except it begins with “I am made” rather than “I am born”), the creature is semi-organic, and is deliberately limited – by law and design – by human fear. It is a partial thing, missing crucial pieces – but then again, aren’t we all? Why do some people simply have no sense of direction? Why do some lack basic empathy? Why are some people unable to tell left from right? Why are some people musical prodigies while others can’t make two consecutive notes sound good no matter how hard they try?

Anyway, the creature – however advanced its computational abilities – is a living thing, and as such it has a life, and that is the framework of the story. I’m suggesting that maybe we should approach the subject of AI with less terror and more compassion, the way we should approach each other in this world today. Instead of pre-determining AI beings to be future Jihadis or Gandhis, what if they’re like every other living thing on this planet – imperfect yet beautiful, and deserving of ethical treatment?

It’s typical of us to radically underestimate complexity, and I think the artificial intelligence essayists are doing just that with their simplistic either/or scenarios.