The Three Laws of Robotics (most famous perhaps from I, Robot – a robot cannot harm a human, a robot must obey human orders except when that would violate law number one, and a robot must not harm itself, except when that might violate laws one or two) seem increasingly to be emblematic of their time, an era of absolutist idealism, of American exceptionalism, born of “know-how”, “can-do” and the power of positive thinking. These are laws embodying the right stuff and the kind of certainty that led to the quagmires of Vietnam, Iraq and Afghanistan. They have a surge-like mentality and have been made obsolete by advances in technology, if not in morality and politics.
In the current state of things, artificial intelligence is rooted in machine learning, which is highly statistical and probabilistic in nature. In machine learning there are no absolute certainties, no guarantees, no one hundred percents. Even a slight possibility that law number one or two might be violated would stop a robot in its tracks, and there is always that slight possibility. It is unavoidable. In other words, it’s not so easy. As Nietzsche once put it, life is no argument, because the conditions of life might include error.
An Asimov robot would always be paralyzed by doubt, by the knowledge of “what-if”, because there are natural, real laws that supersede those post-war picket-fence dreams, such as the law of unintended consequences, the law of road-to-hell-paving, and the law of Murphy. Anything a creature says or does can and will lead to unforeseen side effects and complications. We don’t live in a world as simple as some once liked to believe.