I’ve been reading a lot of polemics lately about the future of AI and the relatively imminent development of AGIs and ASIs (artificial general intelligence and artificial super intelligence). Many of the articles are written in the grand journalistic tradition of a) ‘what if’, b) ‘assuming that’, and then c) ‘oh my god! run for the hills!’
It reminds me of a book called Holy Blood, Holy Grail, which went like this:
1) Crucifixions weren’t always fatal. What if Christ didn’t actually die on the cross?
2) Assuming that Christ didn’t actually die on the cross, what if he got married and had kids?
3) Since we now know that Christ didn’t actually die on the cross, but got married and had kids, what if he and his family moved to the south of France?
4) Because Mr. and Mrs. Christ and their bevy of babies relocated to the Riviera, it’s no wonder that the Rosicrucian order was founded there.
5) Oh my God! There are some French people living today who are the direct descendants of Jesus H. Christ and Mary Magdalen! They’ve got his nose and her eyes!
So too with Artificial SuperIntelligence. On the one hand, we mere mortals can’t possibly conceive of what such an exalted machine would be like, BUT, we can assume two things: it will either be really good for us or really bad. On the bad side, they might delete us, like an exterminator versus ants. On the good side, they might make us immortal and infinitely attractive so we can have great sex all the time, forever and ever, amen.
My inclination is to wonder what it would be like to be such a creature. My novel-in-progress is where I’m doing that wondering. In the book (which I just realized is structured like David Copperfield, except it begins with “I am made” rather than “I am born”) the creature is semi-organic, and is deliberately limited – by law and design – by human fear. It is a partial thing, missing crucial pieces – but then again, aren’t we all? Why do some people simply have no sense of direction? Why do some lack basic empathy? Why are some people unable to tell left from right? Why are some people musical prodigies while others can’t make two consecutive notes sound good no matter how hard they try?
Anyway, the creature – however advanced its computational abilities – is a living thing, and as such it has a life; that is the framework of the story. I’m suggesting that maybe we should approach the subject of AI with less terror and more compassion, the way we should approach each other in this world today. Instead of pre-determining AI beings to be future Jihadis or Gandhis, what if they’re like every other living thing on this planet – imperfect yet beautiful, and deserving of ethical treatment?
It’s typical of us to radically underestimate complexity, and I think the artificial intelligence essayists are doing just that with their simplistic either/or scenarios.