Oversimplification

I used to say that the biggest source of problems in the human world was our propensity to confuse fantasy with reality, but now I think I have to update that notion and replace it with this: the biggest source of problems in the human world is our capacity for oversimplification. We oversimplify everything, and there’s a sound biological basis for that (although I am clearly oversimplifying right here, right now). We cannot absorb all that we perceive, so we filter, and we filter out nearly everything. It might not be over-estimating to say that we filter out more than 99 percent of what our senses perceive, leaving us with only what our brains and bodies deem essential. “Mindfulness” (that which leads to a state of bovine acceptance, according to a recent polemic on the subject) can make you more aware of all the details you’re ignoring in the world around you all the time. But what if our filtering is largely erroneous? What if we are leaving out the truly important? What if we are simply not capable of handling the vast complexity of everything in the world?

I know I’m not. In the writing of my current novel, I’m acutely aware that I’m oversimplifying the hell out of the whole field of artificial intelligence. I read an article the other day that covered so many subtleties and nuances it made my head spin. There are alarmists and there are non-alarmists about AI, but there seem to be very few who are both, and there are good reasons on all sides. So much complexity has to go into it! In my day job I work with a vast open-source house of cards, composed of many, many configurable components; it seems hopeless sometimes to come up with just the right combination of settings, and anyway, the whole stack is going to change out from under me sooner than I think, probably sooner than I have a really solid grasp on what’s already right in front of me. There will be new and better technologies, just as freaky and unstable and unproven, but with so much potential. This will also be true of any and all future AI software – extremely complex and tentative throughout its infinite iterations. There won’t be that one day when everything is settled and the thing exists – AI will be a long and difficult process. In a novel, you have to glide along the surfaces, only hinting at the depths, or else your writing becomes tedious and the story bogs down.

Oversimplification leads to a state of thinking that we “know”, because we naturally discount all that we’ve ignored, all that we’ve filtered out – out of mind, out of sight, as it were. We end up believing in things because really thinking is just too hard; it hurts the brain. Faith is that which soothes the mind; it’s like a spa for consciousness. On almost every issue there are reasonable facets from every angle, but we stake our ground, we plant our flags, we hold on for dear life, because ambiguity is untenable.

In my novel I’m about to introduce the concept of Partial Law, but I think we already have it in place. We see it when we believe it. We’re already hanging on as best we can. Oversimplification was our first and greatest invention.
