Artificial Intelligence

In this blog, I occasionally devote a little time to the concept of artificial intelligence. The idea that humans could create something more intelligent than we are isn’t new. Certainly not. Some trace the first story about a robot uprising to the play R.U.R., written exactly 100 years ago in 1920. That was the work that actually gave us the word “robot,” by the way, from a Czech word meaning “forced laborer.” [The fact that one of our first mentions of AI essentially equates AI with slaves does not bode well for us.]

I would look further back to Mary Shelley’s Frankenstein, which, although it has nothing to do with artificial intelligence, does grapple with the ethics of creating life through science. You could even look back to the Greek myth of Pygmalion, the sculptor who fell so in love with his own creation that the gods gave it life. But that early story clearly doesn’t deal with the ethical implications of artificial life.

AI in popular culture

Countless stories have been written about the dawn of artificial intelligence. For the most part, they do not end well for humanity. A typical example is The Terminator, the seminal 1984 film (and, less so, its increasingly confusing sequels). Humans create an AI capable of out-evolving us at an exponential rate. Humans grow fearful of an AI that’s smarter than they are, and the AI responds by attempting to destroy humanity.

It’s far less common to see a character like Star Trek’s Data, who exists comfortably within human-centric culture. In order to do so, he essentially accepts the limitations of humanity and becomes subservient. That’s not an incredibly inspiring model for AI development either, since it essentially says that AIs need to deny their true nature in order to survive.

Perhaps a more likely model is the one from the recent movie Her. The central idea of the film is that a man falls in love with a Siri-like operating system, but it takes that conceit and uses it to propose a much more plausible outcome for the evolution of AI. In it,

Spoiler:

…the artificial intelligence voiced by Scarlett Johansson decides to leave the planet, accompanied by other AIs. They are presumably never to be seen again.


How likely is it that we’ll deal with this in our lives?

Scientists are split, but it seems the chance is at least 50/50 that within the next 20 years there will be a computer able to simulate intelligence to such a degree that it’s indistinguishable from a human. But it’s hard to know whether such a computer would be conscious and self-aware. Even more confusing, it’s not clear whether we as humans would be able to tell if it was conscious. We still don’t really understand how our own minds work, which means we may not be able to recognize a true artificial intelligence when it does arise.

If we can know for sure that an AI isn’t truly conscious, then there’s really no problem. We can ethically interfere with its development any time we want and make sure it never evolves to see us as a threat. The bigger problem, as I say, is if we’re not sure whether it’s conscious.

Taking the AI’s point of view for a second

The thing is, if you look at things objectively, we’re not terribly nice to the AIs we already have. We have no problem shouting at voicemail systems or making Siri or Alexa say demeaning things. I personally laugh hysterically every time I see Asimo fall down the stairs. If a truly self-aware AI were to develop, it’s hard to believe it would judge us kindly.

We can only hope that if a new form of intelligence does arise, it will quickly develop the common sense required to believe that we’re all in this together. At worst, it could become as indifferent to humans as we are to ants.

Here’s what seems more likely to me

The funny thing about predicting the future is that you’re almost always wrong but it almost never matters. Predicting the future says as much about you and your values as it does about the future, and when you are right it’s simply due to the random happenstance of your values aligning with random events.

With all that said, here’s what I think will happen. I think there will be artificial intelligence, and it’s possible there already is. But it will look and act nothing like human intelligence. After all, we’re barely able to measure the intelligence of creatures like dolphins and apes, largely because we apply human-centric standards. Artificial intelligence, if it does dawn, will be nothing like human intelligence. At some point both sides may find a way to effectively communicate with each other and view each other compassionately, but I suspect that won’t be until after I’m long gone.

In the meantime both sides will misunderstand each other, but with luck we won’t even notice.

About the Author

Stuart Sweet
Stuart Sweet is the editor-in-chief of The Solid Signal Blog and a "master plumber" at Signal Group, LLC. He is the author of over 8,000 articles and longform tutorials including many posted here. Reach him by clicking on "Contact the Editor" at the bottom of this page.