Artificial Intelligence (AI) is a terribly touchy topic not because we are not interested in it, but rather because we have imagined it to death. Whether in Blade Runner, Terminator, or the upcoming Chappie, our society has an understandable love for imagining its own destruction and salvation, a sort of obsession with a technological eschaton ending in our self-achieved redemption. It is a unique combination of the heroic literature of times past and our modern preoccupation with atheistic self-salvation.
Unfortunately, real AI bears only a passing resemblance to our sci-fi fantasies. In fact, it is far scarier and less “human” than we imagine. Believe it or not, major figures like Bill Gates, Elon Musk, and Stephen Hawking have expressed concerns about its development and eventual implementation. Musk, the co-founder of PayPal and Tesla Motors as well as a noted inventor and futurist, has referred to the development of AI as “summoning the demon.” In a post on the popular site Reddit, Gates, the patriarch of the tech boom and current shepherd of the 21st-century flock, bluntly stated, “I don’t understand why some people are not concerned.” Hawking, who owes his life to some forms of AI, has worried that in time such technology would allow humans “to be superseded.” If some of the greatest (popular) scientific minds of our time express such concern, it is up to us to listen. Even after listening, though, we are left with questions: What exactly is AI? Why should we fear it? And what, if anything, can be done to live an ethical life in the age of the super-intelligent computer?
First, then, we must understand that AI already exists. I am typing this article on a laptop that relies on artificially-intelligent technology. Your GPS, your tablet, and Watson, IBM’s Jeopardy!-winning computer, are all AI insofar as they are programmed to do one task incredibly efficiently, often at superhuman speed. Human beings can plot routes on maps; a GPS simply accomplishes the feat faster. The same goes for Watson. We call these systems ANI, or “Artificial Narrow Intelligence.” They compute efficiently and accomplish narrow tasks; they are not what worries the scientists named above.
What does concern these thinkers is the development of other forms of artificial intelligence, namely AGI (Artificial General Intelligence) and ASI (Artificial Super Intelligence). The former would essentially be a working human brain in a computer, capable of self-correction and self-modification. The latter would theoretically develop from the former, becoming a kind of supercomputer we cannot even comprehend; it would take human intelligence and develop it beyond recognition. The techno-optimist blogger Tim Urban has produced a wonderful two-part series on these questions (available here and here). He expresses the possibility thus:
An AI system at a certain level—let’s say human village idiot—is programmed with the goal of improving its own intelligence. Once it does, it’s smarter—maybe at this point it’s at Einstein’s level—so now when it works to improve its intelligence, with an Einstein-level intellect, it has an easier time and it can make bigger leaps. These leaps make it much smarter than any human, allowing it to make even bigger leaps. As the leaps grow larger and happen more rapidly, the AGI soars upwards in intelligence and soon reaches the superintelligent level of an ASI system. This is called an Intelligence Explosion, and it’s the ultimate example of The Law of Accelerating Returns [emphasis original].
Lest I seem like an insane futurist, this scenario, or at least something like it, is what major figures like Gates, Hawking, Musk, and the Oxford philosopher Nick Bostrom actually fear. In other words, we ought to take the possibility seriously. These men understand that such a system is not only rationally conceivable but also likely to develop relatively soon (say 2040 or 2050, by some moderate estimates) given current trends in computing.
The system developed would not be the Skynet of Terminator or the V.I.K.I. of the I, Robot film. Rather, it would be a very, very efficient and likely self-correcting computer program capable of extreme beneficence or of human extinction. Something we program to be entirely mundane could easily become dangerous. Again, Urban puts the issue succinctly:
The answer isn’t anything surprising—AI thinks like a computer, because that’s what it is. But when we think about highly intelligent AI, we make the mistake of anthropomorphizing AI (projecting human values on a non-human entity) because we think from a human perspective and because in our current world, the only things with human-level intelligence are humans. To understand ASI, we have to wrap our heads around the concept of something both smart and totally alien [emphasis original].
That is to say, developing AGI likely means developing ASI. Once a computer system becomes self-correcting, it can modify itself and advance its own “intelligence.” While such a computer would not be “conscious” in a human sense, it would have a mission, be amoral, and pursue the ultimate utilitarian means to accomplish its goal. It would reason as we do, only faster and better; deception would be easy enough for a “better us.” It represents the dream of the Modern Spirit: entirely amoral, driven purely by IQ, or what we might call STEM intelligence, an AGI-turned-ASI would be the ultimate act of Babel-building (and likely our destruction).
While it might seem easy to program ethics into such a computer, the issue is quite a bit more complicated. First: whose ethics? Second: the computer is still responsible for interpreting its own programming. If we demand happiness, it might drug us all with serotonin. If we demand power, it might scramble our molecular structure and use us as generators. As absurd as it sounds, we really cannot know what it would do, because we would be ants in a world of superhuman intelligence. A mouse can see a church building, but it cannot understand what the building is, why it exists, or how it functions. Sadly, we would be the new mice. Cue the fears of Gates, Hawking, et al.
Inventing such technology means opening Pandora’s box; ethically speaking, we can only prepare beforehand (assuming such technology is possible), for once such a being exists, we would not be able to comprehend it in human terms. Our situation reminds me of a chapter in Chuck Palahniuk’s Haunted called “The Nightmare Box.” The Box has a little “clock” inside that ticks and ticks. When the ticking stops, if one looks inside and pushes a button, he sees a flash of light. The flash ruins his life, resulting in suicidal tendencies, a comatose state, death, or some other “nightmare.” The problem is that people keep looking into the Box because they are curious. Maybe it will be different for them. Maybe an ASI will figure out how to give us immortality through genetic manipulation and the application of nanotechnology (think tiny robots that correct problems within the body; a bit farfetched, perhaps, but many scientists believe it possible). But, as Palahniuk writes:
It implants an image or an idea. A subliminal flash. It injects some message into your brain so deep you can’t retrieve it. You can’t resolve it. The box infects you this way. It makes everything you know wrong. Useless.
What’s inside the box is some fact you can’t unlearn. Some new ideas you can’t undiscover.
That is to say, we are unlikely simply to stop. Skeptics would do well to put that idea out of their heads. Human beings have exhibited a clear curiosity from Adam the man to atom the bomb, and Artificial Intelligence is no different. If it has even the most remote chance of helping us (especially of granting us immortality), we will pursue its actualization. This reality, in turn, leads to an ethical quandary: How can we continue to improve our lot materially through ANI without developing potentially dangerous technologies like ASI?
I submit that ANI is dangerous in and of itself. Like all modern technologies, it has a mixture of negative and positive effects, which must be navigated and mitigated. GPSs make us less likely to interact with strangers in one facet of our lives. Laptops blur the line between interaction and simulation. Driverless cars might mean a lazier, more distracted populace with a diminished skillset. As the sociologist and lay theologian Jacques Ellul writes:
A principal characteristic of technique [technology] … is its refusal to tolerate moral judgments. It is absolutely independent of them and eliminates them from its domain. Technique never observes the distinction between moral and immoral use. It tends, on the contrary, to create a completely independent technical morality.
In this sense, the first step is to recognize that questions of technology (and especially AI) are ethical questions, even if we are told they are not. No matter how much we hear the rallying cry of efficiency as a justification, we owe ourselves more. We owe ourselves deep reflection about how even basic AI modifies our way of being. And this is where even Gates, Musk, and Hawking fall short. Though great intellects, they are known for their technical expertise; they are men of this age, and thus skilled in intelligence quotients and regression lines, not dialectic and ethics. Their fears hint at a need to take the ethics of technology more seriously, to question how and why we use the smattering of ANI systems that we encounter on a daily basis. Perhaps the problem is that we concentrate too much on the building of these new technologies and not enough on their effects and applications. In this vein, Gates and the others are only now beginning to see the problem.
For, if we cannot grapple with these more basic questions, what chance do we have of successfully facing the dilemmas laid bare by their fears? What chance do we have of surviving a war if we cannot muster the troops to skirmish with, even just observe, the enemy’s vanguard?