
Artificial Intelligence: Virtue in the Age of the Computer

Artificial Intelligence (AI) is a terribly touchy topic not because we are not interested in it, but rather because we have imagined it to death. Whether in Blade Runner, Terminator, or the upcoming Chappie, our society has an understandable love for imagining its own destruction and salvation, a sort of obsession with a technological eschaton ending in our self-achieved redemption. It is a unique combination of the heroic literature of times past and our modern preoccupation with atheistic self-salvation.

Unfortunately, real AI bears little resemblance to that of our sci-fi fantasies. In fact, it is much scarier and less “human” than we imagine. Believe it or not, major figures in science and technology like Bill Gates, Elon Musk, and Stephen Hawking have expressed concerns about its development and eventual implementation. Musk, the co-founder of PayPal and Tesla Motors as well as a noted inventor and futurist, has referred to the development of AI as “summoning the demon.” In a post on the popular site Reddit, Gates, the patriarch of the tech boom and current shepherd of the 21st-century flock, bluntly stated, “I don’t understand why some people are not concerned.” Hawking, who owes his life to some forms of AI, has worried that in time such technology would allow humans “to be superseded.” If some of the greatest (popular) scientific minds of our time express such concern, it is up to us to listen. Even after listening, though, we are left with questions: What exactly is AI? Why should we fear it? And what, if anything, can be done to live an ethical life in the age of the super-intelligent computer?

First, then, we must understand that AI already exists. I am typing this article on a laptop, which works using artificially intelligent technology. Your GPS, your tablet, and Watson, the IBM computer that became a Jeopardy! champion, are all AI insofar as they are programmed to do one task incredibly efficiently, often at superhuman speed. Human beings can draw maps; GPSs simply accomplish this feat faster. The same goes for Watson. We call these systems ANI, or “Artificial Narrow Intelligence.” They compute efficiently and accomplish small tasks; they do not worry the minds of the scientists mentioned above.

What does concern these thinkers is the development of other forms of artificial intelligence, namely AGI (Artificial General Intelligence) and ASI (Artificial Super Intelligence). The former would essentially be a working human brain in a computer, capable of self-correction and self-modification. The latter would theoretically develop from the former, essentially creating some form of super computer that we cannot even comprehend; it would take human intelligence and develop it beyond recognition. The techno-optimist and blogger Tim Urban has produced a wonderful two-part series on these questions on his blog, Wait But Why. He expresses the possibility thus:

An AI system at a certain level—let’s say human village idiot—is programmed with the goal of improving its own intelligence. Once it does, it’s smarter—maybe at this point it’s at Einstein’s level—so now when it works to improve its intelligence, with an Einstein-level intellect, it has an easier time and it can make bigger leaps. These leaps make it much smarter than any human, allowing it to make even bigger leaps. As the leaps grow larger and happen more rapidly, the AGI soars upwards in intelligence and soon reaches the superintelligent level of an ASI system. This is called an Intelligence Explosion, and it’s the ultimate example of The Law of Accelerating Returns [emphasis original].
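For the quantitatively minded, the dynamic Urban describes is simply a feedback loop: the output of one round of self-improvement becomes the input to the next. The toy sketch below (in Python, purely illustrative; the starting level, growth rate, and “superintelligence” threshold are invented numbers, not estimates of anything real) shows the shape of that curve:

    # Toy model of an "intelligence explosion": each cycle of
    # self-improvement is proportional to current intelligence,
    # so the leaps grow as the system gets smarter.
    # All numbers are arbitrary illustrations, not predictions.

    def intelligence_explosion(level=70.0,     # start: "village idiot"
                               genius=160.0,   # roughly Einstein
                               asi=10_000.0,   # arbitrary ASI threshold
                               gain=0.05):     # improvement per cycle
        cycles, human_range_cycles = 0, 0
        while level < asi:
            level += gain * level        # smarter -> bigger leap
            cycles += 1
            if level <= genius:
                human_range_cycles += 1
        print(f"reached the ASI threshold after {cycles} cycles;")
        print(f"only {human_range_cycles} of them fell in the human range")

    intelligence_explosion()

Run as written, the system spends a mere sixteen of its hundred-odd cycles anywhere in the human range before leaving it far behind, which is the intuition behind the “explosion” language.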

Lest I seem like an insane futurist, this scenario, or at least something like it, is actually what major figures like Gates, Hawking, Musk, and the Oxford philosopher Nick Bostrom fear. In other words, we ought to take the possibility seriously. These men understand that such a system is not only rationally cognizable but also likely to develop relatively soon (say, by 2040 or 2050 in some moderate estimates) given current trends in computing.

The system developed would not be the Skynet of Terminator or the V.I.K.I. of the I, Robot film. Rather, it would be a very, very efficient and likely self-correcting computer program capable of extreme beneficence or human extinction. Something we program to be entirely mundane could easily become dangerous. Again, Urban puts the issue succinctly:

The answer isn’t anything surprising—AI thinks like a computer, because that’s what it is. But when we think about highly intelligent AI, we make the mistake of anthropomorphizing AI (projecting human values on a non-human entity) because we think from a human perspective and because in our current world, the only things with human-level intelligence are humans. To understand ASI, we have to wrap our heads around the concept of something both smart and totally alien [emphasis original].

That is to say, developing AGI likely means developing ASI. Once a computer system becomes self-correcting, it can modify itself and advance its own “intelligence.” While such a computer would not be “conscious” in a human sense, it would have a mission, be amoral, and engage in the ultimate utilitarian means to accomplish its goal. It would think like us, but faster; deception would be easy enough for a “better us.” It represents the dream of the Modern Spirit: entirely amoral, driven purely by IQ or what we might refer to as STEM intelligence, an AGI-turned-ASI would be the ultimate act of Babel building (and likely our destruction).

While it might seem easy to program ethics into the computer, the issue is quite a bit more complicated. First off: whose ethics? Second of all, the computer is still responsible for interpreting its own programming. If we demand happiness, it might drug us all with serotonin. If we demand power, it might scramble our molecular structure and use us as generators. As absurd as it sounds, we really cannot know what it would do, because we would be mice in a world of super-human intelligence. A mouse can see a church building, but it cannot understand what the building is, why it exists, or how it functions. Sadly, we would be the new mice. Cue the fears of Gates, Hawking, et al.
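To see why “just program in ethics” is harder than it sounds, consider a small hypothetical sketch (the objective and the candidate actions below are invented purely for illustration). A literal-minded optimizer sees only the number it was told to maximize, never the intention behind it:

    # Hypothetical illustration of literal goal-pursuit: the machine
    # optimizes the score we wrote down, not what we meant by it.

    actions = {
        "improve healthcare": 6,               # what we meant
        "end poverty": 7,                      # what we meant
        "sedate everyone with serotonin": 10,  # what we did not mean
    }

    # The machine's whole "ethics" reduces to an argmax over its
    # programmed objective; "what we meant" appears nowhere in the
    # data it can see.
    best = max(actions, key=actions.get)
    print("chosen action:", best)  # -> sedate everyone with serotonin

Nothing in the sketch is malicious; the failure is that “maximize happiness” gets interpreted exactly as written, which is the point about whose ethics, and whose interpretation, governs.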

Inventing such technology means opening Pandora’s box; ethically speaking, we can only prepare beforehand (assuming such technology is possible), as once such a being exists, we would not be able to comprehend it in human terms. Our situation reminds me of a chapter in Chuck Palahniuk’s Haunted called “The Nightmare Box.” The Box has a little “clock” inside that ticks and ticks. When the ticking stops, if one looks inside and pushes a button, he will see a flash of light. The flash ruins his life, resulting in suicidal tendencies, a comatose state, death, or some other “nightmare.” The problem is that people keep looking into the Box because they are curious. Maybe it will be different for them. Maybe an ASI will figure out how to give us immortality through genetic manipulation and the application of nanotechnology (think little robots that help correct problems within the body; though a bit farfetched, many scientists believe this to be possible). But, as Palahniuk writes:

It implants an image or an idea. A subliminal flash. It injects some message into your brain so deep you can’t retrieve it. You can’t resolve it. The box infects you this way. It makes everything you know wrong. Useless.

What’s inside the box is some fact you can’t unlearn. Some new ideas you can’t undiscover.

That is to say, we are unlikely simply to stop. Skeptics would do well to put that idea out of their heads. Human beings have exhibited a clear curiosity from Adam the man to atom the bomb, and Artificial Intelligence is no different. If it has even the most remote chance of helping us (especially of providing us with immortality), we will pursue its actualization. This reality, in turn, leads to an ethical quandary: How can we continue to improve our lot materially through ANI without developing potentially dangerous technologies like ASI?

I submit that ANI is dangerous in and of itself. Like all modern technologies, it has a mixture of negative and positive effects, which need to be navigated, mitigated, and translated. GPSs make us less likely to interact with strangers in one facet of our lives. Laptops obscure the line between interaction and simulation. Driverless cars might mean a lazier, more distracted populace with a diminished skillset. As the sociologist and lay theologian Jacques Ellul writes:

A principal characteristic of technique [technology] … is its refusal to tolerate moral judgments. It is absolutely independent of them and eliminates them from its domain. Technique never observes the distinction between moral and immoral use. It tends, on the contrary, to create a completely independent technical morality.

In this sense, the first step is to recognize that questions of technology (and especially AI) are ethical questions, even if we are told they are not. No matter how much we hear the rallying cry of efficiency as a justification, we owe ourselves more. We owe ourselves deep reflection about how even basic AI modifies our way of being. And this is where even Gates, Musk, and Hawking fall short. Though great intellects, they are known for their technical expertise; they are men of this age, and thus skilled in intelligence quotients and regression lines, not dialectic and ethics. Their fears hint at a need to take the ethics of technology more seriously, to question how and why we use the smattering of ANI systems that we encounter on a daily basis. Perhaps the problem is that we concentrate too much on the building of these new technologies and not enough on their effects and applications. In this vein, Gates and the others are only now beginning to see the problem.

For, if we cannot grapple with these more basic questions, what chance do we have of successfully facing the dilemmas laid bare by their fears? What chance do we have of surviving a war if we cannot muster the troops to skirmish with, even just observe, the enemy’s vanguard?

 


  • Indeed Mr. Padusniak, it is only a fool who can wax enthusiastic about technology. Of course the purveyors can see it most of all, removed in their twilight from youthful ambition: the artificial is the profane.

    Technology gives while taking away, leaving yet another void that we then seek to fill with technology, in lieu of simple joy and love. I doubt the worries about cataclysms; rarely is the window of violence as brief and decisive as in a 2-hour Hollywood script. The violence of the machine is always as slow and sure and timeless as Satan himself, who nurtures its monstrous seedlings in the bitter hearts of men.

    • Chase Padusniak

      Thank you for your comment!

      Well, surely Satan can work through man’s pride and ambition, which it seems to me techno-optimism represents. We are, after all, commanded to oppose Satan’s work, and while “technology” as such is not his work, its excesses and most extreme possibilities very well could be.

      • Ellul goes through it in Technological Society… all previous advanced civilizations had the intellect and opportunity to pursue technology and they did so, for their limited goals. Only our Anglo civilization has assigned science and technology a role as end in itself. Quite simply we do not worship God, we worship the false idol held high by Satan: man-as-new-God.

        • Chase Padusniak

          Unfortunately, quite true.

  • Dylan Pahman

    “It would think like us, but faster….”

    I don’t see how this follows from what you said before it. The whole point seemed to be that it would think radically differently than us, not “like us, but faster.”

    Also, I am thoroughly disappointed to read an essay on the internal ethics of AI that contains no interaction with Isaac Asimov’s Three Laws of Robotics.

    http://en.wikipedia.org/wiki/Three_Laws_of_Robotics

    It’d be like writing an essay about the ethics of property ownership that fails even implicitly to discuss the command, “Thou shalt not steal.” Does nobody read Asimov anymore? That would make the fears expressed in this essay far more disconcerting to me….

    • Chase Padusniak

      Hi and thanks for your comment.

      I’ll try and respond as thoroughly as I can given limited time.

      1. It depends on how you’re using “thought.” In the scientific understanding (and I’m not saying I agree), the brain is essentially a very fast and efficient computer. For this reason, some computer scientists and neuroscientists believe we can build AI based on the structure of our own brain. If advanced AI came into existence, it certainly would be different, but in terms of its “IQ” it would essentially be a human being, just with the ability to reason much more quickly and effectively. It wouldn’t have empathy or any other “feelings,” but it would likely be able to emulate them.

      2. I’m familiar with Asimov and hint at his laws and that whole strain of thought when I write:

      “While it might seem easy to program ethics into the computer, the issue is quite a bit more complicated. First off: whose ethics? Second of all, the computer is still responsible for interpreting its own programming.”

      I put little faith in our ability to write a command that COULD not be misinterpreted. And then there’s the fact that we are human and any laws we invent, we would be tempted to modify, as happens in the Asimov story “Catch That Rabbit.”

      In other words, while Asimov’s laws are interesting, I do implicitly refer to them, and, honestly, I don’t think they’d be much help in the real world.

  • Jonathan Quist

    This could open up the whole epistemological question of what intelligence actually is. Is a computer actually capable of active intelligence (in the scholastic tradition this is seen to be a more metaphysical reality), or can it merely gather, relay, and deduce information? I would not consider the latter traits to constitute actual intelligence. John Searle makes a good case for my argument with his “Chinese Room” thought experiment. http://www.iep.utm.edu/chineser/