Moral Dilemma Dialogue: Ex Machina
In the Moral Dilemma Dialogue series we take one film and examine a moral dilemma presented within it. Two people take opposing sides of the dilemma and argue their case. Each argument is written sight-unseen, without knowledge of the opposing side’s approach, and no rebuttal is offered. That is where you, the audience, come in.
The Dilemma
The sanctity and treatment of artificial intelligence.
Background
The science fiction genre has long imagined humanity’s fate if the machines we control began to control us. Not in the sense of addiction, but in the sense of actually achieving consciousness in a way that might mimic that of a human being. Artificial intelligence.
Ex Machina presents as realistic a view on this topic as any film that came before it. With Ava (Alicia Vikander) we get an AI that is at once curious, contemplative, and conniving. With Caleb (Domhnall Gleeson) we get a sympathetic man fighting against himself to feel something for Ava. And with Nathan (Oscar Isaac) we get a cautious yet ambitious creator who knows the risks, and takes them. In this Moral Dilemma Dialogue, contributors Mark and Blake will examine the proper treatment of artificial intelligence from two perspectives: Christian and humanistic. Both Mark and Blake offer these perspectives for the purpose of starting a discussion, and don’t necessarily hold these views themselves.
Perspective One: Mark – A Christian argument for the sanctity and treatment of artificial intelligence
When Ava finally escapes Nathan’s house at the end of Ex Machina, she walks out into a sprawling, rural forest. It’s a quiet and contemplative scene as artificial life examines organic life. Creation examines creation. In these moments we are presented most directly with our moral dilemma: How do we define life and how are we to treat it? The answer for the Christian is rooted in how scripture defines what God actually created.
When I was a kid I built a model airplane. Truthfully, it wasn’t a very good one, but it was intact and it was my creation. It was a plastic model of a fighter jet, the kind that could break the sound barrier with its speed and shoot down enemy fighters without breaking a sweat. I was pretty proud of it. Then my friend suggested we burn bullet holes into it.
It seemed like a great idea at first, because to us little dudes a battle-worn jet was all kinds of awesome. But I began to have regrets. I had put my time and effort into this creation and I wasn’t sure I wanted to mar its body. I wondered if I was being too sensitive, but there was something I felt deeper than that. I didn’t want to destroy what I had built (really we were just talking about augmenting it, like a piece of art, but it still hurt my heart). What I felt as a kid I now recognize as my desire for stewardship over the earth.
Scripture makes it pretty clear that Christians are called to be stewards of God’s creation. From Adam in the garden to the psalms of David and the letters of Paul, there are many examples of how and why we should take care of what God has created. The question, then, is what did God create? The general definition of artificial is something made by man. So if man makes something, does it fall under our stewardship?
The Apostle Paul raises a very interesting point about this in Colossians 1:16 when he says, “For by him all things were created, in heaven and on earth, visible and invisible, whether thrones or dominions or rulers or authorities – all things were created through him and for him.” Visible and invisible? That sure covers a whole lot of things. In context this refers to what is natural and what is supernatural (or above nature). But as Paul also said in 1 Corinthians, for now we see through a glass, darkly. So what is visible, or natural, is fuzzy at best. Suffice it to say, if everything visible and invisible is created by God, then somewhere in there it has to include man’s ability to make things with his hands.
The narrative drive of the film is that of a Turing test – determining whether something artificial exhibits intelligence indistinguishable from a human’s. Unless we’re living in the world of Toy Story, my model airplane didn’t pass that test. So it’s really just defined as property, or a physical construct without life. In essence, this is the central conflict between Nathan and Caleb in the film – whether Ava is life or property. Is she a living creature, or a model airplane?
Scripture illustrates that whether Ava is defined as a living being or a physical construction, we are to treat her with reverence and be a steward over her, because she falls under the creation of God. So if artificial intelligence is eventually created in our world, the question of whether it is life or not is irrelevant. Just as I am called to treat my neighbor’s property with care and not destroy it, I would be called to treat an artificial intelligence with the same care.
I believe the film takes a similar stance on this issue. When Caleb quotes the Bhagavad Gita to Nathan – “I am become death, destroyer of worlds” – in reference to Nathan’s intention to dismantle Ava, the line in the sand is pretty clear. The final image, again, illustrates reverence for creation. Though the film explores the nature of consciousness at length, it ultimately posits that consciousness is irrelevant. It follows a view similar to what scripture shows us God calls us to – that all creations on the earth have value and should be respected and cared for, whether it be a tree, a robot, or a model airplane. I sure do miss that little plane.
Perspective Two: Blake – A humanistic argument for the sanctity and treatment of artificial intelligence
Ex Machina showcases AI (Ava and Kyoko) so advanced that even though we see and know their robotic makeup and man-made circuitry, they strike us as human in their responses, mannerisms, personality, and so on. If their robotic makeup is covered up – as it is with Kyoko through most of the movie, and with Ava as the film concludes – it would be fairly difficult to distinguish real humanity from artificial intelligence. Unless we understand their makeup and origin, our treatment of them must fit within the practical realm of simple humanist principles: humans should be treated with respect and have rights based on common human needs and civility. If we believe them to be human, then we treat them as such. But what happens when their actual origin is revealed?
The film shows the creator of the AI to be rather apathetic in his treatment of what he has created. He keeps Kyoko in silent servitude and imprisons Ava in a glass-walled cell where his experiments with Ava and Caleb proceed. Because Ava and Kyoko are so human-like, the film’s tension resides in the mystery of Nathan’s true intentions and whether his treatment and manipulation of everyone involved is done with good intentions or not. As the film moves on, Nathan’s behavior and intentions are gradually revealed and become harder to trust. Our sympathies move heavily toward the AI and how they are being treated. The film is attempting to humanize artificial robotics and intelligence, to solid effect. You would have to be heartless not to be in Ava’s corner by the end of the film. However, the question remains: does highly advanced AI qualify as human and therefore merit the rights that are considered inalienable to all humanity?
That is where the ethical constraints depend heavily on our epistemological knowledge – the what and how of knowing – of their makeup. At the end of Ex Machina, we find Ava in the real world, her identity hidden from it. We assume she will attempt to live a normal life as a real human, free from her prison. However, what happens when real people out in the world discover her identity? Can AI showcase human-enough surface characteristics to transcend our knowledge of its lack of natural birth, lack of “living” anatomy, and the (perhaps real) threat of its ability to take jobs, among other apocalyptic concerns espoused by people like Elon Musk? That’s the crux of the issue.
There is a thread of humanism that strikes me as tempered – not given to doomsaying – while maintaining a legitimate concern about the future of humanity amid an increasing focus on AI. One such voice is Nicholas Carr, who was profiled in a 2015 article in The Atlantic:
“[His] complaint centers on visions of our future that don’t include us. It’s a return of Humanism, in other words, in the eye of a storm about increasingly personable machines. Carr and [Jaron] Lanier are notoriously skeptical of technological progress generally, but neither are Luddites (Lanier is one of the original pioneers of virtual-reality software). What they hold in common is a firm belief that “artificial intelligence” is a misnomer—real intelligence comes from human minds—and a conviction that a fascination with computer intelligence tends to diminish and even imperil human intelligence.”
In a very black-and-white way, everyone can distinguish human from AI. One is born, biologically, from other people; the other is made of non-living parts and given life by means of electricity. One can be killed if blood is lost or organs are hindered; the other can only be “killed” if it is turned off or dismantled. One is flesh and blood; the other is a glorified man-made tool. It takes human intelligence to create AI, so no matter how “real” they appear, their creator’s mind is still primary. The superiority and complexity of the human mind is hard to deny, no matter how impressive the processing power of a robot.
Since AI does not have the same physical, personal, and social needs that actual humans do, granting it status as human rather than as a technological tool to be used by humans could blind us – more than we already are – to the actual human needs of those around the world who lack water, food, jobs, and a means of living. Their lives depend on those rights to food, water, and shelter, whereas AI can survive on nothing so long as its machinery is intact and in working order. It is fair to say that blurring the line between humanity and AI may find its logical conclusion in the increasing dehumanization of people. If AI can be considered human, then the meaning of “human” is depleted.
However, there are a couple of nuances to this question that the film brings up. The people who encounter Ava in the real world do not know she is AI. If she can maintain that level of anonymity and people cannot tell she is not human, then I would say that humanist principles would need to be invoked until their epistemological knowledge shifts – it’s only practical that you treat someone you think is human as a human. The other nuance is that none of this means we should treat AI poorly. Just like any other tool or technology, if we beat it up, use it for things it wasn’t meant for, or do not take care of it, it will break and be of no use to us. Hence we should take care of AI, but we should be careful not to hold it in the same esteem as actual human life. Actual biological human life should always take precedence.
Conclusion
This is where the true dialogue part of this post comes in. Where do you fall on this dilemma? Does Mark make a good point that all things fall under God’s creation, and that whether a thing is conscious or not doesn’t change our call to stewardship and respect? Or does Blake have the correct view, that the epistemology of artificial intelligence distinguishes it from humanity, and that it therefore should not receive the same rights and treatment as a human being? Please vote in the poll, and let your voice be heard by commenting below.