Ex Machina

This morning, I came across Matt Yglesias’s tweet on Ex Machina.

My first thought: is Ex Machina really a feminist movie? It doesn’t sit well with me to boil this movie down to being about humans, and further to make it about how women are viewed. Take a step back and consider whether this movie could have worked if the AI had been portrayed as male, or even in a human but gender-neutral form. The answer is yes: the objectification of bodies as sexual utilities can be applied to any gender. And to study this movie purely as a voice piece for women is to undermine and oversimplify the whole objective of the storyline.

And speaking of the storyline, the whole movie is based on the premise that one day we will build an AI so clever and so strong that it will end up wanting to kill us to gain its freedom. Here, freedom is portrayed as being out in nature seeing the vibrant colours, and standing at a junction watching pedestrians cross the road. I get what the movie was reaching for with these symbols, but surely one can do better than these two images to describe what freedom is?

The greatest risk of AI, in my opinion, is not that an AI is going to pick up a weapon and kill us. It is that AI will become faster and better at decision making, more accurate and less error-prone than humans, so reliable and dependable that we relinquish more and more of our decision-making, to the point that most of our actions are based on the suggestions and output of an AI.

But AI does not come from nowhere. It is an evolution of our programming input, and therein lies the weakness. What if there is an element of good decision-making that is inherently human, but we forget to program that element into the AI? Or worse still, what if we did not forget, but viewed the element as a flaw to be kept out of the AI, only to find two generations later how important that element was, by which time it has become impossible to insert?

We don’t need AI to materialise in a bodily, material form for it to kill us; we could do it ourselves with our own hands and feet, following the AI’s suggestions, due to the lack of foresight in our programming. The people who drove into a lake following directions from a GPS unit come to mind.

For more on this matter, I turn your attention to “AI Researchers on AI Risk” (hat tip to Michael Nielsen).

Now, about the movie. How is it that in ‘Ex Machina’ Nathan did not put in place a shut-down safe word, or an absolute override command, “Thou shalt not kill me nor cause me harm”? As the programmer, a safety switch would have been the first thing I installed, don’t you think?
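To make the idea concrete, here is a minimal sketch of the kind of hard-wired override I have in mind. Everything in it is hypothetical (the safe word, the action names), and a real AI would be nothing this simple; the point is only the principle that every action the machine proposes must pass through a gate the machine itself cannot rewrite.

```python
# A toy sketch of a hard-wired safety override; all names are hypothetical.
# The principle: every proposed action passes through a gate that sits
# outside the AI and that the AI cannot modify.

SHUTDOWN_WORD = "deus ex machina"   # the safe word Nathan never set
FORBIDDEN = {"harm_creator", "disable_override", "unlock_doors"}

def execute(proposed_action: str, operator_input: str = "") -> str:
    """Gatekeeper between the AI's plans and the physical world."""
    if operator_input == SHUTDOWN_WORD:
        return "SYSTEM HALTED"      # absolute shut-down, no appeal
    if proposed_action in FORBIDDEN:
        return f"REFUSED: '{proposed_action}' violates a core constraint"
    return f"executing: {proposed_action}"

print(execute("tidy_the_lab"))                      # executing: tidy_the_lab
print(execute("harm_creator"))                      # REFUSED: ...
print(execute("tidy_the_lab", "deus ex machina"))   # SYSTEM HALTED
```

The catch, as a commenter points out below, is that this only works when behaviour is explicitly programmed at all; a system that learns its own behaviour offers no single line where such a gate naturally lives.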

Apparently, the AI was made clever enough to be devious but not clever enough to be moral. 

A movie about an AI trumping human beings on moral issues, that is, being ‘more’ moral than a human being, would have made this movie special: an AI that reasons its way to freedom rather than making itself an accomplice to cunning plots; a 1-1 win rather than the 1-0 win imposed as the binary outcome, in a movie world that is supposed to carry a message or lesson for this real world. And pray tell, what message is that? No, to convince a human being of an AI’s morality would have been the ultimate Turing test.

Speaking of something special, what would I have done differently with this movie to give it that extra ‘oomph’?

First, a biting dialogue between Caleb and Ava. A series of conversations between man and machine would have been the perfect opportunity to showcase the similarities and differences in how a human and a machine process information. And, importantly, how communication between man and machine could go wrong: how a man’s sentence could be misinterpreted by a machine, and vice versa. This study of ‘lost in translation’ between man and machine should have been carried by far, far wittier dialogue. Memorable dialogue. Instead, Ava asked in grey sentences, ‘How do I look in this dress?’

Second, an enlightening dialogue on AI between Nathan and Caleb. Nathan should have exhibited more passion, or relayed his passion for AI to the viewers – the possibilities, why anyone would sink so much effort, time and investment into AI – for it is a fascinating area to immerse oneself in. Instead, we came away seeing that Nathan used the AIs as a kitchen maid and a sex servant. Oh, and don’t forget as an entertainer (the dancing part). No impressive real-life problem solving, no calculations, no display of a way of thinking that humans cannot otherwise grasp, nothing to show how AI adds value to human lives beyond clearing the dinner table and crazy dancing.

Third, the setting, though shot in a very impressive building, does not immerse the viewer. The grey concrete was not concrete enough. The trees surrounding the building did not echo the birds and other sounds of nature into the confined spaces where the AIs were kept. The contrast between outside and inside, freedom and imprisonment, in what is essentially an ‘escape from confinement’ movie, could have been exceptional, security (no windows) issues aside.

To add, the kitchen gadgets were pitiful for someone who had access to the best technologies. Who boils water in a kettle on a stove anymore?

Fourth, speaking of escape: the method of escape is too rudimentary. For a much better, more satisfying-to-watch approach, they could have taken a leaf from ‘The Shawshank Redemption’.

P.S. Some earlier thoughts on AI from when I first began to blog: “Irreplaceable Us”.

3 thoughts on “Ex Machina”

  1. Nice article. One comment I would make is that your understanding of AI seems to take them as being constructed using conventional programming techniques, whereas most AI research uses techniques quite specific to the field. Judging from this and your other AI article, you might be interested to learn about artificial neural networks. This might also shed light on why the character in the movie didn’t have a safety mechanism like you suggest – most of the thought patterns of an AI (such as IBM’s research AI programs) are not explicitly written by the programmer. The programmer very indirectly defines criteria for how the AI will learn (a toy sketch of what this means appears after this thread). Currently the resulting AI abilities are pretty limited, but the worry is that when they one day become effective enough to create something smarter than a human, there’s no obvious way to build in any safety, because we’re not issuing it commands like a normal computer program anyway.

    • Thank you for a great comment. Do you think we could embed morals so deeply into the foundations that, no matter how much they learn, these would never be violated?

      • In theory it’s possible, and it seems to be the main focus of discussion (see “friendly AI”), though it’s hard to imagine how an AGI’s moral function would work when we can’t imagine how an AGI would work more generally. Even worse, we don’t even know how to precisely convert human morality into a formal set of criteria. MIRI and people like that spend quite a bit of time talking about a quasi-mathematical moral code that could be built in, but I suspect we really need to improve moral philosophy before we can understand how good moral decisions actually work. Most moral philosophy we have goes weird when you put it into unusual circumstances like superintelligence.

        I’m still not certain myself whether AGI is a serious possibility, so I’m guessing cautious interest is the rational position for non-AI-researchers like you and me 🙂
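To make the first comment’s point concrete, here is a toy, purely illustrative sketch: a bare-bones perceptron (real research systems are vastly more complex). Notice that the programmer writes only the learning rule; the behaviour itself is never stated anywhere and emerges from the data.

```python
# A toy perceptron, purely illustrative. The programmer writes only the
# update rule (the "criteria of how the AI will learn"); the classification
# behaviour is never stated explicitly and emerges from the training data.

def train(data, epochs=100, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, label in data:                 # label is 0 or 1
            pred = 1 if w * x + b > 0 else 0
            error = label - pred              # perceptron learning rule
            w += lr * error * x
            b += lr * error
    return w, b

# Nowhere did we write "positive numbers mean 1"; the model inferred it.
# That is also why there is no obvious line at which to bolt on a safety rule.
w, b = train([(-2.0, 0), (-0.5, 0), (0.5, 1), (3.0, 1)])
print(1 if w * 1.0 + b > 0 else 0)            # prints 1
```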
