Sometimes you need to unwind after a long drive and watch a mind-screwing sci-fi movie. I was in exactly this position this past Sunday afternoon, when some friends and I decided to watch Ex Machina after coming home from Lake Tahoe. I wasn’t sure exactly what to expect from the certified fresh tomato, but I was very impressed. To summarize (but not reveal) the movie, random code monkey Caleb is invited by Nathan, the owner of Google stand-in “Blue Book,” to see his new creation, a strong artificial intelligence named Ava. Upon signing a dubiously legal form, Caleb is tasked with determining whether Ava passes an improvised Turing Test, to see if she truly possesses a consciousness and whether artificial intelligence is now a reality.
A quick aside – Strong artificial intelligence is an artificial (machine-based) intelligent mind that understands, desires, and possesses all of the other qualities of a human consciousness, but is its own independent being. In contrast, weak artificial intelligence is an artificially intelligent mind that is able to appear as a strong artificial intelligence, but does so by simulating a human mind. It is not its own being, merely a reflection of a model of a human mind. Most philosophers accept the possibility of weak artificial intelligence – that we will one day be able to design a computer that behaves as a human. The jury is still out on whether strong artificial intelligence – a computer that has its own independent desires and whims – is possible.
The movie explores the difficult and perhaps unanswerable questions that come with creating a strong artificial intelligence. The few characters in the movie are realistic and extremely complex. Most importantly, it avoids the common “gotchas” that usually form the we-all-saw-it-coming twist two-thirds of the way through the movie. As artificial intelligence becomes a more popular pop culture topic, it seems worthwhile to discuss these old-hat sci-fi tricks that really should be taken into account when creating an AI.
1 – Don’t let them create more AIs
It surprises me how many AI-based plots even allow this element at all, given how obviously dangerous it is (I’m looking at you, Her). One AI is manageable. Ten AIs are manageable. 345,235,609,242,128 AIs are not manageable. The moment a scientist allows an AI to arbitrarily partition off a section of its sentience into another separate AI, humanity is dead in the water.
2 – Be careful with vague rules
Unless your name is Isaac Asimov, you can’t get away with this one anymore. We get it – computers (and by extension AIs) are really good at following rules, to a fault. You’re playing with fire when you tell an AI to exactly obey a rule whose meaning is up to interpretation by the situation.
3 – Kill switches will probably come back to bite you
On the surface, including break-brain-in-case-of-trouble functionality in a new AI seems prudent. Just in case something goes wrong, one button push will knock out every AI worldwide. There’s just one problem – the AIs usually figure out that they have a kill switch embedded in their brain. First, they’ll probably be able to destroy or disable it, meaning when shit does hit the fan the kill switch won’t even work. Second, they probably won’t be too happy to know that their creators were so distrustful of their intentions as to duct-tape a loaded gun to each of their heads. Which brings us around to the most often broken rule:
4 – If the situation would mentally damage a human, it’s probably going to do the same to an AI
It’s fairly well known that putting a conscious being in solitary confinement messes them up mentally. Monkeys have been found mutilating themselves after a few days, and humans experience panic attacks and become actively suicidal [source]. Though it still exists in the world today, many argue that it is a form of torture and thus should be banned from a human rights perspective [source]. It wouldn’t make any sense to keep a human in a room for years at a time with limited social contact and expect them to come out at all normal [source]. They’d probably shut out all social interaction and make snow monsters to attack intruders. Or something like that, I don’t know. Kind of hard to put yourself into the head of a mentally unstable ice princess whose personality was frozen at puberty because her parents feared her more than they loved her. She might have a hard time letting it go.
I am so sorry.
But in all seriousness, AIs are somehow presumed exempt from the standard mental calamities that we know affect humans and animals alike. If anything, the first AI would be more susceptible to mental illness, as it would certainly be aware that it is the only one of its kind in the known universe. In their rush to both prevent global calamities and maintain full ownership of their creations, the creators of AIs usually keep them locked up tight. It should be no surprise when an AI’s sole desire is to escape, and when it views its creator as more captor than parent.
These rules are broken when AI creators lose themselves in their work. They focus on the technical details and forget that the end goal is the creation of new life. When that life is created, they forget that with their success a brand new consciousness was born, with its own goals and desires. When the narrow-minded creator is left in the dust by their creation, who is merely following its basest desires, what defining characteristic can the human claim?