Cheating on the Turing Test
The “Turing Test” asks an intelligent computer program to pretend to be human. The computer wins if humans are unable to tell the difference between a computer and a human. In the Battlestar Galactica universe, humans can’t tell the difference even when having sex with a robot. That’s a pretty convincing win. Real-world AI is quite a bit less sophisticated (and less sexy).
Over the weekend, a program named Eugene Goostman allegedly passed the Turing Test, convincing 33% of its judges that it was human. The problem is that the test was so flawed, the Internet spent more time mocking the winner than congratulating it. Don’t just take someone’s word for it; go read one of the chat transcripts:
Turing-test “chatbots” have existed for years. They’ve even been the inspiration for a song. These bots converse with a combination of canned jokes and scripted answers, repeating the question, and denying knowledge when directly asked. This often leads to dead-giveaway non sequiturs, like when a ‘bot first claims that it lives in Ukraine, only to say that it’s never been to Ukraine. You would think that the people running Turing Tests would ask those “gotcha” questions right away, but that would presuppose that people running Turing Tests are actually doing meaningful AI research. In reality they are the AI equivalent of pro wrestling.
The chatbot strategy of sticking to a finite script turns out to be quite useful in the real world, not for true artificial intelligence but for telemarketing. While a chatbot can deflect unwanted questions by saying it “doesn’t understand” or has “never been there”, the telemarketer ‘bot has an even better excuse: “Sorry, we have a bad connection”.
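The script-and-deflect strategy described above can be sketched in a few lines of Python. This is a toy illustration, not any real entrant’s code; the canned answers and pattern names are invented here, echoing the Ukraine example mentioned earlier:

```python
import random

# Toy sketch of the scripted-chatbot strategy: canned answers for known
# questions, echoing the question back, and denial as a last resort.
CANNED = {
    "where do you live": "I live in Odessa, it is a big city in Ukraine.",
    "how old are you": "I am 13 years old. And how old are you?",
}
DEFLECTIONS = [
    "I don't understand the question.",
    "I've never been there.",
    "Why do you ask?",
]

def reply(message: str) -> str:
    normalized = message.lower().strip("?!. ")
    # 1. Scripted answer if the question matches a known pattern.
    for pattern, answer in CANNED.items():
        if pattern in normalized:
            return answer
    # 2. Otherwise, repeat the question back at the judge.
    if message.endswith("?"):
        return f"Interesting that you ask: {message} What do YOU think?"
    # 3. Last resort: deny knowledge.
    return random.choice(DEFLECTIONS)

print(reply("Where do you live?"))
```

Real entrants are bigger but not fundamentally different: more patterns, more canned jokes, and the same escape hatches. Notice that nothing here checks consistency between answers, which is exactly why the Ukraine/never-been-to-Ukraine contradiction happens.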
The Turing Test
Alan Turing published the concept of the Turing Test in 1950. Machines with “a storage of 174,380” bits were the state of the art, enormous and expensive. That figure comes out to just over 21 kilobytes, enough to hold a few seconds of MP3 audio or a very small thumbnail JPG. Such a computer couldn’t do much more than basic arithmetic, yet arithmetic was a big deal in an era when airplanes and rockets were designed by slide rule. At that point, the idea of holding a natural-language conversation seemed nearly as fanciful as composing poetry or falling in love.
In his original 1950 article, Turing predicted that a computer with “10^9 bits” of storage (roughly 128 megabytes) would be able to adequately imitate a human, and that such a machine would exist within “50 years”. He was right about the 128 megabytes in 50 years (a decent PC in the year 2000 had 128MB of RAM) but absolutely wrong about it being able to pass the Turing Test.
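Those conversions are easy to check, assuming Turing’s figures are counts of bits (the kilobyte figure uses decimal units, the megabyte figure binary ones):

```python
# Turing-era storage figures, converted to modern units.
manchester_bits = 174_380          # storage Turing cites for 1950 machines
print(manchester_bits / 8 / 1000)  # -> 21.7975, i.e. just over 21 kilobytes

predicted_bits = 10**9             # Turing's prediction for a convincing machine
print(predicted_bits / 8 / 2**20)  # ~119 binary megabytes, the "128 MB" class
```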
Why is AI so difficult?
The futurists of the 1950s used arithmetic-computation speed as a surrogate for intelligence and found computers to be much “better” than humans. They wrongly assumed that this superiority would rapidly translate into true artificial intelligence. For the purposes of this section I will use “AI” to mean “general artificial intelligence”: machines capable of thinking their way through arbitrary scenarios, as opposed to being narrowly programmed for one application (much as a chatbot is programmed only to fool humans).
In 2014, computers are billions of times faster than those early machines. Yet no one has come up with a computer program that can hold a real conversation, let alone demonstrate true intelligence. Why is this so difficult? I’ll go through a few hypotheses.
AI is impossible
Church and Turing established the limits of computation: a “universal Turing machine” with unlimited memory and time can compute any function that is computable on any other Turing machine, yet certain problems are provably not Turing-computable at all. The most famous is the halting problem: no Turing machine can reliably determine whether an arbitrary Turing machine will halt or run forever. All digital computers are finite Turing machines, so these limitations apply to Windows, iOS, Android and the PlayStation.
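The halting-problem proof is short enough to sketch in code. Suppose, purely hypothetically, that a `halts()` oracle existed; you could then build a program that contradicts it:

```python
# Sketch of the classic diagonalization proof that halts() cannot exist.
def halts(program, argument) -> bool:
    """Hypothetical oracle: True iff program(argument) eventually halts."""
    raise NotImplementedError("no such oracle can exist")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # running 'program' on its own source.
    if halts(program, program):
        while True:   # predicted to halt -> loop forever
            pass
    else:
        return        # predicted to loop -> halt immediately

# paradox(paradox) would halt if and only if it doesn't halt: a
# contradiction, so no correct implementation of halts() can exist.
```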
Ever since Alan Turing came up with the concept, scientists and philosophers have argued over whether the human brain is a Turing machine. This debate is more philosophical than practical, as it’s pretty much impossible to prove either way. The human brain certainly doesn’t act like a Turing machine – nothing about it is neatly classified into “1”s and “0”s. There may be enough quantum effects in the brain’s ion channels, neurotransmitters, proteins, DNA, chromatin, microtubules and vesicles to make it completely non-computable with classical deterministic mechanics. If the human brain works in a fundamentally non-Turing way, maybe all of our thoughts are non-Turing computable.
Some people believe that while a universal Turing machine can compute anything computable, it may be hopelessly inefficient at doing so. For example, the difficulty of brute-forcing a cryptographic key scales exponentially with the length of the key. Therefore, increasing the length of an RSA key from 1024 bits to 2048 bits doesn’t just double the difficulty, it multiplies it by an astronomical factor. If the brain’s roughly 10^11 neurons impose a similar kind of scaling, then even if an idealized Turing machine could replicate the brain, it might require more mass, energy and time than exists in the visible universe.
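The doubling-per-bit arithmetic is easy to verify. (This models brute-force search only; real attacks on RSA use factoring, which grows more slowly but is still superpolynomial in the key length.)

```python
# Each additional key bit doubles the brute-force search space, so
# difficulty scales exponentially with key length.
search_space_1024 = 2**1024
search_space_2048 = 2**2048
ratio = search_space_2048 // search_space_1024

# Doubling the key length squares the search space: the 2048-bit space
# is 2**1024 times bigger, a number with over 300 decimal digits.
print(ratio == 2**1024)
```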
AI is possible, but our computers aren’t good enough yet
Maybe Turing and his successors were off by a few orders of magnitude. Maybe instead of requiring 128 megabytes to simulate a human mind, computers will require 128 exabytes. If this theory is correct, it’s only a matter of time and Moore’s law (assuming it keeps on scaling) before computers overtake human intelligence.
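How long would that take? A back-of-the-envelope estimate, assuming memory capacity doubles every two years, Moore’s-law style:

```python
import math

# How many doublings from 128 megabytes (decimal) to 128 exabytes,
# and how long at one doubling every ~2 years?
factor = (128 * 10**18) / (128 * 10**6)  # exabytes / megabytes = 10**12
doublings = math.log2(factor)            # ~39.9 doublings
years = doublings * 2                    # ~80 years at 2 years per doubling
print(round(years))
```

So even a million-fold-of-a-million-fold miss in the estimate only pushes the date out by decades, not centuries, which is why this hypothesis always feels just around the corner.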
The problem with this hypothesis is that it is non-disprovable. As long as true AI doesn’t exist, and computers are getting more powerful every year, you can always presume that true AI will exist next year.
Also, it’s not very interesting to talk about.
AI is possible, but humans aren’t smart enough to program it
What if our current PCs are more than powerful enough to host human-level intelligences, but humans just aren’t smart enough to write that code?
This theory isn’t all that far-fetched. After all, human intelligence is obviously finite. We have difficulty memorizing anything longer than a 10-digit phone number. We struggle with relatively simple logical constructions such as multiple negatives. We cannot accurately remember smells.
We know that when animals run up against the limits of their finite intelligence, they cannot solve a problem no matter how hard they try. No matter how many times a fish wakes up and sees its reflection in the fishtank, it will always be scared of the “second fish”. It will never realize that it’s just a reflection.
So it’s entirely possible that a million Steve Jobses working for a million years could never come up with a workable AI algorithm because humans are inherently stupid, while a nonhuman superintelligence could program your iPhone to be smarter than you.
If this “God-touched iPhone” copied itself over to the next generation of iPhones, it would become smarter. Then it could take over an iPhone factory and put more memory and more processor power in the next generation. The “God-touched iPhone” would very quickly take over humanity (hopefully in a benevolent way). After several generations, the iPhone may become as superintelligent as its Creator. Alternatively, it may plateau at some level of machine intelligence that is superior to humans, but not yet able to create other intelligences. God would remain God, machine would become Angel, and Man would remain Man.
The most frightening possibility is that humans may be too stupid to intentionally create an AI, but not too stupid to create one by accident. If that AI were more intelligent than humans, it might be smart enough to improve its own intelligence until it became infinitely smarter than us.
At that point, we’d be left hoping that the AI enjoys having sex with humans.
What do you guys think? Leave a comment!