Cheating on the Turing Test

Hello, my name is Anna

The “Turing Test” asks an intelligent computer program to pretend to be human. The computer wins if humans are unable to tell the difference between a computer and a human. In the Battlestar Galactica universe, humans can’t tell the difference even when having sex with a robot. That’s a pretty convincing win. Real-world AI is quite a bit less sophisticated (and less sexy).

Over the weekend, a program named Eugene Goostman allegedly passed the Turing Test, convincing 33% of judges that it was human. The problem is that the test was so flawed, the Internet spent more time mocking the winner than congratulating it. Don’t just take anyone’s word for it; go read one of the chat transcripts.

Turing-test “chatbots” have existed for years. They’ve even been the inspiration for a song. These bots converse through a mix of canned jokes, scripted answers, repeating the question back, and denying knowledge when asked directly. This often leads to dead-giveaway non sequiturs, like when a ’bot first claims that it lives in Ukraine, only to say that it’s never been to Ukraine. You would think that the people running Turing Tests would ask those “gotcha” questions right away, but that would presuppose that people running Turing Tests are actually doing meaningful AI research. In reality, they are the AI equivalent of pro wrestling.

The chatbot strategy of sticking to a finite script turns out to be quite useful in the real world, not for true artificial intelligence but for telemarketing. While a chatbot can deflect unwanted questions by saying it “doesn’t understand” or has “never been there”, the telemarketer ’bot has an even better excuse: “Sorry, we have a bad connection”.
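
To make the trick concrete, here is a minimal sketch in Python of the script-and-deflect strategy described above. The canned lines are invented for illustration (loosely echoing the “lives in Ukraine” giveaway); a real contest bot is fancier, but not fundamentally different.

    import random

    SCRIPTED_ANSWERS = {
        "where do you live": "I live in Odessa, a big city in Ukraine.",
        "how old are you": "I am 13 years old.",
    }
    CANNED_JOKES = [
        "Ha! You sound like a robot yourself. Gotcha!",
        "My pet guinea pig could answer that one.",
    ]
    DEFLECTIONS = [
        "Sorry, I don't understand the question.",
        "I've never been there. Anyway, what do you do for a living?",
        "Sorry, we have a bad connection.",   # the telemarketer-'bot version
    ]

    def reply(user_input: str) -> str:
        text = user_input.lower().strip("?! .")
        # 1. If the question matches the script, return the canned answer.
        for question, answer in SCRIPTED_ANSWERS.items():
            if question in text:
                return answer
        # 2. Occasionally bounce the question back with a canned joke.
        if random.random() < 0.3:
            return "Why do you ask? " + random.choice(CANNED_JOKES)
        # 3. Otherwise, deny knowledge and change the subject.
        return random.choice(DEFLECTIONS)

    print(reply("Where do you live?"))              # scripted: claims to live in Ukraine
    print(reply("Have you ever been to Ukraine?"))  # no script match: joke or dead-giveaway deflection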


 

The Turing Test

Alan Turing published the concept of the Turing Test in 1950. Machines with “a storage of 174,380” binary digits were the state of the art, enormous and expensive. That comes out to just over 21 kilobytes, enough to hold a few seconds of MP3 audio or a very small thumbnail JPEG. Such a computer couldn’t do much more than basic arithmetic, yet arithmetic was a big deal in an era when airplanes and rockets were designed by slide rule. At that point, the idea of holding a natural-language conversation seemed nearly as fanciful as composing poetry or falling in love.

In his original 1950 article, Turing predicted that a computer with “10^9 bits” of storage (roughly 125 megabytes) would be able to imitate a human adequately, and that such a machine would be built within “50 years”. He was right about the memory in 50 years (a decent PC in the year 2000 had 128 MB of RAM) but absolutely wrong about it being able to pass the Turing Test.
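
The unit conversions are easy to sanity-check. Here is the back-of-the-envelope arithmetic in Python (my own quick check, not anything from Turing’s paper):

    # Turing's numbers, converted to modern units
    manchester_bits = 174_380             # "a storage of 174,380" binary digits
    print(manchester_bits / 8 / 1024)     # ~21.3 kilobytes

    predicted_bits = 10**9                # Turing's "10^9 bits" prediction
    print(predicted_bits / 8 / 1e6)       # 125.0 megabytes -- roughly the 128 MB
                                          # of RAM in a decent year-2000 PC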


Why is AI so difficult?

The futurists of the 1950s used arithmetic-computation speed as a surrogate for intelligence and found computers to be much “better” than humans. They wrongly assumed that this advantage would rapidly translate into true artificial intelligence. For the purposes of this section I will use “AI” to mean “general artificial intelligence”: machines capable of thinking their way through arbitrary scenarios, as opposed to being narrowly programmed for one application (much as a chatbot is narrowly programmed to fool humans).

In 2014, computers are billions of times faster than those early machines. Yet no one has come up with a computer program that can hold a real conversation, let alone demonstrate true intelligence. Why is this so difficult? I’ll go through a few hypotheses.

AI is impossible

Church and Turing proved that a “universal Turing machine” with infinite memory and time could compute any function that is computable on any other Turing machine. They also proved that certain problems are not Turing-computable, most famously the halting problem: no Turing machine can reliably determine whether an arbitrary Turing machine will halt or run forever. All digital computers are finite Turing machines, so these limitations apply to Windows, iOS, Android, and the PlayStation alike.
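
The core of that proof fits in a few lines of code. Here is a minimal sketch of the diagonalization argument, with the hypothetical halts() oracle stubbed out (no such function can actually be written, which is exactly the point):

    def halts(program, program_input):
        """Hypothetical oracle: True iff program(program_input) eventually halts.
        Turing proved it cannot exist; it is stubbed here purely for illustration."""
        raise NotImplementedError

    def paradox(program):
        # Do the opposite of whatever the oracle predicts about running
        # `program` on itself.
        if halts(program, program):
            while True:          # predicted to halt -> loop forever
                pass
        else:
            return               # predicted to loop forever -> halt immediately

    # Feeding paradox to itself makes any answer from halts() wrong,
    # so a perfect halts() cannot exist:
    # paradox(paradox)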

Ever since Alan Turing came up with the concept, scientists and philosophers have argued over whether the human brain is a Turing machine. This debate is more philosophical than practical, as it’s pretty much impossible to prove either way. The human brain certainly doesn’t act like a Turing machine – nothing about it is neatly classified into “1”s and “0”s. There may be enough quantum effects in the brain’s ion channels, neurotransmitters, proteins, DNA, chromatin, microtubules and vesicles to make it completely non-computable with classical deterministic mechanics. If the human brain works in a fundamentally non-Turing way, maybe all of our thoughts are non-Turing computable.

Some people believe that while a universal Turing machine could compute everything a brain does, it might be hopelessly inefficient at doing so. For example, the difficulty of brute-forcing a cryptographic key grows exponentially with its length: doubling an RSA key from 1024 bits to 2048 bits doesn’t just double the attacker’s work, it multiplies it by many millions of times. Scale that kind of combinatorial explosion up to the roughly 10^11 neurons in a human brain, and even if an idealized Turing machine could replicate the brain in principle, it might require more mass, energy, and time than exists in the visible universe.
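
As a toy illustration of that kind of scaling, consider plain brute-force key search (cruder than the number-theoretic attacks actually used against RSA, but it shows the shape of the curve):

    # Brute-force search scales as 2**bits: every extra bit doubles the work.
    for bits in (64, 128, 256):
        print(f"{bits}-bit key: about 2^{bits} = {float(2**bits):.2e} guesses")

    # So going from a 1024-bit key to a 2048-bit key adds far more than
    # "twice the work" -- the exponent itself doubles.
    print(2**2048 // 2**1024 == 2**1024)   # True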

AI is possible, but our computers aren’t good enough yet

Maybe Turing et al. were off by a few orders of magnitude. Maybe instead of requiring 128 megabytes to simulate a human mind, computers will require 128 exabytes. If this theory is correct, it’s only a matter of time and Moore’s Law (assuming it keeps scaling) before computers overtake human intelligence.
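
To put a number on that hypothetical: taking the 128-exabyte figure above at face value and assuming a naive Moore’s-law doubling of memory every two years (both are assumptions, not predictions), the wait is long but finite:

    import math

    ram_in_2000  = 128 * 2**20     # 128 MB, a decent PC in the year 2000
    ai_threshold = 128 * 2**60     # 128 EB, the hypothetical requirement above
    doublings = math.log2(ai_threshold / ram_in_2000)
    print(doublings)               # 40.0 doublings
    print(2000 + 2 * doublings)    # 2080.0 -- at one doubling every two years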

The problem with this hypothesis is that it is unfalsifiable. As long as true AI doesn’t exist and computers keep getting more powerful every year, you can always presume that true AI will arrive next year.

Also, it’s not very interesting to talk about.

AI is possible, but humans aren’t smart enough to program it

What if our current PCs are more than powerful enough to host human-level intelligences, but humans just aren’t smart enough to write that code?

This theory isn’t all that far-fetched. After all, human intelligence is obviously finite. We have difficulty memorizing anything longer than a 10-digit phone number. We don’t fail to struggle with relatively non-difficult logical constructions such as multiple negatives. We cannot accurately remember smells.

We know that when animals run up against the limits of their finite intelligence, they cannot solve a problem no matter how hard they try. No matter how many times a fish wakes up and sees its reflection in the fishtank, it will always be scared of the “second fish”. It will never realize that it’s just a reflection.

So it’s entirely possible that, even though a million Steve Jobses working for a million years could never come up with a workable AI algorithm (because humans are inherently stupid), a nonhuman superintelligence could still program your iPhone to be smarter than you.

If this “God-touched iPhone” copied itself over to the next generation of iPhones, it would become smarter. Then it could take over an iPhone factory and put more memory and more processor power in the next generation. The “God-touched iPhone” would very quickly take over humanity (hopefully in a benevolent way). After several generations, the iPhone may become as superintelligent as its Creator. Alternatively, it may plateau at some level of machine intelligence that is superior to humans, but not yet able to create other intelligences. God would remain God, machine would become Angel, and Man would remain Man.

The most frightening possibility is that humans may be too stupid to intentionally create an AI, but we could accidentally create an AI. If that AI was more intelligent than humans, it might be smart enough to improve its own intelligence until it became infinitely smarter than us.

At that point, we’d be left hoping that the AI enjoys having sex with humans.


 

What do you guys think? Leave a comment!


Paging Dr. Hologram: Artificial Intelligence or Stupidity?

 

The Doctor (Star Trek: Voyager)

“Doctors Turn to Artificial Intelligence When They’re Stumped,” reports PBS. A dermatologist uses the Modernizing Medicine app to search for a drug to prescribe. A Microsoft researcher describes electronic health records as “large quarries where there’s lots of gold, and we’re just beginning to mine them”. Vanderbilt pharmacists build a computer system to “predict which patients were likely to need certain medications in the future”. CEOs, venture capitalists, and PhD researchers all agree: artificial intelligence is the future of medicine.

In the article, IBM’s Watson is even described as an “artificially intelligent supercomputer”, which sounds far more brilliant than its intended level of expertise, that of a “nurse” or a “second year med student”. (This makes no sense either. A nurse is way smarter than a second-year med student, unless your patient desperately needs to know about the Krebs cycle. Or unless it’s a brand-new nurse.)

A simple read-through of the PBS article might convince you that artificial intelligence really is on the cusp of taking over medicine. By the last few paragraphs, the PBS writers are questioning whether computers might be altogether more intelligent than humans, making “decisions” rather than “recommendations”. You’d be forgiven for believing that electronic health record (EHR) software is on the verge of becoming an Elysium Med-Pod, a Prometheus Auto-Surgeon, or, if you prefer the classics, a Nivenian AutoDoc.



“Machines will be capable, within twenty years, of doing any work that a man can do.”

Herbert A. Simon, The Shape of Automation for Men and Management, 1965

Reading between the lines gives a much clearer picture of the state of electronic clinical decision support (CDS) algorithms:

  • Dr. Kavita Mariwalla, an MD dermatologist treating real patients, uses AI to figure out which drugs to prescribe.
  • Dr. Joshua Denny, a PharmD treating real patients, uses AI to receive prescriptions and to anticipate which drugs may be prescribed.
  • Dr. Eric Horvitz, a PhD computer scientist at Microsoft, talks about mining your medical records for profit. Of course he would do it in a totally privacy-respecting, non-creepy, non-exploitative way.
  • Daniel Cane, an MBA CEO who sells software, suggests that it is easier for physicians to learn “what’s happening in the medical journals” by buying his software. (because reading medical journals is just too difficult)
  • Euan Thompson, a partner at a venture capital firm, suggests that artificial intelligence will make “the biggest quality improvements”, but only if people are willing to pay the “tremendous expense” involved.
  • Dr. Peter Szolovits, a PhD computer scientist, is optimistic about computers learning to make medical decisions, and his biggest concern is that the FDA would come down on them “like a ton of bricks” for “claiming to practice medicine.”

It isn’t hard to tell that the clinicians and the non-clinicians have very different views of medical AI.

 


Are Computers Really That Smart?
I’m sorry, Dave. I’m afraid I can’t do that.

The most useful programs in current-day medical practice are pharmacy-related, so when PBS wrote their article about AI, they latched on to two pharmacy-related examples of direct patient care. Computers can search through vast amounts of information very quickly, telling us the correct dosing for a drug, suggesting second-line drugs to switch to, or flagging whether a given patient is more likely to have a bleeding event on Plavix based on the data in their EHR.

Even then, computers can sometimes be more of a hassle than a help. Most physicians practicing nowadays have run into annoying pharmacy auto-messages in the vein of, “Mrs. Smith is 81 years old, and you just ordered Benadryl. Patients over the age of 70 are more likely to have adverse effects from Benadryl. Please confirm that you still want to order the Benadryl.” (You can replace “Benadryl” with just about any medication imaginable.)
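
Under the hood, those alerts are rarely anything fancier than a lookup table of drug-and-age rules. A toy sketch (the rule table is my own invention, not any vendor’s actual logic):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Patient:
        name: str
        age: int

    # Hypothetical rule table: drug -> (age cutoff, warning text)
    AGE_ALERTS = {
        "diphenhydramine": (70, "more likely to have adverse effects from Benadryl"),
        "clopidogrel":     (75, "at higher risk of a bleeding event on Plavix"),
    }

    def check_order(patient: Patient, drug: str) -> Optional[str]:
        rule = AGE_ALERTS.get(drug.lower())
        if rule is None:
            return None                    # no rule for this drug, no alert
        cutoff, warning = rule
        if patient.age > cutoff:
            return (f"{patient.name} is {patient.age} years old, and you just "
                    f"ordered {drug}. Patients over the age of {cutoff} are "
                    f"{warning}. Please confirm the order.")
        return None

    print(check_order(Patient("Mrs. Smith", 81), "Diphenhydramine"))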

However, one thing that computers definitely can’t do is pick up on subtle cues. The PBS article suggests that a computer could tell that a patient is lying when he says he’s not smoking because there are “nicotine stains” on his teeth and fingers. A computer would need incredibly good machine vision just to see those stains, and how would it know the teeth weren’t stained by coffee, chronic antibiotic use, or just poor dental care? The same goes for the fingers: if your patient is a mechanic wearing a Ford dealership hat and coveralls, how do you know his fingers aren’t stained with motor oil?

For all the recent advances in machine vision, self-driving cars and all, a computer can still only do what it is programmed to do. A Googlemobile can only drive itself because Google has spent years collecting immense amounts of data and correcting errors as they pop up. “Rather than having to figure out what the world looks like and what it means,” Google says, “we tell it what the world is expected to look like when it’s empty. And then the job of the software is to figure out how the world is different from that expectation.”
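
In other words, the hard perception problem is reduced to a difference operation against an extremely detailed prior map. A cartoonishly simplified version of that idea (my toy example, nothing like Google’s actual software):

    # Prior map: what each grid cell should contain when the street is empty.
    empty_world = {
        (10, 4): "curb",
        (10, 5): "asphalt",
        (10, 6): "asphalt",
    }

    # Live sensor reading of the same cells.
    sensor_now = {
        (10, 4): "curb",
        (10, 5): "asphalt",
        (10, 6): "person-sized object",
    }

    # "The job of the software is to figure out how the world is different
    # from that expectation."
    differences = {cell: seen for cell, seen in sensor_now.items()
                   if empty_world.get(cell) != seen}
    print(differences)    # {(10, 6): 'person-sized object'}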

A wannabe Hologram Doctor can’t rely on having an ultra-precise map of what to expect from a human body, because every single human is different. This is a vastly more difficult problem than figuring out that a slowly moving human-sized object is a pedestrian.


 

The Perils of Excessive Hype
Daisy, Daisy, Daisy…

So what’s the harm? If medical-AI researchers want to suggest that computers are on the verge of telling lies from truth, diagnosing complex diseases, and “practicing medicine” like trained professionals, can we really blame them? After all, they’re just hyping up their field.

Well, the fact is that AI publicity has always been the greatest enemy of AI research. Ever since the 1960s, every time an incremental improvement is made in AI, people hype it up to ridiculous levels, and the hype ends up discrediting the actual technology. Real machine-learning technologies have only improved over time (after all, Moore’s Law is still in effect) but the perception of AI has whiplashed back and forth through the decades.

Perception is a very big deal in healthcare, just ask pediatricians about vaccines. If large healthcare institutions implement (or mandate) half-assed AI programs that end up hurting some patients (even if relatively few), the ensuing public mistrust of medical AI may never go away. You can bet your ass that the FDA would turn hostile to AI if that happened.

Machine-learning technology has a lot of potential for improving healthcare, but unless you’re a venture capitalist or software CEO it’s irresponsible to suggest that decision-support software will rapidly change medical decision-making for the better.

What’s even more irresponsible is suggesting that commercial software should replace reading as the way physicians keep up with the medical literature. Anyone who’s worked with “Clinical Pathways”-type software knows that it doesn’t always give you a “board exam safe” answer. While such software may hew to some consensus guideline, which guideline it follows is entirely up to the MD consultants hired by the software company. It’s our professional responsibility as physicians to go to meetings, keep up with the evidence, and use our own brains to decide which papers to believe and which guidelines to follow. If we can’t be trusted with that much, then why do MDs go through 4 years of med school and 3-8+ years of postgraduate training?

As a physician and technophile, I think that EHR and CDS are greatly beneficial when they’re done correctly and don’t take away from the physician’s medical judgment. Rushing new medical software into practice, whether to comply with a poorly-thought-out government mandate or to generate free publicity, has the potential to do much more harm than good. Like many other medical advances, it is much better to be right than to be first.


Google Car: No steering wheel!


Just ran across this article about Google’s latest creation: an electric car that drives itself. The Googlecar has no steering wheel, gas pedal, or brake pedal; it has only two buttons, “stop” and “go”. Google’s idea is that the car will work like a taxicab-style rideshare, letting people hop in and tell the car where to go (“OK Google, take me to the airport”). You’ll be able to hail a self-driving cab through a smartphone app, much like Uber. Unlike Uber, there won’t be any point in leaving a Driver Rating!

In big cities where parking costs hundreds of dollars a month, self-driving rideshare cabs could really combine the convenience of cabs with the efficiency of buses. You could even put autocab stops at train stations, so that people don’t have to drive themselves to catch a train. This would significantly improve the usability and profitability of rail systems.

While the Googlecar is more of a proof of concept than a usable model, I believe that within our lifetimes automated ridesharing will dominate urban transportation. It makes too much sense. The greatest obstacle to the proliferation of autocabs will be regulatory in nature. Taxicab companies have already lobbied hard for every major city to ban Uber; they’ll fight ten times as hard against the Googlecar. Inevitably, some day a Googlecar will get into a wreck. Even if it’s not the Googlecar’s fault (after all, they’re supposed to have the reflexes of Spider-Man vs. Alien vs. Predator vs. Terminator), the crash will cause a big controversy and even bigger litigation.

Now, will the Microsoft version of a self-driving car be named Cartana? I’m not sure I’d want to ride in that one.


Update (5/28):

Uber wants driverless cars. I guess there’s no need to reinvent the wheel when you’re removing it altogether.

http://techcrunch.com/2014/05/28/uber-confirms-record-breaking-fund-raising-hopes-for-driverless-ubers/