Paging Dr. Hologram: Artificial Intelligence or Stupidity?


The Doctor (Star Trek: Voyager)

“Doctors Turn to Artificial Intelligence When They’re Stumped,” reports PBS. A dermatologist uses the Modernizing Medicine app to search for a drug to prescribe. A Microsoft researcher describes electronic health records as “large quarries where there’s lots of gold, and we’re just beginning to mine them”. Vanderbilt pharmacists build a computer system to “predict which patients were likely to need certain medications in the future”. CEOs, venture capitalists, and PhD researchers all agree: artificial intelligence is the future of medicine.

In the article, IBM’s Watson is even described as an “artificially intelligent supercomputer”, which sounds far more brilliant than its intended level of expertise of a “nurse” or “second year med student”. (This ranking makes no sense either. A nurse is way smarter than a second-year med student, unless your patient desperately needs to know about the Krebs cycle. Or unless it’s a brand-new nurse.)

A simple read-through of the PBS article might convince you that artificial intelligence really is on the cusp of taking over medicine. By the last few paragraphs, the PBS writers are wondering whether computers might be altogether more intelligent than humans, making “decisions” rather than “recommendations”. You’d be forgiven for believing that electronic health records (EHR) software is on the verge of becoming an Elysium Med-Pod, a Prometheus Auto-Surgeon, or, if you prefer the classics, a Nivenian AutoDoc.




“Machines will be capable, within twenty years, of doing any work that a man can do.”

Herbert A. Simon, The Shape of Automation for Men and Management, 1965

Reading between the lines gives a much clearer picture of the state of electronic clinical decision support (CDS) algorithms:

  • Dr. Kavita Mariwalla, an MD dermatologist treating real patients, uses AI to figure out which drugs to prescribe.
  • Dr. Joshua Denny, a PharmD treating real patients, uses AI to receive prescriptions and to anticipate which drugs may be prescribed.
  • Dr. Eric Horvitz, a PhD computer scientist at Microsoft, talks about mining your medical records for profit. Of course he would do it in a totally privacy-respecting, non-creepy, non-exploitative way.
  • Daniel Cane, an MBA CEO who sells software, suggests that it is easier for physicians to learn “what’s happening in the medical journals” by buying his software. (Because reading medical journals is just too difficult.)
  • Euan Thompson, a partner at a venture capital firm, suggests that artificial intelligence will make “the biggest quality improvements”, but only if people are willing to pay the “tremendous expense” involved.
  • Dr. Peter Szolovits, a PhD computer scientist, is optimistic about computers learning to make medical decisions, and his biggest concern is that the FDA would come down on them “like a ton of bricks” for “claiming to practice medicine.”

It isn’t hard to tell that the clinicians and the non-clinicians have very different views of medical AI.


Are Computers Really That Smart?
I’m sorry, Dave, but I cannot do that.

The most useful programs in current-day medical practice are pharmacy-related. So when PBS wrote their article about AI, they latched on to two pharmacy-related examples of direct patient care. Computers can search through vast amounts of information very quickly, telling us the correct dosing for a drug, second-line drugs you can switch to, or whether X patient is more likely to have a bleeding event with Plavix based on the data in their EHR.

Even then, computers can sometimes be more of a hassle than a help. Most physicians practicing nowadays have run into annoying pharmacy auto-messages in the vein of, “Mrs. Smith is 81 years old, and you just ordered Benadryl. Patients over the age of 70 are more likely to have adverse effects from Benadryl. Please confirm that you still want to order the Benadryl.” (You can replace “Benadryl” with just about any imaginable medication.)
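The logic behind these auto-messages is usually nothing more than a hard-coded threshold rule. A minimal sketch of one such rule (the function name, cutoff, and wording here are hypothetical illustrations, not any vendor's actual logic):

```python
# Sketch of a rule-based pharmacy alert, as described above.
# Names and thresholds are hypothetical, for illustration only.
from typing import Optional

def age_based_med_alert(patient_age: int, drug: str) -> Optional[str]:
    """Return a confirmation prompt when an older patient is ordered Benadryl."""
    if drug.lower() == "benadryl" and patient_age > 70:
        return (f"Patient is {patient_age} years old. Patients over 70 are "
                "more likely to have adverse effects from Benadryl. "
                "Please confirm that you still want to order the Benadryl.")
    return None  # no alert fires; the order goes through silently

print(age_based_med_alert(81, "Benadryl"))
```

Note that the rule has no sense of context: it fires identically whether the order is reasonable or not, which is exactly why physicians experience these alerts as noise.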

However, one thing that computers definitely can’t do is pick up on subtle cues. The PBS article suggests that a computer could tell that a patient is lying when he says he’s not smoking because there are “nicotine stains” on his teeth and fingers. A computer would need incredibly good machine vision just to see those stains, and how would it know the teeth weren’t stained by coffee, chronic antibiotic use, or simply poor dental care? The same goes for fingers: if your patient is a mechanic wearing a Ford dealership hat and coveralls, how do you know his fingers aren’t stained with motor oil?

For all the recent advances in machine vision, self-driving cars and all, a computer can only do what it is programmed to do. A Googlemobile can only drive itself because Google has spent years collecting immense amounts of data, correcting errors as they pop up. “Rather than having to figure out what the world looks like and what it means,” Google says, “we tell it what the world is expected to look like when it’s empty. And then the job of the software is to figure out how the world is different from that expectation.”
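Google’s “expected world vs. observed world” approach can be caricatured in a few lines. This toy sketch is purely illustrative (real systems use probabilistic occupancy grids and sensor fusion), but it shows why a detailed prior map makes the problem tractable:

```python
# Toy illustration of "diff against an expected empty-world map":
# the prior map lists grid cells that should be empty; any cell that is
# occupied now, but mapped as clear, is flagged as a potential obstacle.
# Purely illustrative; not Google's actual representation.

def find_obstacles(expected_empty, observed_occupied):
    """Return cells occupied right now that the map expected to be empty."""
    return sorted(set(observed_occupied) & set(expected_empty))

prior_map_empty = {(0, 0), (0, 1), (1, 0), (1, 1)}  # cells mapped as clear road
sensor_hits = {(0, 1), (2, 2)}  # cells the sensors see as occupied

print(find_obstacles(prior_map_empty, sensor_hits))  # [(0, 1)]
```

The car never has to understand what it sees, only where reality deviates from a painstakingly pre-built expectation. The point of the next paragraph is that medicine offers no such fixed expectation to diff against.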

A wannabe Hologram Doctor can’t rely on having an ultra-precise map of what to expect from a human body, because every single human is different. This is a vastly more difficult problem than figuring out that a slowly moving human-sized object is a pedestrian.


The Perils of Excessive Hype
Daisy, Daisy…

So what’s the harm? If medical-AI researchers want to suggest that computers are on the verge of telling lies from truth, diagnosing complex diseases, and “practicing medicine” like trained professionals, can we really blame them? After all, they’re just hyping up their field.

Well, the fact is that AI publicity has always been the greatest enemy of AI research. Ever since the 1960s, every time an incremental improvement is made in AI, people hype it up to ridiculous levels, and the hype ends up discrediting the actual technology. Real machine-learning technologies have only improved over time (after all, Moore’s Law is still in effect) but the perception of AI has whiplashed back and forth through the decades.

Perception is a very big deal in healthcare, just ask pediatricians about vaccines. If large healthcare institutions implement (or mandate) half-assed AI programs that end up hurting some patients (even if relatively few), the ensuing public mistrust of medical AI may never go away. You can bet your ass that the FDA would turn hostile to AI if that happened.

Machine-learning technology has a lot of potential for improving healthcare, but unless you’re a venture capitalist or software CEO it’s irresponsible to suggest that decision-support software will rapidly change medical decision-making for the better.

What’s even more irresponsible is suggesting that commercial software should replace reading as a way for physicians to keep up with the medical literature. Anyone who’s worked with “Clinical Pathways”-type software knows that it doesn’t always give you a “board exam safe” answer. While it may hew to some consensus guideline, which guideline it uses is entirely up to the MD consultants hired by the software company. It is the professional responsibility of each physician to go to meetings, keep up with the evidence, and use our own brains to decide which papers to believe and which guidelines to follow. If we can’t be trusted with that much, then why do MDs go through 4 years of med school and 3–8+ years of postgraduate training?

As a physician and technophile, I think that EHR and CDS are greatly beneficial when done correctly and when they don’t take away from the physician’s medical judgment. Rushing new medical software into practice, whether to comply with a poorly-thought-out government mandate or to generate free publicity, has the potential to do much more harm than good. As with many other medical advances, it is much better to be right than to be first.

