Why I’m over “Overdiagnosis”

#BadScienceOfTheDay:
Once again, a journal article claiming extremely high rates of breast cancer overdiagnosis (“up to 48%”) makes big news.
 
Here’s the problem: it’s a retrospective cohort study which calculated an overdiagnosis % based on a truly bizarre endpoint: the total number of “advanced” versus “non-advanced” breast cancers.
 
Now in any scientifically valid study, you would use a definition of “advanced” that involves lymph node and/or distant metastases – features that strongly correlate with overall survival. You might even do a secondary analysis with overall survival as an endpoint.
 
Nope. These clowns defined advanced as “radiographic primary tumor size >= 2cm”… then using a completely invalid endpoint they did a bunch of statistics to come up with a completely invalid conclusion. Garbage in, garbage out.
 
As if to illustrate the ridiculousness of their own study, the Danish researchers published several overdiagnosis estimates based on different statistical approaches. The two bottom-line numbers were 24.4% and 48.3%…
 
How can you have any faith in your statistics when they give you two answers that differ by a factor of two? If I tried to sell you a car by saying that, depending on how you measure it, it either has 244 horsepower or 483 horsepower, you’d call me a fraud!
 
* * *
Here’s the problem with the entire concept of “overdiagnosis”:
 
1) Overdiagnosis is a synthetic endpoint that is effectively a derivative of a derivative. The degree of statistical modeling required to estimate overdiagnosis means that any errors, biases, or design flaws in the starting data-set will be enlarged by orders of magnitude. A tiny change in the statistical assumptions leads to a 2-fold difference in your final “endpoint”. (A toy numeric sketch follows after this list.)
 
2) Overdiagnosis is highly un-reproducible, both within studies and between studies. If you look at literature reviews of “overdiagnosis in breast cancer”, the published values from top-tier journals range from ~5% to ~50%. That range is so large it is impossible to apply to real life.
 
3) Overdiagnosis is neither clinically apparent nor provable or disprovable. No one can single out a patient and definitively prove that they were, or were not, overdiagnosed. In my simplistic clinical mindset, that makes it more like faith healing than scientific medicine.
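To make point 1 concrete, here is a toy back-of-the-envelope sketch (all numbers are invented for illustration, not taken from the Danish study). Overdiagnosis is typically estimated as the cancers observed in a screened cohort minus a modeled “expected” count without screening; that expected count is itself a statistical construct, so nudging one modeling assumption swings the headline percentage dramatically.

```python
# Toy illustration (invented numbers): how a small change in one modeling
# assumption produces a roughly 2-fold swing in an "overdiagnosis" estimate.

observed = 1200  # cancers actually detected in the screened cohort

# The "expected" count without screening is a model output. Two equally
# defensible assumptions about the background incidence trend might give:
scenarios = {
    "flat background incidence":   1000,
    "rising background incidence": 1090,
}

for label, expected in scenarios.items():
    overdiagnosis_pct = (observed - expected) / observed * 100
    print(f"{label}: {overdiagnosis_pct:.1f}% 'overdiagnosed'")

# flat background incidence:   16.7% 'overdiagnosed'
# rising background incidence:  9.2% 'overdiagnosed'
# A ~9% change in the modeled baseline nearly halves the headline number.
```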
 
* * *
The conceptual basis of overdiagnosis makes some sense in some cases. It is easy to imagine a bedridden 90-year-old being diagnosed with some tiny little cancer that probably won’t kill them.
 
However, the statistical methods used to estimate overdiagnosis percentages are horribly unreliable, and they do a piss-poor job of helping make decisions in real life.
 
A physician with common sense can avoid scanning or treating the bedridden 90-year-old while still offering care to the rest of the patients, and he doesn’t need to quote an artificial “24% to 48%” number to back up his clinical judgement.

Paging Dr. Hologram: Artificial Intelligence or Stupidity?

 

The Doctor (Star Trek: Voyager)

“Doctors Turn to Artificial Intelligence When They’re Stumped,” reports PBS. A dermatologist uses the Modernizing Medicine app to search for a drug to prescribe. A Microsoft researcher describes electronic health records as “large quarries where there’s lots of gold, and we’re just beginning to mine them”. Vanderbilt pharmacists build a computer system to “predict which patients were likely to need certain medications in the future”. CEOs, venture capitalists, and PhD researchers all agree: artificial intelligence is the future of medicine.

In the article, IBM’s Watson is even described as an “artificially intelligent supercomputer”, which sounds far more brilliant than its intended level of expertise of a “nurse” or “second year med student”. (This makes no sense either. A nurse is way smarter than a 2nd year med student unless your patient desperately needs to know about the Krebs cycle. Unless it’s a brand new nurse.)

A simple read-through of the PBS article might convince you that artificial intelligence really is on the cusp of taking over medicine. By the last few paragraphs, the PBS writers are questioning whether computers might be altogether more intelligent than humans, making “decisions” rather than “recommendations”. You’d be forgiven for believing that electronic health records (EHR) software is on the verge of becoming an Elysium Med-Pod, a Prometheus Auto-Surgeon, or, if you prefer the classics, a Nivenian AutoDoc.


 

 

 

“Machines will be capable, within twenty years, of doing any work that a man can do.”

Herbert A. Simon, The Shape of Automation for Men and Management, 1965

Reading between the lines gives a much clearer picture of the state of electronic clinical decision support (CDS) algorithms:

  • Dr. Kavita Mariwalla, an MD dermatologist treating real patients, uses AI to figure out what drugs to prescribe.
  • Dr. Joshua Denny, a PharmD treating real patients, uses AI to receive prescriptions and to anticipate what drugs may be prescribed.
  • Dr. Eric Horvitz, a PhD computer scientist at Microsoft, talks about mining your medical records for profit. Of course he would do it in a totally privacy-respecting, non-creepy, non-exploitative way.
  • Daniel Cane, an MBA CEO who sells software, suggests that it is easier for physicians to learn “what’s happening in the medical journals” by buying his software. (because reading medical journals is just too difficult)
  • Euan Thompson, a partner at a venture capital firm, suggests that artificial intelligence will make “the biggest quality improvements”, but only if people are willing to pay the “tremendous expense” involved.
  • Dr. Peter Szolovits, a PhD computer scientist, is optimistic about computers learning to make medical decisions, and his biggest concern is that the FDA would come down on them “like a ton of bricks” for “claiming to practice medicine.”

It isn’t hard to tell that the clinicians and the non-clinicians have very different views of medical AI.

 


Are Computers Really That Smart?
I’m sorry Dave, but I cannot do that.

The most useful programs in current-day medical practice are pharmacy-related. So when PBS wrote their article about AI, they latched on to two pharmacy-related examples of direct patient care. Computers can search through vast amounts of information very quickly, telling us the correct dosing for a drug, which second-line drugs you can switch to, or whether a given patient is more likely to have a bleeding event with Plavix based on the data in their EHR.

Even then, computers can sometimes be more of a hassle than a help. Most physicians practicing nowadays have run into annoying pharmacy auto-messages in the vein of, “Mrs. Smith is 81 years old, and you just ordered Benadryl. Patients over the age of 70 are more likely to have adverse effects from Benadryl. Please confirm that you still want to order the Benadryl.” (You can replace “Benadryl” with just about any imaginable medication.)
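For what it’s worth, the logic behind those pop-ups is rarely more exotic than a drug list plus an age cutoff. Here is a minimal sketch of how such a rule might be coded; the drug names and the age threshold are illustrative placeholders, not an actual formulary or Beers-criteria list.

```python
# Minimal sketch of an age-based medication alert, in the spirit of the
# "Mrs. Smith is 81 and you just ordered Benadryl" pop-up. The drug set and
# age cutoff are illustrative only, not a real formulary rule.

HIGH_RISK_IN_ELDERLY = {"diphenhydramine", "lorazepam", "zolpidem"}
AGE_CUTOFF = 70

def check_order(patient_age: int, drug: str) -> str | None:
    """Return an alert message if the order trips the age rule, else None."""
    if patient_age > AGE_CUTOFF and drug.lower() in HIGH_RISK_IN_ELDERLY:
        return (f"Patient is {patient_age} years old; {drug} carries a higher "
                f"risk of adverse effects in patients over {AGE_CUTOFF}. "
                "Please confirm that you still want to order it.")
    return None

print(check_order(81, "Diphenhydramine"))
```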

However, one thing that computers definitely can’t do is pick up on subtle cues. The PBS article suggests that a computer could tell that a patient is lying when he says he’s not smoking even though there are “nicotine stains” on his teeth and fingers. A computer would need incredibly good machine vision just to see those stains, and how would it know the teeth weren’t stained from coffee, chronic antibiotic use, or just poor dental care? Same with the fingers: your patient could be a mechanic wearing a Ford dealership hat and coveralls, so how do you know his fingers aren’t stained with motor oil?

For all the recent advances in machine-vision, self-driving cars and all, a computer can only do what it is programmed to do. A Googlemobile can only drive itself because Google has spent years collecting immense amounts of data, correcting errors as they pop up. “Rather than having to figure out what the world looks like and what it means,” Google says, “we tell it what the world is expected to look like when it’s empty. And then the job of the software is to figure out how the world is different from that expectation.”
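Boiled down, the Google approach is “subtract the expected empty world from what the sensors see, and treat the leftovers as obstacles.” A toy sketch of that idea, with a completely made-up occupancy grid, shows why it only works when you already have a precise prior map:

```python
import numpy as np

# Toy version of "tell the software what the empty world looks like, then
# flag whatever differs from that expectation." The grids are invented.

expected_empty_road = np.zeros((4, 4))   # pre-mapped scene with nothing in it
observed_scan = np.zeros((4, 4))
observed_scan[2, 1] = 1.0                # the sensors see something new here

difference = np.abs(observed_scan - expected_empty_road)
obstacles = np.argwhere(difference > 0.5)
print("Unexpected objects at grid cells:", obstacles.tolist())  # [[2, 1]]
```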

A wannabe Hologram Doctor can’t rely on having an ultra-precise map of what to expect from a human body, because every single human is different. This is a vastly more difficult problem than figuring out that a slowly moving human-sized object is a pedestrian.


 

The Perils of Excessive Hype
Daisy, Daisy, daisy…

So what’s the harm? If medical-AI researchers want to suggest that computers are on the verge of telling lies from truth, diagnosing complex diseases, and “practicing medicine” like trained professionals, can we really blame them? After all, they’re just hyping up their field.

Well, the fact is that AI publicity has always been the greatest enemy of AI research. Ever since the 1960s, every time an incremental improvement is made in AI, people hype it up to ridiculous levels, and the hype ends up discrediting the actual technology. Real machine-learning technologies have only improved over time (after all, Moore’s Law is still in effect) but the perception of AI has whiplashed back and forth through the decades.

Perception is a very big deal in healthcare, just ask pediatricians about vaccines. If large healthcare institutions implement (or mandate) half-assed AI programs that end up hurting some patients (even if relatively few), the ensuing public mistrust of medical AI may never go away. You can bet your ass that the FDA would turn hostile to AI if that happened.

Machine-learning technology has a lot of potential for improving healthcare, but unless you’re a venture capitalist or software CEO it’s irresponsible to suggest that decision-support software will rapidly change medical decision-making for the better.

What’s even more irresponsible is suggesting that commercial software should replace reading as a way for physicians to keep up with the medical literature. Anyone who’s worked with “Clinical Pathways” type software knows that they don’t always give you a “board exam safe” answer. While they may hew to some consensus guideline, which guideline they use is entirely up to the MD consultants hired by the software company. It’s our professional responsibility as physicians to go to meetings, keep up with the evidence, and use our brains to decide which papers to believe and which guidelines to follow. If we can’t be trusted with that much, then why do MDs go through 4 years of med school and 3-8+ years of postgraduate training?

As a physician and technophile, I think that EHR and CDS are greatly beneficial when done correctly and when they don’t take away from the physician’s medical judgement. Rushing new medical software into practice, whether to comply with a poorly-thought-out government mandate or to generate free publicity, has the potential to do much more harm than good. Like many other medical advances, it is much better to be right than to be first.


Hacking the Mind Epilogue: Psychosurgery

Hacking the Brain Epilogue: Psychosurgery

While we’re on the subject of “hacking the human mind”, it looks like there is renewed interest in psychosurgery. The link goes to an article about deep brain stimulation for alcohol cravings, PTSD, and depression!

People have been trying to control psychiatric conditions with surgery since the days of the prefrontal lobotomy. Electrical stimulation has the advantages of precision and reversibility. However, as with any neurosurgical procedure, it relies upon localizing an unwanted symptom to a specific location in the brain. For example, deep brain stimulation works for Parkinson’s because the disease is localized to the basal ganglia.

No matter how much funding you throw at electroneurology, it won’t do any good if an unwanted emotion or compulsion is spread out over a large area of the brain. It remains to be seen how well localized things like alcoholism and PTSD are.

Why did the Corpus Callosum cross the road?

Why did the Corpus Callosum cross the road?
To get to the other side.

Why did the beta-amyloid cross the road?
Because… I… it… What was the question again?

Why did the spinothalamic tract cross the road?
The other side was on fire.

Why did the amygdala cross the road?
It was running away from a… OH MY GOD ITS COMING RIGHT FOR US!

Why did the central sulcus cross the road?
You would too, if you were surrounded by creepy homunculus things.

Why did the Wernicke’s aphasia cross the road?
The road a cross dirun like two. Free crosses rodeo why? Arrest and Texas in red, yes happy area.

Why did the Korsakoff syndrome cross the road?
I don’t know, what does it matter to you? I was going to the park. That’s right, I was taking a walk in the park. Now get off my back!

Why did the septum pellucidum cross the road?
I don’t know, but that sounds pretty bad. You’d better start dexamethasone.

Why did the optic chiasm cross the road?
Because it couldn’t see the median.

Why did the cavernous sinus cross the road?
It didn’t.

Why did the Broca’s aphasia cross the road?
Hodor?

Hacking the Human Mind: Enter Reality

Image courtesy of Emotiv/ExtremeTech.

Hacking the Human Mind, Pt. 2: Enter Reality
In the first part of this post, I discussed the concept of “hacking the human mind” in mythology and fiction. Ever since antiquity, many people have tried to improve the human mind and body. The information era has contributed the term “hacking” to the idea of human improvement. More recently, pop culture has adopted the idea of hacking humanity and turned it into a ubiquitous plot device.

 


Snap Back to Reality
Whoops there goes Gravity

Hollywood has portrayed hacker-like characters as superhumans, shadowy villains or even honest-to-goodness sorcerers. However, hacker culture in real life is a far cry from its fictional portrayal. While wizards and sorcerers jealously guard their knowledge, real-world hackers are famous for sharing knowledge. (especially when they’re not supposed to)

Possibly thanks to the popularity of “hacking the human mind” as an idea, medical researchers have started to promote the so-called hacker ethic. This philosophy holds that decentralized, open-source use of technology can improve the world. Traditional medical research goes through multiple cycles of proposal, review and revision before anything happens. Successes are often published in closed-access journals while failures are often buried. The hacker ethos encourages freewheeling experimentation and open-source sharing among the scientific community.

Among its many innovations, hacker culture has given birth to the idea of medical hackathons. A “hackathon” is defined as a short-duration (often just a weekend), high-intensity, multidisciplinary collaboration. During the event, participants make “60 second pitches” to attract other people who might have special skills. For example, a physician with a good idea for telemedicine might go around trying to find a coder who knows about Internet security. Then they could come across a hacker with machine-vision expertise and recruit him to improve their cameras.

Although they occur too quickly to really polish a product or conduct clinical trials, hackathons generate numerous bright ideas that can be worked on later. In a way they are the ultimate brainstorm.


Heroes of the Brainstorm
Harder, Better, Faster, Stronger

Hackathons are undoubtedly coming up with lots of very good ideas. However, even the best medical ideas take a long time to implement. The only ideas that can be implemented immediately are very small pieces of provider-side software (e.g., enhanced changeover sheets for hospitalists). Anything that touches a patient requires a lengthy process of requests, reviews, and consents before it is ever used… and only then can you figure out whether it is effective.

As of 2014, the medical hackathon simply hasn’t been around long enough to show much of an effect. It’s a bit like a drug in Phase I-Phase II studies: everyone has great hope that it will improve things, but you can’t point to a major innovation that would not have been possible without the hackathon.

Integrating small-scale hackathon products into larger suites of medical software is a much tougher problem. Even the large-vendor EHRs (Epic, Meditech, Cerner) have difficulty communicating with each other, let alone with smaller pieces of software. The greatest problem in healthcare IT is that the so-called “HL7 Standard” isn’t really a standard.

Standard file formats exist so that they can be consistently read by everyone. A PDF looks the same on a PC, Mac, iPhone or Google Glass. A Kindle file (.AZW) is the same on a Kindle, PC or phone. Even medical imaging has a true standard format. Whether your CT scanner is a GE, Philips, or Siemens, when you export DICOM images to another physician, the CT slices will show up exactly the same.

HL7 is not like that at all. In my personal experience, naively transferring documents between two pieces of “HL7-compliant” software results in loss or misinterpretation of some of the data. In order to fix this, you need a highly trained IT expert to create a specialized “connectivity interface”, or sometimes you pay big bucks to purchase such an interface. I am amazed that things are still so difficult in the year 2014.
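To see why “HL7-compliant” is not the same as “interoperable”, it helps to look at what an HL7 v2 message actually is: pipe-delimited segments whose exact field usage varies from vendor to vendor, plus custom “Z” segments that aren’t standardized at all. The sample message below is invented for illustration; real feeds are messier, which is exactly why every pairing of systems seems to need its own hand-built interface.

```python
# Toy illustration of why HL7 v2 "compliance" still requires custom
# connectivity interfaces. The sample message is invented; real-world feeds
# differ in segment usage, field ordering, date formats, and Z segments.

sample_message = (
    "MSH|^~\\&|SENDING_EHR|HOSP_A|RECEIVING_EHR|HOSP_B|202401011200||ADT^A01|123|P|2.3\r"
    "PID|1||555-44-3333||Smith^Jane||19420101|F\r"
    "ZV1|vendor-specific-data-goes-here\r"  # non-standard "Z" segment
)

for segment in sample_message.strip("\r").split("\r"):
    fields = segment.split("|")
    print(fields[0], "->", len(fields), "fields")

# One vendor puts the MRN in PID-3, another stuffs it into PID-2 or PID-4;
# one sends bare dates, another appends times; Z segments are anyone's guess.
# Every one of those mismatches is something the interface has to translate.
```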

In the field of traditional software design, hackers have benefited from the uniform interoperability of Unix (Linux) for many decades. As of today, healthcare lacks this important feature.

Maybe the hackers could come up with a solution for interoperability?


Big Data: The Rise of the Machines
Thank God it’s not Big Lore, Big Bishop, or Big Terminator

One of the promises of “medical hacking” has been the application of “Big Data” techniques to healthcare. Data analysis in healthcare has always been difficult and often inconsistently performed. Many medical students and residents can tell you about painstaking research hours spent on manual data-entry. Big Data techniques could turn ten thousand med student hours into five minutes of computer script runtime. Unfortunately, to date Big Data has been much less successful in real life.

So far, the two Biggest Data medical innovations have been Google Flu Trends and 23andMe. GFT purports to forecast the severity of the flu season, region by region, based on statistics on flu-related Google searches. 23andMe was originally supposed to predict your risk of numerous diseases and conditions using a $99 DNA microarray (SNP) analysis. Far from being a home run for Big Data, both of these tools are more reminiscent of a strikeout, if not a pick-six.

GFT was billed as a Big Data tool that would vastly improve the accuracy and granularity of infectious disease forecasting. When first introduced in 2008, GFT’s flu predictions were more accurate than any existing source. However, every year it became less and less accurate, until it became worse than simply measuring how many flu cases happened two weeks ago. GFT’s performance degraded so badly, it was described as a “parable of traps in data analysis” by Harvard researchers.

23andMe offered genome-wide SNP testing, used both for ancestry analysis and disease prediction. Prior to November 2013, the website offered a vast number of predictors ranging from lung cancer to erectile dysfunction to Alzheimer’s dementia to drug side effects. It was held up as an exemplar of 21st-century genomic empowerment, giving individuals access to unprecedented information about themselves for the low, low price of $99.

The problem was, 23andMe never bothered to submit any scientific evidence of accuracy or reproducibility to the Food and Drug Administration. The FDA sent a cease and desist letter, forcing them to stop marketing their product as a predictive tool. They’re still selling their gene test, but they are only allowed to tell you about your ancestry. (not any health predictions) This move launched a firestorm, with some people arguing that the FDA was overstepping or even following “outdated laws“.

However, the bulk of the evidence suggested that 23andMe simply didn’t give accurate genetic info. Some molecular biologists pointed out the inherent flaws in SNP testing, which make it impossible for 23andMe to be usably accurate. Others pointed out that even if accurate, most of the correlations were too weak to have any effect on lifestyle or healthcare. The New England Journal of Medicine concluded that the FDA was justified in issuing a warning, and that “serious dialogue” is required to set standards in the industry. Other commentators were “terrified” by 23andMe’s ability to use your genetic info for secondary studies. After all, how can 23andMe sell genetic tests for $99 when other companies charge thousands? Obviously they didn’t plan to make money from the consumers; instead, 23andMe hoped to make money selling genetic data to drug companies and the rest of the healthcare industry.

In the end, that is my biggest misgiving about medical Big Data. Thanks to social media (this blog included) we have already commoditized our browsing habits, our buying habits, our hobbies and fandoms. Do we really want to commoditize our DNA as well? If so, count me out.


Doctoring the Doctor
Damnit Jim, I’m a doctor, not a hologram!

Another big promise of the “hacker ethos” in medicine is that it could improve physician engagement and enthusiasm for technology. Small decentralized teams of hackers could communicate directly with physicians, skipping the multi-layered bureaucracy of larger healthcare companies.

Many healthcare commentators have (falsely) framed the issue of physician buy-in as a matter of technophobia. Doctors are “stuck in the past“, “Luddites in white coats”, and generally terrified of change. The thing is, it’s just not true. Just look at the speed at which new medical devices are popularized – everything from 4DCTs to surgical robots to neuronavigation units, insulin pumps, AICDs and deep brain stimulators. If physicians saw as much of a benefit from electronic health records (EHRs) as we were supposed to, we would be enthusiastic instead of skeptical.

I believe that EHR would be in much better shape today if there had never been an Obamacare EHR mandate. No one ever improved the state of the art by throwing a 158-page menu of mandates at it. Present-day EHRs care much more about Medicare and other billing rules than they do about doctor or nurse usability.

Back on subject, I do believe that medical hacking has the potential to get physicians more involved in technological innovation. So long as physicians are stuck dealing with massive corporate entities, we can provide feedback and suggestions but they are very unlikely to be implemented. Small-scale collaborations empower doctors with the ability to really change the direction of a project.

Now, not every medical hack will result in something useful. In fact, a lot of hacks will amount to little more than cool party tricks, but some of these hacks will evolve into more useful applications. Some easily-hackable projects may involve documents or files produced by older medical technology. During residency I worked on a research project involving radiation treatment plans from a very old, non-DICOM-compliant system. We quickly discovered that the old CTs were not usable by modern treatment planning software. Fortunately, one of the physicists on our research team was familiar with DICOM. He coded a program that inserted the missing DICOM headers into the old CT images, allowing us to import them without any problems.
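For the curious, the general shape of that kind of fix is pretty simple with modern open-source tooling. Below is my reconstruction of the idea using the pydicom library; it is a sketch, not the physicist’s actual program, and the file paths and transfer syntax choice are placeholder assumptions.

```python
# Sketch: patch missing DICOM file-meta headers onto legacy CT slices so that
# modern planning software can import them. A reconstruction of the general
# idea, not the original program; paths and UID choices are placeholders.

import pydicom
from pydicom.dataset import FileMetaDataset
from pydicom.uid import CTImageStorage, ImplicitVRLittleEndian, generate_uid

def patch_legacy_ct(in_path: str, out_path: str) -> None:
    # force=True lets pydicom read files that lack the standard DICM preamble
    ds = pydicom.dcmread(in_path, force=True)

    meta = FileMetaDataset()
    meta.MediaStorageSOPClassUID = CTImageStorage
    meta.MediaStorageSOPInstanceUID = ds.get("SOPInstanceUID", generate_uid())
    meta.TransferSyntaxUID = ImplicitVRLittleEndian
    ds.file_meta = meta

    # fill in dataset-level UIDs if the old system never wrote them
    if "SOPClassUID" not in ds:
        ds.SOPClassUID = CTImageStorage
    if "SOPInstanceUID" not in ds:
        ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID

    ds.save_as(out_path, write_like_original=False)

patch_legacy_ct("old_ct_slice.img", "fixed_ct_slice.dcm")
```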

Introducing more hackers to medicine can only increase the number of problems solved by astute coding.


What Happened to Superpowers?
Paging Dr. Manhattan…

The addition of hacker culture to medicine certainly has a lot of potential to improve the everyday practice of medicine. But what happened to the idea of “hacking the human mind” in order to develop super-strength and speed?

On a very rudimentary level, “hacking the mind” improves physical performance every time an athlete grows a beard for the playoffs or wears his college shorts under his NBA uniform. But true hacking should be more sophisticated than mere superstition!

Biofeedback is a common pre-game ritual for various athletes that could be construed as a minor form of “hacking the mind/body”. Dietary habits such as carb loading could also be considered a mild form of hacking. For less legal mind-body hacking, you could always turn to performance-enhancing drugs.

Speaking of drugs, there’s a long-held belief that people high on drugs (mostly PCP, sometimes meth or bath salts) gain superhuman strength. While the evidence is mostly anecdotal, there’s a plausible medical explanation. The Golgi tendon reflex normally prevents muscles from over-exerting themselves, and it can be suppressed in desperate situations (the “mother lifts a car off her child” scenario). It’s reasonable to assume that some drugs could have a similar effect.

It’s also reasonable to assume that military physicians have spent decades (the entire Cold War for sure) trying to produce a super-strength drug with fewer side effects than PCP. The fact that our entire army doesn’t have the physique of Captain America suggests that those efforts were unsuccessful. Granted, this doesn’t rule out the existence of a super serum that only worked on one guy ever.

Evolutionarily speaking, it is highly implausible that humans would have tremendous physiological potential locked behind some mental gate. If the human body had such great power, our prehistoric ancestors would have needed every ounce of it to outrun or outfight angry lions and hippos and crocs. It would make no sense for humans to have a mental block on our strength. Unless removing that mental block led to instant death or infertility, the first caveman to lose his mental block would be evolutionarily favored over the rest of proto-humanity. Therefore, it’s very unlikely that human performance can be “magically” improved with drugs, meditation or other techniques.


 

So let’s cap off this long ramble with a little teaser on evolution and human strength. This National Geographic feature suggests that early humans directly traded muscle strength for brain power.

http://news.nationalgeographic.com/news/2014/05/140527-brain-muscle-metabolism-genes-apes-science/

What is wrong with this argument?

Hacking the Human Mind: The Other 90%

Image courtesy of Emotiv and ExtremeTech.

Hacking the Human Mind: The Other 90% (Pt. 1 of 2)
Luminous beings are we. Not this crude matter.

“Can the Nervous System be Hacked?” asks a New York Times headline. The article examines recent developments and ongoing research in peripheral nerve stimulation. To its credit, the NYT avoids the rampant sci-fi speculation all too common in biomedical research articles. Which is strange, because according to the Internet the NYT is supposed to reinvent itself for the digital age by turning into BuzzFeed. Guess the Grey Lady hasn’t gone full Upworthy – yet. Fortunately, blending fantasy with reality is what I do. So let’s get to it!

The meme of “hacking the human mind” fascinates me. While the idea of modifying humanity through clever tinkering has been around since time immemorial, it is deeply entrenched in 21st century popular culture. Human-hacking is frequently justified by the myth that humans only use 10% of their brains. If only a hacker could unleash that other 90% we’d be able to cure disease, boost intelligence, maybe even develop superhuman abilities. In a superhero-dominated Hollywood, “hacking the human mind” and/or “using the other 90%” is used as a convenient excuse for all sorts of ridiculously unrealistic abilities. In the real world of biology and medicine, hacking is used more as a workflow metaphor, encouraging loosely-organized cross-disciplinary teams instead of the rigid hierarchy prevalent in medicine.

In the first of a 2-part series on “Hacking the Human Mind”, I will focus on mythological and fictional influences on the concept of human-hacking. In the second half, I will discuss the real-world implications.


Older than Dirt
Powered by Green Energy

As I mentioned, the concept of “hacking the human body” vastly predates the concept of hacking. Since antiquity, numerous martial arts orders have claimed that their training does more than just improve physical fitness and coordination. In traditional Chinese belief, the body has a large number of “energy (Qi) gates” that can be opened by practice, meditation, and/or acupuncture. Variations on this belief are common in fiction, especially Anime. However, the Asian belief in opening the gates of the body is fundamentally different from “hacking”. Traditional Asian techniques draw from mysticism and spirituality, emptying the mind so that the spirit can take control. Hacking is about filling your mind with rigorous logic and calculation. While the outcome may appear “magical”, the process of hacking is strictly scientific. As in the NYT article, in order to control millions of neurons you start by studying 7 neurons at a time.

So what about hacking the body in the scientific tradition? The earliest Western version of “hacking the mind” dates back to 1937, when E.E. Smith’s Lensman series depicted a galactic-scale war against aliens strangely reminiscent of Nazis. The Lensmen were genetically superior humans, the product of aeons of selective breeding for psychic powers. Using their Lens as a focus, they could conjure matter, negamatter (antimatter) and energy from their minds. Later on, DC Comics would popularize the concept of a galactic police corps with superpowers based on focusing their imagination through a small trinket. Both of these Western examples are still closer to “magical powers” than to science, although you could argue that there’s no meaningful difference at the galactic scale.


Into the Age of Hackers
Two Hiros and a Stark

The modern concept of “hacking the human mind” could be credited to Neal Stephenson’s Snow Crash. People could contract the Snow Crash virus by viewing a computer graphic, causing them to lose much of their personality and become susceptible to mind control. This was explained by suggesting that ancient Sumerian was an “assembly code of the brain”, capable of re-programming humans on a fundamental level. The ancient sorcerer Enki created a “nam-shub” that prevented all other humans from understanding Sumerian. This protected them from mind control but caused human language to fragment into incomprehensible tongues, an event known as the Tower of Babel. Snow Crash is remarkable for equating the spread of information with that of a virus (in fact, people infected via computer would also transmit viruses in their bloodstream), over a decade before the phrase “going viral” infected the English language. The Snow Crash version of mind-hacking is also notable for its negativity – hacking takes away your free will and doesn’t give you any superpowers. The characters with super-strength or super-speed got those the old-fashioned way: radiation exposure.

The idea of hackers learning the secrets of the human mind in order to gain supernatural abilities is much more recent than Snow Crash. As far as I can tell, the first major work to use this trope was Heroes (2006). Just like Snow Crash, Heroes featured a lovable hero named Hiro. (Yatta!) Mohinder was the first hacker-like character on the show, a geeky fellow who studied supernormals but didn’t actually have superpowers. But we all know that the dominant hacker of Heroes was the brain-dissecting villain Sylar. Sylar personifies the trope of hacker as a selfish, unpredictable criminal, hidden behind layers of secrecy. Like the victims of Snow Crash, Sylar could alter his biology/physiology simply by gaining information (in his case, studying the brains of other superhumans). Unlike a Snow Crash victim, Sylar could control the information that he gained from their brains, a truly gruesome method of increasing his power level.

No mention of human-brain-hacking is complete without mentioning Aldrich Killian of Iron Man 3. He invents the drug Extremis, which can cure disease, grant super-strength, super-speed, and super-durability, and let you light yourself on fire, all with the small risk of exploding like an incredibly powerful bomb. How is Extremis so powerful? Well, Aldrich explains that he “hacked the human genome”, so of course it makes sense. At least, it makes about as much sense as Tony Stark’s arc reactor, and much more sense than Captain America or the Incredible Hulk. (let’s not get started on Asgardians…)


Wrap-Up: Fiction
Less Strange than Reality

I hope you have enjoyed Part 1 of my article on hacking the human mind. In the second part of my article I will discuss the real-world effects of the “hacker ethos” on medical research and practice.

Bruno, Semmelweis, and McCarthy

329px-Giordano_Bruno_Campo_dei_Fiori

Bruno, Semmelweis, and McCarthy:
Declaring war against the Establishment; what is it good for?

Over the past several months, Neil deGrasse Tyson has done a masterful job of narration on “COSMOS: A Spacetime Odyssey”. The very first episode of this show introduced viewers to the historical cosmologist Giordano Bruno. A quick recap:

Giordano Bruno promoted a heliocentric view of the universe way before it was cool. In the 16th century, spouting garbage about the Earth revolving around the Sun was a dangerous heresy. After all, everyone knew the Earth was created at the center of the universe – it says right there in the Bible. Geocentrism was supported by an overwhelming consensus among the educated classes, as well as foolproof scientific evidence – the absence of stellar parallax.

Astronomers have charted the stars since time immemorial, and the “fixed stars” traced the same paths year after year. Any village idiot could rotate an astrolabe around its axis – the Earth – and see for themselves. An astrolabe was precise. You could navigate by an astrolabe. If you placed the Earth anywhere off the central axis, the geometry would fall apart and the damned thing would never work.

So of course Bruno was a heretic. He was imprisoned, tortured and executed by the Church.

Over a decade after Bruno’s death, Galileo Galilei popularized the heliocentric model of the heavens. Galileo was also persecuted, but was allowed to live under house arrest.

Stellar parallax would not be directly observed until two centuries later. By then, the Church had no problem with heliocentricity.


Now that we’re in the mid-19th century, we can look around for our second tragic genius. Dr. Ignaz Semmelweis witnessed the epidemic of fatal childbed fever that was sweeping Europe at the time. The good Doctor became convinced that disease was transmitted by “cadaveric particles” that could be removed by handwashing with chlorinated lime (aka bleach). He performed clinical trials, showing a dramatic improvement in survival with antiseptic handwashing.

Had there been an Affordable Care Act of 1847, it would have made bleach-based handwashing a quality reporting measure. 19th-century telegraph operators would have been busy copying “Did you wash your hands with bleach?” into dots and dashes on bronze templates, fully compliant with His Royal Apostolic Majesty’s Meaningfulle-Utilization Decree. Unfortunately for Dr. Semmelweis, Emperor Ferdinand I was too busy being deposed to pass comprehensive healthcare reform.

So Semmelweis did what any good physician would do, if he were an actor playing a physician on a medical television drama. He went around accusing his medical colleagues of being unclean, irresponsible, even “murderers”. The establishment rejected him so violently that he went insane and was imprisoned against his will in a mental asylum. Or was it the other way around?

Over a decade after his death, Semmelweis was finally recognized as correct. Louis Pasteur published the germ theory of contagious disease, which immediately went viral.

Much, much later, people coined the term “Semmelweis Reflex” to describe the human tendency to reject new evidence that contradicts established beliefs.


 

Both Bruno and Semmelweis had a revolutionary idea that contradicted everything the “scientific establishment” believed at the time. Both men decided to fight the establishment despite considerable risk to their health and sanity. In both cases, the clash ended like you’d expect.

This tragic fate has elevated Bruno and Semmelweis to Leonidas status among many people with unpopular beliefs. If Bruno and Semmelweis were crushed under the heel of the establishment, then surely more geniuses were suppressed to the point where we never heard about them. God only knows how many transformative worldviews were lost to mankind thanks to the reactionary mainstream… In fact, any time you see an idea forcefully suppressed, that idea must be true. Otherwise the establishment wouldn’t waste its time on oppression.

Semmelweis has been quoted by a crowd as diverse as anti-vaccination activists, climate change activists, climate change deniers, and Major League Baseball agents. The idea that “conventional thinking is wrong” has obvious appeal to anyone with beliefs just crazy enough to be true. (not least of all the agent representing an athlete who is so much more skilled than what he shows on film, or in workouts, or in interviews)

As you might expect, plenty of nonsense-peddlers quote the Semmelweis Reflex to justify their beliefs.


 

The problem with Bruno and Semmelweis is two-fold:

First, neither one was actually right. Giordano Bruno based his cosmology on speculation and (weird) theology. He wasn’t an astronomer; he didn’t have any evidence, nor did he bother collecting any. Semmelweis based his handwashing practice on the theory of “cadaverous particles”. He didn’t try to explain what cadaverous particles were, how to measure them, or how they fit into our understanding of biology. Both Bruno and Semmelweis stumbled into correct conclusions through methods that were closer to magical thinking than to science.

Second, both men went out of their way to antagonize the establishment. While being a jerk doesn’t justify a painful early death (usually), there’s no doubt that Bruno and Semmelweis did a lot to harm their own causes. Bruno publicly doubted the Trinity, the virgin birth, and the divinity of Christ. He called his fellow friars “asses” and went around claiming to teach people magic. He probably pissed in the holy water too. It’s not a surprise that the Church killed him for his heresies.

Semmelweis wasn’t quite as far out there, but he also did not help his own cause. He performed a controlled trial to demonstrate the efficacy of handwashing with antiseptic technique (good science!) and then firmly tied handwashing to his belief in harmful “cadaverous particles” / “cadaveric matter” (bad science!). When his contemporaries presented him with evidence that sepsis could occur even without a cadaver, Semmelweis mostly ignored them and continued to push his wrong-headed cadaver theory. Semmelweis’s attachment to his pet theory worked against the adoption of his real-life practice of antisepsis. If he’d been a little more flexible on cadaver theory, antiseptic handwashing might have been popularized years before Louis Pasteur, saving hundreds of thousands more lives.

With that in mind, Bruno and Semmelweis can teach us more than just “groupthink = bad”. The fact is, many of the great ideas throughout history were quite unconventional at the time. Before Isaac Newton, the “scientific consensus” would have said that a large weight falls faster than a small weight. Before Albert Einstein, only madmen believed that movement through space could distort the passage of time. Before Jim Watson, everyone knew that genes were made of proteins and not DNA. All three men were celebrated, not ridiculed for their unconventional genius.

The problem with Bruno and Semmelweis is that they went beyond “unconventional” and into what I’ll call “anti-conventional”. They didn’t just spit in the eye of the establishment, they turned around, dropped their drawers and farted. They pissed off their peers just because they could, and then it turned out that they really couldn’t.

No human is completely immune to ad hominem bias; when someone you dislike presents the facts, your first reflex is suspicion. You search for the deception or manipulation behind his logic, and nit-pick any small flaws in his data. Then you present a rebuttal with your own data and theory, and your opponent quickly sets about refuting your evidence. A demented version of Clarke’s Law takes hold: any science sufficiently politicized is indistinguishable from bullshit.

That’s why even though I agree that anti-vaxxers are dangerously wrong, I also disagree with the strategy of shaming and blaming them (complete with the occasional wave of anti-anti-vax Facebook “Share”s). People with fixed false beliefs are not going to change just because someone tells them how wrong they are. A better strategy is a combination of harm reduction (not making vaccine exemptions trivially easy to get), education (infectious disease is bad, y’all), and limiting the number of public platforms where they can shout nonsense.

It’s very unlikely that any of us can change anyone’s deeply held wrong beliefs, but we can all hope to limit the spread of such beliefs. After all, an idea is the deadliest parasite.

Now if only we had a mental equivalent of handwashing with bleach.