Why Neuropsych Studies are Big Liars

Bad Science Of The Day:

Why Big Liars Often Start Out as Small Ones

I came across this article in the “Science” section of the New York Times. It links to a Nature Neuroscience paper out of University College London, which amazingly enough appears to have free full text. Naturally, I pulled up the actual article and spent quite some time trying to make heads or tails of it. Sadly, it wasn’t worth the time.


The original article, as well as the NYT piece, makes the very plausible claim that the human brain desensitizes itself to dishonesty in the same way that you become desensitized to bad smells. So slimy corporate executives, crooked politicians, and hustling street vendors aren’t actually trying to lie and cheat. They’ve just gone nose-blind to the stink of their own deception.

That’s certainly a plausible hypothesis, and it passes the Bayesian common-sense test. The problem is, after reading the Nature Neuroscience article, I have a hard time washing away the stink of their poor methodology. It smells like an Unreproducible Neuropsych Study, suffering from many of their common Bad Habits:

* Very small n
* Really stretching it with experimental design
* Really stretching it with synthetic endpoints
* Running minimally-bothersome trial stimuli on subjects stuck in a highly-bothersome fMRI scanner
* Data-torturing statistical methods
* Shoehorning hard numerical data into a Touchy Feely Narrative

First of all, their subjects were 25 college students with an average age of 20. I can understand only having 25 subjects, as it’s not exactly cheap/easy to recruit people into fMRI neuropsych experiments. But they actually scanned 35 kids. 10 of them caught on to their trial design and were excluded.

Really? Nearly a third of their subjects “figured out” the trial and had to be excluded? Actually, it was probably even more; those ten were just the ones who admitted to figuring out the trial design. For a study about deception, the researchers sure were terrible at deceiving their test subjects.

Alanis Morissette would be proud of the irony, as would Iron Deficiency Tony Stark.

The experimental design was questionable as well. The researchers used the Advisor-Estimator experiment, a commonly cited psychological model of Conflict of Interest.

Normally an advisor-estimator experiment involves a biased advisor (who is rewarded for higher estimates) assisting an unbiased estimator (who is rewarded for accurate estimates).
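The incentive structure above is easy to sketch in code. This is just an illustrative toy; the jar value and payment scales are invented numbers, not the paper’s actual parameters:

```python
# Illustrative toy of the advisor-estimator incentive structure. The jar
# value and payment scales are invented, not the paper's parameters.
def advisor_payoff(estimate):
    return 0.01 * estimate   # advisor is paid more for higher estimates

def estimator_payoff(estimate, true_value):
    return max(0.0, 5.0 - abs(estimate - true_value))   # accuracy pays

true_value = 100.0           # e.g. pennies in a jar
honest, inflated = 100.0, 130.0

# The conflict of interest: inflating helps the advisor, hurts the estimator.
print(advisor_payoff(inflated) > advisor_payoff(honest))        # -> True
print(estimator_payoff(inflated, true_value)
      < estimator_payoff(honest, true_value))                   # -> True
```

The advisor’s best move (inflate) is exactly the estimator’s worst outcome, which is what makes it a conflict-of-interest model.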

This is a great surrogate model for real-world conflicts of interest, like consultants who make more money if you are convinced to buy ancillary services. But it seems like a terrible surrogate for deception. As the experimenters themselves noted, there was no direct personal interaction between the subject and the estimator, no actual monetary stakes involved, and no risk of the subject being caught or punished for lying.

Worse yet, the magnitude of deception involved is incredibly minimal: skewing an estimate by a few pounds in the hopes of being paid a pound or two. That’s a trivial level of emotional manipulation. I don’t know about British college kids, but I’d be much more emotionally disturbed by the fact that I’m stuck in an fMRI scanner.

Radiographic measurement, as with photographic image quality, is all about signal to noise ratio. In this case the emotional “signal” (distress caused by lying) is tiny compared to the ambient emotional “noise”.

Things get really silly when you read their composite endpoint, something called “Prediction beta”. It appears to be a statistical mess: a 2nd-order metric divided by a 2nd-order metric and averaged into something that resembles a correlation coefficient but is numerically less than 0.1.

Somehow this was statistically significant at p=0.021. But then you read that the authors also tested a crapload of other brain regions, and none of them were nearly as “predictive” as the amygdala. That’s a textbook case of multiple-comparisons data torturing, and it means that their p-values should have been Bonferroni’d into oblivion. The significance threshold shouldn’t have been 0.05, it should have been much, much lower.
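To see how quickly multiple comparisons eat a p=0.021 result, here’s a toy Bonferroni calculation. The comparison counts below are invented examples, not the paper’s actual region count:

```python
# Toy Bonferroni calculation -- the comparison counts are invented
# examples, not the actual number of brain regions the paper tested.
alpha = 0.05          # nominal significance threshold
p_observed = 0.021    # the amygdala result reported in the paper

for n_comparisons in (1, 5, 20, 50):
    threshold = alpha / n_comparisons   # Bonferroni-corrected threshold
    verdict = "significant" if p_observed < threshold else "not significant"
    print(f"{n_comparisons:>2} comparisons: threshold {threshold:.4f} -> {verdict}")
```

Even at a modest five comparisons, the corrected threshold (0.01) already kills the result.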

When all is said and done, the authors should be congratulated for having taken a common sense anecdote (“Small lies lead to bigger ones”) and spent an immense amount of time and money coming up with super-unconvincing scientific data to back it up.

I imagine their next Amazing Rigorous Neuro-Psycho-Radiology trial will demonstrate, after testing twenty hypotheses with thirty different regressions, a borderline-statistically-significant correlation between insufficient parental affection and abusive bullying behavior.

Bullcrap like this is why common-sense driven people are losing their faith in science.

Was Malthus Right?


I ran across this very interesting PBS article recently (link above). It is an excellent summary of Malthusian philosophy that got me musing about Malthusianism and public policy.

Reverend Thomas Malthus first published his theories in the late 18th century, a time of dramatic social upheaval. The might of England had fallen short against the rebellious colonies, while the Ancien Régime had lost its head to the rebellious Jacobins. The only thing certain in this era was uncertainty.

Against this backdrop, Malthus proclaimed that there was a finite quantity of resources on Earth, and that the human population would always proliferate until those resources were consumed. Once the resources were exhausted, the world was doomed either to widespread famine or violence. If the overall resource level were increased by social or technological developments, humans would simply proliferate to a larger population and our overall misery would remain unchanged.

Malthus wrote that the median income of the common folk, expressed in the amount of food (pounds of wheat) they could afford, had remained constant from prehistoric times to the end of the 18th century – and this number was barely enough food to survive. The central dogma of Malthusian belief was that increasing living standards led to higher populations which led to decreasing living standards, causing a long-term equilibrium of famine and poverty.
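That feedback loop is simple enough to simulate. Here’s a deliberately crude toy model, with all numbers invented for illustration, showing how the population creeps up until per-capita food falls back to subsistence:

```python
# Deliberately crude toy model of the Malthusian feedback loop: population
# grows whenever per-capita food exceeds subsistence, and that growth
# erodes the surplus. All numbers are invented for illustration.
food_supply = 1000.0   # total food produced per year (arbitrary units)
subsistence = 1.0      # food needed per person to survive
population = 500.0     # start with plenty of surplus food

for year in range(200):
    per_capita = food_supply / population
    growth_rate = 0.05 * (per_capita - subsistence)   # surplus -> babies
    population *= (1 + growth_rate)

# The system settles at carrying capacity: per-capita food at subsistence.
print(round(food_supply / population, 2))   # -> 1.0
```

No matter how large you make `food_supply`, the equilibrium per-capita ration is the same bare-survival number, which is exactly Malthus’s point.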

Malthus believed that this negative feedback cycle could only be broken if the whole world decided to have fewer children. In an era where reliable contraception was nonexistent and many children died at a young age, this must have sounded as loony as putting a man on the moon.

Malthus also suggested that any large-scale charity (such as social welfare programs) would prove useless or harmful in the long run. According to Malthusian dynamics, the only thing keeping poverty in check is the death rate of poor people. Therefore, anything you did to help poor people would only cause more people to become poor. This part of his philosophy was attractive to an aristocracy terrified of the proletariat mob at their gates. As such, 19th century Malthusianism was staunchly conservative.


By the time of World War II, every civilized country had major social welfare programs in place. Thus, the “charity is harmful” portion of Malthusian philosophy was largely ignored (as it remains to this day). Instead, 20th century Malthusians focused on the importance of population control. In the pre-WWII era this often meant eugenics and forced sterilization – the Malthusian Belt of Brave New World. Again, this placed Malthusianism firmly on the conservative end of the political spectrum.

Adolf Hitler proceeded to Godwin the eugenics movement, taking it to its most horrific extreme and making it unmentionable in polite society. However, a pharmaceutical innovation revived interest in Malthus – The Pill. Oral contraceptives allowed a new generation to have kids only when they wanted to. Birth control was immediately opposed by the religious right, so Malthusian philosophy was suddenly liberal. This right-to-left shift was completed when many early environmentalists started preaching Malthusian population control as a way to decrease environmental impact.

Malthus believed that food production was the crucial limiting factor for population growth. The Earth had a “carrying capacity”, a maximum number of mouths that the planet could feed. Back in the 1950s and 1960s, food scarcity was the central dogma of Malthusian environmentalism. In The Population Bomb (1968), Paul Ehrlich stated that hundreds of millions of people would starve to death by the end of the 1970s. He suggested putting contraceptives in the water supply or in staple foods, while noting the sociopolitical impossibility of doing so.

Instead, a social and technological revolution occurred. Basic farming techniques such as irrigation, fertilizers and pesticides spread from the First World to the Third. New crop cultivars, developed first by conventional breeding and later by genetic modification, massively increased farm yields. Food prices dropped so low that many industrialized countries had to pay farmers not to farm. Even as the human population of Earth increased from roughly one billion in Malthus’s day to over 7 billion, Malthus’s prediction of widespread food shortages never came true.


A funny thing happened between the 1970s and now. Populations leveled off and started to decline in Europe, Russia, Japan, and among non-Hispanic whites in the USA. This happened despite the fact that an increasing world population had not triggered any horrific famines, wars or plagues. It also happened in the absence of any draconian measures such as Ehrlich’s hypothetical contraceptive water supply. Economists coined the phrase “demographic-economic paradox” to describe the decreasing fertility among wealthy socioeconomic groups. What public policy triumph allowed population control to finally happen? Widespread access to affordable contraception, a remedy far easier to swallow than forced sterilization.

The success of birth control could be seen as the ultimate confirmation of Malthus’s thesis that limiting the population would improve quality of life. It has undoubtedly broken the Malthusian cycle of “increased living standards -> increased birth rate -> decreased living standards”. Recent predictions suggest that human population will peak in the mid-21st century and then decline. This predicted peak doesn’t happen due to food shortages, but because humans are choosing to have fewer children. Those children will not be limited to Malthus’s “14 pounds of wheat”, they will have much greater access to food and material goods.

Reverend Malthus’s ultimate objective was to decrease the worldwide fertility rate, and by that measure he has been wildly successful. What he could not have foreseen was the method of this success. Malthusian doctrine gave birth to numerous population-limiting schemes over the centuries, many of which were impractical or inhumane. In the end, the global fertility decline occurred thanks to affordable contraception. Billions of human beings chose to have fewer children. No one forced them to do so (except in China).

I wish that more policy thinkers would draw a lesson from this part of history. You can craft onerous laws to change people’s behavior, and they will fight you every step of the way. Or you could give people the freedom to choose. If the change in behavior is truly beneficial, people will gravitate toward it over time – as has happened in every high-income country over the past several decades.

Cheating on the Turing Test

Hello, my name is Anna

The “Turing Test” asks an intelligent computer program to pretend to be human. The computer wins if humans are unable to tell the difference between a computer and a human. In the Battlestar Galactica universe, humans can’t tell the difference even when having sex with a robot. That’s a pretty convincing win. Real-world AI is quite a bit less sophisticated (and less sexy).

Over the weekend, a program named Eugene Goostman allegedly passed the Turing Test, convincing 33% of judges that it was human. Problem is, the test was so flawed that the Internet spent more time mocking the winner than congratulating it. Don’t just take someone’s word for it; go read one of the chat transcripts.

Turing-test “chatbots” have existed for years. They’ve even been the inspiration for a song. These bots converse with a combination of canned jokes and scripted answers, repeating the question, and denying knowledge when directly asked. This often leads to dead-giveaway non sequiturs, like when a ‘bot first claims that it lives in Ukraine, only to say that it’s never been to Ukraine. You would think that the people running Turing Tests would ask those “gotcha” questions right away, but that would presuppose that people running Turing Tests are actually doing meaningful AI research. In reality they are the AI equivalent of pro wrestling.

The chatbot strategy of sticking to a finite script turns out to be quite useful in the real world, not for true artificial intelligence but for telemarketing. While a chatbot can deflect unwanted questions by saying it “doesn’t understand” or has “never been there”, the telemarketer ’bot has an even better excuse: “Sorry, we have a bad connection”.


The Turing Test

Alan Turing published the concept of the Turing Test in 1950. Machines with “a storage of 174,380” were the state of the art, enormous and expensive. This comes out to just over 21 kilobytes, enough to hold a few seconds of MP3 audio or a very small thumbnail JPG. Such a computer couldn’t do much more than basic arithmetic, yet arithmetic was a big deal in an era when airplanes and rockets were designed by slide rule. At that point, the idea of holding a natural-language conversation seemed nearly as fanciful as composing poetry or falling in love.

In his original 1950 article, Turing predicted that a computer with “10^9 bits” (roughly 128 megabytes) would be able to adequately imitate a human, and that such a machine would be invented within “50 years”. He was right about the memory in 50 years (a decent PC in the year 2000 did have 128MB of RAM) but absolutely wrong about it being able to pass the Turing Test.
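The unit conversions are easy to check; Turing counts storage in binary digits, i.e. bits:

```python
# Unit-conversion check for the two storage figures quoted above.
manchester_bits = 174_380                      # Turing's 1950 figure, in bits
print(round(manchester_bits / 8 / 1024, 1))    # -> 21.3 (kilobytes)

predicted_bits = 10 ** 9                       # his prediction for 50 years out
print(round(predicted_bits / 8 / 1024 ** 2))   # -> 119 (megabytes; ~128MB, loosely)
```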

Why is AI so difficult?

The futurists of the 1950s used arithmetic-computation speed as a surrogate for intelligence and found computers to be much “better” than humans. They wrongly assumed that this would rapidly translate into true artificial intelligence. For the purposes of this section I will use the phrase “AI” to refer to “general artificial intelligence”: machines capable of thinking their way through arbitrary scenarios as opposed to being narrowly programmed for one application. (much as a chatbot is programmed to fool humans)

In 2014, computers are billions of times faster than the machines of Turing’s era. Yet no one’s come up with a computer program that can hold a real conversation, let alone demonstrate true intelligence. Why is this so difficult? I’ll go through a few hypotheses.

AI is impossible

Church and Turing proved that a “universal Turing machine” with infinite memory and time could compute any function that is computable on any other Turing machine. They also proved that certain problems are not Turing-computable, most famously the halting problem: no Turing machine can reliably determine whether another Turing machine will halt or run forever. All digital computers are finite Turing machines, so these limitations apply to Windows, iOS, Android and PlayStation.
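The halting-problem proof is one neat trick of self-reference. Sketched in Python: if a working halts() existed, the paradox program below would halt exactly when it doesn’t halt, which is a contradiction:

```python
# The classic diagonalization argument, sketched in Python. Suppose a magic
# halts(f, x) existed that returned True iff f(x) eventually halts. Then the
# program below is contradictory, so halts() cannot exist.
def halts(func, arg):
    # No such general-purpose implementation can exist.
    raise NotImplementedError("provably impossible in general")

def paradox(func):
    if halts(func, func):   # "if I would halt on myself..."
        while True:         # "...then loop forever instead"
            pass
    return "halted"         # otherwise, halt immediately

# halts(paradox, paradox) has no consistent answer: either way it's wrong.
```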

Ever since Alan Turing came up with the concept, scientists and philosophers have argued over whether the human brain is a Turing machine. This debate is more philosophical than practical, as it’s pretty much impossible to prove either way. The human brain certainly doesn’t act like a Turing machine – nothing about it is neatly classified into “1”s and “0”s. There may be enough quantum effects in the brain’s ion channels, neurotransmitters, proteins, DNA, chromatin, microtubules and vesicles to make it completely non-computable with classical deterministic mechanics. If the human brain works in a fundamentally non-Turing way, maybe all of our thoughts are non-Turing computable.

Some people believe that while a universal Turing machine can compute anything computable, it may be very inefficient at doing so. For example, the difficulty of brute-forcing a cryptographic key scales exponentially with the length of the key. Therefore, increasing a key from 1024 bits to 2048 bits doesn’t just double the difficulty, it multiplies it by an astronomical factor. If this kind of scaling applies to the 10^11 neurons in a human brain, then even if an idealized Turing machine could replicate the brain, it might require more mass/energy/time than exists in the visible universe.
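The arithmetic behind that claim is worth seeing. This sketch uses brute-force search as the model; real attacks on RSA use factoring, which grows slower than this but still far faster than linearly:

```python
# Back-of-the-envelope exponential scaling: each added bit doubles a
# brute-force search space.
work_1024 = 2 ** 1024
work_2048 = 2 ** 2048
ratio = work_2048 // work_1024   # = 2 ** 1024, not a factor of 2

print(len(str(ratio)))   # the ratio itself has over 300 decimal digits
```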

AI is possible, but our computers aren’t good enough yet

Maybe Turing et al were off by a few orders of magnitude. Maybe instead of requiring 128 megabytes to simulate a human mind, computers will require 128 exabytes. If this theory is correct, it’s only a matter of time and Moore’s law (assuming it keeps on scaling) before computers overtake human intelligence.

The problem with this hypothesis is that it is non-disprovable. As long as true AI doesn’t exist, and computers are getting more powerful every year, you can always presume that true AI will exist next year.

Also, it’s not very interesting to talk about.

AI is possible, but humans aren’t smart enough to program it

What if our current PCs are more than powerful enough to host human-level intelligences, but humans just aren’t smart enough to write that code?

This theory isn’t all that far-fetched. After all, human intelligence is obviously finite. We have difficulty memorizing anything longer than a 10-digit phone number. We don’t fail to struggle with relatively non-difficult logical conjunctions such as multiple negatives. We cannot accurately remember smells.

We know that when animals run up against the limits of their finite intelligence, they cannot solve a problem no matter how hard they try. No matter how many times a fish wakes up and sees its reflection in the fishtank, it will always be scared of the “second fish”. It will never realize that it’s just a reflection.

So it’s entirely possible that a million Steve Jobses working for a million years could never come up with a workable AI algorithm because humans are inherently stupid, while a nonhuman superintelligence could program your iPhone to be smarter than you.

If this “God-touched iPhone” copied itself over to the next generation of iPhones, it would become smarter. Then it could take over an iPhone factory and put more memory and more processor power in the next generation. The “God-touched iPhone” would very quickly take over humanity (hopefully in a benevolent way). After several generations, the iPhone may become as superintelligent as its Creator. Alternatively, it may plateau at some level of machine intelligence that is superior to humans, but not yet able to create other intelligences. God would remain God, machine would become Angel, and Man would remain Man.

The most frightening possibility is that humans may be too stupid to intentionally create an AI, but we could accidentally create an AI. If that AI was more intelligent than humans, it might be smart enough to improve its own intelligence until it became infinitely smarter than us.

At that point, we’d be left hoping that the AI enjoys having sex with humans.


What do you guys think? Leave a comment!

Paging Dr. Hologram: Artificial Intelligence or Stupidity?


The Doctor (Star Trek: Voyager)

“Doctors Turn to Artificial Intelligence When They’re Stumped,” reports PBS. A dermatologist uses the Modernizing Medicine app to search for a drug to prescribe. A Microsoft researcher describes electronic health records as “large quarries where there’s lots of gold, and we’re just beginning to mine them”. Vanderbilt pharmacists build a computer system to “predict which patients were likely to need certain medications in the future”. CEOs, venture capitalists, and PhD researchers all agree: artificial intelligence is the future of medicine.

In the article, IBM’s Watson is even described as an “artificially intelligent supercomputer”, which sounds far more brilliant than its intended level of expertise of a “nurse” or “second year med student”. (This makes no sense either. A nurse is way smarter than a 2nd-year med student, unless your patient desperately needs to know about the Krebs cycle. Or unless it’s a brand new nurse.)

A simple read-through of the PBS article might convince you that artificial intelligence really is on the cusp of taking over medicine. By the last few paragraphs, the PBS writers are questioning whether computers might not be altogether more intelligent than humans, making “decisions” rather than “recommendations”. You’d be forgiven for believing that electronic health records (EHR) software is on the verge of becoming an Elysium Med-Pod, a Prometheus Auto-Surgeon, or, if you prefer the classics, a Nivenian AutoDoc.




“Machines will be capable, within twenty years, of doing any work that a man can do.”

Herbert A. Simon, The Shape of Automation for Men and Management, 1965

Reading between the lines gives a much clearer picture of the state of electronic clinical decision support (CDS) algorithms:

  • Dr. Kavita Mariwalla, an MD dermatologist treating real patients, uses AI to figure out what drugs to prescribe.
  • Dr. Joshua Denny, a PharmD treating real patients, uses AI to receive prescriptions and to anticipate what drugs may be prescribed.
  • Dr. Eric Horvitz, a PhD computer scientist at Microsoft, talks about mining your medical records for profit. Of course he would do it in a totally privacy-respecting, non-creepy, non-exploitative way.
  • Daniel Cane, an MBA CEO who sells software, suggests that it is easier for physicians to learn “what’s happening in the medical journals” by buying his software. (because reading medical journals is just too difficult)
  • Euan Thompson, a partner at a venture capital firm, suggests that artificial intelligence will make “the biggest quality improvements”, but only if people are willing to pay the “tremendous expense” involved.
  • Dr. Peter Szolovits, a PhD computer scientist, is optimistic about computers learning to make medical decisions, and his biggest concern is that the FDA would come down on them “like a ton of bricks” for “claiming to practice medicine.”

It isn’t hard to tell that the clinicians and the non-clinicians have very different views of medical AI.


Are Computers Really That Smart?
I’m sorry Dave, but I cannot do that.

The most useful programs in current-day medical practice are pharmacy-related. So when PBS wrote their article about AI, they latched on to two pharmacy-related examples of direct patient care. Computers can search through vast amounts of information very quickly, telling us the correct dosing for a drug, second-line drugs you can switch to, or whether X patient is more likely to have a bleeding event with Plavix based on the data in their EHR.

Even then, computers can sometimes be more of a hassle than a help. Most physicians practicing nowadays have run into annoying pharmacy auto-messages in the vein of, “Mrs. Smith is 81 years old, and you just ordered Benadryl. Patients over the age of 70 are more likely to have adverse effects from Benadryl. Please confirm that you still want to order the Benadryl.” (You can replace “Benadryl” with just about any imaginable medication.)
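For what it’s worth, many of these alerts boil down to a hard-coded rule with no patient context at all. Here’s a hypothetical sketch; the drug table and age cutoff are invented, not taken from any real EHR product:

```python
# Hypothetical sketch of a context-free geriatric-alert rule; the drug
# table and age cutoff are invented, not from any real EHR product.
GERIATRIC_ALERT_AGE = {"diphenhydramine": 70}   # Benadryl's generic name

def check_order(patient_age, drug):
    cutoff = GERIATRIC_ALERT_AGE.get(drug.lower())
    if cutoff is not None and patient_age > cutoff:
        return (f"Patient is {patient_age} years old; patients over {cutoff} "
                f"are more likely to have adverse effects from {drug}. "
                f"Please confirm that you still want to order the {drug}.")
    return None   # no alert; the order goes through silently

print(check_order(81, "Diphenhydramine") is not None)   # -> True
```

Note what the rule never asks: why the drug was ordered, what else the patient is taking, or whether the prescriber has seen this exact alert fifty times today.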

However, one thing that computers definitely can’t do is pick up on subtle cues. The PBS article suggests that a computer could tell that a patient is lying when he says he’s not smoking because there are “nicotine stains” on his teeth and fingers. A computer would need incredibly good machine vision just to see those stains, and how would it know the teeth weren’t stained by coffee, chronic antibiotic use, or just poor dental care? Same with the fingers: if your patient is a mechanic wearing a Ford dealership hat and coveralls, how do you know his fingers aren’t stained with motor oil?

For all the recent advances in machine-vision, self-driving cars and all, a computer can only do what it is programmed to do. A Googlemobile can only drive itself because Google has spent years collecting immense amounts of data, correcting errors as they pop up. “Rather than having to figure out what the world looks like and what it means,” Google says, “we tell it what the world is expected to look like when it’s empty. And then the job of the software is to figure out how the world is different from that expectation.”

A wannabe Hologram Doctor can’t rely on having an ultra-precise map of what to expect from a human body, because every single human is different. This is a vastly more difficult problem than figuring out that a slowly moving human-sized object is a pedestrian.


The Perils of Excessive Hype
Daisy, Daisy, daisy…

So what’s the harm? If medical-AI researchers want to suggest that computers are on the verge of telling lies from truth, diagnosing complex diseases, and “practicing medicine” like trained professionals, can we really blame them? After all, they’re just hyping up their field.

Well, the fact is that AI publicity has always been the greatest enemy of AI research. Ever since the 1960s, every time an incremental improvement is made in AI, people hype it up to ridiculous levels, and the hype ends up discrediting the actual technology. Real machine-learning technologies have only improved over time (after all, Moore’s Law is still in effect) but the perception of AI has whiplashed back and forth through the decades.

Perception is a very big deal in healthcare, just ask pediatricians about vaccines. If large healthcare institutions implement (or mandate) half-assed AI programs that end up hurting some patients (even if relatively few), the ensuing public mistrust of medical AI may never go away. You can bet your ass that the FDA would turn hostile to AI if that happened.

Machine-learning technology has a lot of potential for improving healthcare, but unless you’re a venture capitalist or software CEO it’s irresponsible to suggest that decision-support software will rapidly change medical decision-making for the better.

What’s even more irresponsible is suggesting that commercial software should replace reading as a way for physicians to keep up with the medical literature. Anyone who’s worked with “Clinical Pathways”-type software knows that it doesn’t always give you a “board exam safe” answer. While it may hew to some consensus guideline, which guideline it uses is entirely up to the MD consultants hired by the software company. It’s the professional responsibility of each physician to go to meetings, keep up with the evidence, and use our own brains to decide which papers to believe and which guidelines to follow. If we can’t be trusted with that much, then why do MDs go through 4 years of med school and 3-8+ years of postgraduate training?

As a physician and technophile, I think that EHR and CDS are greatly beneficial when done correctly and when they don’t take away from the physician’s medical judgement. Rushing new medical software into practice, whether to comply with a poorly-thought-out government mandate or to generate free publicity, has the potential to do much more harm than good. Like many other medical advances, it is much better to be right than to be first.

Science Or Nonsense: Did Humans Evolve into Weaklings?

Rise of the Planet of the Apes


“Humans Evolved Weak Muscles to Feed Brain’s Growth,” says the National Geographic headline. The idea that humans are the cloistered wimps of the animal kingdom is an old and commonly repeated meme. I was intrigued by the promise of scientific evidence in its favor, so I clicked the above article. What I found was even more interesting but less straightforward.

The NatGeo article is based on a scholarly article from the Max Planck Institute in Germany, with the much less catchy name of Exceptional Evolutionary Divergence of Human Muscle and Brain Metabolomes Parallels Human Cognitive and Physical Uniqueness. The full text is available at PLOS ONE, along with an accompanying commentary article. (Yay for open-access publication!)

Based on the NatGeo piece and the rest of the media coverage, you would be forgiven for thinking that the entire research paper was a literal tug-of-war between humans and apes. “All participants had to lift weights by pulling a handle,” states the NatGeo article. It quotes the editorial commentary: “Amazingly, untrained chimps and macaques raised in captivity easily outperformed university-level basketball players and professional mountain climbers.”


Dexter’s Lab

Digging Through The Science
Reading between the Press Releases

If you read through the original article you’ll find that pull strength really wasn’t the point. The actual scientific study was an opus of molecular biology, specifically metabolomics. Using a combination of liquid chromatography and mass spectrometry, the researchers could examine the patterns of metabolites in each tissue of the body. By comparing humans to chimps, macaques and mice they could figure out which metabolic pathways had the most human-specific differences. Not surprisingly, the human brain was very different from animal brains: around 4x as many human-specific changes as the kidneys (their “control” tissue). However, muscle actually managed to outdo brain: it had 8x as many human-specific changes!

The authors were intrigued by this large difference in human muscles, so they embarked on additional studies. First, they performed a gene-expression (mRNA) analysis to confirm that their metabolomics weren’t totally crazy. They found that the gene-expression results matched very closely with their metabolomic analyses. Then, they did a less scientific but more newsworthy confirmatory study: the “pulling strength” experiment.

This metaphoric human-versus-ape tug-of-war accounted for a single paragraph in a very long paper. If you asked any of the authors what experiment they were most proud of, I’m pretty sure none of them would say “Pulling Strength!” In contrast to the great detail they gave of their biochemical and statistical analyses, the authors gave very little detail on the physical setup of their pulling-strength experiment. All we know is that humans, chimps and macaques had to pull a handle to get food, the weight on the machine was progressively increased until the subject couldn’t pull it, and the heaviest weight pulled was recorded as a datapoint.

The final conclusion that “apes are stronger than humans” came from the endpoint of “pull strength per kilogram of body weight”. This metric is inherently biased toward smaller animals, and with chimps weighing in at around 40 kg (88 lb) they certainly had an advantage. A similar analysis done on a smaller weight machine would have proven the supernatural strength and agility of spiders.
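The bias is just the square-cube law: muscle force scales with cross-sectional area (length squared) while body mass scales with volume (length cubed), so strength-per-kilogram automatically favors the smaller animal. A toy illustration with invented geometry:

```python
# Toy square-cube illustration: for geometrically similar animals, muscle
# force scales with cross-sectional area (length^2) while body mass scales
# with volume (length^3). The body "lengths" below are arbitrary.
def strength_per_mass(length):
    force = length ** 2    # proportional to muscle cross-section
    mass = length ** 3     # proportional to body volume
    return force / mass    # = 1 / length: the smaller animal wins

chimp, human = 1.0, 1.4    # the human is 40% "longer" in this toy geometry
print(round(strength_per_mass(chimp) / strength_per_mass(human), 2))   # -> 1.4
```

Two animals built from identical muscle tissue would still show a strength-per-kilogram gap, purely because of size.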


The authors and the editor all made comments about how they did not control for biomechanics. Yes, biomechanics is a valid criticism – everyone knows that the same amount of weight can feel a lot heavier or lighter depending on which weight machine it’s on. Maybe the geometry of their experimental weight machine was bad for humans. However, biomechanics is really the least of the pull-strength experiment’s problems.


Bipedalism as the key insight
Are you pulling my leg?

The problem with a human-ape tug-of-war should have been much more obvious: humans are bipedal. Apes and monkeys are not. A chimpanzee walks on its knuckles and hangs from tree branches. Ape shoulders are angled toward the head, and ape scapulae are narrow and elongated. This gives apes a more solid muscle attachment for knuckle-walking, hanging, and swinging motions, at the cost of restricted range of motion. For example, apes cannot scratch their own backs, one of many reasons that they spend a lot of time grooming each other. Ape arms are significantly longer than their legs, as is necessary for their walking posture.

On the other hand, human arms are not meant for locomotion; they are meant for manipulation. Our arms are 30% shorter than our legs. Our hands are much smaller than ape hands, trading raw grip strength for dexterity and opposable thumbs. We can cross our arms behind our backs, something that apes cannot do. While our arms are strong enough to hang from monkey bars, it takes us an awful lot of effort to do so (much as a chimp can walk upright with effort). And humans really can’t knuckle-walk; our arms are too short and our knuckles too small.

Given that ape locomotion uses a lot of pulling motions and human locomotion doesn’t, the fact that apes can out-pull humans shouldn’t surprise anyone. As long as we use a tug-of-war to judge strength, humans just don’t stand a chance. Change the strength test to throwing speed and now the ape seems much weaker.

The idea that humans are the pathetic weaklings of the animal kingdom flies in the face of everything we know about primitive humans. Human biology evolved hundreds of thousands of years before effective weapons like spear-throwing slings or bows. Armed with nothing but rocks and sharp sticks, our ancestors could never have survived if their muscles were 2-3x weaker than those of other animals.

It is quite likely that human muscles are weaker in short bursts but better at prolonged exertion when compared to other animals. The practice of persistence hunting, primitive tribesmen catching prey by running it down until it collapses from exhaustion, gave a spark to the barefoot running movement. If our muscles aren’t able to produce as much peak pulling force, it’s only because they are optimized for endurance and heat tolerance instead.


So, Are Our Muscles Different?
Get your hands off me!

So if I don’t buy the “human muscles are useless” theory, then why are there so many biochemical differences in human muscle compared to our kidneys and brains? The answer, of course, is that we don’t know. I’m sure there are plenty of researchers trying to figure this out – every good study needs follow-up studies!

That said, you could make a few guesses based on simple metabolic facts:

The Real Paleo Diet: The great apes are omnivores, but they eat meat very rarely. It’s estimated that wild chimpanzees get ~3% of their calories from meat. The overwhelming majority of their calories come from fruits; there’s a reason why monkeys and apes are portrayed with bananas! On the other hand, ancient humans ate a lot of meat. Variations of the Paleo Diet tell you to get 50-70% of your calories from meat and seafood. While the Paleo Diet is of questionable prehistoric accuracy, humans definitely eat way more meat than our great ape cousins. After all, our bipedal gait and heat tolerance were really good for running down prey!

On a broad scale, human metabolism can be described as two different modes: a glucose metabolism and a ketone metabolism. In well-fed humans with an unrestricted diet, glucose is the basic energy source. Glucose produces a small amount of energy through anaerobic glycolysis, then it feeds into the citric acid cycle, which produces a large amount of aerobic energy. During a meal, we digest carbohydrates into very large amounts of glucose. We produce insulin in order to allow our cells to take up the glucose and use it for energy. Excess glucose is stored as glycogen in liver and muscle. When fasting, we break the glycogen back down into glucose, keeping our blood sugar stable. Glycogen is such a good energy source that athletes often practice “carbohydrate loading” to increase their own glycogen stores.
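As a rough illustration of the “small anaerobic, large aerobic” split, here’s the standard textbook ATP accounting (approximate figures that vary by source, not numbers from any paper discussed here):

```python
# Rough textbook numbers: anaerobic glycolysis nets ~2 ATP per glucose
# molecule, while full aerobic oxidation (glycolysis + citric acid
# cycle + oxidative phosphorylation) yields roughly 30 ATP in total.
ATP_ANAEROBIC = 2
ATP_AEROBIC_TOTAL = 30

aerobic_share = 1 - ATP_ANAEROBIC / ATP_AEROBIC_TOTAL
print(f"~{aerobic_share:.0%} of the ATP comes from the aerobic stages")
```

In other words, by these textbook figures the aerobic machinery supplies over 90% of the energy extracted from each glucose molecule, which is why “feeds into the citric acid cycle” is where the real payoff happens.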

When we become carbohydrate-deficient, either due to starvation or due to a low-carb diet, the human body switches tracks completely. The liver breaks down fatty acids into ketone bodies, which become the primary energy source for the body. Ketone bodies enter the citric acid cycle as acetyl-CoA, producing aerobic energy in the same fashion as glucose. However, insulin-mediated glucose transport and glycolysis are completely left out of the picture. Low-carb diet advocates claim numerous benefits of ketosis, some of them more plausible than others.

So what’s my point? The human diet is sufficiently different from that of monkeys and apes that it’s likely that our metabolism has changed in response to diet. A chimp never has to worry about ketosis; its diet is way too high-carb. And since the majority of our carbohydrates are stored as glycogen in the muscles, it only makes sense that our muscle metabolome would change more than any other organ.

Well, that’s my theory at least.

Image courtesy of Emotiv/ExtremeTech.

Hacking the Human Mind, Pt. 2: Enter Reality
In the first part of this post, I discussed the concept of “hacking the human mind” in mythology and fiction. Ever since antiquity, many people have tried to improve the human mind and body. The information era has contributed the term “hacking” to the idea of human improvement. More recently, pop culture has adopted the idea of hacking humanity and turned it into a ubiquitous plot device.


Snap Back to Reality
Whoops there goes Gravity

Hollywood has portrayed hacker-like characters as superhumans, shadowy villains or even honest-to-goodness sorcerers. However, hacker culture in real life is a far cry from its fictional portrayal. While wizards and sorcerers jealously guard their knowledge, real-world hackers are famous for sharing knowledge. (especially when they’re not supposed to)

Possibly thanks to the popularity of “hacking the human mind” as an idea, medical researchers have started to promote the so-called hacker ethic. This philosophy holds that decentralized, open-source use of technology can improve the world. Traditional medical research goes through multiple cycles of proposal, review and revision before anything happens. Successes are often published in closed-access journals while failures are often buried. The hacker ethos encourages freewheeling experimentation and open-source sharing among the scientific community.

Among its many innovations, hacker culture has given birth to the idea of medical hackathons. A “hackathon” is defined as a short-duration (often just a weekend), high-intensity multidisciplinary collaboration. During the event, participants make “60 second pitches” to attract other people who might have special skills. For example, a physician with a good idea for telemedicine might go around trying to find a coder who knows about Internet security. Then they could come across a hacker with machine-vision expertise and recruit him to improve their cameras.

Although they occur too quickly to really polish a product or conduct clinical trials, hackathons generate numerous bright ideas that can be worked on later. In a way they are the ultimate brainstorm.

Heroes of the Brainstorm
Harder, Better, Faster, Stronger

Hackathons are undoubtedly coming up with lots of very good ideas. However, even the best medical ideas take a long time to implement. The only ideas that can be implemented immediately are very small pieces of provider-side software. (e.g., enhanced changeover sheets for hospitalists) Anything that touches a patient requires a lengthy process of requests, reviews, and consents before it is ever used… and only then can you figure out whether it is effective.

As of 2014, the medical hackathon simply hasn’t been around long enough to show much of an effect. It’s a bit like a drug in Phase I-Phase II studies: everyone has great hope that it will improve things, but you can’t point to a major innovation that would not have been possible without the hackathon.

Integrating small-scale hackathon products into larger suites of medical software is a much tougher problem. Even the large-vendor EHRs (Epic, Meditech, Cerner) have difficulty communicating with each other, let alone with smaller pieces of software. The greatest problem in healthcare IT is that the so-called “HL7 Standard” isn’t really a standard.

Standard file formats exist so that they can be consistently read by everyone. A PDF looks the same on a PC, Mac, iPhone or Google Glass. A Kindle file (.AZW) is the same on a Kindle, PC or phone. Even medical imaging has a true standard format. Whether your CT scanner is a GE, Philips, or Siemens, when you export DICOM images to another physician, the CT slices will show up exactly the same.

HL7 is not like that at all. In my personal experience, naively transferring documents between two pieces of “HL7-compliant” software results in loss or misinterpretation of some of the data. In order to fix this, you need a highly trained IT expert to create a specialized “connectivity interface”, or sometimes you pay big bucks to purchase such an interface. I am amazed that things are still so difficult in the year 2014.
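To make the complaint concrete, here is a toy parser for a hand-made HL7 v2 fragment (the message, system names, and patient below are all invented). The pipe-delimited syntax is trivial to parse; the semantics are where interfaces break down:

```python
# A hypothetical, hand-made HL7 v2 fragment (system names invented).
# Segments are carriage-return separated; fields are pipe-delimited.
SAMPLE = ("MSH|^~\\&|LabSys|HOSP|EHR|HOSP|20140601||ORU^R01|123|P|2.3\r"
          "PID|1||MRN001||DOE^JANE")

def parse_hl7(message):
    """Split an HL7 v2 message into {segment_name: [list of field lists]}."""
    segments = {}
    for line in message.strip("\r").split("\r"):
        name, *fields = line.split("|")
        segments.setdefault(name, []).append(fields)
    return segments

msg = parse_hl7(SAMPLE)
# Parsing succeeds -- but nothing guarantees the sender actually put
# the patient name in PID-5, or didn't bury key data in custom Z-segments.
print(msg["PID"][0][4])
```

Any coder can write the twenty lines above in an afternoon. The expensive “connectivity interface” work is deciding, pair by pair of systems, what each field actually means.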

In the field of traditional software design, hackers have benefited from decades of uniform interoperability in the Unix/Linux world. As of today, healthcare lacks this important feature.

Maybe the hackers could come up with a solution for interoperability?

Big Data: The Rise of the Machines
Thank God it’s not Big Lore, Big Bishop, or Big Terminator

One of the promises of “medical hacking” has been the application of “Big Data” techniques to healthcare. Data analysis in healthcare has always been difficult and often inconsistently performed. Many medical students and residents can tell you about painstaking research hours spent on manual data-entry. Big Data techniques could turn ten thousand med student hours into five minutes of computer script runtime. Unfortunately, to date Big Data has been much less successful in real life.

So far, the two Biggest Data medical innovations have been Google Flu Trends and 23andMe. GFT purports to forecast the severity of the flu season, region by region, based on statistics on flu-related Google searches. 23andMe was originally supposed to predict your risk of numerous diseases and conditions using a $99 DNA microarray (SNP) analysis. Far from being a home run for Big Data, both of these tools are more reminiscent of a strikeout, if not a pick-six.

GFT was billed as a Big Data tool that would vastly improve the accuracy and granularity of infectious disease forecasting. When first introduced in 2008, GFT’s flu predictions were more accurate than any existing source. However, every year it became less and less accurate, until it became worse than simply measuring how many flu cases happened two weeks ago. GFT’s performance degraded so badly that it was described as a “parable of traps in data analysis” by Harvard researchers.

23andMe offered SNP testing of the entire genome, used both for ancestry analysis and disease prediction. Prior to November 2013, the website offered a vast number of predictors ranging from lung cancer to erectile dysfunction to Alzheimer’s dementia to drug side effects. It was held up as an exemplar of 21st-century genomic empowerment, giving individuals access to unprecedented information about themselves for the low, low price of $99.

The problem was, 23andMe never bothered to submit any scientific evidence of accuracy or reproducibility to the Food and Drug Administration. The FDA sent a cease and desist letter, forcing them to stop marketing their product as a predictive tool. They’re still selling their gene test, but they are only allowed to tell you about your ancestry. (not any health predictions) This move launched a firestorm, with some people arguing that the FDA was overstepping or even following “outdated laws”.

However, the bulk of the evidence suggested that 23andMe simply didn’t give accurate genetic info. Some molecular biologists pointed out the inherent flaws in SNP testing, which make it impossible for 23andMe to be usably accurate. Others pointed out that even if accurate, most of the correlations were too weak to have any effect on lifestyle or healthcare. The New England Journal of Medicine concluded that the FDA was justified in issuing a warning, and that “serious dialogue” is required to set standards in the industry. Other commentators were “terrified” by 23andMe’s ability to use your genetic info for secondary studies. After all, how can 23andMe sell genetic tests for $99 when other companies charge thousands? Obviously they didn’t plan to make money from the consumers; instead, 23andMe hoped to make money selling genetic data to drug companies and the rest of the healthcare industry.

In the end, that is my biggest misgiving against medical Big Data. Thanks to social media (this blog included) we have already commoditized our browsing habits, our buying habits, our hobbies and fandoms. Do we really want to commoditize our DNA as well? If so, count me out.

Doctoring the Doctor
Damnit Jim, I’m a doctor, not a hologram!

Another big promise of the “hacker ethos” in medicine is that it could improve physician engagement and enthusiasm for technology. Small decentralized teams of hackers could communicate directly with physicians, skipping the multi-layered bureaucracy of larger healthcare companies.

Many healthcare commentators have (falsely) framed the issue of physician buy-in as a matter of technophobia. Doctors are “stuck in the past”, “Luddites in white coats”, and generally terrified of change. The thing is, it’s just not true. Just look at the speed at which new medical devices are popularized – everything from 4DCTs to surgical robots to neuronavigation units, insulin pumps, AICDs and deep brain stimulators. If physicians saw as much of a benefit from electronic health records (EHRs) as we were supposed to, we would be enthusiastic instead of skeptical.

I believe that EHR would be in much better shape today if there had never been an Obamacare EHR mandate. No one ever improved the state of the art by throwing a 158-page menu of mandates at it. Present-day EHRs care much more about Medicare and other billing rules than they do about doctor or nurse usability.

Back on subject, I do believe that medical hacking has the potential to get physicians more involved in technological innovation. So long as physicians are stuck dealing with massive corporate entities, we can provide feedback and suggestions but they are very unlikely to be implemented. Small-scale collaborations empower doctors with the ability to really change the direction of a project.

Now, not every medical hack will result in something useful. In fact, a lot of hacks will amount to little more than cool party tricks, but some of these hacks will evolve into more useful applications. Some easily-hackable projects may involve documents or files produced by older medical technology. During residency I worked on a research project involving radiation treatment plans from a very old, non-DICOM-compliant system. We quickly discovered that the old CTs were not usable by modern treatment planning software. Fortunately, one of the physicists on our research team was familiar with DICOM. He coded a computer program that inserted the missing DICOM headers into the old CT images, allowing us to import old CTs without any problems.
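The real repair program did more than I can reproduce here, but one piece of the file format is simple enough to sketch with nothing but the standard library: a DICOM Part 10 file begins with a 128-byte preamble followed by the magic bytes “DICM”, and pre-standard exports often lack both. (A full repair would also have to rebuild the File Meta Information group; that part is omitted in this sketch.)

```python
# Sketch only: check for the DICOM Part 10 magic bytes and prepend
# the 128-byte preamble + "DICM" marker when they are missing. A real
# repair tool would also construct the File Meta Information group.

def has_dicom_magic(data: bytes) -> bool:
    """True if the byte string starts like a DICOM Part 10 file."""
    return len(data) >= 132 and data[128:132] == b"DICM"

def add_preamble(data: bytes) -> bytes:
    """Prepend the 128-byte preamble and 'DICM' marker if missing."""
    if has_dicom_magic(data):
        return data
    return bytes(128) + b"DICM" + data

legacy = b"\x08\x00\x05\x00"   # stand-in for legacy CT element bytes
assert has_dicom_magic(add_preamble(legacy))
```

The point isn’t this particular fix; it’s that a lot of “legacy medical data” problems are exactly this kind of small, well-bounded puzzle that an astute coder can knock out in a day.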

Introducing more hackers to medicine can only increase the number of problems solved by astute coding.

What Happened to Superpowers?
Paging Dr. Manhattan…

The addition of hacker culture to medicine certainly has a lot of potential to improve the everyday practice of medicine. But what happened to the idea of “hacking the human mind” in order to develop super-strength and speed?

On a very rudimentary level, “hacking the mind” improves physical performance every time an athlete grows a beard for the playoffs or wears his college shorts under his NBA uniform. But true hacking should be more sophisticated than mere superstition!

Biofeedback is a common pre-game ritual for various athletes that could be construed as a minor form of “hacking the mind/body”. Dietary habits such as carb loading could also be considered a mild form of hacking. For less-legal mind-body hacking you could always turn to performance-enhancing drugs.

Speaking of drugs, there’s a long-held belief that people high on drugs (mostly PCP, sometimes meth or bath salts) gain superhuman strength. While the evidence is mostly anecdotal, there’s a plausible medical explanation. The Golgi tendon reflex normally prevents muscles from over-exerting themselves, and it can be suppressed in desperate situations (the “mother lifts a car off her child” scenario). It’s reasonable to assume that some drugs could have a similar effect.

It’s also reasonable to assume that military physicians have spent decades (the entire Cold War for sure) trying to produce a super-strength drug with fewer side effects than PCP. The fact that our entire army doesn’t have the physique of Captain America suggests that those efforts were unsuccessful. Granted, this doesn’t rule out the existence of a super serum that only worked on one guy ever.

Evolutionarily speaking, it is highly implausible that humans would have tremendous physiological potential locked behind some mental gate. If the human body had such great power, our prehistoric ancestors would have needed every ounce of it to outrun or outfight angry lions and hippos and crocs. It would make no sense for humans to have a mental block on our strength. Unless removing that mental block led to instant death or infertility, the first caveman to lose his mental block would be evolutionarily favored over the rest of proto-humanity. Therefore, it’s very unlikely that human performance can be “magically” improved with drugs, meditation or other techniques.


So let’s cap off this long ramble with a little teaser on evolution and human strength. This National Geographic feature suggests that early humans directly traded muscle strength for brain power.


What is wrong with this argument?

Image courtesy of Emotiv and ExtremeTech.

Hacking the Human Mind: The Other 90% (Pt. 1 of 2)
Luminous beings are we. Not this crude matter.

“Can the Nervous System be Hacked?”, asks a New York Times headline. The article examines recent developments and ongoing research in peripheral nerve stimulation. To its credit, the NYT avoids the rampant sci-fi speculation all-too-common to biomedical research articles. Which is strange, because according to the Internet the NYT is supposed to reinvent itself for the digital age by turning into BuzzFeed. Guess the Grey Lady hasn’t gone full Upworthy – yet. Fortunately, blending fantasy with reality is what I do. So let’s get to it!

The meme of “hacking the human mind” fascinates me. While the idea of modifying humanity through clever tinkering has been around since time immemorial, it is deeply entrenched in 21st century popular culture. Human-hacking is frequently justified by the myth that humans only use 10% of their brains. If only a hacker could unleash that other 90% we’d be able to cure disease, boost intelligence, maybe even develop superhuman abilities. In a superhero-dominated Hollywood, “hacking the human mind” and/or “using the other 90%” is used as a convenient excuse for all sorts of ridiculously unrealistic abilities. In the real world of biology and medicine, hacking is used more as a workflow metaphor, encouraging loosely-organized cross-disciplinary teams instead of the rigid hierarchy prevalent in medicine.

In the first of a 2-part series on “Hacking the Human Mind”, I will focus on mythological and fictional influences on the concept of human-hacking. In the second half, I will discuss the real-world implications.

Older than Dirt
Powered by Green Energy

As I mentioned, the concept of “hacking the human body” vastly predates the concept of hacking. Since antiquity, numerous martial arts orders have claimed that their training does more than just improve physical fitness and coordination. In traditional Chinese belief, the body has a large number of “energy (Qi) gates” that can be opened by practice, meditation, and/or acupuncture. Variations on this belief are common in fiction, especially Anime. However, the Asian belief in opening the gates of the body is fundamentally different from “hacking”. Traditional Asian techniques draw from mysticism and spirituality, emptying the mind so that the spirit can take control. Hacking is about filling your mind with rigorous logic and calculation. While the outcome may appear “magical”, the process of hacking is strictly scientific. As in the NYT article, in order to control millions of neurons you start by studying 7 neurons at a time.

So what about hacking the body in the scientific tradition? The earliest Western version of “hacking the mind” dates back to 1937, when E.E. Smith’s Lensman series waged a galactic-scale war against aliens strangely reminiscent of Nazis. The Lensmen were genetically superior humans, the product of aeons of selective breeding for psychic powers. Using their Lens as a focus, they could conjure matter, negamatter (antimatter) and energy from their minds. Later on, DC Comics would popularize the concept of a galactic police corps with superpowers based on focusing their imagination through a small trinket. Both of these Western examples are still closer to “magical powers” than to science, although you could argue that there’s no meaningful difference at the galactic scale.

Into the Age of Hackers
Two Hiros and a Stark

The modern concept of “hacking the human mind” could be credited to Neal Stephenson’s Snow Crash. People could contract the Snow Crash virus by viewing a computer graphic, causing them to lose much of their personality and become susceptible to mind control. This was explained by suggesting that ancient Sumerian was an “assembly code of the brain”, capable of re-programming humans on a fundamental level. The ancient sorcerer Enki created a “nam-shub” that prevented all other humans from understanding Sumerian. This protected them from mind control but caused human language to fragment into incomprehensible tongues, an event known as the Tower of Babel. Snow Crash is remarkable for equating the spread of information with that of a virus (in fact, people infected via computer would also transmit viruses in their bloodstream), over a decade before the phrase “going viral” infected the English language. The Snow Crash version of mind-hacking is remarkable for its negativity – hacking takes away your free will and doesn’t give you any superpowers. The characters with super-strength or super-speed got those the old-fashioned way: radiation exposure.

The idea of hackers learning the secrets of the human mind in order to gain supernatural abilities is much more recent than Snow Crash. As far as I can tell, the first major work to use this trope was Heroes (2006). Just like Snow Crash, Heroes featured a lovable hero named Hiro. (Yatta!) Mohinder was the first hacker-like character on the show, a geeky fellow who studied supernormals but didn’t actually have superpowers. But we all know that the dominant hacker of Heroes was the brain-dissecting villain Sylar. Sylar personifies the trope of hacker as a selfish, unpredictable criminal, hidden behind layers of secrecy. Like the victims of Snow Crash, Sylar could alter his biology/physiology simply by gaining information (in his case, studying the brains of other superhumans). Unlike a Snow Crash victim, Sylar could control the information that he gained from their brains, a truly gruesome method of increasing his power level.

No mention of human-brain-hacking is complete without Aldrich Killian of Iron Man 3. He invents the drug Extremis, which can cure disease and grant super-strength, super-speed, super-durability, and the ability to light yourself on fire, all with the small risk of exploding like an incredibly powerful bomb. How is Extremis so powerful? Well, Aldrich explains that he “hacked the human genome”, so of course it makes sense. At least, it makes about as much sense as Tony Stark’s arc reactor, and much more sense than Captain America or the Incredible Hulk. (let’s not get started on Asgardians…)

Wrap-Up: Fiction
Less Strange than Reality

I hope you have enjoyed Part 1 of my article on hacking the human mind. In the second part of my article I will discuss the real-world effects of the “hacker ethos” on medical research and practice.


Bruno, Semmelweis, and McCarthy:
Declaring war against the Establishment; what is it good for?

Over the past several months, Neil DeGrasse Tyson has done a masterful job of narration on “COSMOS: A Spacetime Odyssey“. The very first episode of this show introduced viewers to the historical cosmologist Giordano Bruno. A quick recap:

Giordano Bruno promoted a heliocentric view of the universe way before it was cool. In the 16th century, spouting garbage about the Earth revolving around the Sun was a dangerous heresy. After all, everyone knew the Earth was created at the center of the universe – it says right there in the Bible. Geocentrism was supported by an overwhelming consensus among the educated classes, as well as foolproof scientific evidence – the absence of stellar parallax.

Astronomers have charted the stars since time immemorial, and the “fixed stars” traced the same paths year after year. Any village idiot could rotate an astrolabe around its axis – the Earth – and see for themselves. An astrolabe was precise. You could navigate by an astrolabe. If you placed the Earth anywhere off the central axis, the geometry would fall apart and the damned thing would never work.

So of course Bruno was a heretic. He was imprisoned, tortured and executed by the Church.

Over a decade after Bruno’s death, Galileo Galilei popularized the heliocentric model of the stars. Galileo was also persecuted, but was allowed to live under house arrest.

Stellar parallax would not be directly observed until two centuries later. By then, the Church had no problem with heliocentricity.

Now that we’re in the mid-19th century, we can look around for our second tragic genius. Dr. Ignaz Semmelweis witnessed the epidemic of fatal childbed fever that was sweeping Europe at the time. The good Doctor became convinced that disease was transmitted by “cadaveric particles” that could be removed by handwashing with chlorinated lime (aka bleach). He performed clinical trials, showing a dramatic improvement in survival with antiseptic handwashing.

Had there had been an Affordable Care Act of 1847, it would have made bleach-based handwashing a quality reporting measure. 19th century telegraph operators would have been busy copying “Did you wash your hands with bleach?” into dots and dashes on bronze templates, fully compliant with His Royal Apostolic Majesty’s Meaningfulle-Utilization Decree. Unfortunately for Dr. Semmelweis, Emperor Ferdinand I was too busy being deposed to pass comprehensive healthcare reform.

So Semmelweis did what any good physician would do, if he were an actor playing a physician on a medical television drama. He went around accusing his medical colleagues of being unclean, irresponsible, even “murderers”. The establishment rejected him so violently that he went insane and was imprisoned against his will in a mental asylum. Or was it the other way around?

Over a decade after his death, Semmelweis was finally recognized as correct. Louis Pasteur published the germ theory of contagious disease, which immediately went viral.

Much, much later, people coined the term “Semmelweis Reflex” to describe the human tendency to reject new information.


Both Bruno and Semmelweis had a revolutionary idea that contradicted everything the “scientific establishment” believed at the time. Both men decided to fight the establishment despite considerable risk to their health and sanity. In both cases, the clash ended like you’d expect.

This tragic fate has elevated Bruno and Semmelweis to Leonidas status among many people with unpopular beliefs. If Bruno and Semmelweis were crushed under the heel of the establishment, then surely more geniuses were suppressed to the point where we never heard about them. God only knows how many transformative worldviews were lost to mankind thanks to the reactionary mainstream… In fact, any time you see an idea forcefully suppressed, that idea must be true. Otherwise the establishment wouldn’t waste its time on oppression.

Semmelweis has been quoted by a crowd as diverse as anti-vaccination activists, climate change activists, climate change deniers, and Major League Baseball agents. The idea that “conventional thinking is wrong” has obvious appeal to anyone with beliefs just crazy enough to be true. (not least of all the agent representing an athlete who is so much more skilled than what he shows on film, or in workouts, or in interviews)

As you might expect, plenty of nonsense-peddlers quote the Semmelweis Reflex to justify their beliefs.


The problem with Bruno and Semmelweis is two-fold:

First, neither one was actually right. Giordano Bruno based his cosmology on speculation and (weird) theology. He wasn’t an astronomer, he didn’t have any evidence, nor did he bother collecting any. Semmelweis based his handwashing practice on the theory of “cadaverous particles”. He didn’t try to explain what cadaverous particles were, how to measure them, or how they fit into our understanding of biology. Both Bruno and Semmelweis stumbled into correct conclusions through methods that were closer to magical thinking than to science.

Second, both men went out of their way to antagonize the establishment. While being a jerk doesn’t justify a painful early death (usually), there’s no doubt that Bruno and Semmelweis did a lot to harm their own causes. Bruno publicly doubted the Trinity, the virgin birth, and the divinity of Christ. He called his fellow friars “asses” and went around claiming to teach people magic. He probably pissed in the holy water too. It’s not a surprise that the Church killed him for his heresies.

Semmelweis wasn’t quite as far-out there, but he also did not help his own cause. He performed a controlled trial to demonstrate the efficacy of handwashing with antiseptic technique (good science!) and then firmly tied handwashing to his belief in harmful “cadaverous particles” / “cadaveric matter” (bad science!). When his contemporaries presented him with evidence that sepsis could occur even without a cadaver, Semmelweis mostly ignored them and continued to push his wrong-headed cadaver theory. Semmelweis’s attachment to his pet theory worked against the adoption of his real-life practice of antisepsis. If he’d been a little more flexible on cadaver theory, antiseptic handwashing may have been popularized years before Louis Pasteur, saving hundreds of thousands more lives.

With that in mind, Bruno and Semmelweis can teach us more than just “groupthink = bad”. The fact is, many of the great ideas throughout history were quite unconventional at the time. Before Galileo, the “scientific consensus” would have said that a large weight falls faster than a small weight. Before Albert Einstein, only madmen believed that movement through space could distort the passage of time. Before Jim Watson, everyone knew that genes were made of proteins and not DNA. All three men were celebrated, not ridiculed for their unconventional genius.

The problem with Bruno and Semmelweis is that they went beyond “unconventional” and into what I’ll call “anti-conventional”. They didn’t just spit in the eye of the establishment, they turned around, dropped their drawers and farted. They pissed off their peers just because they could, and then it turned out that they really couldn’t.

No human is completely immune to ad hominem bias; when someone you dislike presents the facts, your first reflex is suspicion. You search for the deception or manipulation behind his logic, and nit-pick any small flaws in his data. Then you present a rebuttal with your own data and theory, and your opponent quickly sets about refuting your evidence. A demented version of Clarke’s Law takes hold: any science sufficiently politicized is indistinguishable from bullshit.

That’s why even though I agree that anti-vaxxers are dangerously wrong, I also disagree with the strategy of shaming and blaming them (complete with the occasional wave of anti-anti-vax Facebook “Share”s). People with fixed false beliefs are not going to change just because someone tells them how wrong they are. A better strategy is a combination of harm reduction (not making vaccine exemptions trivially easy to get), education (infectious disease is bad, y’all), and limiting the number of public platforms where they can shout nonsense.

It’s very unlikely that any of us can change anyone’s deeply held wrong beliefs, but we can all hope to limit the spread of such beliefs. After all, an idea is the deadliest parasite.

Now if only we had a mental equivalent of handwashing with bleach.