The Culture War Is Iraq


Donald Trump was so unlikable that even some of his own voters were terrified of him. In exit polls, as many as 17% of Trump voters were “concerned or scared” about Trump being President. Amazingly, that didn’t stop them from casting a ballot for Trump, who will enter the White House as the most disliked American President in recent history.

Much ink and many pixels have been spilled over how such a bad candidate could have won a Presidential election. Is America an irredeemably racist and sexist country of deplorables? Or was the election all about trade, jobs, and the dwindling power of labor unions? Maybe it was really all about Hillary Clinton’s political incompetence? Or is the Republican Party really quite strong despite all evidence to the contrary?

Personally, I have a different theory:

The Culture War is Iraq

(And Liberalism is George W. Bush)

 


When President George W. Bush invaded Iraq, everyone knew the war was going to be utterly one-sided. The US had all of the weapons, all of the aircraft, all of the satellites. The US had far more troops with far better training and morale. We had a vast coalition of allies, some more enthusiastic than others.

And above all else, the US was equipped with an overweening sense of superiority. History had ended, the West had won, and the sorry barbarians in Iraq just hadn’t realized it yet. Once we overthrew their corrupt and brutal government, surely the people would recognize what a big favor we’d done them. Dick Cheney infamously said that we would be greeted as liberators.

Of course, none of that happened. We won the war in record time, smashing the Baathist government of Saddam Hussein. Baghdad Bob tried to claim that they could fight off the Americans but ended up in the ludicrous position of claiming that “there are no American tanks in Baghdad” while American tanks rolled down the street behind him. Within two months of the invasion, Saddam’s military was annihilated, Saddam himself was in hiding (he would eventually be dragged out of a spider hole), and George W. Bush was on an aircraft carrier proclaiming “Mission Accomplished”.

But hindsight shows that the mission never was accomplished. We may have won the war, but we lost the peace, and during the fighting we lost our own moral values. The Iraq war eventually led to radical Islamic terrorism becoming far more influential than ever before.

So what does this have to do with President-Elect Donald Trump?



Quite simply, Liberalism has won the Culture Wars and hoisted a giant Mission Accomplished banner over the metaphorical aircraft carrier of popular culture.

If you examine the “armies” and “weapons” of the culture wars, liberalism has all of the firepower on its side. Hillary Clinton was endorsed by 167 Hollywood stars, while Donald Trump’s only Hollywood star was vandalized. The newspapers and television news are heavily left-leaning. Almost every high-ranking university has endorsed the liberal “safe spaces” movement, to the point that the University of Chicago attracted widespread attention, and criticism, for rejecting it.

During the 2016 election cycle, many pundits on both sides of the aisle declared that the Culture War was over, liberals had won, and therefore the Republican Party was dead. Even Republican pundits agreed that their party was in a meltdown.

Even when the GOP failed to keel over this year, the pundits reassured everyone that “demography is destiny”. The GOP’s unpopularity with young and nonwhite voters would doom it to complete irrelevance within a few years. Much like the “End of History” argument in foreign policy, the “Demographic Destiny” argument assumes that the good guys will always win in the end because we are good and they are bad.

It’s almost reminiscent of the saying that God Is On Our Side, but minus the God. That in and of itself should be a red flag.

As we all found out, the God of Demographics was a no-show at the ballot box on November 8th. Despite criticizing and sometimes insulting Hispanic Americans, Trump won more Latino votes than Mitt Romney. And down-ballot candidates outperformed the wildest expectations of the Republican party.

Why did this happen?


 


Liberalism won the Culture War. Like US tanks rolling through Baghdad with F-15s circling overhead, liberalism won in crushing, annihilating fashion. And just like Dubya, they won the war so easily that they completely forgot about the need to win the ensuing peace.

When the US took over Iraq in 2003, we made the infamous mistake of “de-Baathification”. Believing that Saddam Hussein’s old Baath party was the root of all evils, the US-led occupation completely dismantled anything and anyone that may have been linked to the old party. This generated an immense amount of ill-will, plus overall chaos and disorganization, all of which provided a fertile field for the later rise of ISIS.

In an analogous move, victorious liberal culture warriors have demanded a level of political purity that is grossly unsustainable. When Yale professor Erika Christakis wrote an email about Halloween costumes in October 2015, the ensuing protests led to her resignation from Yale. A New York University professor was placed on leave and questioned about his mental health when he expressed support for Donald Trump.

Worse yet, during this election cycle, many people talked about the white working class in an absolutely demeaning manner. In the same way that culturally-insensitive Americans assumed that recalcitrant Iraqis must have been “derka derka Muhammad jihad”, culturally-insensitive big city elites jumped to the conclusion that Trump-supporting whites must be an irredeemably racist and sexist basket of deplorables.


You’re not going to convince anyone to vote for Hillary (or even to stay home rather than vote for Trump) by calling them deplorable. You’re doing the exact opposite. So many people were turned off by liberal tactics and messaging that they voted for Trump despite worrying that he was unqualified and unfit for the Presidency.

As Mark Lilla wrote in the New York Times, liberalism has become “largely expressive, not persuasive.” I wish that more liberals would take this to heart. I agree with a lot of liberal ideals, but I find it devilishly frustrating to watch liberal political statements land like a drone strike in Mosul. If you win a battle and kill the enemy’s troops, but you create three times as many enemies as before, you haven’t won a battle at all.

Just like the US government killed Saddam only to empower the rise of ISIS, liberalism may have killed off Cheney and McCain only to empower the rise of Trump and Bannon.


So this brings us to the question of, “What now?”

I think it’s simple. We need to stop being so damn expressive and start being more persuasive. A lot of high-profile liberal actions seem like they were done without any regard for whether they would win more supporters than detractors.

When Ruth Bader Ginsburg criticized Colin Kaepernick for protesting, it wasn’t because she disagreed with the message, a protest against police brutality. She was criticizing the way the message was delivered. When people see a political messenger disrespect the flag, a large percentage of Americans won’t even listen to the message. They’ll simply assume that we are wrong.

Sure, you may believe that it’s silly to get all wee-wee’d up about perceived disrespect to the flag. You may even be factually correct. But if millions of Americans are already upset, mocking them for over-sensitivity will not win any friends.

I completely agree that police brutality is a terrible problem. It’s one of many factors contributing to racial and economic inequality. But those of us who are pro-police-reform can’t possibly win a national debate by using a strategy that inspires two opponents for every one supporter.

A more persuasive strategy would be to highlight the areas where community policing has worked. Draw more attention and charity dollars to events like police-community cookouts and other goodwill-building measures. Have police-reform liberals and police-reform conservatives sit down and come to an agreement on best practices. Yeah, this strategy is hard work. No, it won’t draw nearly as many television eyeballs as a football-stadium protest. But it’s better to gain 100 friends and 0 enemies than 1,000 friends and 2,000 enemies.

* * * * *

This strategy doesn’t just apply to one issue. It should apply to all political statements, campaigns, and causes. Before you get all fired up by a cause that you agree with… go around and listen to people who disagree with you.

You’ll quickly get a sense of which political messages generate sympathy, and which ones provoke resentment, hostility, or even hatred. Don’t go around baiting the “deplorables” with the latter, no matter how satisfying it may feel to win arguments with obviously irrational people. In the end you’re just creating more resentment and hatred.

Liberalism has won the Culture War in the USA. Gay marriage went from being unspeakable to being supported by a majority of Americans. Marijuana legalization has made progress in a large number of states. Americans are much less sympathetic toward crony-capitalism and abusive lending than before the Great Recession. And we are much more skeptical of the military-industrial complex and military adventurism.

But if liberals continue to behave like a victorious occupying force, clamping down on dissent with heavy-handed shame-and-blame tactics… the victorious Culture War will drag on into a cultural quagmire of mistrust and anger. (as it already has in 2016)


It’s time to stop dropping Hellfire Missiles and MOABs on the Culture War. It’s time to focus on winning hearts and minds.

Why Neuropsych Studies are Big Liars

Bad Science Of The Day:

Why Big Liars Often Start Out as Small Ones

I came across this article in the “Science” section of the New York Times. It is a link to a Nature Neuroscience paper out of University College London, which amazingly enough appears to have free fulltext. Naturally, I pulled up the actual article and spent quite some time trying to make heads or tails of it. Sadly, it wasn’t worth the time.


The original article, as well as the NYT piece, makes the very plausible claim that the human brain desensitizes itself to dishonesty in the same way that you become desensitized to bad smells. So slimy corporate executives, crooked politicians, and hustling street vendors aren’t actually trying to lie and cheat. They’ve just gone nose-blind to the stink of their own deception.

That’s certainly a plausible hypothesis, and it passes the Bayesian common-sense test. The problem is, after reading the Nature Neuroscience article, I have a hard time washing away the stink of their poor methodology. It smells like an Unreproducible Neuropsych Study, suffering from many of the genre’s common Bad Habits:

* Very small n
* Really stretching it with experimental design
* Really stretching it with synthetic endpoints
* Running minimally-bothersome trial stimuli on subjects stuck in a highly-bothersome fMRI scanner
* Data-torturing statistical methods
* Shoehorning hard numerical data into a Touchy Feely Narrative

***
First of all, their subjects were 25 college students with an average age of 20. I can understand only having 25 subjects, as it’s not exactly cheap or easy to recruit people into fMRI neuropsych experiments. But they actually scanned 35 kids; 10 of them caught on to the trial design and were excluded.

Really? One third of their subjects “figured out” the trial and had to be excluded? Actually, it was probably more than that; only one-third admitted to figuring out the trial design. For a study about deception, the researchers sure were terrible at deceiving their test subjects.

Alanis Morissette would be proud of the irony, as would Iron Deficiency Tony Stark.

***
The experimental design was questionable as well. The researchers used the Advisor-Estimator experiment, a commonly cited psychological model of Conflict of Interest.

Normally an advisor-estimator experiment involves a biased advisor (who is rewarded for higher estimates) assisting an unbiased estimator (who is rewarded for accurate estimates).

This is a great surrogate model for real-world conflicts of interest, like consultants who make more money if they convince you to buy ancillary services. But it seems like a terrible surrogate for deception. As the experimenters themselves noted, there was no direct personal interaction between the subject and the estimator, no actual monetary stakes involved, and no risk of the subject being caught or punished for lying.
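If you want to see just how low-stakes this setup is, here’s a minimal sketch of the advisor-estimator incentive structure in Python. The payoffs are my own illustrative assumptions, not the parameters from the paper:

```python
import random

def advisor_estimator_round(true_value):
    """One round of a hypothetical advisor-estimator game.

    The advisor (our fMRI subject) sees the true value and sends advice;
    the advisor is paid for inflating the estimate, the estimator for
    accuracy. Payoffs are illustrative assumptions, not the paper's.
    """
    inflation = random.uniform(0, 10)     # self-serving skew
    advice = true_value + inflation
    estimate = advice                     # the estimator has no better info

    advisor_payoff = inflation            # rewarded for higher estimates
    estimator_payoff = -abs(estimate - true_value)  # rewarded for accuracy
    return advice, advisor_payoff, estimator_payoff

advice, a_pay, e_pay = advisor_estimator_round(true_value=100.0)
print(f"advice={advice:.1f}  advisor={a_pay:+.2f}  estimator={e_pay:+.2f}")
# Every unit of inflation is a direct transfer from the estimator's
# payoff to the advisor's -- a clean conflict of interest, but a very
# low-stakes form of "deception".
```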

Worse yet, the magnitude of deception involved is incredibly minimal: skewing an estimate by a few pounds in the hopes of being paid a pound or two. That’s a minimal level of emotional manipulation of the subjects. I don’t know about British college kids, but I’d be much more emotionally disturbed by the fact that I’m stuck in an fMRI scanner.

Radiographic measurement, as with photographic image quality, is all about signal to noise ratio. In this case the emotional “signal” (distress caused by lying) is tiny compared to the ambient emotional “noise”.

***
Things get really silly when you read their composite endpoint, something called “Prediction beta”. It appears to be a statistical mess: a 2nd-order metric divided by a 2nd-order metric and averaged into something that resembles a correlation coefficient but is numerically less than 0.1.

Somehow this was statistically significant at p=0.021. But then you read that the authors also tested a crapload of other brain regions, and none of them were nearly as “predictive” as the amygdala. That’s a textbook case of multiple-comparisons data torturing, and it means that their p-values should have been Bonferroni’d into oblivion. The significance threshold shouldn’t have been 0.05, it should have been much, much lower.
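To make the multiple-comparisons point concrete, here’s a back-of-the-envelope sketch in Python. The number of regions tested is an assumption on my part, but the arithmetic is the same for any reasonably large count:

```python
# Toy illustration of the multiple-comparisons problem. The number of
# regions tested here is my assumption; the paper's exact count differs.
n_regions = 20
alpha = 0.05
p_amygdala = 0.021          # the reported amygdala p-value

# Bonferroni: to keep the family-wise error rate at alpha, each
# individual test must clear alpha / (number of tests).
bonferroni_threshold = alpha / n_regions
print(f"corrected threshold = {bonferroni_threshold:.4f}")           # 0.0025
print(f"survives correction? {p_amygdala < bonferroni_threshold}")   # False

# And with 20 independent tests of pure noise, you'd still expect
# "significance" somewhere most of the time:
p_any_false_positive = 1 - (1 - alpha) ** n_regions
print(f"chance of at least one false positive = {p_any_false_positive:.0%}")  # ~64%
```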

***
When all is said and done, the authors should be congratulated for taking a common-sense anecdote (“Small lies lead to bigger ones”) and spending an immense amount of time and money coming up with super-unconvincing scientific data to back it up.

I imagine their next Amazing Rigorous Neuro-Psycho-Radiology trial will demonstrate, after testing twenty hypotheses with thirty different regressions, a borderline-statistically-significant correlation between insufficient parental affection and abusive bullying behavior.

Bullcrap like this is why common-sense driven people are losing their faith in science.

Is America Already Socialist?


Socialism has been a hot topic in the 2016 election cycle. Bernie Sanders has drawn an unexpected amount of support, and he is promoting something called “Democratic Socialism” in the US. In response, POLITICO Magazine published an anti-socialist polemic titled “How Did America Forget What ‘Socialist’ Means”.

This brings up the obvious question: what exactly does the word ‘socialist’ mean? Well, you could turn to various dictionaries. There’s the Oxford Online Dictionary, Merriam-Webster, Random House, and Wikipedia. They’re each slightly different, but basically: socialism is a system where the means of production are owned by everyone.

The POLITICO article references socialist systems in Cuba, China, Vietnam, Laos and North Korea, as well as the old Soviet Union. All of these countries followed a Marxist concept of socialism, in which private property is outlawed and the government owns all of the means of production. Based on this definition, Bernie Sanders is not a socialist. Bernie has loudly disavowed any plans to nationalize the corporations or outlaw private ownership of capital. This is fortunate, as the POLITICO polemic is absolutely correct about one thing. Outright nationalization has been a disaster in every country that’s attempted it.


However, European and Canadian socialists haven’t nationalized the corporations either. (except for healthcare, which Bernie would nationalize as well) Despite the fact that the Danes do not describe themselves as ‘socialist’, Bernie Sanders has cited Denmark, Sweden and Norway as shining exemplars of democratic socialism. This requires a definition of socialism far different from the Marxist ‘state control of the means of production’.

In a country like Denmark, Sweden or Norway, the government and labor unions have a great degree of control over corporate practices such as hiring, work hours, wages, and pensions. This follows the spirit of the word ‘socialism’ by giving the public a sense of ownership in business decision-making. However, it avoids actual public ownership of businesses, which is the dictionary definition of ‘socialism’. So it’s entirely reasonable to say that Denmark is not a socialist country, but it’s also reasonable to say that Denmark is a socialist country. (as Bernie does)

Many economists have argued that under this relaxed definition of socialism, the US is every bit as socialist as Europe. They rightfully point out that the US has a higher regulatory burden than most European countries – especially when it comes to licensure, registration, permitting, and tax laws. These complicated and expensive-to-follow laws are a method of public control of private businesses, so under the relaxed definition they are ‘socialism’. And these laws cast a powerful shadow on American businesses. The World Bank rates the US’s Ease of Doing Business lower than Denmark, the UK and New Zealand. The Heritage Economic Freedom Index rates the US and Denmark roughly the same. If capitalism is supposed to stand for freedom of doing business, the US is no more capitalist than Denmark.

The Danish regulatory regime has a strong focus on ‘fair distribution of wealth’, something that American socialists envy greatly. In comparison, US regulatory bodies are non-redistributive by nature, largely because socialism has been a toxic word in the US for so many decades. Medicare was prohibited from negotiating drug prices for exactly this reason. Instead of addressing inequalities, our unfree and uncapitalist regulatory bodies mostly serve narrow special interest groups. The RFS corn ethanol standard was supposed to benefit the environment, yet most environmentalists believe it to be harmful. Despite this fact, very few politicians are willing to run against the Big Corn lobby.


You could accurately describe the US economic system as being ‘capitalist in name only’, or CINO. You could also say that the difference between the US and Denmark is that Denmark places government regulators in corporate boardrooms, while the US places corporate executives in government regulatory committees.

Both conservative and liberal groups have written many jeremiads about “regulatory capture”, the tendency for government regulators to serve special interests instead of regular citizens. Established businesses push restrictive licensure laws to prevent competitors from setting up shop. Megabank executives pressure their friends at the Federal Reserve to carve out exceptions to banking laws. Car dealerships twist state laws to prevent competitors from entering the state. Environmental agencies willfully ignore mass poisonings. Startup businesses run into ridiculous restrictions all across the US.


Unfortunately, while almost everyone can agree that corrupt regulatory bodies are a major problem in the US, there is very little agreement on how to fix this problem. Some people believe that regulatory bodies can be improved by making the regulations more strict and punishing those who game the system. Others believe that regulatory bodies can be fixed by putting ‘the right people’ in charge – someone altruistic and incorruptible.

Libertarians believe in George Stigler‘s theory of regulatory capture – which is that regulatory capture is inevitable. No matter how strict the rules or how well-intentioned the personnel, corruption can only increase over time. This is because there is a strong financial incentive for corrupt individuals to influence a regulatory body, but there is a weak or nonexistent incentive for righteous individuals to fight back. This may sound like a nihilistic theory, but it certainly seems consistent with the present-day US economy. According to this theory, any attempt to “purify” corrupt agencies will have at most a temporary effect. Instead, the best antidote to corruption is to limit the number of levers and knobs that a corrupt bureaucrat could possibly touch. This means keeping regulations as simple as possible, eliminating special incentives, and cutting down on subjective discretion as much as possible.

Steve Forbes famously said that “Capitalism is the world’s greatest economic success story.” Unfortunately, it is also the world’s greatest political failure story. Over the course of the 20th century, capitalist economic systems repeatedly triumphed over communist economic systems in productivity and wealth. Yet at the same time capitalism itself has degenerated into the corrupt system of CINO-ism.

How, or if, capitalism can be saved may be the most important question of the 21st century.

Unless robots decide to kill all of mankind. That might be slightly more important. Until then, capitalism is the most important question.

Was Malthus Right?

http://www.pbs.org/newshour/making-sense/world-woe-malthus-right/

I ran across this very interesting PBS article recently (link above). It is an excellent summary of Malthusian philosophy that got me musing about Malthusianism and public policy.

Reverend Thomas Malthus first published his theories in the late 18th century, a time of dramatic social upheaval. The might of England had fallen short against the rebellious colonies, while the Ancien Régime had lost its head to the rebellious Jacobins. The only thing certain in this era was uncertainty.

Against this backdrop, Malthus proclaimed that there was a finite quantity of resources on Earth, and that the human population would always proliferate until those resources were consumed. Once the resources are exhausted, the world is doomed either to widespread famine or violence. If the overall resource level is increased by social or technological developments, humans will simply proliferate to a larger population and our overall misery will remain unchanged.

Malthus wrote that the median income of the common folk, expressed in the amount of food (pounds of wheat) they could afford, had remained constant from prehistoric times to the end of the 18th century – and this number was barely enough food to survive. The central dogma of Malthusian belief was that increasing living standards led to higher populations which led to decreasing living standards, causing a long-term equilibrium of famine and poverty.
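The whole feedback loop fits in a few lines of Python. This is a toy model with made-up parameters, not calibrated history, but it shows why the equilibrium always sits at subsistence:

```python
# A toy Malthusian trap. All parameters are illustrative assumptions.
food_supply = 1000.0        # food produced per year (arbitrary units)
population = 50.0
subsistence = 10.0          # food per person needed to survive

for generation in range(200):
    living_standard = food_supply / population
    # Above subsistence the population grows; below it, it shrinks.
    growth_rate = 0.05 * (living_standard - subsistence) / subsistence
    population *= 1 + growth_rate

print(f"living standard after 200 generations: {food_supply / population:.2f}")
# Converges to the subsistence level (10.0). Doubling food_supply just
# doubles the equilibrium population -- misery per person is unchanged.
```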

Malthus believed that this negative feedback cycle could only be broken if the whole world decided to have fewer children. In an era where reliable contraception was nonexistent and many children died at a young age, this must have sounded as loony as putting a man on the moon.

Malthus also suggested that any large-scale charity (such as social welfare programs) would prove useless or harmful in the long run. According to Malthusian dynamics, the only thing keeping poverty in check is the death rate of poor people. Therefore, anything you did to help poor people would only cause more people to become poor. This part of his philosophy was attractive to an aristocracy terrified of the proletariat mob at their gates. As such, 19th century Malthusianism was staunchly conservative.


 

By the time of World War II, every civilized country had major social welfare programs in place. Thus, the “charity is harmful” portion of Malthusian philosophy was largely ignored (as it remains to this day). Instead, 20th century Malthusians focused on the importance of population control. In the pre-WWII era this often meant eugenics and forced sterilization – the Malthusian Belt of Brave New World. Again, this placed Malthusianism firmly on the conservative end of the political spectrum.

Adolf Hitler proceeded to Godwin the eugenics movement, taking it to its most horrific extreme and making it unmentionable in polite society. However, a pharmaceutical innovation revived interest in Malthus – The Pill. Oral contraceptives allowed a new generation to have kids only when they wanted to. Birth control was immediately opposed by the religious right, so Malthusian philosophy was suddenly liberal. This right-to-left shift was completed when many early environmentalists started preaching Malthusian population control as a way to decrease environmental impact.

Malthus believed that food production was the crucial limiting factor for population growth. The Earth had a “carrying capacity”, a maximum number of mouths that the planet could feed. Back in the 1950s and 1960s, food was the central dogma of Malthusian environmentalism. In The Population Bomb (1968), Paul Ehrlich stated that hundreds of millions of people would starve to death by the end of the 1970s. He suggested putting contraceptives in the water supply or in staple foods, while noting the sociopolitical impossibility of doing so.

Instead, a social and technological revolution occurred. Basic farming techniques such as irrigation, fertilizers and pesticides spread from the First World to the Third. New crop cultivars, developed first by conventional breeding and later by genetic modification, massively increased farm yields. Food prices dropped so low that many industrialized countries had to pay farmers not to farm. Even as the human population of Earth increased from roughly one billion in Malthus’s day to over 7 billion, Malthus’s prediction of widespread food shortages never came true.


 

A funny thing happened between the 1970s and now. Populations leveled off and started to decline in Europe, Russia, Japan, and among non-Hispanic whites in the USA. This happened despite the fact that an increasing world population had not triggered any horrific famines, wars or plagues. It also happened in the absence of any draconian measures such as Ehrlich’s hypothetical contraceptive water supply. Economists coined the phrase “demographic-economic paradox” to describe the decreasing fertility among wealthy socioeconomic groups. What public policy triumph allowed population control to finally happen? Widespread access to affordable contraception, a remedy far easier to swallow than forced sterilization.

The success of birth control could be seen as the ultimate confirmation of Malthus’s thesis that limiting the population would improve quality of life. It has undoubtedly broken the Malthusian cycle of “increased living standards -> increased birth rate -> decreased living standards”. Recent predictions suggest that human population will peak in the mid-21st century and then decline. This predicted peak doesn’t happen due to food shortages, but because humans are choosing to have fewer children. Those children will not be limited to Malthus’s “14 pounds of wheat”, they will have much greater access to food and material goods.

Reverend Malthus’ ultimate objective was to decrease the worldwide fertility rate, and by that measure he has been wildly successful. What he could not have foreseen was the method of this success. Malthusian doctrine gave birth to numerous population-limiting schemes over the centuries, many of which were impractical or inhumane. In the end, the global fertility decline occurred thanks to affordable contraception. Billions of human beings chose to have fewer children. No one forced them to do so. (except in China)

I wish that more policy thinkers would draw a lesson from this part of history. You can craft onerous laws to change people’s behavior, and they will fight you every step of the way. Or you could give people the freedom to choose. If the change in behavior is truly beneficial, people will gravitate toward it over time – as has happened in every high-income country over the past several decades.


Hacking the Mind Epilogue: Psychosurgery


While we’re on the subject of “hacking the human mind“, it looks like there is renewed interest in psychosurgery. The link goes to an article about deep brain stimulation for alcoholic cravings, PTSD, and depression!

People have been trying to control psychiatric conditions with surgery since the days of the prefrontal lobotomy. Electrical stimulation has the advantages of precision and reversibility. However, as with any neurosurgical procedure, it relies upon localizing an unwanted symptom to a specific location in the brain. For example, deep brain stimulation works for Parkinson’s because the disease is localized to the basal ganglia.

No matter how much funding you throw at electroneurology, it won’t do any good if an unwanted emotion or compulsion is spread out over a large area of the brain. It remains to be seen how well localized things like alcoholism and PTSD are.

Hacking the Human Mind, Pt. 2: Enter Reality

Image courtesy of Emotiv/ExtremeTech.

In the first part of this post, I discuss the concept of “hacking the human mind” in mythology and fiction. Ever since antiquity, many people have tried to improve the human mind and body. The information era has contributed the term “hacking” to the idea of human-improvement. More recently, pop culture has adopted the idea of hacking humanity and turned it into a ubiquitous plot device.

 


Snap Back to Reality
Whoops there goes Gravity

Hollywood has portrayed hacker-like characters as superhumans, shadowy villains or even honest-to-goodness sorcerers. However, hacker culture in real life is a far cry from its fictional portrayal. While wizards and sorcerers jealously guard their knowledge, real-world hackers are famous for sharing knowledge. (especially when they’re not supposed to)

Possibly thanks to the popularity of “hacking the human mind” as an idea, medical researchers have started to promote the so-called hacker ethic. This philosophy holds that decentralized, open-source use of technology can improve the world. Traditional medical research goes through multiple cycles of proposal, review and revision before anything happens. Successes are often published in closed-access journals while failures are often buried. The hacker ethos encourages freewheeling experimentation and open-source sharing among the scientific community.

Among its many innovations, hacker culture has given birth to the idea of medical hackathons. A “hackathon” is a short-duration (often just a weekend), high-intensity multidisciplinary collaboration. During the event, participants make “60 second pitches” to attract other people who might have special skills. For example, a physician with a good idea for telemedicine might go around trying to find a coder who knows about Internet security. Then they could come across a hacker with machine-vision expertise and recruit him to improve their cameras.

Although they occur too quickly to really polish a product or conduct clinical trials, hackathons generate numerous bright ideas that can be worked on later. In a way they are the ultimate brainstorm.


Heroes of the Brainstorm
Harder, Better, Faster, Stronger

Hackathons are undoubtedly coming up with lots of very good ideas. However, even the best medical ideas take a long time to implement. The only ideas that can be implemented immediately are very small pieces of provider-side software. (i.e., enhanced changeover sheets for hospitalists) Anything that touches a patient requires a lengthy process of requests, reviews, and consents before it is ever used… and only then can you figure out whether it is effective.

As of 2014, the medical hackathon simply hasn’t been around long enough to show much of an effect. It’s a bit like a drug in Phase I-Phase II studies: everyone has great hope that it will improve things, but you can’t point to a major innovation that would not have been possible without the hackathon.

Integrating small-scale hackathon products into larger suites of medical software is a much tougher problem. Even the large-vendor EHRs (Epic, Meditech, Cerner) have difficulty communicating with each other, let alone with smaller pieces of software. The greatest problem in healthcare IT is that the so-called “HL7 Standard” isn’t really a standard.

Standard file formats exist so that they can be consistently read by everyone. A PDF looks the same on a PC, Mac, iPhone or Google Glass. A Kindle file (.AZW) is the same on a Kindle, PC or phone. Even medical imaging has a true standard format. Whether your CT scanner is a GE, Philips, or Siemens, when you export DICOM images to another physician, the CT slices will show up exactly the same.

HL7 is not like that at all. In my personal experience, naively transferring documents between two pieces of “HL7-compliant” software results in loss or misinterpretation of some of the data. In order to fix this, you need a highly trained IT expert to create a specialized “connectivity interface”, or sometimes you pay big bucks to purchase such an interface. I am amazed that things are still so difficult in the year 2014.
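For a taste of why, here’s a deliberately simplified, hypothetical HL7 v2 fragment parsed in Python. Both naming conventions in the comments are syntactically valid HL7, which is exactly how naive transfers go wrong:

```python
# A deliberately simplified, hypothetical HL7 v2 PID segment.
msg = "PID|1||12345^^^HOSP_A||DOE^JOHN||19700101|M"

fields = msg.split("|")             # HL7 v2 fields are pipe-delimited
patient_id = fields[3].split("^")[0]
name = fields[5].split("^")         # components are caret-delimited

# System A writes the name as FAMILY^GIVEN; suppose System B expects
# GIVEN^FAMILY. Both readings are syntactically valid HL7, so a naive
# transfer silently swaps first and last names -- the kind of quiet
# data misinterpretation described above.
print(patient_id, name)             # 12345 ['DOE', 'JOHN']
```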

In the field of traditional software design, hackers have benefited from the uniform interoperability of Unix (Linux) for many decades. As of today, healthcare lacks this important feature.

Maybe the hackers could come up with a solution for interoperability?


Big Data: The Rise of the Machines
Thank God it’s not Big Lore, Big Bishop, or Big Terminator

One of the promises of “medical hacking” has been the application of “Big Data” techniques to healthcare. Data analysis in healthcare has always been difficult and often inconsistently performed. Many medical students and residents can tell you about painstaking research hours spent on manual data-entry. Big Data techniques could turn ten thousand med student hours into five minutes of computer script runtime. Unfortunately, to date Big Data has been much less successful in real life.

So far, the two Biggest Data medical innovations have been Google Flu Trends and 23andMe. GFT purports to forecast the severity of the flu season, region by region, based on statistics on flu-related Google searches. 23andMe was originally supposed to predict your risk of numerous diseases and conditions using a $99 DNA microarray (SNP) analysis. Far from being a home run for Big Data, both of these tools are more reminiscent of a strikeout, if not a pick-six.

GFT was billed as a Big Data tool that would vastly improve the accuracy and granularity of infectious disease forecasting. When first introduced in 2008, GFT’s flu predictions were more accurate than any existing source. However, every year it became less and less accurate, until it became worse than simply measuring how many flu cases happened two weeks ago. GFT’s performance degraded so badly, it was described as a “parable of traps in data analysis” by Harvard researchers.
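That humble baseline deserves to be spelled out, because it really is just a couple of lines of Python. The weekly case counts below are made up for illustration:

```python
# Hypothetical weekly flu case counts (made up for illustration).
cdc_cases = [120, 150, 200, 260, 310, 340, 330, 290]

lag = 2
predictions = cdc_cases[:-lag]      # counts from two weeks earlier
actuals = cdc_cases[lag:]

mae = sum(abs(p - a) for p, a in zip(predictions, actuals)) / len(actuals)
print(f"lagged-baseline mean absolute error: {mae:.1f}")
# Any model that can't beat this two-line forecaster on held-out data
# isn't adding information -- it's adding overfitting.
```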

23andMe offered SNP testing of the entire genome, used both for ancestry analysis and disease prediction. Prior to November 2013, the website offered a vast number of predictors ranging from lung cancer to erectile dysfunction to Alzheimer’s dementia to drug side effects. It was held up as an exemplar of 21st-century genomic empowerment, giving individuals access to unprecedented information about themselves for the low, low price of $99.

The problem was, 23andMe never bothered to submit any scientific evidence of accuracy or reproducibility to the Food and Drug Administration. The FDA sent a cease and desist letter, forcing them to stop marketing their product as a predictive tool. They’re still selling their gene test, but they are only allowed to tell you about your ancestry. (not any health predictions) This move launched a firestorm, with some people arguing that the FDA was overstepping or even following “outdated laws“.

However, the bulk of the evidence suggested that 23andMe simply didn’t give accurate genetic info. Some molecular biologists pointed out the inherent flaws in SNP testing, which make it impossible for 23andMe to be usably accurate. Others pointed out that even if accurate, most of the correlations were too weak to have any effect on lifestyle or healthcare. The New England Journal of Medicine concluded that the FDA was justified in issuing a warning, and that “serious dialogue” is required to set standards in the industry. Other commentators were “terrified” by 23andMe’s ability to use your genetic info for secondary studies. After all, how can 23andMe sell genetic tests for $99 when other companies charge thousands? Obviously they didn’t plan to make money from the consumers; instead, 23andMe hoped to make money selling genetic data to drug companies and the rest of the healthcare industry.

In the end, that is my biggest misgiving about medical Big Data. Thanks to social media (this blog included) we have already commoditized our browsing habits, our buying habits, our hobbies and fandoms. Do we really want to commoditize our DNA as well? If so, count me out.


Doctoring the Doctor
Damnit Jim, I’m a doctor, not a hologram!

Another big promise of the “hacker ethos” in medicine is that it could improve physician engagement and enthusiasm for technology. Small decentralized teams of hackers could communicate directly with physicians, skipping the multi-layered bureaucracy of larger healthcare companies.

Many healthcare commentators have (falsely) framed the issue of physician buy-in as a matter of technophobia. Doctors are “stuck in the past“, “Luddites in white coats”, and generally terrified of change. The thing is, it’s just not true. Just look at the speed at which new medical devices are popularized – everything from 4DCTs to surgical robots to neuronavigation units, insulin pumps, AICDs and deep brain stimulators. If physicians saw as much of a benefit from electronic health records (EHRs) as we were supposed to, we would be enthusiastic instead of skeptical.

I believe that EHRs would be in much better shape today if there had never been a Meaningful Use EHR mandate. No one ever improved the state of the art by throwing a 158-page menu of mandates at it. Present-day EHRs care much more about Medicare and other billing rules than they do about doctor or nurse usability.

Back on subject, I do believe that medical hacking has the potential to get physicians more involved in technological innovation. So long as physicians are stuck dealing with massive corporate entities, we can provide feedback and suggestions but they are very unlikely to be implemented. Small-scale collaborations empower doctors with the ability to really change the direction of a project.

Now, not every medical hack will result in something useful. In fact, a lot of hacks will amount to little more than cool party tricks, but some of these hacks will evolve into more useful applications. Some easily-hackable projects may involve documents or files produced by older medical technology. During residency I worked on a research project involving radiation treatment plans from a very old, non-DICOM-compliant system. We quickly discovered that the old CTs were not usable by modern treatment planning software. Fortunately, one of the physicists on our research team was familiar with DICOM. He coded a computer program that inserted the missing DICOM headers into the old CT images, allowing us to import old CTs without any problems.
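As an illustration of that kind of repair, here’s a sketch using the modern pydicom library. (Our physicist wrote his own tool; the specific tags below are placeholder assumptions, since the real fix targeted whichever headers the planning system refused to import without.)

```python
# Sketch of a DICOM header repair with pydicom. Tag choices here are
# placeholder assumptions for illustration.
import pydicom
from pydicom.uid import generate_uid

# force=True lets pydicom read files that lack a proper DICOM preamble.
ds = pydicom.dcmread("old_ct_slice.dcm", force=True)

# Fill in headers the old system never wrote.
if "SeriesInstanceUID" not in ds:
    ds.SeriesInstanceUID = generate_uid()
if "FrameOfReferenceUID" not in ds:
    ds.FrameOfReferenceUID = generate_uid()

ds.save_as("fixed_ct_slice.dcm")
```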

Introducing more hackers to medicine can only increase the number of problems solved by astute coding.


What Happened to Superpowers?
Paging Dr. Manhattan…

The addition of hacker culture to medicine certainly has a lot of potential to improve the everyday practice of medicine. But what happened to the idea of “hacking the human mind” in order to develop super-strength and speed?

On a very rudimentary level, “hacking the mind” improves physical performance every time an athlete grows a beard for the playoffs or wears his college shorts under his NBA uniform. But true hacking should be more sophisticated than mere superstition!

Biofeedback is a common pre-game ritual for various athletes that could be construed as a minor form of “hacking the mind/body”. Dietary habits such as carb loading could also be considered a mild form of hacking. For less legal mind-body hacking you could always turn to performance enhancing drugs.

Speaking of drugs, there’s a long-held belief that people high on drugs (mostly PCP, sometimes meth or bath salts) gain superhuman strength. While the evidence is mostly anecdotal, there’s a plausible medical explanation. The Golgi tendon reflex normally prevents muscles from over-exerting themselves, and it can be suppressed in desperate situations (the “mother lifts a car off her child” scenario). It’s reasonable to assume that some drugs could have a similar effect.

It’s also reasonable to assume that military physicians have spent decades (the entire Cold War for sure) trying to produce a super-strength drug with fewer side effects than PCP. The fact that our entire army doesn’t have the physique of Captain America suggests that those efforts were unsuccessful. Granted, this doesn’t rule out the existence of a super serum that only worked on one guy ever.

Evolutionarily speaking, it is highly implausible that humans would have tremendous physiological potential locked behind some mental gate. If the human body had such great power, our prehistoric ancestors would have needed every ounce of it to outrun or outfight angry lions and hippos and crocs. It would make no sense for humans to have a mental block on our strength. Unless removing that mental block led to instant death or infertility, the first caveman to lose his mental block would be evolutionarily favored over the rest of proto-humanity. Therefore, it’s very unlikely that human performance can be “magically” improved with drugs, meditation or other techniques.


 

So let’s cap off this long ramble with a little teaser on evolution and human strength. This National Geographic feature suggests that early humans directly traded muscle strength for brain power.

http://news.nationalgeographic.com/news/2014/05/140527-brain-muscle-metabolism-genes-apes-science/

What is wrong with this argument?

Hacking the Human Mind: The Other 90% (Pt. 1 of 2)

Image courtesy of Emotiv and ExtremeTech.

Luminous beings are we. Not this crude matter.

“Can the Nervous System be Hacked?” asks a New York Times headline. The article examines recent developments and ongoing research in peripheral nerve stimulation. To its credit, the NYT avoids the rampant sci-fi speculation all too common in biomedical research articles. Which is strange, because according to the Internet the NYT is supposed to reinvent itself for the digital age by turning into BuzzFeed. Guess the Grey Lady hasn’t gone full Upworthy – yet. Fortunately, blending fantasy with reality is what I do. So let’s get to it!

The meme of “hacking the human mind” fascinates me. While the idea of modifying humanity through clever tinkering has been around since time immemorial, it is deeply entrenched in 21st century popular culture. Human-hacking is frequently justified by the myth that humans only use 10% of their brains. If only a hacker could unleash that other 90%, we’d be able to cure disease, boost intelligence, maybe even develop superhuman abilities. In a superhero-dominated Hollywood, “hacking the human mind” and/or “using the other 90%” is used as a convenient excuse for all sorts of ridiculously unrealistic abilities. In the real world of biology and medicine, hacking is used more as a workflow metaphor, encouraging loosely-organized cross-disciplinary teams instead of the rigid hierarchy prevalent in medicine.

In the first of a 2-part series on “Hacking the Human Mind”, I will focus on mythological and fictional influences on the concept of human-hacking. In the second half I will discuss the real-world implications.


Older than Dirt
Powered by Green Energy

As I mentioned, the concept of “hacking the human body” vastly predates the concept of hacking. Since antiquity, numerous martial arts orders have claimed that their training does more than just improve physical fitness and coordination. In traditional Chinese belief, the body has a large number of “energy (Qi) gates” that can be opened by practice, meditation, and/or acupuncture. Variations on this belief are common in fiction, especially Anime. However, the Asian belief in opening the gates of the body is fundamentally different from “hacking”. Traditional Asian techniques draw from mysticism and spirituality, emptying the mind so that the spirit can take control. Hacking is about filling your mind with rigorous logic and calculation. While the outcome may appear “magical”, the process of hacking is strictly scientific. As in the NYT article, in order to control millions of neurons you start by studying 7 neurons at a time.

So what about hacking the body in the scientific tradition? The earliest Western version of “hacking the mind” dates back to 1937, when E.E. Smith’s Lensman series began chronicling a galactic-scale war against aliens strangely reminiscent of Nazis. The Lensmen were genetically superior humans, the product of aeons of selective breeding for psychic powers. Using their Lens as a focus, they could conjure matter, negamatter (antimatter) and energy from their minds. Later on, DC Comics would popularize the concept of a galactic police corps with superpowers based on focusing their imagination through a small trinket. Both of these Western examples are still closer to “magical powers” than to science, although you could argue that there’s no meaningful difference at the galactic scale.


Into the Age of Hackers
Two Hiros and a Stark

The modern concept of “hacking the human mind” could be credited to Neal Stephenson’s Snow Crash. People could contract the Snow Crash virus by viewing a computer graphic, causing them to lose much of their personality and become susceptible to mind control. This was explained by suggesting that ancient Sumerian was an “assembly code of the brain”, capable of re-programming humans on a fundamental level. The ancient sorcerer Enki created a “nam-shub” that prevented all other humans from understanding Sumerian. This protected them from mind control but caused human language to fragment into incomprehensible tongues, an event known as the Tower of Babel. Snow Crash is remarkable for equating the spread of information with that of a virus (in fact, people infected via computer would also transmit viruses in their bloodstream), over a decade before the phrase “going viral” infected the English language. The Snow Crash version of mind-hacking is also notable for its negativity – hacking takes away your free will and doesn’t give you any superpowers. The characters with super-strength or super-speed got those the old-fashioned way: radiation exposure.

The idea of hackers learning the secrets of the human mind in order to gain supernatural abilities is much more recent than Snow Crash. As far as I can tell, the first major work to use this trope was Heroes (2006). Just like Snow Crash, Heroes featured a lovable hero named Hiro. (Yatta!) Mohinder was the first hacker-like character on the show, a geeky fellow who studied supernormals but didn’t actually have superpowers. But we all know that the dominant hacker of Heroes was the brain-dissecting villain Sylar. Sylar personifies the trope of hacker as a selfish, unpredictable criminal, hidden behind layers of secrecy. Like the victims of Snow Crash, Sylar could alter his biology/physiology simply by gaining information (in his case, studying the brains of other superhumans). Unlike a Snow Crash victim, Sylar could control the information that he gained from their brains, a truly gruesome method of increasing his power level.

No mention of human-brain-hacking is complete without mentioning Aldrich Killian of Iron Man 3. He invents the drug Extremis, which can cure disease; grant super-strength, super-speed, and super-durability; and let you light yourself on fire, all with the small risk of exploding like an incredibly powerful bomb. How is Extremis so powerful? Well, Aldrich explains that he “hacked the human genome”, so of course it makes sense. At least, it makes about as much sense as Tony Stark’s arc reactor, and much more sense than Captain America or the Incredible Hulk. (let’s not get started on Asgardians…)


Wrap-Up: Fiction
Less Strange than Reality

I hope you have enjoyed Part 1 of my article on hacking the human mind. In the second part of my article I will discuss the real-world effects of the “hacker ethos” on medical research and practice.

Bruno, Semmelweis, and McCarthy


Declaring war against the Establishment; what is it good for?

Over the past several months, Neil deGrasse Tyson has done a masterful job of narration on “COSMOS: A Spacetime Odyssey”. The very first episode of this show introduced viewers to the historical cosmologist Giordano Bruno. A quick recap:

Giordano Bruno promoted a heliocentric view of the universe way before it was cool. In the 16th century, spouting garbage about the Earth revolving around the Sun was a dangerous heresy. After all, everyone knew the Earth was created at the center of the universe – it says right there in the Bible. Geocentrism was supported by an overwhelming consensus among the educated classes, as well as foolproof scientific evidence – the absence of stellar parallax.

Astronomers have charted the stars since time immemorial, and the “fixed stars” traced the same paths year after year. Any village idiot could rotate an astrolabe around its axis – the Earth – and see for themselves. An astrolabe was precise. You could navigate by an astrolabe. If you placed the Earth anywhere off the central axis, the geometry would fall apart and the damned thing would never work.
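You can check the geometry yourself with a few lines of Python: even for the nearest star, the annual parallax is a tiny fraction of what any pre-telescope instrument could resolve.

```python
import math

AU_KM = 1.496e8                 # Earth-Sun distance in km
LY_KM = 9.461e12                # one light-year in km

def parallax_arcsec(distance_ly):
    """Annual parallax angle, in arcseconds, for a star at distance_ly."""
    return math.degrees(math.atan(AU_KM / (distance_ly * LY_KM))) * 3600

print(f"{parallax_arcsec(4.25):.2f} arcsec")   # nearest star: ~0.77"
# Pre-telescope instruments resolved roughly an arcminute (60 arcsec),
# so the heliocentrists' "missing" parallax was ~80x too small to see.
# It wasn't actually measured until 1838, with telescopes.
```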

So of course Bruno was a heretic. He was imprisoned, tortured and executed by the Church.

Over a decade after Bruno’s death, Galileo Galilei popularized the heliocentric model of the heavens. Galileo was also persecuted, but was allowed to live under house arrest.

Stellar parallax would not be directly observed until two centuries later. By then, the Church had no problem with heliocentricity.


Now that we’re in the mid-19th century, we can look around for our second tragic genius. Dr. Ignaz Semmelweis witnessed the epidemic of fatal childbed fever that was sweeping Europe at the time. The good Doctor became convinced that disease was transmitted by “cadaveric particles” that could be removed by handwashing with chlorinated lime (aka bleach). He performed clinical trials, showing a dramatic improvement in survival with antiseptic handwashing.

Had there been an Affordable Care Act of 1847, it would have made bleach-based handwashing a quality reporting measure. 19th century telegraph operators would have been busy copying “Did you wash your hands with bleach?” into dots and dashes on bronze templates, fully compliant with His Royal Apostolic Majesty’s Meaningfulle-Utilization Decree. Unfortunately for Dr. Semmelweis, Emperor Ferdinand I was too busy being deposed to pass comprehensive healthcare reform.

So Semmelweis did what any good physician would do, if he were an actor playing a physician on a medical television drama. He went around accusing his medical colleagues of being unclean, irresponsible, even “murderers”. The establishment rejected him so violently that he went insane and was imprisoned against his will in a mental asylum. Or was it the other way around?

Over a decade after his death, Semmelweis was finally recognized as correct. Louis Pasteur published the germ theory of contagious disease, which immediately went viral.

Much, much later, people coined the term “Semmelweis Reflex” to describe the human tendency to reject new information.


 

Both Bruno and Semmelweis had a revolutionary idea that contradicted everything the “scientific establishment” believed at the time. Both men decided to fight the establishment despite considerable risk to their health and sanity. In both cases, the clash ended like you’d expect.

This tragic fate has elevated Bruno and Semmelweis to Leonidas status among many people with unpopular beliefs. If Bruno and Semmelweis were crushed under the heel of the establishment, then surely more geniuses were suppressed to the point where we never heard about them. God only knows how many transformative worldviews were lost to mankind thanks to the reactionary mainstream… In fact, any time you see an idea forcefully suppressed, that idea must be true. Otherwise the establishment wouldn’t waste its time on oppression.

Semmelweis has been quoted by a crowd as diverse as anti-vaccination activists, climate change activists, climate change deniers, and Major League Baseball agents. The idea that “conventional thinking is wrong” has obvious appeal to anyone with beliefs just crazy enough to be true. (not least of all the agent representing an athlete who is so much more skilled than what he shows on film, or in workouts, or in interviews)

As you might expect, plenty of nonsense-peddlers quote the Semmelweis Reflex to justify their beliefs.


 

The problem with Bruno and Semmelweis is two-fold:

First, neither one was actually right. Giordano Bruno based his cosmology on speculation and (weird) theology. He wasn’t an astronomer; he didn’t have any evidence, nor did he bother collecting any. Semmelweis based his handwashing practice on the theory of “cadaverous particles”. He didn’t try to explain what cadaverous particles were, how to measure them, or how they fit into our understanding of biology. Both Bruno and Semmelweis stumbled into correct conclusions through methods that were closer to magical thinking than to science.

Second, both men went out of their way to antagonize the establishment. While being a jerk doesn’t justify a painful early death (usually), there’s no doubt that Bruno and Semmelweis did a lot to harm their own causes. Bruno publicly doubted the Trinity, the virgin birth, and the divinity of Christ. He called his fellow friars “asses” and went around claiming to teach people magic. He probably pissed in the holy water too. It’s not a surprise that the Church killed him for his heresies.

Semmelweis wasn’t quite as far out there, but he also did not help his own cause. He performed a controlled trial to demonstrate the efficacy of handwashing with antiseptic technique (good science!) and then firmly tied handwashing to his belief in harmful “cadaverous particles” / “cadaveric matter” (bad science!). When his contemporaries presented him with evidence that sepsis could occur even without a cadaver, Semmelweis mostly ignored them and continued to push his wrong-headed cadaver theory. Semmelweis’s attachment to his pet theory worked against the adoption of his real-life practice of antisepsis. If he’d been a little more flexible on cadaver theory, antiseptic handwashing might have been popularized years before Louis Pasteur, saving hundreds of thousands more lives.

With that in mind, Bruno and Semmelweis can teach us more than just “groupthink = bad”. The fact is, many of the great ideas throughout history were quite unconventional at the time. Before Isaac Newton, the “scientific consensus” held that the heavens and the Earth obeyed entirely different laws of motion. Before Albert Einstein, only madmen believed that movement through space could distort the passage of time. Before Jim Watson, everyone knew that genes were made of proteins and not DNA. All three men were celebrated, not ridiculed for their unconventional genius.

The problem with Bruno and Semmelweis is that they went beyond “unconventional” and into what I’ll call “anti-conventional”. They didn’t just spit in the eye of the establishment, they turned around, dropped their drawers and farted. They pissed off their peers just because they could, and then it turned out that they really couldn’t.

No human is completely immune to ad hominem bias; when someone you dislike presents the facts, your first reflex is suspicion. You search for the deception or manipulation behind his logic, and nit-pick any small flaws in his data. Then you present a rebuttal with your own data and theory, and your opponent quickly sets about refuting your evidence. A demented version of Clarke’s Law takes hold: any science sufficiently politicized is indistinguishable from bullshit.

That’s why even though I agree that anti-vaxxers are dangerously wrong, I also disagree with the strategy of shaming and blaming them (complete with the occasional wave of anti-anti-vax Facebook “Share”s). People with fixed false beliefs are not going to change just because someone tells them how wrong they are. A better strategy is a combination of harm reduction (not making vaccine exemptions trivially easy to get), education (infectious disease is bad, y’all), and limiting the number of public platforms where they can shout nonsense.

It’s very unlikely that any of us can change anyone’s deeply held wrong beliefs, but we can all hope to limit the spread of such beliefs. After all, an idea is the deadliest parasite.

Now if only we had a mental equivalent of handwashing with bleach.