The Culture War Is Iraq


Donald Trump was so unlikable that even some of his own voters were terrified of him. In exit polls, as many as 17% of Trump voters were “concerned or scared” about Trump being President. Amazingly, that didn’t stop them from casting a ballot for Trump, who will enter the White House as the most disliked American President in recent history.

Much ink and many pixels have been spilled over how such a bad candidate could have won a Presidential election. Is America an irredeemably racist and sexist country of deplorables? Or was the election all about trade, jobs, and the dwindling power of labor unions? Maybe it was really all about Hillary Clinton’s political incompetence? Or is the Republican Party really quite strong despite all evidence to the contrary?

Personally, I have a different theory:

The Culture War is Iraq

(And Liberalism is George W Bush)

 


When President George W. Bush invaded Iraq, everyone knew the war was going to be utterly one-sided. The US had all of the weapons, all of the aircraft, all of the satellites. The US had far more troops with far better training and morale. We had a vast coalition of allies, some more enthusiastic than others.

And above all else, the US was equipped with an overweening sense of superiority. History had ended, the West had won, and the sorry barbarians in Iraq just hadn’t realized it yet. Once we overthrew their corrupt and brutal government, surely the people would recognize what a big favor we’d done them. Dick Cheney infamously said that we would be greeted as liberators.

Of course, none of that happened. We won the war in record time, smashing the Baathist government of Saddam Hussein. Baghdad Bob tried to claim that they could fight off the Americans but ended up in the ludicrous position of insisting that “there are no American tanks in Baghdad” while American tanks rolled down the street behind him. Within two months of the invasion, Saddam’s military was annihilated, Saddam himself was in hiding (he would eventually be dragged out of a spider hole), and George W Bush was on an aircraft carrier proclaiming “Mission Accomplished”.

But hindsight shows that the mission never was accomplished. We may have won the war, but we lost the peace, and during the fighting we lost our own moral values. The Iraq war eventually made radical Islamic terrorism far more influential than ever before.

So what does this have to do with President-Elect Donald Trump?



Quite simply, Liberalism has won the Culture Wars and hoisted a giant Mission Accomplished banner over the metaphorical aircraft carrier of popular culture.

If you examine the “armies” and “weapons” of the culture wars, liberalism has all of the firepower on its side. Hillary Clinton was endorsed by 167 Hollywood stars, while Donald Trump’s only Hollywood star was vandalized. The newspapers and television news are heavily left-leaning. Almost every high-ranking university has endorsed the liberal “safe spaces” movement, to the point that the University of Chicago attracted widespread attention, and criticism, for rejecting it.

During the 2016 election cycle, many pundits on both sides of the aisle declared that the Culture War was over, liberals had won, and therefore the Republican Party was dead. Even Republican pundits agreed that their party was in a meltdown.

Even if the GOP failed to keel over this year, they reassured everyone that “demography is destiny”. The GOP’s unpopularity with young and nonwhite populations would doom it to complete irrelevance within a few years. Much like the “End of History” argument in foreign policy, the “Demographic Destiny” argument assumes that the good guys will always win in the end because we are good and they are bad.

It’s almost reminiscent of the saying that God Is On Our Side, but minus the God. That in and of itself should be a red flag.

As we all found out, the God of Demographics was a no-show at the ballot box on November 8th. Despite criticizing and sometimes insulting Hispanic Americans, Trump won more Latino votes than Mitt Romney. And down-ballot candidates outperformed the wildest expectations of the Republican party.

Why did this happen?


 


Liberalism won the Culture War. Like US tanks rolling through Baghdad with F-15s circling overhead, liberalism won in crushing, annihilating fashion. And just like Dubya, they won the war so easily that they completely forgot about the need to win the ensuing peace.

When the US took over Iraq in 2003, we made the infamous mistake of “de-Baathification”. Believing that Saddam Hussein’s old Baath party was the root of all evils, the US-led occupation completely dismantled anything and anyone that may have been linked to the old party. This generated an immense amount of ill will, plus overall chaos and disorganization, all of which provided a fertile field for the later rise of ISIS.

In an analogous move, victorious liberal culture warriors have demanded a level of political purity that is grossly unsustainable. When Yale professor Erika Christakis wrote an email about Halloween costumes in October 2015, the ensuing protests led to her resignation from Yale. A New York University professor was placed on leave and questioned about his mental health when he expressed support for Donald Trump.

Worse yet, during this election cycle, many people talked about the white working class in an absolutely demeaning manner. In the same way that culturally-insensitive Americans assumed that recalcitrant Iraqis must have been “derka derka Muhammad jihad”, culturally-insensitive big city elites jumped to the conclusion that Trump-supporting whites must be an irredeemably racist and sexist basket of deplorables.


You’re not going to convince anyone to vote for Hillary (or even to stay home from Trump) by calling them deplorable. You’re doing the exact opposite. So many people were turned off by liberal tactics and messaging that they voted for Trump despite worrying that he was unqualified and unfit for the Presidency.

As Mark Lilla wrote in the New York Times, liberalism has become “largely expressive, not persuasive.” I wish that more liberals would take this to heart. I agree with a lot of liberal ideals, but I find it devilishly frustrating to watch liberal political statements land like a drone strike in Mosul. If you win a battle and kill the enemy’s troops, but you create three times as many enemies as before, you haven’t won a battle at all.

Just like the US government killed Saddam only to empower the rise of ISIS, liberalism may have killed off Cheney and McCain only to empower the rise of Trump and Bannon.


So this brings us to the question of, “What now?”

I think it’s simple. We need to stop being so damn expressive and start being more persuasive. A lot of high-profile liberal actions seem to have been taken without any regard to whether they would win more supporters than detractors.

When Ruth Bader Ginsburg criticized Colin Kaepernick for protesting, it wasn’t because she disagreed with the message, a protest against police brutality. She was criticizing the way the message was delivered. When people see a political messenger disrespect the flag, a large percentage of Americans won’t even listen to the message. They’ll simply assume that we are wrong.

Sure, you may believe that it’s silly to get all wee-wee’d up about perceived disrespect to the flag. You may even be factually correct. But if millions of Americans are already upset, mocking them for over-sensitivity will not win any friends.

I completely agree that police brutality is a terrible problem. It’s one of many factors contributing to racial and economic inequality. But those of us who are pro-police-reform can’t possibly win a national debate by using a strategy that inspires two opponents for every one supporter.

A more persuasive strategy would be to highlight the areas where community policing has worked. Draw more attention and charity dollars to events like police-community cookouts and other goodwill-building measures. Have police-reform liberals and police-reform conservatives sit down and come to an agreement on best practices. Yeah, this strategy is hard work. No, it won’t draw nearly as many television eyeballs as a football-stadium protest. But it’s better to gain 100 friends and 0 enemies than 1,000 friends and 2,000 enemies.

* * * * *

This strategy doesn’t just apply to one issue. It should apply to all political statements, campaigns, and causes. Before you get all fired up by a cause that you agree with… go around and listen to people who disagree with you.

You’ll quickly get a sense of which political messages generate sympathy, and which ones provoke resentment, hostility, or even hatred. Don’t go around baiting the “deplorables” with the latter, no matter how satisfying it may feel to win arguments with obviously irrational people. In the end you’re just creating more resentment and hatred.

Liberalism has won the Culture War in the USA. Gay marriage went from being unspeakable to being supported by a majority of Americans. Marijuana legalization has made progress in a large number of states. Americans are much less sympathetic toward crony-capitalism and abusive lending than before the Great Recession. And we are much more skeptical of the military-industrial complex and military adventurism.

But if liberals continue to behave like a victorious occupying force, clamping down on dissent with heavy-handed shame-and-blame tactics… the victorious Culture War will drag on into a cultural quagmire of mistrust and anger, as it already has in 2016.


It’s time to stop dropping Hellfire missiles and MOABs on the Culture War. It’s time to focus on winning hearts and minds.


Why Neuropsych Studies are Big Liars

Bad Science Of The Day:

Why Big Liars Often Start Out as Small Ones

I came across this article in the “Science” section of the New York Times. It links to a Nature Neuroscience paper out of University College London, which amazingly enough appears to have free full text. Naturally, I pulled up the actual article and spent quite some time trying to make heads or tails of it. Sadly, it wasn’t worth the time.


The original article, as well as the NYT piece, makes the very plausible claim that the human brain desensitizes itself to dishonesty in the same way that you become desensitized to bad smells. So slimy corporate executives, crooked politicians, and hustling street vendors aren’t actually trying to lie and cheat. They’ve just gone nose-blind to the stink of their own deception.

That’s certainly a plausible hypothesis, and it passes the Bayesian common-sense test. The problem is, after reading the Nature Neuroscience article, I have a hard time washing away the stink of their poor methodology. It smells like an Unreproducible Neuropsych Study, suffering from many of their common Bad Habits:

* Very small n
* Really stretching it with experimental design
* Really stretching it with synthetic endpoints
* Running minimally-bothersome trial stimuli on subjects stuck in a highly-bothersome fMRI scanner
* Data-torturing statistical methods
* Shoehorning hard numerical data into a Touchy Feely Narrative

***
First of all, their subjects were 25 college students with an average age of 20. I can understand only having 25 subjects, as it’s not exactly cheap or easy to recruit people into fMRI neuropsych experiments. But they actually scanned 35 kids; 10 of them caught on to the trial design and were excluded.

Really? One third of their subjects “figured out” the trial and had to be excluded? Actually, it was probably more; one third is just how many admitted to figuring out the trial design. For a study about deception, the researchers sure were terrible at deceiving their test subjects.

Alanis Morissette would be proud of the irony, as would Iron Deficiency Tony Stark.

***
The experimental design was questionable as well. The researchers used the Advisor-Estimator experiment, a commonly cited psychological model of Conflict of Interest.

Normally an advisor-estimator experiment involves a biased advisor (who is rewarded for higher estimates) assisting an unbiased estimator (who is rewarded for accurate estimates).
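If you’ve never seen the paradigm, here’s a toy version of a single round in Python. (All of the payoff formulas are invented for illustration; they’re not the ones from the paper.)

```python
# Toy advisor-estimator round. The advisor sees the true value (e.g. coins
# in a jar) and sends advice; the estimator only sees the advice.
# Payoff formulas below are invented for illustration.

true_value = 100.0
skew = 15.0                      # how much the advisor inflates the number

advice = true_value + skew       # biased advisor: paid more for higher estimates
estimate = advice                # a trusting estimator just follows the advice

advisor_payoff = 0.10 * estimate                                      # rises with the estimate
estimator_payoff = max(0.0, 10.0 - 0.2 * abs(estimate - true_value))  # rises with accuracy

print(f"advice = {advice:.0f}")
print(f"advisor earns {advisor_payoff:.2f}, estimator earns {estimator_payoff:.2f}")
# The conflict of interest is baked into the payoffs: inflating the advice
# helps the advisor and hurts the estimator.
```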

This is a great surrogate model for real-world conflicts of interest, like consultants who make more money if you are convinced to buy ancillary services. But it seems like a terrible surrogate for deception. As the experimenters themselves noted, there was no direct personal interaction between the subject and the estimator, no actual monetary stakes involved, and no risk of the subject being caught or punished for lying.

Worse yet, the magnitude of deception involved is incredibly minimal: skewing an estimate by a few pounds in the hopes of being paid a pound or two. That’s a minimal level of emotional manipulation of the subjects. I don’t know about British college kids, but I’d be much more emotionally disturbed by the fact that I’m stuck in an fMRI scanner.

Radiographic measurement, as with photographic image quality, is all about signal-to-noise ratio. In this case the emotional “signal” (distress caused by lying) is tiny compared to the ambient emotional “noise”.
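To put rough numbers on that, here’s a back-of-the-envelope simulation of how often a small “distress” signal reaches p < 0.05 with only 25 subjects. (The effect size is completely made up for illustration.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 25            # subjects, as in the study
effect = 0.2      # hypothetical distress signal, in units of the noise SD (made up)
trials = 10_000   # simulated replications of the experiment

significant = 0
for _ in range(trials):
    sample = rng.normal(loc=effect, scale=1.0, size=n)
    _, p = stats.ttest_1samp(sample, 0.0)   # does the signal stand out from zero?
    if p < 0.05:
        significant += 1

print(f"power at n={n}, effect={effect} SD: {significant / trials:.2f}")
# A tiny signal in big noise yields power of roughly 0.16 -- most runs of this
# experiment would find nothing, and the "hits" tend to overestimate the effect.
```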

***
Things get really silly when you read their composite endpoint, something called “Prediction beta”. It appears to be a statistical mess: a 2nd-order metric divided by a 2nd-order metric and averaged into something that resembles a correlation coefficient but is numerically less than 0.1.

Somehow this was statistically significant at p=0.021. But then you read that the authors also tested a crapload of other brain regions, and none of them were nearly as “predictive” as the amygdala. That’s a textbook case of multiple-comparisons data torturing, and it means that their p-values should have been Bonferroni’d into oblivion. The significance threshold shouldn’t have been 0.05, it should have been much, much lower.
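For reference, the arithmetic of a Bonferroni correction is trivial. (The comparison counts below are hypothetical, since I’m not reproducing the paper’s exact count of tested regions here.)

```python
# Bonferroni correction: divide the family-wise alpha by the number of
# comparisons performed. Comparison counts below are hypothetical.
alpha = 0.05
p_observed = 0.021   # the reported amygdala result

for n_comparisons in (1, 5, 10, 20):
    threshold = alpha / n_comparisons
    verdict = "significant" if p_observed < threshold else "NOT significant"
    print(f"{n_comparisons:2d} comparisons -> threshold {threshold:.4f}: {verdict}")
# Even at 5 comparisons the threshold falls to 0.01, and p = 0.021 fails.
```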

***
When all is said and done, the authors should be congratulated for taking a common-sense anecdote (“Small lies lead to bigger ones”) and spending an immense amount of time and money coming up with super-unconvincing scientific data to back it up.

I imagine their next Amazing Rigorous Neuro-Psycho-Radiology trial will demonstrate, after testing twenty hypotheses with thirty different regressions, a borderline-statistically-significant correlation between insufficient parental affection and abusive bullying behavior.

Bullcrap like this is why common-sense driven people are losing their faith in science.

Is America Already Socialist?


Socialism has been a hot topic in the 2016 election cycle. Bernie Sanders has drawn an unexpected amount of support, and he is promoting something called “Democratic Socialism” in the US. In response, POLITICO Magazine published an anti-socialist polemic titled “How Did America Forget What ‘Socialist’ Means”.

This brings up the obvious question: What exactly does the word ‘socialist’ mean? Well, you could turn to various dictionaries. There’s the Oxford Online Dictionary, Merriam-Webster, Random House, and Wikipedia. They’re each slightly different, but the gist is the same: socialism is a system where the means of production are owned by everyone.

The POLITICO article references socialist systems in Cuba, China, Vietnam, Laos and North Korea, as well as the old Soviet Union. All of these countries followed a Marxist concept of socialism, in which private property is outlawed and the government owns all of the means of production. Based on this definition, Bernie Sanders is not a socialist. Bernie has loudly disavowed any plans to nationalize the corporations or outlaw private ownership of capital. This is fortunate, as the POLITICO polemic is absolutely correct in one thing. Outright nationalization has been a disaster in every country that’s attempted it.


However, European and Canadian socialists haven’t nationalized the corporations either (except for healthcare, which Bernie would nationalize as well). Despite the fact that the Danes do not describe themselves as ‘socialist’, Bernie Sanders has cited Denmark, Sweden and Norway as shining exemplars of democratic socialism. This requires a definition of socialism far different from the Marxist ‘state control of the means of production’.

In a country like Denmark, Sweden or Norway, the government and labor unions have a great degree of control over corporate practices such as hiring, work hours, wages, and pensions. This follows the spirit of the word ‘socialism’ by giving the public a sense of ownership in business decision-making. However, it avoids actual public ownership of businesses, which is the dictionary definition of ‘socialism’. So it’s entirely reasonable to say that Denmark is not a socialist country, but it’s also reasonable to say that Denmark is a socialist country (as Bernie does).

Many economists have argued that under this relaxed definition of socialism, the US is every bit as socialist as Europe. They rightly point out that the US has a higher regulatory burden than most European countries – especially when it comes to licensure, registration, permitting, and tax laws. These complicated and expensive-to-follow laws are a method of public control of private businesses, so under the relaxed definition they are ‘socialism’. And these laws cast a powerful shadow on American businesses. The World Bank rates the US’s Ease of Doing Business lower than Denmark’s, the UK’s and New Zealand’s. The Heritage Economic Freedom Index rates the US and Denmark roughly the same. If capitalism is supposed to stand for freedom of doing business, the US is no more capitalist than Denmark.

The Danish regulatory regime has a strong focus on ‘fair distribution of wealth’, something that American socialists envy greatly. In comparison, US regulatory bodies are non-redistributive by nature, largely because socialism has been a toxic word in the US for so many decades. Medicare was prohibited from negotiating drug prices for exactly this reason. Instead of addressing inequalities, our unfree and uncapitalist regulatory bodies mostly serve narrow special interest groups. The RFS corn ethanol standard was supposed to benefit the environment, yet most environmentalists believe it to be harmful. Despite this fact, very few politicians are willing to run against the Big Corn lobby.


You could accurately describe the US economic system as being ‘capitalist in name only’, or CINO. You could also say that the difference between the US and Denmark is that Denmark places government regulators in corporate boardrooms, while the US places corporate executives in government regulatory committees.

Both conservative and liberal groups have written many jeremiads about “regulatory capture“, the tendency for government regulators to serve special interests instead of regular citizens. Established businesses push restrictive licensure laws to prevent competitors from setting up shop. Megabank executives pressure their friends at the Federal Reserve to carve out exceptions to banking laws. Car dealerships twist state laws to prevent competitors from entering the state. Environmental agencies willfully ignore mass poisonings. Startup businesses run into ridiculous restrictions all across the US.


Unfortunately, while almost everyone can agree that corrupt regulatory bodies are a major problem in the US, there is very little agreement on how to fix this problem. Some people believe that regulatory bodies can be improved by making the regulations more strict and punishing those who game the system. Others believe that regulatory bodies can be fixed by putting ‘the right people’ in charge – someone altruistic and incorruptible.

Libertarians believe in George Stigler‘s theory of regulatory capture – which is that regulatory capture is inevitable. No matter how strict the rules or how well-intentioned the personnel, corruption can only increase over time. This is because there is a strong financial incentive for corrupt individuals to influence a regulatory body, but there is a weak or nonexistent incentive for righteous individuals to fight back. This may sound like a nihilistic theory, but it certainly seems consistent with the present-day US economy. According to this theory, any attempt to “purify” corrupt agencies will have at most a temporary effect. Instead, the best antidote to corruption is to limit the number of levers and knobs that a corrupt bureaucrat could possibly touch. This means keeping regulations as simple as possible, eliminating special incentives, and cutting down on subjective discretion as much as possible.

Steve Forbes famously said that “Capitalism is the world’s greatest economic success story.” Unfortunately, it is also the world’s greatest political failure story. Over the course of the 20th century, capitalist economic systems repeatedly triumphed over communist economic systems in productivity and wealth. Yet at the same time capitalism itself has degenerated into the corrupt system of CINO-ism.

How, or if, capitalism can be saved may be the most important question of the 21st century.

Unless robots decide to kill all of mankind. That might be slightly more important. Until then, capitalism is the most important question.

Was Malthus Right?

http://www.pbs.org/newshour/making-sense/world-woe-malthus-right/

I ran across this very interesting PBS article recently (link above). It is an excellent summary of Malthusian philosophy that got me musing about Malthusianism and public policy.

Reverend Thomas Malthus first published his theories in the late 18th century, a time of dramatic social upheaval. The might of England had fallen short against the rebellious colonies, while the Ancien Régime had lost its head to the rebellious Jacobins. The only thing certain in this era was uncertainty.

Against this backdrop, Malthus proclaimed that there was a finite quantity of resources on Earth, and that the human population would always proliferate until those resources were consumed. Once the resources were exhausted, the world was doomed either to widespread famine or violence. If the overall resource level was increased by social or technological developments, humans would simply proliferate to a larger population and our overall misery would remain unchanged.

Malthus wrote that the median income of the common folk, expressed in the amount of food (pounds of wheat) they could afford, had remained constant from prehistoric times to the end of the 18th century – and this number was barely enough food to survive. The central dogma of Malthusian belief was that increasing living standards led to higher populations which led to decreasing living standards, causing a long-term equilibrium of famine and poverty.
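That feedback loop is simple enough to sketch as a toy simulation, with every parameter invented purely for illustration: income per head is resources divided by population, and population growth tracks the surplus over subsistence.

```python
# Toy Malthusian trap: fixed resources, population chases subsistence.
# All parameters are invented for illustration.
resources = 1000.0     # total food supply (arbitrary units), held fixed
subsistence = 1.0      # income per head needed to just survive
sensitivity = 0.1      # how strongly population growth responds to surplus
population = 200.0

for year in range(1, 101):
    income = resources / population                    # income per head
    population *= 1 + sensitivity * (income - subsistence)
    if year % 20 == 0:
        print(f"year {year:3d}: income/head = {income:.2f}, population = {population:.0f}")
# Income per head converges to bare subsistence no matter where you start;
# raising `resources` just yields more people at the same miserable income.
```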

Malthus believed that this negative feedback cycle could only be broken if the whole world decided to have fewer children. In an era where reliable contraception was nonexistent and many children died at a young age, this must have sounded as loony as putting a man on the moon.

Malthus also suggested that any large-scale charity (such as social welfare programs) would prove useless or harmful in the long run. According to Malthusian dynamics, the only thing keeping poverty in check is the death rate of poor people. Therefore, anything you did to help poor people would only cause more people to become poor. This part of his philosophy was attractive to an aristocracy terrified of the proletariat mob at their gates. As such, 19th century Malthusianism was staunchly conservative.


 

By the time of World War II, every civilized country had major social welfare programs in place. Thus, the “charity is harmful” portion of Malthusian philosophy was largely ignored (as it remains to this day). Instead, 20th century Malthusians focused on the importance of population control. In the pre-WWII era this often meant eugenics and forced sterilization – the Malthusian Belt of Brave New World. Again, this placed Malthusianism firmly on the conservative end of the political spectrum.

Adolf Hitler proceeded to Godwin the eugenics movement, taking it to its most horrific extreme and making it unmentionable in polite society. However, a pharmaceutical innovation revived interest in Malthus – The Pill. Oral contraceptives allowed a new generation to have kids only when they wanted to. Birth control was immediately opposed by the religious right, so Malthusian philosophy was suddenly liberal. This right-to-left shift was completed when many early environmentalists started preaching Malthusian population control as a way to decrease environmental impact.

Malthus believed that food production was the crucial limiting factor for population growth. The Earth had a “carrying capacity”, a maximum number of mouths that the planet could feed. Back in the 1950s and 1960s, food was a central dogma in Malthusian environmentalism. In The Population Bomb (1968), Paul Ehrlich stated that hundreds of millions of people would starve to death by the end of the 1970s. He suggested putting contraceptives in the water supply or in staple foods, while noting the sociopolitical impossibility of doing so.

Instead, a social and technological revolution occurred. Basic farming techniques such as irrigation, fertilizers and pesticides spread from the First World to the Third. New crop cultivars, developed first by conventional breeding and later by genetic modification, massively increased farm yields. Food prices dropped so low that many industrialized countries had to pay farmers not to farm. Even as the human population of Earth increased from roughly one billion in Malthus’s day to over 7 billion, Malthus’s prediction of widespread food shortages never came true.


 

A funny thing happened between the 1970s and now. Populations leveled off and started to decline in Europe, Russia, Japan, and among non-Hispanic whites in the USA. This happened despite the fact that an increasing world population had not triggered any horrific famines, wars or plagues. It also happened in the absence of any draconian measures such as Ehrlich’s hypothetical contraceptive water supply. Economists coined the phrase “demographic-economic paradox” to describe the decreasing fertility among wealthy socioeconomic groups. What public policy triumph allowed population control to finally happen? Widespread access to affordable contraception, a remedy far easier to swallow than forced sterilization.

The success of birth control could be seen as the ultimate confirmation of Malthus’s thesis that limiting the population would improve quality of life. It has undoubtedly broken the Malthusian cycle of “increased living standards -> increased birth rate -> decreased living standards”. Recent predictions suggest that human population will peak in the mid-21st century and then decline. This predicted peak isn’t driven by food shortages, but by humans choosing to have fewer children. Those children will not be limited to Malthus’s “14 pounds of wheat”; they will have much greater access to food and material goods.

Reverend Malthus’ ultimate objective was to decrease the worldwide fertility rate, and by that measure he has been wildly successful. What he could not have foreseen was the method of this success. Malthusian doctrine gave birth to numerous population-limiting schemes over the centuries, many of which were impractical or inhumane. In the end, the global fertility decline occurred thanks to affordable contraception. Billions of human beings chose to have fewer children. No one forced them to do so (except in China).

I wish that more policy thinkers would draw a lesson from this part of history. You can craft onerous laws to change people’s behavior, and they will fight you every step of the way. Or you could give people the freedom to choose. If the change in behavior is truly beneficial, people will gravitate toward it over time – as has happened in every high-income country over the past several decades.


Hacking the Mind Epilogue: Psychosurgery

While we’re on the subject of “hacking the human mind“, it looks like there is renewed interest in psychosurgery. The link goes to an article about deep brain stimulation for alcoholic cravings, PTSD, and depression!

People have been trying to control psychiatric conditions with surgery since the days of the prefrontal lobotomy. Electrical stimulation has the advantages of precision and reversibility. However, as with any neurosurgical procedure, it relies upon localizing an unwanted symptom to a specific location in the brain. For example, deep brain stimulation works for Parkinson’s because the disease is localized to the basal ganglia.

No matter how much funding you throw at electroneurology, it won’t do any good if an unwanted emotion or compulsion is spread out over a large area of the brain. It remains to be seen how well localized things like alcoholism and PTSD are.

Hacking the Human Mind, Pt. 2: Enter Reality

In the first part of this post, I discussed the concept of “hacking the human mind” in mythology and fiction. Ever since antiquity, people have tried to improve the human mind and body. The information era has contributed the term “hacking” to the idea of human improvement. More recently, pop culture has adopted the idea of hacking humanity and turned it into a ubiquitous plot device.

 


Snap Back to Reality
Whoops there goes Gravity

Hollywood has portrayed hacker-like characters as superhumans, shadowy villains or even honest-to-goodness sorcerers. However, hacker culture in real life is a far cry from its fictional portrayal. While wizards and sorcerers jealously guard their knowledge, real-world hackers are famous for sharing knowledge. (especially when they’re not supposed to)

Possibly thanks to the popularity of “hacking the human mind” as an idea, medical researchers have started to promote the so-called hacker ethic. This philosophy holds that decentralized, open-source use of technology can improve the world. Traditional medical research goes through multiple cycles of proposal, review and revision before anything happens. Successes are often published in closed-access journals while failures are often buried. The hacker ethos encourages freewheeling experimentation and open-source sharing among the scientific community.

Among its many innovations, hacker culture has given birth to the idea of medical hackathons. A “hackathon” is defined as a short-duration (often just a weekend), high-intensity multidisciplinary collaboration. During the event, participants make “60-second pitches” to attract other people who might have special skills. For example, a physician with a good idea for telemedicine might go around trying to find a coder who knows about Internet security. Then they could come across a hacker with machine-vision expertise and recruit him to improve their cameras.

Although they occur too quickly to really polish a product or conduct clinical trials, hackathons generate numerous bright ideas that can be worked on later. In a way they are the ultimate brainstorm.


Heroes of the Brainstorm
Harder, Better, Faster, Stronger

Hackathons are undoubtedly coming up with lots of very good ideas. However, even the best medical ideas take a long time to implement. The only ideas that can be implemented immediately are very small pieces of provider-side software (e.g., enhanced changeover sheets for hospitalists). Anything that touches a patient requires a lengthy process of requests, reviews, and consents before it is ever used… and only then can you figure out whether it is effective.

As of 2014, the medical hackathon simply hasn’t been around long enough to show much of an effect. It’s a bit like a drug in Phase I-Phase II studies: everyone has great hope that it will improve things, but you can’t point to a major innovation that would not have been possible without the hackathon.

Integrating small-scale hackathon products into larger suites of medical software is a much tougher problem. Even the large-vendor EHRs (Epic, Meditech, Cerner) have difficulty communicating with each other, let alone with smaller pieces of software. The greatest problem in healthcare IT is that the so-called “HL7 Standard” isn’t really a standard.

Standard file formats exist so that they can be consistently read by everyone. A PDF looks the same on a PC, Mac, iPhone or Google Glass. A Kindle file (.AZW) is the same on a Kindle, PC or phone. Even medical imaging has a true standard format. Whether your CT scanner is a GE, Philips, or Siemens, when you export DICOM images to another physician, the CT slices will show up exactly the same.

HL7 is not like that at all. In my personal experience, naively transferring documents between two pieces of “HL7-compliant” software results in loss or misinterpretation of some of the data. In order to fix this, you need a highly trained IT expert to create a specialized “connectivity interface”, or sometimes you pay big bucks to purchase such an interface. I am amazed that things are still so difficult in the year 2014.

In the field of traditional software design, hackers have benefited from the uniform interoperability of Unix (Linux) for many decades. As of today, healthcare lacks this important feature.

Maybe the hackers could come up with a solution for interoperability?


Big Data: The Rise of the Machines
Thank God it’s not Big Lore, Big Bishop, or Big Terminator

One of the promises of “medical hacking” has been the application of “Big Data” techniques to healthcare. Data analysis in healthcare has always been difficult and often inconsistently performed. Many medical students and residents can tell you about painstaking research hours spent on manual data entry. Big Data techniques could turn ten thousand med student hours into five minutes of computer script runtime. Unfortunately, to date Big Data has been much less successful in real life.

So far, the two Biggest Data medical innovations have been Google Flu Trends and 23andMe. GFT purports to forecast the severity of the flu season, region by region, based on statistics on flu-related Google searches. 23andMe was originally supposed to predict your risk of numerous diseases and conditions using a $99 DNA microarray (SNP) analysis. Far from being a home run for Big Data, both of these tools are more reminiscent of a strikeout, if not a pick-six.

GFT was billed as a Big Data tool that would vastly improve the accuracy and granularity of infectious disease forecasting. When first introduced in 2008, GFT’s flu predictions were more accurate than any existing source. However, every year it became less and less accurate, until it became worse than simply measuring how many flu cases happened two weeks ago. GFT’s performance degraded so badly, it was described as a “parable of traps in data analysis” by Harvard researchers.
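That “two weeks ago” baseline is almost embarrassingly simple, as a quick sketch with synthetic case counts shows. (The numbers are invented; the point is just how low the bar was.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic weekly flu case counts: a seasonal curve plus noise (invented data).
weeks = np.arange(104)
cases = 1000 + 800 * np.sin(2 * np.pi * weeks / 52) + rng.normal(0, 60, weeks.size)

# Naive baseline: predict this week's count with the count from two weeks ago.
predictions = cases[:-2]
actuals = cases[2:]
mae = np.mean(np.abs(predictions - actuals))

print(f"lag-2 baseline mean absolute error: {mae:.0f} cases/week")
# Any model, GFT included, has to beat this trivial benchmark to be useful;
# by the end, GFT reportedly couldn't.
```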

23andMe offered SNP testing of the entire genome, used both for ancestry analysis and disease prediction. Prior to November 2013, the website offered a vast number of predictors ranging from lung cancer to erectile dysfunction to Alzheimer’s dementia to drug side effects. It was held up as an exemplar of 21st-century genomic empowerment, giving individuals access to unprecedented information about themselves for the low, low price of $99.

The problem was, 23andMe never bothered to submit any scientific evidence of accuracy or reproducibility to the Food and Drug Administration. The FDA sent a cease and desist letter, forcing them to stop marketing their product as a predictive tool. They’re still selling their gene test, but they are only allowed to tell you about your ancestry (not any health predictions). This move launched a firestorm, with some people arguing that the FDA was overstepping or even following “outdated laws”.

However, the bulk of the evidence suggested that 23andMe simply didn’t give accurate genetic info. Some molecular biologists pointed out the inherent flaws in SNP testing, which make it impossible for 23andMe to be usably accurate. Others pointed out that even if accurate, most of the correlations were too weak to have any effect on lifestyle or healthcare. The New England Journal of Medicine concluded that the FDA was justified in issuing a warning, and that “serious dialogue” is required to set standards in the industry. Other commentators were “terrified” by 23andMe’s ability to use your genetic info for secondary studies. After all, how can 23andMe sell genetic tests for $99 when other companies charge thousands? Obviously they didn’t plan to make money from the consumers; instead, 23andMe hoped to make money selling genetic data to drug companies and the rest of the healthcare industry.

In the end, that is my biggest misgiving about medical Big Data. Thanks to social media (this blog included) we have already commoditized our browsing habits, our buying habits, our hobbies and fandoms. Do we really want to commoditize our DNA as well? If so, count me out.


Doctoring the Doctor
Damnit Jim, I’m a doctor, not a hologram!

Another big promise of the “hacker ethos” in medicine is that it could improve physician engagement and enthusiasm for technology. Small decentralized teams of hackers could communicate directly with physicians, skipping the multi-layered bureaucracy of larger healthcare companies.

Many healthcare commentators have (falsely) framed the issue of physician buy-in as a matter of technophobia. Doctors are “stuck in the past“, “Luddites in white coats”, and generally terrified of change. The thing is, it’s just not true. Just look at the speed at which new medical devices are popularized – everything from 4DCTs to surgical robots to neuronavigation units, insulin pumps, AICDs and deep brain stimulators. If physicians saw as much of a benefit from electronic health records (EHRs) as we were supposed to, we would be enthusiastic instead of skeptical.

I believe that EHRs would be in much better shape today if there had never been an Obamacare EHR mandate. No one ever improved the state of the art by throwing a 158-page menu of mandates at it. Present-day EHRs care much more about Medicare and other billing rules than they do about doctor or nurse usability.

Back on subject, I do believe that medical hacking has the potential to get physicians more involved in technological innovation. So long as physicians are stuck dealing with massive corporate entities, we can provide feedback and suggestions but they are very unlikely to be implemented. Small-scale collaborations empower doctors with the ability to really change the direction of a project.

Now, not every medical hack will result in something useful. In fact, a lot of hacks will amount to little more than cool party tricks, but some of these hacks will evolve into more useful applications. Some easily-hackable projects may involve documents or files produced by older medical technology. During residency I worked on a research project involving radiation treatment plans from a very old, non-DICOM-compliant system. We quickly discovered that the old CTs were not usable by modern treatment planning software. Fortunately, one of the physicists on our research team was familiar with DICOM. He coded a computer program that inserted the missing DICOM headers into the old CT images, allowing us to import old CTs without any problems.
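For flavor, here’s a minimal sketch of that kind of repair using the pydicom library. (I don’t remember exactly which tags our old system was missing, so the ones patched below are purely illustrative.)

```python
import pydicom
from pydicom.uid import generate_uid

# Read a non-compliant slice; force=True skips the usual header validation.
ds = pydicom.dcmread("old_ct_slice.dcm", force=True)

# Patch in missing identifiers. Which tags were actually absent depends on
# the old planning system; these are illustrative examples.
if "SOPClassUID" not in ds:
    ds.SOPClassUID = "1.2.840.10008.5.1.4.1.1.2"   # CT Image Storage
if "SOPInstanceUID" not in ds:
    ds.SOPInstanceUID = generate_uid()
if "SeriesInstanceUID" not in ds:
    ds.SeriesInstanceUID = generate_uid()

ds.save_as("fixed_ct_slice.dcm")   # now importable by modern planning software
```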

Introducing more hackers to medicine can only increase the number of problems solved by astute coding.


What Happened to Superpowers?
Paging Dr. Manhattan…

The addition of hacker culture to medicine certainly has a lot of potential to improve the everyday practice of medicine. But what happened to the idea of “hacking the human mind” in order to develop super-strength and speed?

On a very rudimentary level, “hacking the mind” improves physical performance every time an athlete grows a beard for the playoffs or wears his college shorts under his NBA uniform. But true hacking should be more sophisticated than mere superstition!

Biofeedback is a common pre-game ritual for various athletes that could be construed as a minor form of “hacking the mind/body”. Dietary habits such as carb loading could also be considered a mild form of hacking. For less legal mind-body hacking you could always turn to performance enhancing drugs.

Speaking of drugs, there’s a long-held belief that people high on drugs (mostly PCP, sometimes meth or bath salts) gain superhuman strength. While the evidence is mostly anecdotal, there’s a plausible medical explanation. The Golgi tendon reflex normally prevents muscles from over-exerting themselves, and it can be suppressed in desperate situations (the “mother lifts a car off her child” scenario). It’s reasonable to assume that some drugs could have a similar effect.

It’s also reasonable to assume that military physicians have spent decades (the entire Cold War for sure) trying to produce a super-strength drug with fewer side effects than PCP. The fact that our entire army doesn’t have the physique of Captain America suggests that those efforts were unsuccessful. Granted, this doesn’t rule out the existence of a super serum that only worked on one guy ever.

Evolutionarily speaking, it is highly implausible that humans would have tremendous physiological potential locked behind some mental gate. If the human body had such great power, our prehistoric ancestors would have needed every ounce of it to outrun or outfight angry lions and hippos and crocs. It would make no sense for humans to have a mental block on our strength. Unless removing that mental block led to instant death or infertility, the first caveman to lose his mental block would be evolutionarily favored over the rest of proto-humanity. Therefore, it’s very unlikely that human performance can be “magically” improved with drugs, meditation or other techniques.


 

So let’s cap off this long ramble with a little teaser on evolution and human strength. This National Geographic feature suggests that early humans directly traded muscle strength for brain power.

http://news.nationalgeographic.com/news/2014/05/140527-brain-muscle-metabolism-genes-apes-science/

What is wrong with this argument?