Why Neuropsych Studies Are Big Liars

Bad Science Of The Day:

Why Big Liars Often Start Out as Small Ones

I came across this article in the “Science” section of the New York Times. It links to a Nature Neuroscience paper out of University College London, which, amazingly enough, appears to have free fulltext. Naturally, I pulled up the actual article and spent quite some time trying to make heads or tails of it. Sadly, it wasn’t worth the time.


The original article, as well as the NYT piece, makes the very plausible claim that the human brain desensitizes itself to dishonesty in the same way that you become desensitized to bad smells. So slimy corporate executives, crooked politicians, and hustling street vendors aren’t actually trying to lie and cheat. They’ve just gone nose-blind to the stink of their own deception.

That’s certainly a plausible hypothesis, and it passes the Bayesian common-sense test. The problem is, after reading the Nature Neuroscience article, I have a hard time washing away the stink of their poor methodology. It smells like an Unreproducible Neuropsych Study, exhibiting many of the genre’s common Bad Habits:

* Very small n
* Really stretching it with experimental design
* Really stretching it with synthetic endpoints
* Running minimally-bothersome trial stimuli on subjects stuck in a highly-bothersome fMRI scanner
* Data-torturing statistical methods
* Shoehorning hard numerical data into a Touchy Feely Narrative

***
First of all, their subjects were 25 college students with an average age of 20. I can understand having only 25 subjects, as it’s not exactly cheap or easy to recruit people into fMRI neuropsych experiments. But they actually scanned 35 kids; 10 of them caught on to the trial design and were excluded.

Really? Nearly a third of their subjects “figured out” the trial and had to be excluded? Actually, it was probably more; only that third admitted to figuring out the trial design. For a study about deception, the researchers sure were terrible at deceiving their test subjects.

Alanis Morissette would be proud of the irony, as would Iron Deficiency Tony Stark.

***
The experimental design was questionable as well. The researchers used the Advisor-Estimator experiment, a commonly cited psychological model of Conflict of Interest.

Normally an advisor-estimator experiment involves a biased advisor (who is rewarded for higher estimates) assisting an unbiased estimator (who is rewarded for accurate estimates).
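To make the incentive structure concrete, here’s a minimal sketch of one round of the game. The payoff formulas and numbers are made up for illustration; the paper’s actual reward schedule differs:

```python
import random

def advisor_estimator_round(true_value, advice_bias):
    """One round of a (hypothetical) advisor-estimator game.

    The advisor sees the true value (e.g., pennies in a jar) and sends
    advice to an estimator, who guesses based on that advice.
    Payoffs are illustrative, not the paper's actual reward schedule.
    """
    advice = true_value + advice_bias               # advisor inflates the estimate
    estimate = advice + random.gauss(0, 2)          # estimator roughly trusts the advice

    advisor_payoff = estimate - true_value          # biased: paid more for overestimates
    estimator_payoff = -abs(estimate - true_value)  # unbiased: paid for accuracy
    return advisor_payoff, estimator_payoff

# The advisor's incentive to lie: bigger bias, bigger payoff,
# while the estimator eats the cost of the inaccuracy.
for bias in (0, 5, 10):
    a, e = advisor_estimator_round(true_value=30, advice_bias=bias)
    print(f"bias={bias:2d}  advisor={a:+.1f}  estimator={e:+.1f}")
```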

This is a great surrogate model for real-world conflicts of interest, like consultants who make more money if they convince you to buy ancillary services. But it seems like a terrible surrogate for deception. As the experimenters themselves noted, there was no direct personal interaction between the subject and the estimator, no actual monetary stakes involved, and no risk of the subject being caught or punished for lying.

Worse yet, the magnitude of deception involved is incredibly minimal: skewing an estimate by a few pounds in the hopes of being paid a pound or two. That’s a trivial level of emotional manipulation of the subjects. I don’t know about British college kids, but I’d be much more emotionally disturbed by the fact that I’m stuck in an fMRI scanner.

Radiographic measurement, as with photographic image quality, is all about signal to noise ratio. In this case the emotional “signal” (distress caused by lying) is tiny compared to the ambient emotional “noise”.
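If you want to see how badly a tiny emotional signal fares against scanner-sized noise at n=25, here’s a quick Monte Carlo sketch. The effect size and noise level are made-up placeholders, since the paper doesn’t hand us those numbers directly:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical numbers: a small "distress" signal buried in big scanner noise.
signal = 0.1      # emotional response to a trivially small lie
noise_sd = 1.0    # everything else going on in a subject's head inside an fMRI
n_subjects = 25
n_experiments = 10_000

detections = 0
for _ in range(n_experiments):
    sample = rng.normal(loc=signal, scale=noise_sd, size=n_subjects)
    _, p = stats.ttest_1samp(sample, popmean=0.0)
    detections += p < 0.05

print(f"power ≈ {detections / n_experiments:.0%}")  # well under 10% with these numbers
```

With a signal that small, the experiment detects the effect in fewer than one run in ten. You’d need either a much bigger emotional signal or far more subjects than 25.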

***
Things get really silly when you read their composite endpoint, something called “Prediction beta”. It appears to be a statistical mess: a 2nd-order metric divided by a 2nd-order metric and averaged into something that resembles a correlation coefficient but is numerically less than 0.1.

Somehow this was statistically significant at p=0.021. But then you read that the authors also tested a crapload of other brain regions, and none of them were nearly as “predictive” as the amygdala. That’s a textbook case of multiple-comparisons data torturing, and it means that their p-values should have been Bonferroni’d into oblivion. With k comparisons, the Bonferroni-corrected significance threshold is 0.05/k, not 0.05: much, much lower.
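For the record, the Bonferroni arithmetic is trivial. I can’t pin down the exact number of regions they compared, so the counts below are placeholders, but watch what happens to a p=0.021 finding:

```python
# Bonferroni correction: with k independent comparisons, the per-test
# significance threshold drops from alpha to alpha / k.
alpha = 0.05
p_observed = 0.021  # the paper's amygdala "Prediction beta" p-value

# Hypothetical comparison counts; the actual number of brain regions
# tested isn't something I can pin down from the paper.
for k in (1, 5, 10, 20):
    threshold = alpha / k
    verdict = "significant" if p_observed < threshold else "not significant"
    print(f"k={k:2d} comparisons: threshold={threshold:.4f} -> {verdict}")
```

Even at a modest k=5, their headline result stops clearing the bar.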

***
When all is said and done, the authors should be congratulated for taking a common-sense anecdote (“Small lies lead to bigger ones”) and spending an immense amount of time and money to come up with super-unconvincing scientific data to back it up.

I imagine their next Amazing Rigorous Neuro-Psycho-Radiology trial will demonstrate, after testing twenty hypotheses with thirty different regressions, a borderline-statistically-significant correlation between insufficient parental affection and abusive bullying behavior.

Bullcrap like this is why common-sense-driven people are losing their faith in science.
