Sunday, 26 April 2009

It's those Voodoo correlations again ... brain imagers accused of "double dipping"

This time there's no explicit naming and shaming, and the title may not be as colourful, but a new study out today in the prestigious journal Nature Neuroscience echoes many of the same concerns voiced earlier this year in the leaked paper "Voodoo Correlations in Social Neuroscience" (since renamed "Puzzlingly High Correlations ..."). And the new paper's implications are surely just as profound for the cognitive neuroscience community.

Nikolaus Kriegeskorte and colleagues analysed all the fMRI studies published in 2008 in Nature, Science, Nature Neuroscience, Neuron and the Journal of Neuroscience, and found that 42 per cent of these 134 papers were guilty of performing at least one non-independent selective analysis - what Kriegeskorte's team dub "double dipping".

This is the procedure, also condemned by the Voodoo paper, in which researchers first perform a whole-brain (all-over) analysis to find the brain region or regions that respond to the condition of interest, before going on to test their hypothesis on data drawn from just that region. The cardinal sin is that the same data are used in both stages.
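
To get a feel for why this biases results, here is a minimal sketch of the region-of-interest version of the problem (my own illustration in Python with NumPy/SciPy, not code from the paper): on pure noise, picking the voxels that show the biggest condition effect and then testing that effect on the same data yields a spuriously "significant" result, whereas selecting on one half of the data and testing on the other half does not.

```python
# Hypothetical illustration of a non-independent ("double-dipped") ROI
# analysis applied to pure noise -- the true effect is zero everywhere.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 5000

# Simulated contrast values (condition A minus B) for each subject and voxel.
data = rng.standard_normal((n_subjects, n_voxels))

# Circular analysis: select the 20 voxels with the largest group effect,
# then test that effect in the same voxels using the same data.
t_map, _ = stats.ttest_1samp(data, 0, axis=0)
roi = np.argsort(t_map)[-20:]
t_circ, p_circ = stats.ttest_1samp(data[:, roi].mean(axis=1), 0)
print(f"Circular:    t = {t_circ:.2f}, p = {p_circ:.4g}")   # looks 'significant'

# Independent analysis: select the ROI on half the subjects,
# then test it on the untouched other half.
half = n_subjects // 2
t_sel, _ = stats.ttest_1samp(data[:half], 0, axis=0)
roi_ind = np.argsort(t_sel)[-20:]
t_ind, p_ind = stats.ttest_1samp(data[half:, roi_ind].mean(axis=1), 0)
print(f"Independent: t = {t_ind:.2f}, p = {p_ind:.4g}")     # hovers around chance
```

The bias arises because the selection step has already capitalised on chance: the ROI contains exactly those voxels in which the noise happens to look like an effect.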

A similarly flawed approach can be seen in brain imaging studies that claim to be able to discern a presented stimulus from patterns of activity recorded in a given brain area. These are the kind of studies that lead to "mind reading" headlines in the popular press. In this case, the alleged statistical crime is to use the same data for the training phase of pattern extraction and the subsequent hypothesis testing phase.
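
The "mind reading" case is analogous, as a second sketch shows (again my own hypothetical illustration, here using scikit-learn): a classifier trained and tested on the same trials appears to decode a stimulus from pure noise, while testing on held-out trials brings accuracy back down to roughly chance.

```python
# Hypothetical illustration of circular pattern-information analysis:
# the decoder is trained and tested on the same random data.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_voxels = 100, 200

X = rng.standard_normal((n_trials, n_voxels))   # random "activity patterns"
y = rng.integers(0, 2, size=n_trials)           # random stimulus labels

clf = LinearSVC()

# Circular: fit and score on the same trials -- accuracy looks impressive.
print("Train = test accuracy:", clf.fit(X, y).score(X, y))

# Independent: cross-validated accuracy on held-out trials -- close to 0.5.
print("Cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```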

Kriegeskorte's team do not claim that all the studies guilty of this procedure are invalid, but rather that their data will have been distorted to varying degrees. "To decide which neuroscientific claims hold, the community needs to carefully consider each particular case, guided by both neuroscientific and statistical expertise," they wrote.

To support their case, Kriegeskorte's team performed two "mock" experiments of the "region of interest" and "pattern extraction" types. In each case they showed how double-dipping can drastically distort results. For example, in a mock pattern-information analysis they achieved a significant result with double-dipping even after feeding purely random data into the analysis.

The ramifications of these statistical observations don't end with brain imaging. They also have implications for work with electroencephalography, in which researchers are prone to use the same data for selecting relevant channels and testing hypotheses, and for research using single-cell recording.

"A circular analysis is one whose assumptions distort its results," the authors concluded. "We have demonstrated that practices that are widespread in neuroimaging are affected by circularity."

UPDATE: A freely available PDF of the supplementary information, including guidance on how to spot circular analyses and a proposed policy for preventing distortion of data, is now up at Nature Neuroscience.
_________________________________

Kriegeskorte, N., Simmons, W. K., Bellgowan, P. S. F., & Baker, C. I. (2009). Circular analysis in systems neuroscience: the dangers of double dipping. Nature Neuroscience. In press.

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

9 comments:

  1. Ah, I knew someone was going to do this sooner or later. I'm relieved it was only 42% of the papers because I was worried it might be a lot more...

    I'll blog about this tomorrow.

  2. Anonymous, 1:35 am

    If the authors of this new paper don't name names, then how is anyone going to know which half of the 2008 fMRI literature to believe? And even if they did publish their list, that wouldn't help with all the papers from 2007, 2006, etc, not to mention all the stuff published in less prestigious journals. Presumably at least half of all that is untrustworthy, too.

    So what kind of field is this neuroimaging business, anyway? The word that springs to mind is 'junkheap'... :-)

  3. Anonymous, 7:09 am

    I agree with anonymous -- they should list the specific flawed papers. What is the point of going through 134 studies and dissecting all their results, only to make a general statement that 42% of them used non-independent analyses? This could be done "politely" in an appendix or the supplementary materials, but it should be done. After all, this is supposed to be about science -- a self-correcting enterprise -- not about a mutual support society.

  4. A cynic might say that by not publishing the list, the authors make their own paper difficult to critique. :-)

  5. Well, if the Supplementary Info contains a toolkit for detecting circularity, it should be possible for someone to make a list of circular papers.

    Sounds like a job for... The Internet!

  6. Anonymous, 3:22 pm

    Has anyone noticed that Kriegeskorte works for the US government? Which means that if you ask for the list, he will have to give it to you. Federal employees can hardly ever keep any document produced in their workplace private--and they sure as heck can't make lists of defective work done mostly on US government grants and contracts, and keep that secret so as not to hurt anybody's feelings.

    Freedom of Information Act requests take about 10 minutes; this site even has sample letters:

    http://www.rcfp.org/fogg/index.php

    I can't do this myself but I hope someone does. I would guess the first blog to post the list is going to get a huge amount of attention...

  7. Anonymous, 4:11 am

    Here is another important reason for publishing the list. I highly suspect that Science is responsible for the majority of the bad papers. Nature seems (!) more cautious and selective. Shouldn't we know that? Shouldn't Nature want readers to know that? Without it, their reputation is tarnished by association with the voodoo crowd.

  8. Anonymous, 9:13 am

    In response to the anonymous commenter who posted at 4:11 am.

    I simply do not agree with your statement that Nature seems to be more selective or cautious than other scientific journals out there. For example, we critiqued several Nature papers in my Neurovirology class, and all seemed to have methodological errors, misinterpretation of data, or selective reporting of data; in some cases the authors would leave out a good chunk of data and state "data not shown", yet we are expected to take it all in as if it were decreed in stone.

    I started as a believer; now I am a skeptic.

  9. Anonymous, 9:14 pm

    You are probably right. Both Science and Nature are in the same publicity game. But it is an 'empirical question'. It would be good to know exactly where the bad papers appeared and to run some statistics assessing each journal's "fluff" ratio (bad / (good + bad) papers). But I don't know how to find the exact list of papers. Until then, one should just remain generally distrustful of this literature (well, 42% of it).

    I bet there are lots of interesting discussions going on right now on the Science and Nature editorial boards. After all, someone let these papers through. So perhaps there is huge pressure from the editors for the authors to keep the list to themselves. Yuck . . .

    On the other hand, the paper did get published, so not all hope is lost . . . .
