Thursday, 20 September 2012

Most brain imaging papers fail to provide enough methodological detail to allow replication

Amidst recent fraud scandals in social psychology and other sciences, leading academics are calling for a greater emphasis to be placed on the replicability of research. "Replication is our best friend because it keeps us honest," wrote the psychologists Chris Chambers and Petroc Sumner recently.

For replication to be possible, scientists need to provide sufficient methodological detail in their papers for other labs to copy their procedures. Focusing specifically on fMRI-based brain imaging research (a field that's no stranger to controversy), University of Michigan psychology grad student Joshua Carp has reported a worrying observation - the vast majority of papers he sampled failed to provide enough methodological detail to allow other labs to replicate their work.

Carp searched the literature from 2007 to 2011 looking for open-access human studies that mentioned "fMRI" and "brain" in their abstracts. Of the 1392 papers he identified, Carp analysed a random sample of 241 brain imaging articles from 68 journals, including PLoS One, NeuroImage, PNAS, Cerebral Cortex and the Journal of Neuroscience. Where an article's supplementary information was published elsewhere, Carp included that material in his analysis too.

There was huge variability in the methodological detail reported in different studies, and often the amount of detail was woeful, as Carp explains:
"Over one third of studies did not describe the number of trials, trial duration, and the range and distribution of inter-trial intervals. Fewer than half reported the number of subjects rejected from analysis; the reasons for rejection; how or whether subjects were compensated for participation; and the resolution, coverage, and slice order of functional brain images."
Other crucial detail that was often omitted included information on correcting for slice acquisition timing, co-registering to high-resolution scans, and the modelling of temporal auto-correlations. In all, Carp looked at 179 methodological decisions. To non-specialists, some of these will sound like highly technical detail, but brain imagers know that varying these parameters can make a major difference to the results that are obtained.

One factor that non-specialists will appreciate relates to corrections made for problematic head-movements in the scanner. Only 21.6 per cent of analysed studies described the criteria for rejecting data based on head movements. Another factor that non-specialists can easily relate to is the need to correct for multiple comparisons. Of the 59 per cent of studies that reported using a formal correction technique, nearly one third failed to reveal what that technique was.
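To see why an unreported correction technique matters, here is a minimal sketch (not from Carp's paper, and not specific to any fMRI package) of what happens when thousands of voxel-level tests are thresholded with and without a Bonferroni correction. The test counts and threshold are illustrative assumptions:

```python
import random

random.seed(42)

# Simulate p-values for 10,000 independent voxel-level tests under the
# null hypothesis (no real effect anywhere). Under the null, p-values
# are uniformly distributed on [0, 1].
n_tests = 10_000
alpha = 0.05
p_values = [random.random() for _ in range(n_tests)]

# Uncorrected: each test is judged at alpha, so roughly 5% of null
# tests come out "significant" purely by chance.
uncorrected_hits = sum(p < alpha for p in p_values)

# Bonferroni: divide alpha by the number of tests, controlling the
# chance of even one false positive across the whole set of tests.
bonferroni_hits = sum(p < alpha / n_tests for p in p_values)

print(f"Uncorrected false positives: {uncorrected_hits}")  # around 500
print(f"Bonferroni false positives:  {bonferroni_hits}")   # typically 0
```

Which correction a study uses (Bonferroni, false discovery rate, cluster-based methods) changes the threshold substantially, which is why omitting the technique's name blocks replication.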

"The widespread omission of these parameters from research reports, documented here, poses a serious challenge to researchers who seek to replicate and build on published studies," Carp said.

As well as looking at the amount of methodological detail shared by brain imagers, Carp was also interested in the variety of techniques used. This is important because the more analytical techniques and parameters available for tweaking, the more risk there is of researchers trying different approaches until they hit on a significant result.

Carp found 207 combinations of analytical techniques (including 16 unique data analysis software packages) - that's nearly as many different methodological approaches as studies. Although there's no evidence that brain imagers are indulging in selective reporting, the abundance of analytical techniques and parameters is worrying. "If some methods yield more favourable results than others," Carp said, "investigators may choose to report only the pipelines that yield favourable results, a practice known as selective analysis reporting."
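As a back-of-the-envelope illustration of why the pipeline space is so large, a handful of common analysis decisions already multiplies into dozens of distinct pipelines. The specific choices below are hypothetical examples, not a list taken from Carp's paper:

```python
from itertools import product

# Hypothetical analysis decisions; each one multiplies the number of
# possible pipelines (the options listed are illustrative only).
choices = {
    "slice_timing_correction": ["yes", "no"],
    "smoothing_kernel_mm": [4, 6, 8],
    "motion_regressors": [6, 24],
    "temporal_autocorrelation": ["AR(1)", "none"],
    "multiple_comparisons": ["Bonferroni", "FDR", "cluster"],
}

# Every combination of one option per decision is a distinct pipeline.
pipelines = list(product(*choices.values()))
print(len(pipelines))  # 2 * 3 * 2 * 2 * 3 = 72
```

With only five decisions the space already holds 72 pipelines; Carp's 179 documented decisions make his finding of 207 distinct combinations across 241 studies unsurprising.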

The field of medical research has adopted standardised guidelines for reporting randomised clinical trials. Carp advocates the adoption of similar standardised reporting rules for fMRI-based brain imaging research. Relevant guidelines were proposed by Russell Poldrack and colleagues in 2008, although these may now need updating.

Carp said the reporting practices he uncovered were unlikely to reflect malice or dishonesty. He thinks researchers are merely following the norms in the field. "Unfortunately," he said, "these norms do not encourage researchers to provide enough methodological detail for the independent replication of their findings."


Carp J (2012). The secret lives of experiments: Methods reporting in the fMRI literature. NeuroImage, 63(1), 289-300. PMID: 22796459

--Further reading-- Psychologist magazine opinion special on replication.
An uncanny number of psychology findings manage to scrape into statistical significance.
Questionable research practices are rife in psychology, survey finds.

Post written by Christian Jarrett for the BPS Research Digest.


Andrew said...

Tony Chemero identified a similar problem in the object exploration neuroscience literature; no control over object affordances and not enough information to replicate with the required control. He and a colleague then ran a study showing how much of a problem this was.

Anonymous said...

If someone wished to replicate these studies, I suspect that contacting the authors directly would provide a lot of the necessary information. I suspect it would make the paper quite inaccessible if every detail were published.

Unknown said...

Hi, it's possible for journals to use a "supplementary information" system, posted online or as an appendix, so as not to make the papers unreadable.

Andrew said...

Also, a methods section is supposed to be enough. At least, that's what we all teach our students.

Anonymous said...

As someone who's tried to contact authors of studies just to get simple information (for instance, group means not published in analyses) I've gotten about a 10% response rate. Sometimes the authors of studies have died or moved out of the field and the information is lost forever. The current system is unacceptable.
