Showing posts with label Methods.

Monday, 27 June 2016

Are the benefits of brain training no more than a placebo effect?

If you spend time playing mentally taxing games on your smartphone or computer, will it make you more intelligent? A billion-dollar "brain training" industry is premised on the idea that it will. Academic psychologists are divided – the majority view is that by playing brain training games you will only improve at those games, you won't become smarter. But some scholars believe in the wider benefits of computer-based brain training, and some reviews support their position, such as the 2015 meta-analysis that combined findings from 20 prior studies to conclude "short-term cognitive training on the order of weeks can result in beneficial effects in important cognitive functions".

But what if those prior studies supporting brain training were fundamentally flawed by the presence of a powerful placebo effect? That's the implication of a new study in PNAS that suggests the advertising used to recruit participants into brain training research fosters expectations of mental benefits.

Cyrus Foroughi and his colleagues produced two different recruitment adverts to attract participants into a brain training study – one explicitly stated that the study was about brain training and that such training can lead to cognitive enhancement; the other was neutral and simply stated that participants were needed for a study. Nearly all previously published brain training research has used an overt, suggestive style of recruitment advertising.

Nineteen young men and 31 young women signed up in response to the two ads, with no gender or age differences between those who responded to each ad. Next, they completed baseline intelligence tests before spending an hour on a task that features in many commercial brain training programmes – the so-called dual n-back task, which involves listening to one stream of numbers or letters and watching another, and spotting whenever the latest item in one of the streams is a repeat of one presented "n" number of items earlier in that stream. As participants improve, "n" is increased, making the task more difficult. The next day, the participants completed more intelligence tests. They also answered questions about their beliefs in the possibility for people's intelligence to increase.
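
To make the task's logic concrete, here is a minimal sketch of the repeat-detection rule at its core – the streams, letters and grid positions below are invented for illustration, not taken from the study:

```python
# Minimal sketch of the dual n-back rule described above. The streams
# and the value of n are illustrative; commercial programmes layer
# timing, scoring and adaptive difficulty on top of this core check.

def is_n_back_repeat(stream, n):
    """Return True if the latest item repeats the one presented n items earlier."""
    return len(stream) > n and stream[-1] == stream[-1 - n]

# "Dual" version: an auditory and a visual stream are checked independently.
auditory = ["B", "K", "B"]          # spoken letters
visual = [(1, 2), (0, 0), (2, 2)]   # positions on a grid

n = 2
print(is_n_back_repeat(auditory, n))  # True: "B" repeats from 2 items back
print(is_n_back_repeat(visual, n))    # False: no repeat in the visual stream

# As participants improve, n is increased, making the task more difficult.
```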

The participants who'd responded to the overt, suggestive advert showed gains in intelligence after completing just one hour of brain training – a length of training too short to plausibly have produced any genuine benefit from the training itself. In contrast, the participants who responded to the neutral ad showed no intelligence gains. This group difference emerged despite the fact that the two groups performed just as well on the training task, suggesting no group differences in motivation or ability. Also, the group who'd responded to the suggestive ad reported stronger beliefs in the malleability of intelligence. This could be because people with these beliefs were more likely to respond to the suggestive ad, or because they'd been influenced by the claims of the ad – either way, it shows how the use of unsubtle recruitment advertising could be distorting research in this area.

The researchers said they'd provided "strong evidence that placebo effects from overt and suggestive recruitment can affect cognitive training outcomes". They added that future brain training research should aim to better reduce or account for these placebo effects, for example by not hinting to participants what the goals of the study are or what outcomes are expected. Their call comes after a group of psychologists warned in 2013 that intervention studies in psychology are afflicted by a "pernicious and pervasive" problem, namely the failure to adequately control for the placebo effect.

--Placebo effects in cognitive training

_________________________________
   
Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Our free weekly email will keep you up-to-date with all the psychology research we digest: Sign up!

Friday, 13 May 2016

We all differ in our ability to cope with contradictions and paradoxes. Introducing the "aintegration" test

Life is full of paradoxes and uncertainty – good people who do bad things, and questions with no right or wrong answer. But the human mind abhors doubt and contradictions, which provoke an uncomfortable state of "cognitive dissonance". In turn, this motivates us to see the world in neat, black and white terms. For example, we'll decide the good person must really have been bad all along, or conversely that the bad thing they did wasn't really too bad after all. But a pair of researchers in Israel point out that some of us are better than others at coping with incongruence and doubt – an ability they call "aintegration", for which they've concocted a new questionnaire. The full version, together with background theory, is published in the Journal of Adult Development.

If you want to hear what the researchers found out about who copes best with uncertainty, skip past the two example items coming up next.

Jacob Lomranz and Yael Benyamini's test begins: This questionnaire explores the way people think and feel about various attitudes. In the following pages you will be presented with attitudes held by different people. Please read each attitudinal position carefully and use the ratings scale to state your general and personal reaction as to such attitudes.

The test then features 11 items similar to these two:
EXAMPLE ITEM 1: There are people who will avoid making decisions under conditions of uncertainty and ambiguity. In contrast, other people would make decisions even under conditions of uncertainty and ambiguity.
(a) In general, to what extent do you think it is possible to make decisions under conditions of uncertainty and ambiguity?
1, 2, 3, 4 or 5 (where 1 = not at all and 5 = to a very great extent)
(b) Assuming someone does make decisions under conditions of uncertainty and ambiguity, to what extent do you think this would cause her/him discomfort?
1, 2, 3, 4 or 5
(c) To what extent do you make decisions under conditions of uncertainty and ambiguity?
1, 2, 3, 4 or 5
(d) Assuming you made a decision under conditions of uncertainty and ambiguity, to what extent would that cause you discomfort?
1, 2, 3, 4 or 5
EXAMPLE ITEM 2: There is an opinion that in every relationship between couples there are contradictory feelings; on the one hand, the individual benefits from the relationship (for example, love) and on the other hand loses from the relationship (for example, loss of independence).
- Some people claim that even when the couple has contradictory feelings about their relationship, a good relationship can still exist.
- In contrast, there are those who claim that when there are contradictory feelings about the couple relationship, it is impossible to maintain a good relationship.
(a) In general, to what extent do you think it is possible to have a good relationship when a couple has contradictory feelings about that relationship?
1, 2, 3, 4 or 5 (where 1 = not at all and 5 = to a very great extent)
(b) Assuming someone persists with a relationship about which they have contradictory feelings, to what extent do you think this would cause her/him discomfort?
1, 2, 3, 4 or 5
(c) To what extent do you have contradictory feelings about your relationship(s)?
1, 2, 3, 4 or 5
(d) Assuming you have contradictory feelings, to what extent would that cause you discomfort?
1, 2, 3, 4 or 5
Higher scores for (a) and (c) questions and lower scores for (b) and (d) questions mean that you have higher aintegration – that is, that you are better able to cope with uncertainty and contradictions.
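
As a rough illustration of that scoring rule – the published scale's exact keying and weighting aren't given here, so treat this as an assumption – the (b) and (d) answers can be reverse-keyed and all four summed:

```python
# Illustrative scoring sketch for a single aintegration item, assuming a
# simple sum with (b) and (d) reverse-keyed on the 1-5 scale. The
# published scale's exact keying and weighting may differ.

def score_item(a, b, c, d):
    """Higher a/c and lower b/d answers yield a higher aintegration score."""
    reverse = lambda x: 6 - x  # maps 1<->5 and 2<->4 on a 1-5 scale
    return a + reverse(b) + c + reverse(d)

# Someone happy to decide under uncertainty, with little discomfort:
print(score_item(a=5, b=1, c=5, d=2))  # 19 out of a possible 20

# A full score would sum this over all 11 items of the questionnaire.
```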

To road test their questionnaire, the researchers gave the full 11-item version to hundreds of people across three studies and found that it had high "internal reliability" – that is, people who scored high for aintegration on one item tended to do so on the others.

Lomranz and Benyamini also found some evidence that older people (middle-aged and up), divorcees, the highly educated and the less religious tended to score higher on aintegration. So too did people who had experienced more positive events in life, and those who saw their negative experiences in more complex terms, as having both good and bad elements. Moreover, higher scorers on aintegration reported experiencing fewer symptoms of trauma after negative events in life.

This last finding raises the possibility that aintegration may grant resilience to hardship, although longer-term research is needed to test this (an alternative possibility is that finding a way to cope with trauma promotes aintegration).

Higher scores on aintegration also tended to correlate negatively with the established psychological construct of "need for structure".

The researchers said their paper was just a "first step" in establishing the validity of aintegration and that the concept could help inform future research especially with people "who dwell in states of transitions or 'betweenness', for example, struggling with national identities, cultural adjustment or conflicting values."

_________________________________

Lomranz, J., & Benyamini, Y. (2015). The Ability to Live with Incongruence: Aintegration – The Concept and Its Operationalization. Journal of Adult Development, 23(2), 79-92. DOI: 10.1007/s10804-015-9223-4

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Our free weekly email will keep you up-to-date with all the psychology research we digest: Sign up!

Friday, 1 April 2016

Psychologists don’t REALLY think their field is in crisis (but finding fails to replicate)

Update: This was an April Fools' joke. (Check out our April Fools' articles from previous years).

In the wake of recent failures to repeat some of psychology’s most famous findings, not to mention a few cases of outright research fraud, it’s been claimed that psychological science is in a bit of a state. Many psychologists have responded by proposing ways to improve research practices, such as making data freely available online, and preregistering planned methods to avoid issues of data tinkering later on. However, not everyone agrees that psychology is in crisis – at least some psychologists take a rosier view and think the replication problem has been overblown, as illustrated by a recent ebullient opinion piece published in Science.

We hear a lot of commentary about all this from just a few high profile individuals, but no one really knows what the average psychology researcher really thinks. To find out, a team of psychologists in the UK recruited hundreds of psych researchers around the world to complete what’s known as the “implicit association test” tailored to reveal subconscious attitudes towards psychology. The idea was to find out what psychology researchers think of psychology at a subconscious level. Also, in keeping with the growing awareness of the importance of replicability in science, it was planned in advance that a second team in the USA would subsequently perform the same test with hundreds more international researchers. Dr Cass Andra, a reforming psychologist and leader of the UK arm, said she expected to find that psychologists are in denial about the crisis in their field.

The test involved psychologists pressing one of two keyboard keys as fast as possible whenever they saw different categories of word on-screen – words pertaining to psychology or other sciences, and positive and negative words. On some trials, the same key was allocated to psychology terms (e.g. “social psychology”) and positive words (“robust”), with the other key allocated to other sciences and negative words. On other trials, the set-up was switched.
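
Response times from this kind of set-up are typically converted into a bias score with a difference measure along the following lines. This is a simplified sketch with invented timings; the full IAT "D score" algorithm also trims extreme latencies and penalises errors:

```python
import statistics

# Simplified IAT-style scoring sketch: compare mean response times when
# psychology terms share a key with positive words (the "congruent"
# block) versus with negative words (the "incongruent" block), scaled
# by the pooled standard deviation. Timings below are invented.

def iat_score(congruent_rts, incongruent_rts):
    """Positive score = faster responses on psychology+positive pairings."""
    pooled_sd = statistics.stdev(congruent_rts + incongruent_rts)
    return (statistics.mean(incongruent_rts)
            - statistics.mean(congruent_rts)) / pooled_sd

congruent = [620, 650, 640, 610]     # ms: psychology + "robust" on one key
incongruent = [720, 760, 700, 740]   # ms: psychology paired with negative words

print(round(iat_score(congruent, incongruent), 2))  # > 0: implicit positive bias
```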

The main finding in the UK arm of the research is that psychology researchers showed an implicit positive bias towards psychology research – that is, they showed their fastest response times when the same key was allocated to psychology terms and positive words, suggesting that they see psychology research in a positive light. Andra and her colleagues said this was as they expected but also extremely worrying –  suggesting that deep down psychologists are confident in their discipline and do not see any need for reform.

However, the American replication attempt failed. These researchers, who also recruited psychology researchers from around the world, made the exact opposite finding – in this case, psychologists were particularly slow to respond when the same key was allocated to psychology terms and positive words, and much quicker when the same key was used for responding to psychology terms and negative words. Professor Polly Anna, who is sceptical about the idea of a replication crisis and a well-known figure through her popular TED talks, said she was disappointed by this result – “Firstly, it’s disappointing from a purely methodological point of view that we failed to replicate the first phase, but also I'm sorry to see that psychologists seem to believe deep-down that their field is in trouble. I think this shows the harm to morale that's been done by all the talk of a replication crisis.”

Unfortunately, despite the initial collaborative spirit, the two teams are now in dispute. In a surprise move, the American team, led by Professor Anna, has written a letter to The International Journal of Psychological Research calling for their own replication attempt to be retracted. Anna and her colleagues acknowledged that they might not have sufficient training in the implicit association test, and that it’s possible their own anxieties influenced their participants, thus invalidating their results. “Our finding that psychology researchers think psychology is in crisis is questionable – it can take skill and creativity to get the right results sometimes, and hand on heart, we might have lacked those things here,” Anna and her team told us. “We think the British finding, showing positive views among psychologists toward psychology, should stand, and we want our own replication attempt removed from the record.”

But in turn, the British researchers have written a letter to the journal calling for a retraction of the Americans’ retraction letter. “While we would normally hope for a successful replication attempt,” the letter states, “we actually welcome the US finding because it helps to show once again the difficulty of conducting replicable psychological science. It may well be the case that their finding that psychologists think psychology is not robust is more robust than our finding that psychologists think psychology is robust. Either way, we hope the message gets through that we need to work together to make psychology more robust.”
_________________________________

Andra C. et al. (2016). Implicit attitudes toward psychology held by psychological scientists. International Journal of Psychological Research, 1-9. DOI: 10.1090/02699931.2015.1129413

Anna P. et al. (2016). An attempt to replicate the finding of implicit positive bias toward psychology held by psychology researchers. International Journal of Psychological Research, 10-19. DOI: 10.1080/027249931.2015.1129313

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Our free fortnightly email will keep you up-to-date with all the psychology research we digest: Sign up!

Wednesday, 9 March 2016

How trustworthy is the data that psychologists collect online?

The internet has changed the way that many psychologists collect their data. It's now cheap and easy to recruit hundreds of people to complete questionnaires and tests online, for example through Amazon's Mechanical Turk website. This is a good thing in terms of reducing the dependence on student samples, but there are concerns about the quality of data collected through websites. For example, how do researchers know that the participants have read the questions properly or that they weren't watching TV at the same time as they completed the study?

Good news about the quality of online psychology data comes from a new paper in Computers in Human Behavior. Sarah Ramsey and her colleagues at Northern Illinois University first asked hundreds of university students to complete two questionnaires on computer – half of them did this on campus in the presence of a researcher, the others did it remotely, off campus.

The questions were about a whole range of topics, from sex to coffee. The researchers initially led the participants to believe they were really interested in their attitudes to these topics. But when the students started the second questionnaire, they were told the real test was to spot how many of its questions were repeats from the first. The idea was to see whether the students had really been paying attention to the questions – if they hadn't, they wouldn't be very good at spotting duplicates in the second questionnaire.

In fact, both groups of students – those supervised on campus and those who could do the questionnaire anywhere – performed well at spotting when questions were repeated. This suggests that even those who'd completed the questionnaires at home, or out and about, had been paying attention – good news for any researchers who like to collect data online.

A follow-up study was similar, but this time there were three participant groups: students on-campus, students off-campus, and 246 people recruited via Amazon's Mechanical Turk. Also, the researchers added a trick to see if the participants had read the questionnaire instructions properly – they did this by making an unusual request for how participants should indicate the time they completed the questionnaires.

In terms of the participants' paying attention to the questionnaire items, the results were again promising – all groups did well at spotting duplicate items. Regarding the reading of instructions, the results were generally more disappointing, but the Turkers actually performed the best: just under 15 per cent of on-campus students appeared to have read the instructions closely, compared with 8.5 per cent of off-campus students and 49.6 per cent of Turkers. Perhaps users of sites like Amazon's Mechanical Turk are more motivated to pay attention than students because they have an inherent interest in participating, whereas students might just be fulfilling their course requirements.

Of course this paper has only looked at two specific aspects of conducting psychology research online, both relating to the use of questionnaires. However, the researchers were relatively upbeat – "These results should increase our confidence in data provided by crowdsourced participants [those recruited via Amazon and other sites]" they said. But they also added that their findings raise general concerns about how closely participants read task instructions. There are easy ways round this though – for example, instructions can include a compliance test that must be completed before the proper questionnaire or other task begins, or researchers could try using audio to provide spoken instructions.

_________________________________

Ramsey, S., Thompson, K., McKenzie, M., & Rosenbaum, A. (2016). Psychological research in the internet age: The quality of web-based data. Computers in Human Behavior, 58, 354-360. DOI: 10.1016/j.chb.2015.12.049

--further reading--
What are participants really up to when they complete an online questionnaire?
Anonymity may spoil the accuracy of data collected through questionnaires

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Our free fortnightly email will keep you up-to-date with all the psychology research we digest: Sign up!

Friday, 19 February 2016

A US sociologist has accused baby psych labs of being creative with their results

A US academic who spent 16 months embedded in three American psychology baby labs reports that he observed numerous examples of researchers cutting corners and bending the rules of science. Writing in Socius, David Peterson at Northwestern University in Chicago says that doing psychology research with babies is so challenging and costly that developmental psychologists routinely do things like: checking early in a study whether their results are going to be significant (and abandoning or changing tack if they don't look promising); comparing notes with other supposedly independent judges when coding whether babies are looking at a stimulus; taking a relaxed approach to task instructions (for example, telling mothers that it doesn't really matter too much if their eyes are closed or not during a task); and making up post-hoc explanatory stories to account for surprising results, with those stories later presented as the initial impetus for the research. As an example of that last point, Peterson quotes an exchange between a grad student and her mentor: "You don't have to reconstruct your logic. You have the results now. If you can come up with an interpretation that works, that will motivate the hypothesis."

The open-access paper, presented as an ethnographic study of baby labs, comes at a time when psychology is working hard to tighten up its research practices, for example through the Center for Open Science and the introduction of registered reports in which planned hypothesis-driven methodologies are accepted for publication before their results are in. Peterson says that he "took part in nearly every aspect of laboratory life", that he took notes throughout the course of each day, and recorded all direct quotations immediately. "Ultimately I argue that developmental psychologists meet disciplinary requirements through a set of strategies that bend results toward statistical significance," he writes.

--The Baby Factory: Difficult Research Objects, Disciplinary Standards, and the Production of Statistical Significance
_________________________________
   
Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Our free fortnightly email will keep you up-to-date with all the psychology research we digest: Sign up!

Tuesday, 9 February 2016

New research challenges the idea that women have more elaborate autobiographical memories than men 

The longest autobiographical narratives were produced by men talking to women 
Prior research has found that women elaborate more than men when talking about their autobiographical memories, going into more detail, mentioning more emotions and providing more interpretation. One problem with this research, though, is that it hasn't paid much attention to who is listening or whether the memories are spoken or written.

This is unfortunate because findings like these can fuel overly simplistic gender-based assumptions – in this case, the idea that women have more elaborate and emotional autobiographical memories than men. A new study in the journal Memory reminds us, in the words of Robyn Fivush, that "autobiographical memory is not something we have but something we do in interaction". Specifically, the new research finds that the way people recall their memories depends on who is listening. In fact, when the listening researcher was a woman, the male participants provided more long-winded descriptions of their memories than the female participants.

Azriel Grysman and Amelia Denney at Hamilton College, New York, recruited 178 student participants (average age 19; 101 women) and asked them to describe "an episode in your life that was stressful to you", with further guidance that it must be a single event lasting no longer than a day, and that they should "try to imagine the event in as much detail as possible" before beginning their description, for which "there is no correct or incorrect length". Crucially, half the students performed this exercise alone in the psych lab with a female researcher, and half alone with a male researcher. Also, half described their memory out loud (they were told the researcher would simply nod periodically), while the others were instructed to type their memories into a computer.

The researchers coded the length and content of all the memories, which were about things like academic stress, arguments, injuries and the death of pets. Contrary to prior research, the longest autobiographical memories were those produced by male participants speaking to a female researcher. The actual content of the men's memories didn't vary according to the gender of the listener, nor whether they were writing or speaking. By contrast, the female participants' memories contained fewer mentions of internal states (people's emotions and feelings) when speaking or writing to a male researcher, and they provided fewer opinions when verbally describing their memories as compared with typing them (regardless of the gender of the listener).

We need to be aware that the results could be different if older and non-student participants were tested, and also if the memory prompt were different. There was also a confound in the study, in that the two male researchers who took turns to accompany the (predominantly White) participants were White, whereas the three female researchers were Asian-American and non-White Hispanic. That said, the researchers couldn't find any evidence that any individual researcher was driving the results.

"The findings reported here emphasise the importance of context in autobiographical memory report," the researchers concluded. "The implications of these findings are that autobiographical memories include the constantly interacting influences of person, audience, and the experimental or conversational context."

_________________________________

Grysman, A., & Denney, A. (2016). Gender, experimenter gender and medium of report influence the content of autobiographical memory report. Memory, 1-14. DOI: 10.1080/09658211.2015.1133829

--further reading--
Some perfectly healthy people can't remember their own lives
Total recall: The man who can remember every day of his life in detail
Repression redux? It is possible to deliberately forget details from our past

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Our free fortnightly email will keep you up-to-date with all the psychology research we digest: Sign up!

Wednesday, 28 October 2015

Survey that revealed widespread iffy research practices in psychology was itself iffy

Four years ago we were the first to break the disconcerting news that a survey of thousands of US psychologists had found their use of "questionable research practices" was commonplace: that is, their tendency to do things like failing to report all the measures they'd taken, or collecting more data after looking to see if their results were significant.

The story went viral, further aggravating the storm cloud sitting over the discipline at that time (it wasn't long since one of social psychology's most prolific professors had been found guilty of fraud). But now in an ironic development, two leading psychologists have published a damning critique of the "questionable research practices" survey, raising concerns about the methods that were used and the way the findings were interpreted. "Claims about violations of the standards of good science deserve to be held to the high standards they endorse," they write, "not the least in light of the damage that misleading inferences can cause."

Klaus Fiedler at the University of Heidelberg and Norbert Schwarz at the University of Southern California, Los Angeles, point out that many of the survey items were hopelessly vague and ambiguous. For example, the survey asked whether respondents had "failed to report all of a study's dependent measures". Fiedler and Schwarz say it would be unrealistic for any psychologist to always report every single thing they measure. Really, they argue, the question should have asked whether respondents had failed to report all of a study's dependent measures that were relevant for a particular finding. The pair go on to highlight similar concerns with other items in the survey.

Another issue they highlight is that for a respondent to demonstrate 100 per cent innocence (in terms of their use of questionable research practices), they would need to answer "No" repeatedly to all 10 items on the survey. When people complete surveys, they tend to show an aversion to always providing the same answer, so really a survey should be compiled such that scores toward a given construct or characteristic are based on a mix of "Yes" and "No" answers.

In terms of interpreting the survey, Fiedler and Schwarz argue that a fundamental error was made by the authors of the survey and by the media reports of its findings. The original survey asked if respondents "had ever" conducted any of the questionable practices in question, which speaks to the proportion of the sample who'd ever committed a given research "sin", but the authors and media went beyond this, to make assumptions about the prevalence of these behaviours. Fiedler and Schwarz liken this logical error to making inferences about church attendance based on the proportion of people who have ever entered a church.

Fiedler and Schwarz go on to report the findings of their own "questionable research practices" survey, which they gave to 1,138 members of the German Psychological Association. Their survey contained the same 10 items that were used in the original 2011 survey, but with the wording modified to be less ambiguous. They also included a measure of prevalence, asking their respondents not only if they'd ever committed the dubious practices but also in what proportion of their published work they had done so.

The new survey finds firstly that admission rates for ever having committed questionable practices were lower than in the 2011 survey – this could be because of the tightened wording, or because this was a sample of psychologists from a different culture. Secondly, and more importantly, argue Fiedler and Schwarz, combining these admission rates with the information they collected about prevalence shows that the survey outcomes drop by an order of magnitude. For example, the new survey found that 47 per cent of respondents admitted to at least once claiming to have predicted an unexpected finding. Yet the average prevalence figure for this practice was just 10 per cent (i.e. respondents on average said they did this for 10 per cent of their published work).
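
The arithmetic behind that order-of-magnitude gap is easy to check. Assuming, purely for illustration, that a practice touches a fixed 10 per cent of each researcher's papers independently, the chance of having "ever" used it climbs steeply with output:

```python
# Why "ever" rates can dwarf prevalence: a hypothetical researcher who
# uses a questionable practice in just 10% of papers still becomes very
# likely to have *ever* used it as the paper count grows (assuming
# independence across papers, purely for illustration).

prevalence = 0.10  # share of papers affected

for n_papers in (1, 3, 6, 10):
    p_ever = 1 - (1 - prevalence) ** n_papers
    print(f"{n_papers:>2} papers -> P(ever) = {p_ever:.0%}")

# At around 6 papers, P(ever) is already ~47%, even though only 10% of
# the work is affected - the church-attendance fallacy in miniature.
```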

Fiedler and Schwarz agree it is important to address issues of scientific misconduct, but they worry that the misinterpretation of a poorly executed survey risks spreading a harmful message – the idea that questionable research practices are rife, which could encourage more people to follow suit, thinking to themselves "everybody else is doing it, why shouldn't I?".

_________________________________

Fiedler, K., & Schwarz, N. (2015). Questionable Research Practices Revisited. Social Psychological and Personality Science. DOI: 10.1177/1948550615612150

--further reading--
Questionable research practices are rife in psychology, survey suggests

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Our free fortnightly email will keep you up-to-date with all the psychology research we digest: Sign up!

Thursday, 3 September 2015

Using brain imaging to reevaluate psychology's three most famous cases

It's 50 years since the American neurologist Norman Geschwind published his hugely influential Disconnexion Syndromes in Animals and Man, in which he argued that many brain disorders and injuries could best be understood in terms of the damage incurred to the white-matter pathways connecting different areas of the brain.

To mark this anniversary, an international team of researchers has used modern brain imaging techniques to reveal, in an open-access article for Cerebral Cortex, the likely damage to brain connectivity suffered by three of psychology's most famous cases: the 19th century rail worker Phineas Gage, who survived an iron rod passing through his brain; Louis Victor Leborgne, the 19th century aphasic patient studied by Paul Broca who played a key role in our understanding of language function in the brain; and the most studied amnesiac in history, Henry Molaison (known as H.M. in the literature), who died in 2008.

Michel Thiebaut de Schotten and his colleagues first obtained existing information about the brains of these three cases. For Gage they used a CT scan taken of his skull by researchers in 2004 and mapped the signs of injury onto a simulation of his brain. Leborgne's brain is in preservation at the Dupuytren Museum in Paris and they used an MRI scan of it taken in 2007. For Molaison's brain they used an MRI scan taken while he was still alive in 1993.

Next, the researchers created an intricately detailed map of the human brain's connective pathways. They used an advanced version of a technique known as diffusion tensor imaging to plot the connective tissues in the brains of 129 healthy volunteers (aged 18-79; 59 men). They combined the data from all these healthy people's brains to create an average road-map of the human brain's connective tracts.

The final step involved applying the information on the brain damage incurred by the three famous cases onto this road map of the human brain's connective pathways, to see which important tracts had probably been affected.

In the case of Gage, the researchers estimate that he suffered widespread damage to several connective pathways in his frontal lobes, beyond the specific damage thought to have been inflicted by the passage of the iron bar. These pathways include the uncinate fasciculus, the frontal intralobar networks, and the fronto-striatal-thalamic-frontal network, with likely implications for his decision-making and emotional functioning.

Mapping Leborgne's brain lesions onto the connectivity roadmap, the researchers estimate that he suffered extensive damage to many tracts, including almost all the dorsolateral tracts of the left hemisphere which would have had profound implications for his language function (on top of the effects caused by localised damage to what is now known as Broca's area in the left frontal lobe). The researchers think Leborgne also likely suffered damage to pathways not involved in language, such as the left cortico-spinal tract (which could explain the documented paralysis of his right arm and leg).

Finally, turning to Molaison, the researchers again estimate widespread damage to connective tissues beyond the main brain regions, including the hippocampus, that were directly removed by surgery (Molaison became amnesic after radical neurosurgery to treat his epilepsy). This includes the fornix, the ventral cingulum, uncinate fasciculus and anterior commissure. Damage to that last tract, which is involved in processing smell, might explain lab reports that Molaison had problems with odour discrimination.

What to make of these new insights? The researchers said they have "demonstrated the validity of applying an atlas based approach to reappraise the effects of disconnection in 3 historic patients". Their research is certainly a fitting tribute to the legacy of Geschwind, showing how "social behaviour, language, and memory depend on the coordinated activity of different regions rather than single areas in the frontal or temporal lobes." However, the researchers also admitted that much caution is needed: their research involved many ambitious leaps and generalisations. What is for sure is that these new insights will further fuel the mythical status of these three patients. Gage, Leborgne and Molaison are the psychological case studies that just keep giving.

_________________________________

Thiebaut de Schotten, M., Dell'Acqua, F., Ratiu, P., Leslie, A., Howells, H., Cabanis, E., Iba-Zizen, M.T., Plaisant, O., Simmons, A., Dronkers, N.F., Corkin, S., & Catani, M. (2015). From Phineas Gage and Monsieur Leborgne to H.M.: Revisiting Disconnection Syndromes. Cerebral Cortex. PMID: 26271113

--further reading--
Neuroscience still haunted by Phineas Gage
What the textbooks don't tell you about psychology's most famous case study
Glimpsed at last - the life of neuropsychology's most important patient
Looking Back: Understanding amnesia – Is it time to forget HM?

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Our free fortnightly email will keep you up-to-date with all the psychology research we digest: Sign up!

Thursday, 27 August 2015

This is what happened when psychologists tried to replicate 100 previously published findings

While 97 per cent of the original results showed a statistically significant effect, this was reproduced in only 36 per cent of the replications
After some high-profile and at times acrimonious failures to replicate past landmark findings, psychology as a discipline and scientific community has led the way in trying to find out more about why some scientific findings reproduce and others don't, including instituting reporting practices to improve the reliability of future results. Much of this endeavour is thanks to the Center for Open Science, co-founded by the University of Virginia psychologist Brian Nosek.

Today, the Center has published its latest large-scale project: an attempt by 270 psychologists to replicate findings from 100 psychology studies published in 2008 in three prestigious journals that cover cognitive and social psychology: Psychological Science, the Journal of Personality and Social Psychology, and the Journal of Experimental Psychology: Learning, Memory and Cognition.

The Reproducibility Project is designed to estimate the "reproducibility" of psychological findings and complements the Many Labs Replication Project, which published its initial results last year. The new effort aimed to replicate many different prior results to try to establish the distinguishing features of replicable versus unreliable findings: in this sense it was broad and shallow, looking for general rules that apply across the fields studied. By contrast, the Many Labs Project involved many different teams all attempting to replicate a smaller number of past findings – in that sense it was narrow and deep, providing more detailed insights into specific psychological phenomena.

The headline result from the new Reproducibility Project report is that whereas 97 per cent of the original results showed a statistically significant effect, this was reproduced in only 36 per cent of the replication attempts. Some replications found the opposite effect to the one they were trying to recreate. This is despite the fact that the Project went to incredible lengths to make the replication attempts true to the original studies, including consulting with the original authors.

Just because a finding doesn't replicate doesn't mean the original result was false – there are many possible reasons for a replication failure, including unknown or unavoidable deviations from the original methodology. Overall, however, the results of the Project are likely indicative of the biases that researchers and journals show towards producing and publishing positive findings. For example, a survey published a few years ago revealed the questionable practices many researchers use to achieve positive results, and it's well known that journals are less likely to publish negative results.

The Project found that studies that initially reported weaker or more surprising results were less likely to replicate. In contrast, neither the expertise of the original research team nor that of the replication team was related to the chances of replication success. Meanwhile, social psychology replications were less than half as likely to achieve a significant finding compared with cognitive psychology replication attempts; but in terms of declines in effect size, both fields showed the same average reduction from original study to replication attempt, to less than half (cognitive psychology studies started out with larger effects, which is why more of the replications in this area retained statistical significance).

Among the studies that failed to replicate was research on loneliness increasing supernatural beliefs; conceptual fluency increasing a preference for concrete descriptions (e.g. if I prime you with the name of a city, that increases your conceptual fluency for the city, which supposedly makes you prefer concrete descriptions of that city); and research on links between people's racial prejudice and their response times to pictures showing people from different ethnic groups alongside guns. A full list of the findings that the researchers attempted to replicate can be found on the Reproducibility Project website (as can all the data and replication analyses).

This may sound like a disappointing day for psychology, but in fact really the opposite is true. Through the Reproducibility Project, psychology and psychologists are blazing a trail, helping shed light on a problem that afflicts all of science, not just psychology. The Project, which was backed by the Association for Psychological Science (publisher of the journal Psychological Science), is a model of constructive collaboration showing how original authors and the authors of replication attempts can work together to further their field. In fact, some investigators on the Project were in the position of being both an original author and a replication researcher.

"The present results suggest there is room to improve reproducibility in psychology," the authors of the Reproducibility Project concluded. But they added: "Any temptation to interpret these results as a defeat for psychology, or science more generally, must contend with the fact that this project demonstrates science behaving as it should" – that is, being constantly sceptical of its own explanatory claims and striving for improvement. "This isn't a pessimistic story", added Brian Nosek in a press conference for the new results. "The project shows science demonstrating an essential quality, self-correction – a community of researchers volunteered their time to contribute to a large project for which they would receive little individual credit."
_________________________________

Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science. DOI: 10.1126/science.aac4716

--further reading--
How did it feel to be part of the Reproducibility Project?
A replication tour de force
Do psychology findings replicate outside the lab?
A recipe for (attempting to) replicate existing findings in psychology
A special issue of The Psychologist on issues surrounding replication in psychology.
Serious power failure threatens the entire field of neuroscience 

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Our free fortnightly email will keep you up-to-date with all the psychology research we digest: Sign up!

Wednesday, 26 August 2015

Having a brain scan changed how these children think about minds and brains

The link between the mind and brain is tricky enough for expert psychologists and neuroscientists to grapple with, let alone young children. Nonetheless, they grow up with their own naive understanding. For example, there's some cute research from the 90s that found, somewhere between age 7 and 9, most children come to see the brain as containing thoughts and memories – they'll say that a skunk with a brain transplant from a rabbit will have memories of being a rabbit. Younger kids, by contrast, recognise the brain is involved in mental activity, but not that it contains thoughts and memories (they think the skunk with a rabbit brain will still have memories of being a skunk).

Now researchers in France have explored how taking part in a neuroimaging experiment influences young children's understanding of the mind-brain link. Their results, published recently in Trends in Neuroscience and Education, suggested that the experience led the children to have a more sophisticated, brain-based understanding of at least some mental functions.

Sandrine Rossi and her colleagues recruited 37 eight-year-olds who two years previously had taken part in a brain scan study. For this, they'd completed some numerical tasks in the scanner and they were also shown images of their brain. Thirty-seven eight-year-olds with no brain-scan experience acted as controls. Both groups of kids were from similar middle-class backgrounds.

To test their understanding of the mind-brain link, the children were introduced to a cartoon character, Julie, and asked to select which parts of her body (hand, eye, mind, mouth, brain or heart) she needed to perform various functions: seeing, talking, reading, counting, dreaming and imagining.

The main differences between the groups occurred when judging what Julie needed to dream and imagine. Here, a majority (nearly 70 per cent) of the children who'd undertaken a brain scan said she'd need both her mind and her brain, compared with around 40 per cent of the controls. The control kids were more likely to say she'd need her mind, without also mentioning she'd need her brain. In other words, the children with brain scan experience appeared to see the mind and brain as more closely linked, at least for dreaming and imagining. Furthermore, when judging what Julie would need to see and talk, the controls more often neglected to mention either her brain or mind, compared with the brain scan kids. In contrast, the groups didn't differ in how often they chose the mouth, hand, heart and eye for the different functions.

This research was inspired by parents' reports that their children had become more brain aware after undertaking a brain scan. Now there is some data to support their anecdotes. "The present study is the first to show an educational effect of participating in an MRI protocol on children's naive mind-brain conceptions," the researchers said.

_________________________________

Rossi, S., Lanoë, C., Poirel, N., Pineau, A., Houdé, O., & Lubin, A. (2015). When I met my brain: Participating in a neuroimaging study influences children's naïve mind–brain conceptions. Trends in Neuroscience and Education. DOI: 10.1016/j.tine.2015.07.001

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Our free fortnightly email will keep you up-to-date with all the psychology research we digest: Sign up!

Friday, 21 August 2015

Free personality tests are more reliable and efficient than the paid variety

In most areas of life, we expect the free versions of products to be sub-standard compared with the "premium" paid-for versions. After all, why would anyone pay for something if the free equivalent were better? However, a new study of personality tests boots this logic off the park – psychologists at the University of Texas report in the Journal of Psychology that free tests are more reliable and efficient than their paid-for, proprietary counterparts.

To measure test reliability, Tyler Hamby and his colleagues dug out personality test data collected in five prior meta-analyses of the Big Five personality traits. Meta-analyses combine data from many studies in a given field, and the Big Five is the dominant personality theory in contemporary psychology, breaking personality down into five main dimensions, including Extraversion and Conscientiousness. In the end, the researchers had usable data from 345 samples from 288 studies involving 161,091 participants.

Crucially, 142 of these research samples had completed free personality tests such as the Big Five Inventory and various versions of the International Personality Item Pool. The other samples had completed paid-for personality tests such as versions of the NEO Five Factor Inventory. For their analysis, Hamby and his team compared the "alpha coefficients" of the different personality tests – for any given test, this essentially involves looking at the scores from the questionnaire items that supposedly measure the same trait and seeing how well they correlate with each other. If a test has what's known as good internal consistency, then the scores for its items that measure the same construct should show a high correlation.
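
For readers who want to see the mechanics, the usual alpha coefficient (Cronbach's alpha) can be computed from a respondents-by-items score matrix as below; the data are invented for illustration:

```python
# Cronbach's alpha from a respondents x items matrix of scores - the
# "alpha coefficient" of internal consistency. Data are invented.

def cronbach_alpha(scores):
    k = len(scores[0])   # number of items
    n = len(scores)      # number of respondents
    item_vars = []
    for j in range(k):
        col = [row[j] for row in scores]
        mean = sum(col) / n
        item_vars.append(sum((x - mean) ** 2 for x in col) / (n - 1))
    totals = [sum(row) for row in scores]
    t_mean = sum(totals) / n
    total_var = sum((t - t_mean) ** 2 for t in totals) / (n - 1)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Five respondents answering four items meant to tap the same trait:
data = [
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [1, 2, 2, 1],
]
print(round(cronbach_alpha(data), 2))  # ~0.97: the items "hang together"
```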

A note of caution: The researchers didn't look at test–retest reliability, a different measure which tells you how well participants' test scores correlate when they take the same test at different times. Nor did they compare the tests' validity, which is the evidence for whether the tests are truly measuring what they're supposed to be measuring. In other words, this study certainly shouldn't be taken as the final word on the merit of free and paid-for personality tests.

These caveats aside, overall there was a small but inconsistent (applying to some traits but not others) difference in reliability between free and paid tests, in favour of the free tests. But it doesn't end there. When you use alpha coefficients to measure internal consistency in this way, the outcome is confounded by the number of items in the test: longer tests with more items tend to achieve higher reliability scores. This is relevant to the current investigation because paid-for tests tend to be much longer than the free versions (80 per cent longer, on average). When Hamby and his colleagues controlled for test length (by estimating reliability at 12 items for each trait), they found the free tests had higher reliabilities than the paid-for tests for all five personality traits.
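
The textbook tool for this kind of length correction is the Spearman-Brown formula. Whether the authors used exactly this adjustment is our assumption, but it shows why length matters:

```python
# Spearman-Brown adjustment: predict what a scale's reliability would be
# if it were lengthened or shortened by a factor m - the textbook way to
# compare tests of unequal length at a common length (e.g. 12 items per
# trait, as in the analysis above). The figures below are invented.

def spearman_brown(reliability, n_items_old, n_items_new):
    m = n_items_new / n_items_old
    return (m * reliability) / (1 + (m - 1) * reliability)

# A 24-item paid-for scale with alpha = .85, re-estimated at 12 items:
print(round(spearman_brown(0.85, 24, 12), 2))  # drops to ~0.74

# A 10-item free scale with alpha = .80, re-estimated at 12 items:
print(round(spearman_brown(0.80, 10, 12), 2))  # rises to ~0.83
```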

Is there any reason for paid-for tests to be less reliable? The authors say their findings are not entirely surprising – one possible explanation is that researchers or practitioners who use paid-for tests are often forbidden from adapting them in any way (for example, adding/removing items or changing the wording of items). This is to protect the proprietary status of the product, but of course forbidding any changes is unscientific because it prevents progress by making it impossible to test whether revised versions would be superior.

At least for research purposes (as opposed to in applied settings), these new results stack heavily in favour of free tests. Not only do free tests match or exceed the reliability of paid-for tests, they are also shorter, which encourages participants to complete all test items and reduces drop-out rates. "Assuming that a particular scale has been properly validated, we tentatively recommend using free scales to measure Big Five traits in personality research," the researchers said. It will be interesting to see if this finding applies to other areas of psychology research where free and paid-for tests are available.

_________________________________

Hamby, T., Taylor, W., Snowden, A., & Peterson, R. (2015). A Meta-Analysis of the Reliability of Free and For-Pay Big Five Scales. The Journal of Psychology, 1-12. DOI: 10.1080/00223980.2015.1060186

--further reading--
Our bias for the left-hand side of space could be distorting large-scale surveys.

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Our free fortnightly email will keep you up-to-date with all the psychology research we digest: Sign up!

Thursday, 20 August 2015

Why do more intelligent people live longer?

By guest blogger Stuart Ritchie

It’s always gratifying, as a psychologist, to feel like you’re studying something important. So you can imagine the excitement when it was discovered that intelligence predicts life expectancy. This finding is now supported by a large literature including systematic reviews, the most recent of which estimated that a difference of one standard deviation in childhood or youth intelligence (that’s 15 IQ points on a standardised scale) is linked to a 24 per cent lower mortality risk in the subsequent decades of life. That’s a pretty impressive link, but it immediately raises a critical question: why do brighter people live longer?

A new study (pdf) published in the International Journal of Epidemiology attempts to provide new, biological evidence to answer this question. But first, let’s think through the possibilities. We know that people with higher IQ scores tend to be healthier, possibly because they eat better, exercise more, are better able to understand health advice, are less likely to be injured in accidents and deliberate violence, and also because they tend to have better jobs. Here, the causal arrow is pointing from IQ to longevity – the effects of being smarter cause you to die later. But there are other explanations: what if having a lower IQ is just an indicator of an underlying health condition that’s the real cause of earlier death? Or what if the genes for having a healthier body are also the genes for having a healthier brain, and the causal pathway is from this third variable (i.e. genetics) to both IQ and longevity?

The authors of the new study, Rosalind Arden and colleagues, tested this last hypothesis, known as "genetic pleiotropy" (the idea that the same genes influence multiple different traits). They took three twin datasets, selecting in total 1,312 twin pairs where one or both of the twins had died. Then they correlated the twins’ IQ scores with the lengths of their lives (or their life expectancies, for those still living).

As they expected, the researchers found an overall lifespan-IQ correlation, albeit a small one (r = 0.12, where 1.00 would be a perfect match). Importantly, by comparing the correlations in identical twins (who share all their genes) versus fraternal twins (who share approximately half), they were also able to estimate the "genetic correlation" – the overlap in the two traits that’s caused by genetic differences. They found that, overall, 95 per cent of the correlation in IQ and longevity was due to genetics.
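
To see why comparing the two kinds of twins does the work, here is a simplified Falconer-style sketch with invented numbers (the paper itself fits full structural-equation models, so this only captures the gist):

```python
# Simplified Falconer-style twin logic with invented numbers. Comparing
# identical (MZ) twins, who share ~100% of their genes, with fraternal
# (DZ) twins, who share ~50%, lets you split a cross-trait correlation
# into genetic and environmental parts.

# Cross-twin cross-trait correlations: twin 1's IQ with twin 2's lifespan.
r_mz_cross = 0.100   # invented
r_dz_cross = 0.045   # invented

r_phenotypic = 0.12  # the observed IQ-lifespan correlation in the study

# Falconer's logic: doubling the MZ-DZ gap recovers the additive
# genetic component of the correlation.
genetic_part = 2 * (r_mz_cross - r_dz_cross)
print(f"genetic share = {genetic_part / r_phenotypic:.0%}")  # ~92% here
```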

So, is this a final answer to the debate over the IQ-mortality connection? Does this show that, perhaps depressingly, the link isn’t due to changeable lifestyle factors, but actually some kind of genetic "system integrity" that underlies brightness and longer lives?

Not so fast. The important part is in the phrase "due to genetics". In a 2013 Nature Reviews Genetics article, geneticist Nadia Solovieff and colleagues outlined all the potential causal mechanisms that might make two traits genetically correlated. They drew a critical distinction between "biological" and "mediated" pleiotropy. The former is the "obvious" inference, which is that the same genes cause both intelligence and longevity. But the latter possibility is that the variables only appear to be genetically correlated, because genes cause one factor, which then goes on to cause the other. That is, if genes cause intelligence, and intelligence (via lifestyle choices etc.) causes a longer lifespan, we’d still see the same genetic correlation, even if those genes have no direct effect on lifespan itself. If true, this would still be pleiotropy of a sort: the genes linked to intelligence are having an indirect effect on lifespan. But as the authors acknowledge in their paper, this "pleiotropy-lite" interpretation of the new findings would mean we don’t yet have knockdown evidence for the genetic "system integrity" idea.

So how do we tease apart the two possible explanations for the genetic correlation? In the paper, the authors suggest we study non-human animals (for which the literature on cognitive ability is growing fast) where we can more readily control the "lifestyle" factors, thereby isolating any potential direct effects of the same genes on both intelligence and longevity. Really, though, we might have to wait until we have a long list of genes that are reliably linked to human intelligence. If we knew a good number of those, we could test whether they also influence health and lifespan – if they did, this would be evidence for true "biological" pleiotropy. We’d know then that the link between IQ and lifespan is down to some people simply winning the genetic lottery, rather than to lifestyle factors that any of us could change.

Conflict of interest: Stuart Ritchie is a postdoc in the lab of Ian Deary, one of the co-authors of the paper discussed here.

_________________________________

Arden, R., Luciano, M., Deary, I., Reynolds, C., Pedersen, N., Plassman, B., McGue, M., Christensen, K., & Visscher, P. (2015). The association between intelligence and lifespan is mostly genetic. International Journal of Epidemiology. DOI: 10.1093/ije/dyv112

--further reading--
How do you prove that reading boosts IQ?

Post written by Stuart J. Ritchie, a Research Fellow in the Centre for Cognitive Ageing and Cognitive Epidemiology at the University of Edinburgh. His new book, Intelligence: All That Matters, is available now. Follow him on Twitter: @StuartJRitchie

Our free fortnightly email will keep you up-to-date with all the psychology research we digest: Sign up!
