Showing posts with label Methodological. Show all posts

Thursday, 14 April 2011

Out of the lab and into the waiting room - research on where we look gets real

You know how, when you're in an elevator or an underground train, everybody seems to try their darnedest not to look anyone else in the eye? This everyday experience completely contradicts hundreds of psychology studies conducted in the lab, which show how rapidly our attention is drawn to other people's faces and especially their eyes.

Why the contradiction? Because psychologists have used pared down, highly controlled situations to study where people look, often involving faces and social scenes presented on a computer screen. And crucially, when participants look at a monitor, they generally know that the other person can't look back. In real life, things get more complicated - we might not want to engage in eye contact for all the social messages that can send.

Now psychologists are realising it's time to step out of the lab to see how social attention operates in the real world. One step at a time though - they've still kept things fairly basic. Kaitlin Laidlaw and her colleagues rigged 26 student participants up with a mobile eye-tracking head-set and had them sit in a waiting room for a short time.

There was some minor trickery. The participants thought they were waiting for a navigation task, in which their eye movements would be recorded as they went from room to room. That really did happen, but first, for two minutes, whilst an experimenter went to fetch an instruction sheet, the participants' eye movements were recorded for the purposes of the current study.

For half the participants, another student (female, aged 24, and actually an assistant working for the researchers) was sat nearby, 50 inches to the left and 40 inches in front. She filled in a questionnaire quietly and looked directly at them, with a neutral expression, three times during the two-minute wait. For the other participants, no other person was physically present, but a TV monitor located a similar distance away to the right showed a student filling in a questionnaire (the same person as in the other condition, behaving in exactly the same way). The question: how would the participants' head and eye movements differ between the groups?

The participants in the video condition looked at the other student (shown on the monitor) far more often than they looked at a blank computer monitor located elsewhere in the room, and far more often than the participants in the physical presence condition looked at the student sat near them. In fact, the participants in the latter condition didn't look at the physically present student any more than they looked at a blank computer monitor in the room. 'Through the simple act of introducing the potential for social interaction, visual behaviour changed dramatically,' the researchers said.
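The authors' analysis pipeline isn't described in the post, but the basic quantity at stake - how often gaze lands on another person - can be sketched in a few lines of Python. Everything here (the coordinates, the bounding box, the function name) is hypothetical, for illustration only:

```python
# Illustrative sketch, not the study's actual analysis code.
# Mobile eye trackers yield gaze points in scene-camera coordinates;
# a common summary is the fraction of fixations landing inside an
# "area of interest" (AOI), such as the region occupied by the other person.

def proportion_in_aoi(fixations, aoi):
    """fixations: list of (x, y) gaze points; aoi: (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = aoi
    hits = sum(1 for x, y in fixations
               if xmin <= x <= xmax and ymin <= y <= ymax)
    return hits / len(fixations) if fixations else 0.0

# Hypothetical data: five fixations, three of which fall on the confederate
fixations = [(100, 120), (400, 300), (410, 310), (90, 115), (405, 295)]
confederate_aoi = (380, 280, 440, 330)  # bounding box around the other person

print(proportion_in_aoi(fixations, confederate_aoi))  # → 0.6 (3 of 5 fixations)
```

Comparing this proportion between the live and video conditions is, in essence, the contrast the study reports.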

A further detail: participants who scored lower on a self-report measure of social skills tended to look more at the other student in the physically present condition (looking behaviour was unrelated to self-reported social skills in the video condition). The researchers said this association could reflect a reduced awareness of social etiquette, and could help explain why studies of people diagnosed with autistic spectrum disorders have identified anomalies in social attention in real-world settings, but have often failed to find them in the lab.

This study is just the start - all sorts of questions remain unanswered, from the effect of wearing sunglasses, so your gaze can't be seen, to cross-cultural comparisons. 'It is important to note that our results do not imply that humans do not possess a bias in real life to attend to other people, as the video-taped confederate condition clearly demonstrates that we do,' the researchers said. 'However, our live-confederate condition provides strong evidence that this behaviour is malleable, and can be influenced by the opportunity for an interaction with the other individual.'
_________________________________

Laidlaw, K., Foulsham, T., Kuhn, G., & Kingstone, A. (2011). Potential social interactions are important to social attention. Proceedings of the National Academy of Sciences, 108 (14), 5548-5553. DOI: 10.1073/pnas.1017022108 [Hat tip: Sarcastic_f]

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Monday, 13 December 2010

When and how psychological data is collected affects the kind of students who volunteer

Psychology has a serious problem. You may have heard about its over-dependence on WEIRD participants - that is, those from Western, Educated, Industrialised, Rich and Democratic societies. More specifically, as regular readers will be aware, countless psychology studies involve undergraduate students, particularly psych undergrads. Apart from the obvious fact that this limits the generalisability of the findings, Edward Witt and his colleagues provide evidence in a new paper for two further problems, this time involving self-selection biases.

Just over 500 Michigan State University undergrads (75 per cent were female) had the option, at a time of their choosing during the Spring 2010 semester, to volunteer either for an on-line personality study, or a face-to-face version. The data collection was always arranged for Wednesdays at 12.30pm to control for time of day/week effects. Also, the same personality survey was administered by computer in the same way in both experiment types, it's just that in the face-to-face version it was made clear that the students had to attend the research lab, and an experimenter would be present.

Just 30 per cent of the sample opted for the face-to-face version. Predictably enough, these folk tended to score more highly on extraversion. The effect size was small (d = -.26) but statistically significant. Regarding more specific personality traits, the students who chose the face-to-face version were also more altruistic and less cautious.

What about choice of semester week? As you might expect, it was the more conscientious students who opted for dates earlier in the semester (r = -.20). What's more, men were far more likely to volunteer later in the semester, even after controlling for average personality differences between the sexes. For example, 18 per cent of week-one participants were male, compared with 52 per cent in the final, 13th week.
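For readers unfamiliar with the two statistics reported above, here's a minimal Python sketch of how Cohen's d (the standardised group difference) and Pearson's r (the correlation) are computed. The data below are invented; only the formulas correspond to the reported figures:

```python
# Hypothetical illustration of the statistics reported in the Witt study;
# the numbers are made up, not the study's raw data.
from statistics import mean, stdev

def cohens_d(a, b):
    """Standardised mean difference between two groups,
    using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                 / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd

def pearson_r(x, y):
    """Pearson correlation between paired observations."""
    mx, my = mean(x), mean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = (sum((xi - mx) ** 2 for xi in x)
           * sum((yi - my) ** 2 for yi in y)) ** 0.5
    return num / den

# e.g. extraversion scores for two (invented) groups of volunteers
print(cohens_d([3.1, 3.5, 4.0, 3.8], [3.6, 4.2, 4.4, 4.0]))  # negative d
```

A small |d| of .26 means the two group distributions overlap heavily, which is why such effects, though statistically significant in large samples, are easy to miss.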

In other words, the kind of people who volunteer for research will likely vary according to the time of semester and the mode of data collection. Imagine you used false negative feedback on a cognitive task to explore effects on confidence and performance. Participants tested at the start of semester, who are typically more conscientious and motivated, are likely to be affected in a different way than participants who volunteer later in the semester.

This isn't the first time that self-selection biases have been reported in psychology. A 2007 study, for example, suggested that people who volunteer for a 'prison study' are likely to score higher than average on aggressiveness and social dominance, thus challenging the generalisability of Zimbardo's seminal work. However, despite the occasional study highlighting these effects, there seems to be little enthusiasm in the social psychological community to do much about it.

So what to do? The specific issues raised in the current study could be addressed by sampling throughout a semester and replicating effects using different data collection methods. 'Many papers based on college students make reference to the real world implications of their findings for phenomena like aggression, basic cognitive processes, prejudice, and mental health,' the researchers said. 'Nonetheless, the use of convenience samples place limitations on the kinds of inferences drawn from research. In the end, we strongly endorse the idea that psychological science will be improved as researchers pay increased attention to the attributes of the participants in their studies.'
_________________________________

Witt, E., Donnellan, M., & Orlando, M. (2011). Timing and selection effects within a psychology subject pool: Personality and sex matter. Personality and Individual Differences, 50 (3), 355-359. DOI: 10.1016/j.paid.2010.10.019

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Previously on the Digest: Just how non-clinical are so-called non-clinical community samples?
Just how representative are the people who volunteer for psychology experiments?

Friday, 22 October 2010

Asch's "conformity study" without the confederates

With the help of five to eight 'confederates' (research assistants posing as naive participants), Solomon Asch in the 1950s found that when it came to making public judgments about the relative lengths of lines, some people were willing to agree with a majority view that was clearly wrong.

Asch's finding was hugely influential, but a key criticism has been his use of confederates who pretended to believe unanimously that a line was a different length than it really was. They might well have behaved in a stilted, unnatural manner. And attempts to replicate the study could be confounded by the fact that some confederates will be more convincing than others. To solve these problems Kazuo Mori and Miho Arai adapted the MORI technique (Manipulation of Overlapping Rivalrous Images by polarizing filters), used previously in eye-witness research. By donning filter glasses similar to those used for watching 3-D movies, participants can view the same display and yet see different things.

Mori and Arai replicated Asch's line comparison task with 104 participants tested in groups of four at a time (on successive trials participants said aloud which of three comparison lines matched a single target line). In each group, three participants wore identical glasses, with one participant wearing a different set, thereby causing them to observe that a different comparison line matched the target line. As in Asch's studies, the participants stated their answers publicly, with the minority participant always going third.

Whereas Asch used male participants only, the new study involved both men and women. For women only, the new findings closely matched the seminal research, with the minority participant being swayed by the majority on an average of 4.41 times out of 12 key trials (compared with 3.44 times in the original). However, the male participants in the new study were not swayed by the majority view.

There are many possible reasons why men in the new study were not swayed by the majority as they were in Asch's studies, including cultural differences (the current study was conducted in Japan) and generational changes. Mori and Arai highlighted another reason - the fact that the minority and majority participants in their study knew each other, whereas participants in Asch's study did not know the confederates. The researchers argue that this is a strength of their new approach: 'Conforming behaviour among acquaintances is more important as a psychological research topic than conforming among strangers,' they said. 'Conformity generally takes place among acquainted persons, such as family members, friends or colleagues, and in daily life we seldom experience a situation like the Asch experiment in which we make decisions among total strangers.'

Looking ahead, Mori and Arai believe their approach will provide a powerful means of re-examining Asch's classic work, including in situations - for example, with young children - in which the use of confederates would not be practical.
_________________________________

Mori, K., & Arai, M. (2010). No need to fake it: Reproduction of the Asch experiment without confederates. International Journal of Psychology, 45 (5), 390-397. DOI: 10.1080/00207591003774485

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Tuesday, 14 September 2010

What are participants really up to when they complete an online questionnaire?

Internet surveys are an increasingly popular method for collecting data in psychology, for obvious reasons, but they have some serious shortcomings. How do you know if a participant read the instructions properly? What if they clicked through randomly, completed it drunk or maybe their cat walked across the keyboard? Now a possible solution has arrived in the form of a tool, called the UserActionTracer (UAT), developed by Stefan Stieger and Ulf-Dietrich Reips.

The UAT is a piece of code that tells the participant's web browser to store information, including timings, on all mouse clicks (single and double), choices in drop-down menus, radio buttons, all inserted text, key presses and the position of the mouse pointer. Stieger and Reips tested this out with a survey of 1046 participants on the subject of instant messaging. The new tool revealed that 31 participants changed their reported age; 5.9 per cent made suspicious changes to opinions they'd given; 46 per cent clicked through at least some parts of the questionnaire at a suspiciously fast rate (mainly for so-called 'semantic differential' items in which the participant must choose a position between two contrasting adjectives); 3.6 per cent of participants left the questionnaire inactive for long periods; 6.3 per cent displayed excessive clicking; and 11 per cent showed excessive mouse movements (it's that cat again).
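The actual UAT runs as JavaScript in the participant's browser, and its implementation isn't reproduced here. As an illustration of the kind of screening such paradata makes possible, this hypothetical Python sketch flags participants whose per-item response times suggest random click-through (the threshold and data are assumptions, not figures from the paper):

```python
# Hypothetical server-side screening of questionnaire paradata.
# Not the UAT itself - just a sketch of one analysis it enables.

SUSPICIOUS_MS = 800  # assumed threshold: under ~0.8 s per item
                     # is too fast to have read the question

def flag_speeders(paradata, threshold_ms=SUSPICIOUS_MS, min_fraction=0.5):
    """paradata: {participant_id: [response time in ms for each item]}.
    Returns ids of participants who answered at least min_fraction
    of their items faster than threshold_ms."""
    flagged = []
    for pid, times in paradata.items():
        fast = sum(1 for t in times if t < threshold_ms)
        if times and fast / len(times) >= min_fraction:
            flagged.append(pid)
    return flagged

logs = {"p01": [350, 420, 390, 610],       # consistently fast: flagged
        "p02": [2100, 3400, 1800, 2500]}   # plausible reading times
print(flag_speeders(logs))  # → ['p01']
```

The same style of rule could be applied to the other behaviours Stieger and Reips logged, such as long idle periods or excessive mouse movement.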

As a way of checking the usefulness of this extra behavioural data, the researchers concentrated on the fraction of participants for whom they had access to a secondary source of information that could be used to verify the questionnaire answers. This showed that participants who'd displayed more suspicious behaviour while filling out the questionnaire also tended to provide answers that didn't match up with the other information source.

'Our study shows that the UAT was successful in collecting highly detailed information about individual answering processes in online questionnaires,' Stieger and Reips said. Another application of the tool is in pre-testing of online questionnaires. Researchers could use the tool to test which items tend to prompt corrections or inappropriate click-throughs before rolling out a questionnaire to a larger sample.
_________________________________

Stieger, S., & Reips, U. (2010). What are participants doing while filling in an online questionnaire: A paradata collection tool and an empirical study. Computers in Human Behavior, 26 (6), 1488-1495. DOI: 10.1016/j.chb.2010.05.013

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Wednesday, 2 June 2010

The homeless man and his audio cave

We're defined in part by where we are, the places we go and what we do there. We adorn our homes with paraphernalia caught in the net of life - the photos, the books and pictures. But what happens when you're homeless? How do you define your space and identity when your home is a public place? To find out, Darrin Hodgetts and colleagues have conducted an unusual 'ethnographic' case study with 'Brett', a 44-year-old homeless man in Auckland.

The researchers gave Brett a camera, asked him to take photos representative of his life and then they conducted two in-depth interviews with him, using the photos as spring-boards for discussion.

The clearest finding to emerge was the way that Brett used a portable radio to insulate himself from the outside world - what the researchers called an 'audio cave'. 'I've got a sound bubble around me,' Brett said, 'and I can wander through the streets without paying attention to what's going on around me.' At the same time, by consistently listening to his favourite station George FM, Brett was able to develop a sense of belonging with the station's other listeners. This provided Brett with a 'fleeting sense of companionship and "we-ness",' the researchers said.

Brett is a self-confessed loner who avoids contact with other people where possible and who tries to conceal his homeless status. He told the researchers about the places he went that enabled him to do this, including a former gun emplacement with stunning views of the sea; Judges Bay, where there are free showers and gas barbecues; and in the city centre, the church, bookshops and libraries. These places allow Brett to experience 'life as a "normal" person who has interest in books and reading, or simply escaping the city to sit and reflect,' the researchers said. By contrast, returning to photograph the public toilets on Pitt Street was an ordeal for Brett, reminding him of his time as a drug addict.

Brett referred to how other homeless people spend a lot of time sitting round talking and how it [homelessness] psychologically unhinges them. By contrast, the researchers said Brett had never 'lost himself' to the streets. '...[H]is memories, imagination, and daily practices, including his use of space, provide anchorage to an adaptive sense of self and belonging.'
_________________________________

Hodgetts, D., Stolte, O., Chamberlain, K., Radley, A., Groot, S., & Nikora, L.W. (2010). The mobile hermit and the city: Considering links between places, objects, and identities in social psychological research on homelessness. British Journal of Social Psychology. PMID: 19531282

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

-The image, courtesy of Darrin Hodgetts, shows Brett's sun-glasses, portable radio and book, which help him create a personal space in public.
-For more on the psychology of homelessness, see this recent 'Helping the Homeless' feature article in The Psychologist magazine.

Friday, 23 April 2010

Face-to-face in a brain scanner

Many neuro-imaging studies claim to have investigated what happens in the brain when people interact socially. To overcome the awkward fact that participants have to lie entombed in the bore of a large magnet, these studies have used various means to simulate a social interaction: having participants watch videos of social interactions, interact with an animated character, or play a game with a human opponent (usually computer controlled) supposedly located in another room. Such methods score marks for improvisation, but arguably none of them fully captures the dynamic cut and thrust of a real face-to-face social interaction between two people. That's why Elizabeth Redcay and her colleagues have devised the first experimental set-up that allows live face-to-face interaction (via video link) whilst participants lie inside a brain-imaging magnet.

Participants in this study watched a live video feed of the experimenter. The experimenter in turn had a display showing them a live feed of where the participant was looking. Experimenter and participant then engaged in a series of 'games' that required social interaction. For example, in one, the experimenter picked up various toys and the participant had to look in the direction of the appropriately coloured bucket to which the toy belonged. Compared with watching a recording of this same interaction, the live interaction itself triggered increased activation in a swathe of social-cognitive, attention-related and reward processing brain regions.

The second experiment involved the participant identifying which screen quadrant a mouse was hidden in. In the live 'joint attention' condition, the experimenter's gaze direction cued the mouse's location and only when both experimenter and participant looked at the correct quadrant did the mouse appear. Compared with a solo condition in which a house symbol cued the mouse location, the interactive joint attention condition triggered increased activation in the right superior temporal sulcus and right temporal parietal junction. The former brain region has previously been associated with processing socially relevant stimuli such as eye gaze and reaching, whereas the latter temporal-parietal region is associated with thinking about other people's thoughts.

Past research using simulations of social interaction has identified the dorso-medial prefrontal cortex as a key area involved in social engagement. The quietness of this region in the current study suggests it may have been the competitive or social judgement elements of previous paradigms, rather than social interaction per se, that led to its activation.

'Social interaction in the presence of a live person (compared to a visually identical recording) resulted in activation of multiple neural systems which may be critical to real-world social interactions but are missed in more constrained, offline experiments,' the researchers said.

Redcay's group said their new set-up would be ideal for studying the social difficulties associated with autistic spectrum disorders (ASD). Attempts to identify the neural bases of these difficulties have previously met with mixed success. 'A neuroimaging task that includes the complexity of dynamic, multi-modal social interactions may provide a more sensitive measure of the neural basis of social and communicative impairments in ASD,' the researchers said.
_________________________________

Redcay, E., Dodell-Feder, D., Pearrow, M.J., Mavros, P.L., Kleiner, M., Gabrieli, J.D., & Saxe, R. (2010). Live face-to-face interaction during fMRI: A new tool for social cognitive neuroscience. NeuroImage, 50 (4), 1639-47. PMID: 20096792

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Image courtesy of Elizabeth Redcay.

Friday, 15 January 2010

Psychology researchers aren't paying enough attention to debriefing their participants

Deception was a fundamental part of some of the most famous experiments in psychology - just think of Milgram's obedience studies, in which participants thought they were administering an electric shock, or Asch's conformity research, during which participants were tricked into believing everyone else in the room thought a line was a different length than it was. Although ethical standards have been tightened, deception is still used widely in psychology. It's not uncommon for even the most sedate studies to involve giving participants false test feedback or misleading them about the true aims of the research. A vital element of psychological science, therefore, is to debrief participants after experimenting on them - telling them the truth about what happened and why, and listening to their feedback.

Even studies that don't deploy trickery have the potential to leave a lasting impression - consider all the tests of new interventions aimed at outcomes from improving memory to ameliorating depression. We know from past research that simply asking someone about a behaviour, such as drug taking, increases their likelihood of indulging in that behaviour. Of course, telling participants too much up front can be detrimental to the results, and fully informed consent is therefore far rarer than most researchers would care to admit. That's why it's so important to debrief them fully afterwards. And yet, having said all this, an alarming new survey of researchers by Donald Sharpe and Cathy Faye suggests that debriefing is a neglected practice in contemporary psychology. Ironically for a science that's supposed to be about people and behaviour, there's also scant research on what kinds of debriefing are even effective - for example, is it enough to tell participants they were given false feedback, or should they have the chance to complete a real test?

Sharpe and Faye surveyed over two hundred researchers who'd published during a twelve-month period from 2006 to 2007, either in the American Psychological Association's flagship social psychology journal, the Journal of Personality and Social Psychology, or in the Journal of Traumatic Stress. Just one third of articles in the social psychology journal had mentioned debriefing, and fewer than one in ten of the trauma journal articles had done so. The mentions that did appear were usually cursory, such as 'Participants in this and all following experiments were debriefed prior to dismissal.' Where the purpose of a study was obvious, the survey suggested, most researchers considered debriefing unnecessary, with nearly all their focus placed instead on informed consent prior to the study.

Set against this worrying picture, Sharpe and Faye make a strong case for just how vital debriefing ought to be to good quality research. Taking their lead from a provocative article published on this topic thirty years ago by Frederick Tesch, the pair say that effective debriefing is vital not only for the ethical reasons outlined above, but for educational and methodological functions too.

Explaining to participants why and how a study was performed ought to be given far higher priority, they argue, especially when one considers how many studies are performed on psychology students. Even with non-psychology students, the exercise of carefully explaining the rationale, methodology, and perhaps even results, of a study, could help to promote the scientific cause. 'Participants would learn about doing research, the joys and frustrations, and the excitement of discovery,' Sharpe and Faye said.

Regarding the methodological benefits of debriefing, the authors said that the process ought to be two-way, and that information garnered from participants can illuminate study findings and help improve future procedures. 'Researchers would learn about how participants view the experimental task, what makes sense and what does not, and what the participants think it was all about,' Sharpe and Faye said.

Their paper ends with seven recommendations for how to improve the situation, including greater discussion of debriefing in the research literature; more thorough reporting of debriefing practices in journals' methods sections; use of online overflow pages for discussing debriefing; and formalising the debriefing procedure. 'Progress will be made when researchers recognise the importance of debriefing or when some unfortunate circumstance forces such recognition,' the authors said.
_________________________________

Sharpe, D., & Faye, C. (2009). A Second Look at Debriefing Practices: Madness in Our Method? Ethics & Behavior, 19 (5), 432-447. DOI: 10.1080/10508420903035455

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Tuesday, 10 November 2009

Ten statisticians every psychologist should know about

As psychology students past and present will be only too aware, statistics are a key part of every psychology undergrad course and they also appear in nearly every published journal article. And yet have we ever stopped to recognise the statisticians who have brought us these wonderful mathematical tools? As psychologist Daniel Wright puts it: "Statistical techniques are often taught as if they were brought down from some statistical mount only to magically appear in [the software package] SPSS."

To help address this oversight, Wright has compiled a list of ten statisticians he thinks every psychologist should know about. The list is strict in the sense that it only includes statisticians, whilst omitting psychologists, such as Jacob Cohen and Lee Cronbach, who have made significant contributions to statistical science in psychology.

Wright divides his list in three, beginning with three founding fathers of modern statistics. First up is Karl Pearson (pictured), best known to psychologists for the Pearson correlation and Pearson's chi-square test. He was a socialist who turned down a knighthood in 1935. His first momentous achievement was his 1892 book The Grammar of Science, and he also founded the world's first university statistics department, at UCL, in 1911.

Ronald Fisher was the author of Statistical Methods for Research Workers, which Wright describes as "one of the most important books of science." Fisher was also instrumental in the development of p values in null hypothesis significance testing.

Together with Pearson's son, Egon, Jerzy Neyman produced the framework of null and alternative hypothesis testing that dominates stats to this day. He also created the notion of confidence intervals. Neyman and Fisher were big critics of each other's theories. After a brief spell at UCL with Fisher, Neyman moved later to Berkeley where he set up the stats department - now one of the top such departments in the world.

Wright also lists three of his statistical heroes: John Tukey of post-hoc test fame, who made major contributions in robust methods and graphing (and who coined the terms ANOVA, software and bit); Donald Rubin who has conducted influential work on effect sizes and meta-analyses; and Brad Efron who developed the computer-intensive bootstrap resampling technique.

Wright devotes the last section of his list to four statisticians who have gifted psychology particular statistical techniques: David Cox and the Box-Cox transformation; Leo Goodman and categorical data analysis; John Nelder and the Generalised Linear Model; and Robert Tibshirani and the lasso data reduction technique.

"The list is meant to introduce some of the main statistical pioneers and their important achievements in psychology," Wright concludes. "It is hoped learning about the people behind the statistical procedures will make the procedures seem more humane than many psychologists perceive them to be."

What do you think of Wright's list? Is there anyone he's overlooked?
_________________________________

Wright, D.B. (2009). Ten Statisticians and Their Impacts for Psychologists. Perspectives on Psychological Science, 4 (6), 587-597. [Draft pdf via author website]

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.


Friday, 2 October 2009

Resting-state brain networks are stable

The world doesn't stop when night falls. From rabbits to night-club bouncers, there's a whole cast of nocturnal characters who come out to play. It's a similar story with the brain. When we disengage from the outside world, the brain doesn't go to sleep. Rather, there's a suite of neural regions, known collectively as the "default mode network", that spring to life. Over the last decade, this recognition of the brain's intrinsic functioning has led neuroscientists to perform numerous studies in which they scan the brain during rest, looking for areas that correlate with each other in the rise and fall of their activity.

We now have maps of "intrinsic connectivity networks" and findings showing how these patterns are affected by ageing and illness. But not everyone is happy. Many psychologists are concerned that this field reflects a "decognitivisation" of neuroscience, and that studies of a person's brain at rest are, almost by definition, poorly controlled (see "the Resting Brain" in this month's Psychologist magazine).

It is against this backdrop that Zarrar Shehzad and twelve co-authors have examined the reliability of resting-state networks across multiple brain scans, separated by up to 16 months. They say this is the first time the test-retest reliability of these networks has been explicitly tested. Their finding is that the functioning of a resting brain may well be unconstrained, but that some of the intrinsic networks that emerge are nonetheless stable over time.

The researchers scanned the brains of 26 participants three times. An initial scan was performed and then 5 to 16 months later, scans two and three were performed within 45 minutes of each other. Each scan lasted about five minutes during which time the participants were told to rest with their eyes open, whilst before them was a black background with the word "relax" written in white.

Shehzad's team focused on three regions of interest, including those areas previously identified as the "default mode network" and its opposite number, the "task positive network". As in previous resting-state studies, they found ample evidence of correlations in neural activity across various regions of the brain. What's more, these patterns of correlation tended to be moderately to robustly stable across the three scans. Stability of linkage between regions was greatest for those that were positively correlated (when one goes up, the other goes up), rather than negatively correlated (when one goes up, the other goes down). The researchers also found that intrinsic correlations were particularly stable within the "default mode network", which extends from the prefrontal cortex along the midline to the parietal and medial temporal lobes.
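The logic of a test-retest comparison like this can be sketched with a toy simulation (all numbers here are invented for illustration, not the study's data): give each pair of regions a stable underlying coupling strength, add independent measurement noise to each scanning session, and then correlate the two sessions' connectivity estimates.

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

random.seed(1)
n_connections = 100

# Each region pair has a stable underlying coupling strength...
true_coupling = [random.uniform(-0.8, 0.8) for _ in range(n_connections)]

# ...but each scanning session measures it with independent noise.
scan1 = [c + random.gauss(0, 0.2) for c in true_coupling]
scan2 = [c + random.gauss(0, 0.2) for c in true_coupling]

# Test-retest reliability: how well session one's connectivity
# estimates predict session two's.
reliability = pearson(scan1, scan2)
print(f"test-retest r = {reliability:.2f}")
```

The point of the sketch is simply that a high between-session correlation of connectivity values is what "stable networks" means operationally; the actual study used more sophisticated reliability statistics.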

Intrinsic correlations within the "task positive network", which is anti-correlated with the "default mode network" and which tends to be more active during extrinsic tasks, were less stable over time. The researchers said this probably reflects the fact that the task positive network is a superordinate system consisting of numerous sub-networks.

"Our findings support the audacious hypothesis that intrinsic connectivity networks, which are readily observed during resting state fMRI studies (as well as during task-based studies), reflect the fundamental self-organising properties of the brain," the researchers said.
_________________________________

Shehzad, Z., Kelly, A., Reiss, P., Gee, D., Gotimer, K., Uddin, L., Lee, S., Margulies, D., Roy, A., Biswal, B., Petkova, E., Castellanos, F., & Milham, M. (2009). The Resting Brain: Unconstrained yet Reliable. Cerebral Cortex, 19(10), 2209-2229. DOI: 10.1093/cercor/bhn256

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.



Sunday, 6 September 2009

The tantalising potential of mobile phones for social research

Nearly everyone seems to carry a mobile phone these days. What if social scientists could exploit this technology to spy on our social behaviour: who we speak to and who we spend time with? It turns out they already are. Nathan Eagle, named recently as a leading young innovator by Technology Review, and his colleagues, have published one of the first studies into social network analysis using spy software loaded onto Nokia smartphones.

For nine months, Eagle's team recorded data from the phones of 94 students and staff at MIT. By using Bluetooth technology and phone masts, they could monitor the movements of the participants, as well as their phone calls. Their main goal with this preliminary study was to compare data collected from the phones with subjective self-report data collected through traditional survey methodology.

The participants were asked to estimate their average spatial proximity to the other participants, whether they were close friends, and to indicate how satisfied they were at work.

Some intriguing findings emerged. For example, the researchers could predict with around 95 per cent accuracy who was friends with whom by looking at how much time participants spent with each other during key periods, such as Saturday nights.

There were also discrepancies between the two data sets. For example, participants tended to overestimate how much time they spent with friends, and underestimate how much time they spent with non-friends. Also, the accuracy of the self-report proximity data tended to peak over the previous seven days (at which point it correlated highly with the phone records), but then its accuracy tailed off. This provides useful information about the validity of survey records over time, and an interesting insight into people's memories for their social interactions.

As regards satisfaction at work, it turned out that people who were in closer proximity to their friends during work time tended to be happier at work, whilst participants less happy at work tended to make more phone calls to friends during work hours.

"Data collected from mobile phones have the potential to provide insight into the underlying relational dynamics of organisations, communities and potentially societies," the researchers said.
_________________________________

Eagle, N., Pentland, A., & Lazer, D. (2009). Inferring friendship network structure by using mobile phone data. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.0900282106

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Sunday, 26 April 2009

It's those Voodoo correlations again ... brain imagers accused of "double dipping"

This time there's no explicit naming and shaming, and the title may not be as colourful, but a new study out today in prestige journal Nature Neuroscience echoes many of the same concerns voiced earlier this year in the leaked paper "Voodoo Correlations in Social Neuroscience" (since renamed as "Puzzlingly High Correlations ..."). And the new paper's implications are surely just as profound for the cognitive neuroscience community.

Nikolaus Kriegeskorte and colleagues analysed all the fMRI studies published in Nature, Science, Nature Neuroscience, Neuron and Journal of Neuroscience, in 2008, and found that 42 per cent of these 134 papers were guilty of performing at least one non-independent selective analysis - what Kriegeskorte's team dub "double dipping".

This is the procedure, also condemned by the Voodoo paper, in which researchers first perform an all-over analysis to find the brain region(s) that respond to the condition of interest, before going on to test their hypothesis on data collected in just those regions. The cardinal sin is that the same data are used in both stages.

A similarly flawed approach can be seen in brain imaging studies that claim to be able to discern a presented stimulus from patterns of activity recorded in a given brain area. These are the kind of studies that lead to "mind reading" headlines in the popular press. In this case, the alleged statistical crime is to use the same data for the training phase of pattern extraction and the subsequent hypothesis testing phase.

Kriegeskorte's claim is not that all the studies guilty of this procedure are invalid, but that their data will have been distorted to varying degrees. "To decide which neuroscientific claims hold, the community needs to carefully consider each particular case, guided by both neuroscientific and statistical expertise," they wrote.

To support their case, Kriegeskorte's team performed two "mock" experiments of the "region of interest" and "pattern extraction" types. In each case they showed how double-dipping can drastically distort results. For example, in a mock pattern-information analysis they achieved a significant result with double-dipping even after feeding purely random data into the analysis.
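That random-data demonstration is easy to reproduce in miniature. The pure-Python sketch below is illustrative only (the sample sizes and threshold are invented, not the paper's): it generates nothing but noise, selects the voxels that happen to correlate with a random "behavioural" score, and then reports the mean correlation of that selected set using the very same data.

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

random.seed(42)
n_subjects, n_voxels = 16, 500

# Pure noise: a random "behavioural" score and random voxel activity.
behaviour = [random.gauss(0, 1) for _ in range(n_subjects)]
voxels = [[random.gauss(0, 1) for _ in range(n_subjects)]
          for _ in range(n_voxels)]

# First dip: search all voxels for ones that correlate with behaviour.
rs = [pearson(v, behaviour) for v in voxels]
selected = [r for r in rs if abs(r) > 0.5]

# Second dip: report the mean correlation of the selected voxels,
# computed on the SAME data that were used to select them.
double_dipped = statistics.fmean(abs(r) for r in selected)
print(f"{len(selected)} 'significant' voxels, mean |r| = {double_dipped:.2f}")
```

With only 16 subjects, a handful of the 500 noise voxels will clear the threshold by chance, so the circular analysis reports a healthy-looking correlation from data containing no signal at all.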

The ramifications of these statistical observations don't end with brain imaging. They also have implications for work with electroencephalography, in which researchers are prone to use the same data for selecting relevant channels and testing hypotheses, and for research using single-cell recording.

"A circular analysis is one whose assumptions distort its results," the authors concluded. "We have demonstrated that practices that are widespread in neuroimaging are affected by circularity."

UPDATE: A freely available PDF of supplementary info, including how to spot circular analyses and a proposed policy for preventing distortion of data, is now available at Nature Neuroscience.
_________________________________

Kriegeskorte, N., Simmons, W.K., Bellgowan, P.S.F., & Baker, C.I. (2009). Circular analysis in systems neuroscience: the dangers of double dipping. Nature Neuroscience. In press.

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Wednesday, 21 January 2009

Another shock for brain imaging research - the signal isn't always linked to neuronal activity

The brain imaging community is about to experience another shockwave, just days after the online leak of a paper that challenged many of the brain-behaviour correlations reported in respected social neuroscience journals.

Now Yevgeniy Sirotin and Aniruddha Das have reported that blood flow changes in the brain - the signal measured by brain scanners - are not always linked to changes in neuronal activity. Experts have known for some time that the relationship between blood flow and neuronal activity might be rather complicated but this is the first time that such an extreme mismatch has been demonstrated.

Sirotin and Das used electrodes to directly record neuronal activity in the vision part of the brains of two awake monkeys, and at the same time they used a camera system and injected dyes to monitor blood flow to that region. This kind of thing couldn't be done with humans because it is too intrusive and physically harmful.

The monkeys were trained to look at a tiny dot when it was one colour and to relax when it was another colour. The dot alternated colours following a predictable rhythm, so the monkeys could predict when they'd need to concentrate and when they could relax. Sometimes, when the monkeys were required to fixate the dot, it was accompanied by intense visual stimuli, whereas on other trials there was nothing, leaving the monkeys in near darkness.

As you'd expect, when there was intense visual stimulation, the researchers observed increased neuronal activity in the visual area of the monkeys' brains and lots of blood flow to that region. But here's the important bit: they also observed increased blood flow to the visual brain even when there was nothing for the monkeys to look at, except for the minuscule dot, and even though neuronal activity was virtually silent. It's as though extra blood was being channelled to the visual cortex, in anticipation that there might be lots of visual material to look at.

There's a chance that this anticipatory blood flow could just reflect an increase in arousal, since the researchers also noted anticipatory changes to heart rate and pupil size just before an active phase of each trial was due to begin. However, Sirotin and Das were able to rule this out using an auditory task. Heart rate and pupil size changed in anticipation of the active phase of the auditory task, but there was no anticipatory blood flow to the visual parts of the brain.

The interpretation of human brain imaging experiments is founded on the idea that changes in blood flow reflect parallel changes in neuronal activity. This important new study shows that blood flow changes can be anticipatory and completely unconnected to any localised neuronal activity. It's up to future research to find out which brain areas and cognitive mechanisms are controlling this anticipatory blood flow. As the researchers said, their finding points to a "novel anticipatory brain mechanism".

Writing a commentary on this paper in the same journal issue, David Leopold at the National Institute of Mental Health, Bethesda, said the findings were "sure to raise eyebrows among the human fMRI research community."
_________________________________

Sirotin, Y.B., & Das, A. (2009). Anticipatory haemodynamic signals in sensory cortex not predicted by local neuronal activity. Nature, 457, 475-479.

Image shows blood vessel activation in the brain evoked by visual stimulus. White lightning bolt patterns outline arteries in the contraction phase of the anticipatory response; dark centre is the specific response to the visual stimulus. Credit Sirotin & Das.

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Wednesday, 14 January 2009

Do you do voodoo?

They are beloved by prestigious journals and the popular press, but many recent social neuroscience studies are profoundly flawed, according to a devastating critique - Voodoo Correlations in Social Neuroscience - in press at Perspectives on Psychological Science (PDF).

The studies in question have tended to claim astonishingly high correlations between localised areas of brain activity and specific psychological measures. For example, in 2003, Naomi Eisenberger at the University of California and her colleagues published a paper purporting to show that levels of self-reported rejection correlated at r=.88 (1.0 would be a perfect correlation) with levels of activity in the anterior cingulate cortex.

According to Hal Pashler and his band of methodological whistle-blowers, if Eisenberger's study and others like it were accurate, this "would be a milestone in understanding of brain-behaviour linkages, full of promise for potential diagnostic and therapeutic spin-offs." Unfortunately, Pashler's group argue that the findings from many of these recent studies are virtually meaningless.

The suspicions of Pashler and his colleagues - Ed Vul (lead author), Christine Harris and Piotr Winkielman - were piqued when they realised that many of the cited levels of correlation in social neuroscience were impossibly high given the respective reliability of brain activity measures and measures of psychological factors, such as rejection. To investigate further they conducted a literature search and surveyed the authors of 54 studies claiming significant brain-behaviour correlations. The search wasn't exhaustive but was thought to be representative, with a slight bias towards higher impact journals.

Pashler and his team found that 54 per cent of the studies had used a seriously biased method of analysis, a problem that probably also undermines the findings of fMRI studies in other fields of psychology. These researchers had identified small areas of brain activity (called voxels) that varied according to the experimental condition of interest (e.g. being rejected or not), and had then focused on just those voxels that showed a correlation, higher than a given threshold, with the psychological measure of interest (e.g. feeling rejected). Finally, they had arrived at their published brain-behaviour correlation figures by taking the average correlation from among just this select group of voxels, or in some cases just one “peak voxel”. Pashler's team contend that by following this procedure, it would have been nearly impossible for the studies not to find a significant brain-behaviour correlation.

By analogy with a purely behavioural experiment, imagine the author of a new psychometric measure claiming that his new test correlated with a target psychological construct, when actually he had arrived at his significant correlation only after he had first identified and analysed just those items that showed the correlation with the target construct. Indeed, Pashler and his collaborators speculated that the editors and reviewers of mainstream psychology journals would routinely pick up on the kind of flaws seen in imaging-based social neuroscience, but that the novelty and complexity of this new field meant such mistakes have slipped through the net.

'...[I]n half of the studies we surveyed, the reported correlation coefficients mean almost nothing, because they are systematically inflated by the biased analysis,' Pashler's team wrote. Perhaps unsurprisingly, among the papers they surveyed, it was the papers that used this flawed approach that tended to have published the highest correlation figures. '...[W]e suspect that while in many cases the reported relationships probably reflect some underlying relationship (albeit a much weaker relationship than the numbers in the articles implied), it is quite possible that a considerable number of relationships reported in this literature are entirely illusory.'

On a more positive note, Pashler's team say there are ways to analyse social neuroscience data without bias and that it should be possible for many of the studies they've criticised to re-analyse their data. For example, one approach is to identify voxels of interest by region, before seeing if their activity levels correlate with a target psychological factor. An alternative approach is to use different sets of data to perform the different steps of analysis used previously. For example, by using one run in the scanner to identify those voxels that correlate with a psychological measure, and then using a second, independent run to assess how highly that subset of voxels correlates with the chosen measure. "We urge investigators whose results have been questioned here to perform such analyses and to correct the record by publishing follow-up errata that provide valid numbers," Pashler's team said.
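The independent-splits remedy can be sketched in the same toy spirit (again with made-up noise data, not the paper's): select voxels on run one, then estimate the correlation on run two. On pure noise, the double-dipped estimate looks impressive while the independent estimate collapses back towards chance.

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

random.seed(7)
n_subjects, n_voxels = 16, 500
behaviour = [random.gauss(0, 1) for _ in range(n_subjects)]

# Two independent scanner runs of pure noise for every voxel.
run1 = [[random.gauss(0, 1) for _ in range(n_subjects)]
        for _ in range(n_voxels)]
run2 = [[random.gauss(0, 1) for _ in range(n_subjects)]
        for _ in range(n_voxels)]

# Select voxels using run 1 only.
selected = [i for i in range(n_voxels)
            if abs(pearson(run1[i], behaviour)) > 0.5]

# Biased estimate: re-uses run 1 (the "double dip").
biased = statistics.fmean(abs(pearson(run1[i], behaviour))
                          for i in selected)
# Unbiased estimate: tests the selected voxels on independent run 2.
unbiased = statistics.fmean(abs(pearson(run2[i], behaviour))
                            for i in selected)

print(f"double-dipped |r| = {biased:.2f}, independent |r| = {unbiased:.2f}")
```

The selection step guarantees the biased figure exceeds the threshold; the independent run, having played no part in selection, reveals how little signal is really there.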

Matthew Lieberman, a co-author on Eisenberger's social rejection study, told us that he and his colleagues have drafted a robust reply to these methodological accusations, which will be published in Perspectives on Psychological Science alongside the Pashler paper (now available online; PDF). In particular he stressed that concerns over multiple comparisons in fMRI research are not new, are not specific to social neuroscience, and that the methodological approach of the Pashler group, done correctly, would lead to similar results to those already published. "There are numerous errors in their handling of the data that they reanalyzed," he argued. "While trying to recreate their [most damning] Figure 5, we went through and pulled all the correlations from all the papers. We found around 50 correlations that were clearly in the papers Pashler's team reviewed but were not included in their analyses. Almost all of these overlooked correlations tend to work against their hypotheses."
_________________________________

Vul, E., Harris, C., Winkielman, P., & Pashler, H. (2009). Voodoo Correlations in Social Neuroscience. Perspectives on Psychological Science. In press.

Update: Lead author of the Pashler-group Voodoo critique, Ed Vul, answers some questions about the group's paper here. He also answers a rebuttal here. Picked up from the comments, Lieberman's rebuttal is now also available online in full. 


Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Monday, 29 September 2008

Practice, practice, practice ... the benefits are ongoing

Whether you're talking about sport, chess or music, a wealth of research has shown that the best-performing experts practise more than their less able colleagues.

What's unclear is whether the benefits of this practice are ongoing throughout a person's career and secondly, whether the benefits of practice vary with a person's level of skill. Are the most elite performers of such a high standard because of all the practice they do, or is it because of their superior talent that this practice is beneficial?

These questions are addressed in a new study of elite teenage chess players in the Netherlands, taking advantage of what's known as linear mixed models analysis to compare the effects of multiple factors over time, both within and between separate groups.

Anique de Bruin and colleagues were particularly interested in comparing the effects of deliberate, focused practice on those teenagers who remained in the Netherlands' elite chess training programme, compared with the effects of practice on the performance of those who continued competing but who dropped out of the national training.

Forty-eight elite teenage players who stayed in the training scheme and thirty-three who dropped out answered questions about how many hours a week they spent practising. Their performance over the years was measured via their official chess ratings, collected between two and four times a year.

The headline result is that the benefits of practice are ongoing through the years - not just once a person has become elite - and that the players who dropped out performed less well, not because they benefited less from practice, but because they practised less. Assuming these findings translate to other domains of skill besides chess, these findings have implications for all of us.

"Irrespective of skill level, stimulating deliberate practice will likely improve performance," the researchers said.

The Digest caught up with co-author Niels Smits and asked him about the statistical approach taken in this study:
"Linear mixed models are a very elegant method of analyzing longitudinal data. They are very flexible for at least three reasons. First, in contrast to older methods such as repeated measures (M)ANOVA, they do not ask for complete data on all time points for all subjects. Consequently, one does not have to deal with missing data by removing observations or imputing data points. Second, they do not ask for equal time intervals between the measurements; therefore subjects are allowed to differ in the moment of measurement. Time of measurement is simply entered as a covariate in the model to allow for a time effect. A third virtue is that time-varying covariates can be easily added to the model to determine how changes in them influence the dependent variable."
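Smits' second point, entering time as a covariate, can be illustrated with a crude stdlib sketch. To be clear, this is not a real mixed model (which would pool the per-player estimates rather than simply averaging them), and all the names and numbers below are invented: each player is rated at different, unevenly spaced occasions, yet a per-player regression of rating on time still recovers the underlying improvement rate.

```python
import random
import statistics

def slope(times, ratings):
    """Least-squares slope of rating on time for one player."""
    mt, mr = statistics.fmean(times), statistics.fmean(ratings)
    num = sum((t - mt) * (r - mr) for t, r in zip(times, ratings))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

random.seed(3)
true_gain = 60  # rating points gained per year of practice (invented)

players = []
for _ in range(20):
    base = random.gauss(1800, 100)
    # Unequal, player-specific measurement occasions (in years) --
    # the situation repeated-measures ANOVA cannot handle gracefully.
    times = sorted(random.uniform(0, 4)
                   for _ in range(random.randint(4, 6)))
    ratings = [base + true_gain * t + random.gauss(0, 20) for t in times]
    players.append((times, ratings))

# Treating time as a covariate per player recovers the practice effect
# despite the missing and unevenly spaced observations.
mean_slope = statistics.fmean(slope(t, r) for t, r in players)
print(f"estimated rating gain per year = {mean_slope:.0f}")
```

No player needs to share measurement dates with any other, and no observations are discarded, which is precisely the flexibility Smits describes.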
For more information on linear mixed models, see:

This talk given to the BPS Student Members' Group by Thom Baguley.
The Centre for Multilevel modelling at the University of Bristol.
The Adequacy of Repeated-Measures Regression for Multilevel Research (journal article).

_________________________________

de Bruin, A.B.H., Smits, N., Rikers, R.M.J.P., & Schmidt, H.G. (2008). Deliberate practice predicts performance over time in adolescent chess players and drop-outs: A linear mixed models analysis. British Journal of Psychology. DOI: 10.1348/000712608X295631

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Tuesday, 24 June 2008

Is the brain irrelevant to psychology?

Cognitive neuroscience explores how our mental faculties emerge from, and are organised in, the slimy tissue of our brains, and it's currently a thriving field. But some critics argue it's a dead-end, that biology is irrelevant to psychological accounts of how our minds work. In the words of philosopher Jerry Fodor, "If the mind happens in space at all, it happens somewhere north of the neck. What exactly turns on knowing how far north?"

Now, writing in a special journal issue on the interface between psychology and neuroscience, language expert Peter Hagoort has hit back, arguing that knowing something about the biology of cognition can help to shape psychological models.

Hagoort cites two key examples to support his claims. A little background is required.

When we encounter an unexpected word in a sentence ("He spread his warm bread with SOCKS."), a negative spike in electrical activity recorded from the surface of the scalp is detectable 400ms later and is thought to reflect the extra brain processing required for the surprise word.

Meanwhile, when we encounter a grammatical anomaly (e.g. "The boys kissES the girls") - there is a positive, more posterior, spike of activity, 600ms afterwards. This latter effect is observed even with nonsense sentences that violate grammatical rules, thus showing that the spike is independent from the processing of meaning.

Taken together, Hagoort says these findings have implications for psychological models of language processing because they endorse the idea that meaning and grammar are not handled by a "general-purpose language processor", as he puts it, but rather they are "domain specific" - in other words, processed independently.

For his second example, Hagoort points to a brain imaging study that showed the pleasantness of a smell was rated differently depending on whether it was accompanied by the label "cheese" or "body odour". Crucially, the brain imaging data showed the verbal label affected processing in the actual smell centre of the brain. "This example illustrates something that would not so easily be found out with a behavioral method: that language information acts directly in the olfactory input system," Hagoort said.
_________________________________

Hagoort, P. (2008). Should Psychology Ignore the Language of the Brain? Current Directions in Psychological Science, 17(2), 96-101. DOI: 10.1111/j.1467-8721.2008.00556.x (Access is currently free).

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Monday, 12 May 2008

Just how non-clinical are so-called non-clinical community samples?

A common practice in psychology research is to take some measure - let's say amount of support from friends and family - and to compare people with and without mental health problems on it. The trouble, according to Idia Thurston and her co-workers, is knowing where to find people without mental health problems.

The tactic used by most researchers is to recruit from the wider community, for example by advertising in the local paper. But Thurston's team argue a large proportion of the general community actually have their own mental health problems, and many of them are receiving therapy, something many researchers fail to screen for. This means that what research papers describe as a "non-clinical community sample" may not be so "non-clinical" after all.

Thurston and her colleagues assessed 224 families recruited through adverts in local newspapers in the south-eastern USA as part of a larger study. They found 11 per cent of the teenagers, 20 per cent of the mothers and 13 per cent of the fathers met the diagnostic criteria for one or more psychiatric disorders. Moreover, 12 per cent of the teenagers, 20 per cent of the mothers and 11 per cent of the fathers were currently in therapy. These two groups didn't completely overlap - for instance, there were 25 mothers who met diagnostic criteria for a psychiatric disorder but who weren't in therapy.

Thurston's team said their findings have implications for research validity. Differences previously identified between clinical and so-called "non-clinical" groups may be caused by a factor other than the clinical status of the two groups.

Researchers should screen their community participants to find out if they are currently experiencing mental distress or participating in therapy, Thurston's team advised. But as regards whether such participants should then be excluded from research, Thurston and her colleagues said: "There is no perfect answer, but rather, researchers must weigh the costs and benefits of their exclusionary criteria in relation to the goals of the study."

Link to related Digest item.
Link to Psychologist magazine article on student participants.
_________________________________

Thurston, I.B., Curley, J., Fields, S., Kamboukos, D., Rojas, A., & Phares, V. (2008). How nonclinical are community samples? Journal of Community Psychology, 36(4), 411-420. DOI: 10.1002/jcop.20223

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.