Thursday, 28 October 2010

The Social Comparison Bias - or why we recommend new candidates who don't compete with our own strengths

Whether it's a gift for small talk or a knack for arithmetic, many of us have something we feel we're particularly good at. From an early age, this strength becomes important to our self-esteem, which in turn shapes our behaviour in various ways. For example, children tend to choose friends who excel on different dimensions than themselves, presumably to protect their self-esteem from threat. A new study reveals another consequence - 'the social comparison bias' - that's relevant to business contexts. Stated simply, when making hiring decisions, people tend to favour candidates who don't compete with their own particular strengths.

Stephen Garcia and colleagues first demonstrated this idea in a hypothetical context. Twenty-nine undergrads were asked to imagine that they were a law professor with responsibility for recommending one of two new professorial candidates to join the law faculty. Half had to imagine they were a professor with a particularly high number of mixed-quality journal publications. These participants tended to say they would recommend the imaginary candidate with fewer but higher quality publications. By contrast, the other half of the participants were tasked with imagining that they were a professor with few but particularly high quality publications. You guessed it - they tended to recommend the candidate with the lower quality but more prolific publication record. In each case the participants favoured the candidate who didn't challenge their own particular area of (imaginary) strength, be that publication quality or quantity. The participants had been told that the department had a balanced mix of existing staff so it's unlikely their motive was a selfless one based on achieving a balanced team.

To make things more realistic, a second study involved a real decision. Forty undergrads completed verbal and maths tasks on which they were given false feedback. Next, they were presented with the scores achieved by two other students, one of whom they had to select to join their team for an upcoming group 'coordination task' that would involve throwing a tennis ball around. Participants tricked into thinking they'd excelled at the maths tended to choose the potential team member who was weak at maths but stronger verbally, and vice versa for those fed false feedback indicating they'd excelled verbally. Again, the researchers argued that it was unlikely the participants were simply striving for a balanced team because the maths and verbal skills in question weren't relevant to the tennis ball task.

A final study involved 55 employees at a Midwestern university - they were asked to imagine that they were in a company role with either high pay or great decision-making power. Next they had to recommend to their company that it either offer high pay or high decision-making power to a new recruit. The participants tended to advise offering the new recruit the opposite of whatever they had. The participants also said the particular perk of their imaginary post - pay or decision-making - would be the most important to their self-esteem.

'The present analysis introduces the social comparison bias: a social comparison-based bias that taints the recommendation process,' the researchers said. 'At a broader level, the social comparison bias might help partially to explain why some top-notch departments or organisational units lose prestige over time ... Individuals unwittingly fail to reproduce departmental strengths by protecting their personal standing instead of the standing of the broader department.'

Garcia, S., Song, H., and Tesser, A. (2010). Tainted recommendations: The social comparison bias. Organizational Behavior and Human Decision Processes, 113 (2), 97-101. DOI: 10.1016/j.obhdp.2010.06.002

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Wednesday, 27 October 2010

Five minutes with the discoverer of the Scientific Impotence Excuse, Geoffrey Munro

When attempting to change people’s behaviour – for example, encouraging them to eat more healthily or recycle more – a common tactic is to present scientific findings that justify the behaviour change. A problem with this approach, according to recent research by Geoffrey Munro at Towson University in America, is that when people are faced with scientific research that clashes with their personal view, they invoke a range of strategies to discount the findings.

Perhaps the most common of these is to challenge the methodological soundness of the research. However, with newspaper reports and other brief summaries of science findings, that’s often not possible because of lack of detail. In this case, Munro's research suggests that people will often judge that the topic at hand is not amenable to scientific enquiry. What’s more, he’s found that, having come to this conclusion about the specific topic at hand, the sceptic will then generalise their belief about scientific impotence to other topics as well (further detail). Munro says that by embracing the general idea that some topics are beyond the reach of science, such people are able to maintain belief in their own intellectual credibility, rather than feeling that they’ve selectively dismissed unpalatable findings.

The Digest caught up with Professor Munro to ask him, first of all, whether he thinks there are any ways to combat the scientific impotence excuse or reduce the likelihood of it being deployed.
"One of the most difficult things to do is to admit that you are wrong. In cases where a person is exposed to scientific conclusions that contradict her or his existing beliefs, one option would be to accept the scientific conclusions and change one’s beliefs. It sounds simple enough, and, for many topics, it is that simple. However, some of our beliefs are much more resistant to change. These are the ones that are important to us. They may be linked to other important aspects of our identity or self-concept (e.g., “I’m an environmentalist ”) or relevant to values that are central to who we are (e.g., “I believe in the sanctity of human life”) or meaningful to the social groups to which we align ourselves (e.g., “I’m a union man like my father and grandfather before him”) or associated with deeply-held emotions (e.g., “Homosexuality disgusts me”). When scientific conclusions challenge these kinds of beliefs, it’s much harder to admit that we were wrong because it might require a rethinking of our sense of who we are, what values are important to us, who we align ourselves with, and what our gut feelings tell us. Thus, a cognitively easier solution might be to not admit our beliefs have been defeated but to question the validity of the scientific conclusions. We might question the methodological quality of the scientific evidence, the researcher’s impartiality, or even the ability of scientific methods to provide us with useful information about this topic (and other topics as well). This final resistance technique is what I called “scientific impotence”.

So, how can strongly-held beliefs be changed? How can scientific evidence break through the defensive tenacity of these beliefs? Well, I hope the paragraph above illustrates how scientific evidence can be threatening when it challenges an important belief. It makes you feel anxious, upset, and/or embarrassed. It makes you question your own intelligence, moral standing, and group alliances. Therefore, the most effective way to break the resistance to belief-challenging scientific conclusions is to present such conclusions in non-threatening ways. For example, Cohen and his colleagues have shown that affirming a person’s values prior to presenting belief-challenging scientific conclusions breaks down the usual resistance. In other words, the science is not so threatening when you’ve had a chance to bolster your value system. Relatedly, framing scientific conclusions in a way that is consistent with the values of the audience is more effective than challenging those values. Research from my own laboratory shows that reducing the negative emotional reactions people feel in response to belief-challenging scientific evidence can make people more accepting of the evidence. We achieved this by giving participants another source (something other than the scientific conclusions they read) to which they could attribute their negative emotional reactions. While this might be difficult to implement outside of the laboratory, we believe that other factors can affect the degree to which negative emotional reactions occur. For example, a source who speaks with humility is less upsetting than a sarcastic and arrogant pundit. Similarly, the use of discovery-type scientific words and phrases (e.g., “we learned that…” or “the studies revealed that…”) might be less emotionally provocative than debate-type scientific words and phrases (e.g., “we argue that…” or “we disagree with so-and-so and contend that…”).
In fact, anything that draws the ingroup-outgroup line in the sand is likely to lead to defensive resistance if it appears that the science or its source is the outgroup. So, avoiding culture war symbols is crucial. Finally, as a college professor, I believe that frequent exposure to critical thinking skills, practice with critical thinking situations, and quality feedback about critical thinking allows people to understand how their own biases can affect their analysis of information and result in open-minded thinkers who are skeptical yet not defensive."
Next, the Digest asked Prof Munro whether he thinks psychology findings are particularly prone to provoke scientific discounting cognitions - and if so, should we as a discipline make extra effort to combat this?
"Yes, I believe psychological research (and probably social science research in general) is prone to provoke scientific discounting. The term “soft science” illustrates how social sciences are perceived differently than the “hard sciences”. There are a number of reasons why this might be true. First, much psychological research is conducted without the use of technologically-sophisticated laboratories containing the fancy equipment that comes to many people’s minds when the word science is used. In other words, psychological research doesn’t always resemble the science prototype. Supporting this position, psychological research that is conducted in high-tech labs (e.g., neuroscience imaging studies) is, in my opinion, perceived with less skepticism by the general public. Second, psychological research often investigates topics about which people already have subjective opinions or, at least, can easily call to mind experiences from their own lives that serve as a comparison to the research conclusions. In other words, people often believe that they already have knowledge and expertise about human thought and behavior. When their opinions run counter to psychological research conclusions, then scientific discounting is likely. For example, there is a common belief that cathartic behaviors (e.g., punching a punching bag) can reduce the frustrations that sometimes lead to aggression. Psychological research, however, has contradicted the catharsis hypothesis, yet the belief remains entrenched, possibly because it has such a strong intuitive appeal. In contrast, people will quickly reveal their lack of expertise on topics in physics or chemistry and have a harder time calling to mind examples from their own lives. 
Third, there is likely some belief that people’s thoughts and behaviors are less predictable, more mysterious, and affected by more variables than are inanimate objects like chemical molecules, planets in motion, or even the functioning of some parts of the human body (e.g., the kidneys). Furthermore, psychological conclusions are based on probability (e.g., the presence of a particular variable makes a behavior more likely to happen), and probability introduces the kind of ambiguity that makes the conclusions easy to discount. Fourth, some psychological research is perceived to be derived from and possibly biased by a sociopolitical ideology. That is, there is the belief that some psychologists conduct their research with the goal of providing support for some political viewpoint. This is somewhat less common among the “hard sciences” although the controversy over climate change and the researchers who investigate it suggest that if the topic is one that elicits the ingroup-outgroup nature of the cultural divide, then the “hard sciences” are also not immune to the problem of scientific discounting.

I think that the discipline of psychology has already made vast improvements in managing its public impression and is probably held in higher esteem than it was 50 or even 20 years ago. However, continued vigilance is essential against those (both within and outside of the discipline) who contribute to the perception of psychology as something less than science. The field of psychology has much to offer – it can generate important knowledge that can inform public policy and increase people’s health and happiness, but it cannot do so if its scientific conclusions fall on deaf ears."

Munro, G. (2010). The Scientific Impotence Excuse: Discounting Belief-Threatening Scientific Abstracts. Journal of Applied Social Psychology, 40 (3), 579-600. DOI: 10.1111/j.1559-1816.2010.00588.x

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

The Special Issue Spotter

We trawl the world's journals so you don't have to:

Developmental cascades - Part I and Part II (Development and Psychopathology). From the editorial: 'Given effects that spread over time for some kinds of psychopathology, well-timed and targeted interventions could interrupt negative or promote positive cascades; these efforts may work by counteracting negative cascades, by targeting the reduction of problems in domains that often cascade to cause other problems, or by targeting improvements in competence in domains that increase the probability of better function in other domains.'

Performance Psychology: Theory and Application in Industry, Sports, Human Services, and Behavioral Healthcare (Behavior Modification).

Genetics, Personalized Medicine, and Behavioral Intervention—Can This Combination Improve Patient Care? (Perspectives on Psychological Science).

The Contributions of Robert Zajonc (Emotion Review). Among many other things, social psychologist Zajonc, who died in 2008, was known for a series of influential experiments on the mere exposure effect.

Identifying effective classwide interventions to promote positive outcomes for all students (Psychology in the Schools).

Sleep disturbances (Journal of Clinical Psychology).

Neurogenetics (Neuron) - free to access until Nov 17.

Language and birdsong (Brain and Language).

Tuesday, 26 October 2010

Fetuses demonstrate their sociability after just 14 weeks' gestation

The idea that humans are social animals has become a truism. Among other things, experts point to the gregarious behaviour of babies - their precocious talents for mimicry and face recognition. What about human behaviour pre-birth? Is that social too? Using what they call the 'experiment of nature' provided by twin fetuses, Umberto Castiello and his team have shown that by the 14th week of gestation, unborn twins are already directing arm movements at each other, and by the 18th week these 'social' gestures have increased to 29 per cent of all observed movements. In contrast, the proportion of self-directed actions reduced over the same period.

Furthermore, the 'kinematics' of the twins' 'caressing' arm movements to each other's head and back were distinct from movements aimed at the uterine wall or at most parts of their own bodies. That is, the social movements were longer-lasting and slower to decelerate than most other fetal movements, making them similar to the kind of movements fetuses learn to make towards their own eyes. This suggests that the fetuses recognise on some level that there is something special about their twin.

The researchers made their observations using four-dimensional (3-D plus changes over time) ultrasound scans of five women pregnant with twins. These were performed twice for twenty minutes - at the 14th and 18th weeks of gestation.

'The prenatal "social" interactions described in this paper epitomise the congenital propensity for sociality of primates in general and of humans in particular,' the researchers said, 'grounding for the first time such long-held intuition on quantitative empirical results.' Castiello and his colleagues added that further research of this kind could one day reveal the links between social behaviour patterns in the uterus and the later appearance of developmental disorders associated with social impairments.

Castiello, U., Becchio, C., Zoia, S., Nelini, C., Sartori, L., Blason, L., D'Ottavio, G., Bulgheroni, M., and Gallese, V. (2010). Wired to Be Social: The Ontogeny of Human Interaction. PLoS ONE, 5 (10). DOI: 10.1371/journal.pone.0013199

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Monday, 25 October 2010

'Don't do it!' - how your inner voice really does aid self-control

As you stretch for yet another delicious cupcake, the abstemious little voice in your head pleads 'Don't do it!'. Does this self-talk really have any effect on your impulse control or is it merely providing a private commentary on your mental life? A new study using a laboratory test of self-control suggests that the inner voice really does help.

Alexa Tullett and Michael Inzlicht had 37 undergrads perform the Go/No Go task. Briefly, this involved one on-screen symbol indicating that a button should be pressed as quickly as possible (the Go command) whilst another indicated that the button press should not be performed (No Go). Because the Go symbol was far more common, participants tended to find it difficult to suppress making a button press on the rare occasions when a No Go command was given. People with more self-control would be expected to make fewer errors of this kind.
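The logic of the task is easy to see in miniature. Here's a minimal sketch, in Python, of a Go/No Go session and its error scoring; the Go/No Go split, trial count and inhibition-failure rate are illustrative assumptions, not figures from the paper:

```python
import random

def run_go_nogo(n_trials=100, p_go=0.8, p_inhibit_fail=0.3, seed=1):
    """Simulate one Go/No Go session and count commission errors
    (pressing on a No Go trial). All parameter values are illustrative."""
    rng = random.Random(seed)
    commission_errors = 0
    nogo_trials = 0
    for _ in range(n_trials):
        if rng.random() < p_go:
            continue  # Go trial: pressing is the correct response
        nogo_trials += 1
        # Because Go trials dominate, pressing becomes prepotent and
        # sometimes slips through on the rare No Go trials.
        if rng.random() < p_inhibit_fail:
            commission_errors += 1
    return commission_errors, nogo_trials

errors, nogo = run_go_nogo()
print(f"{errors} commission errors on {nogo} No Go trials")
```

Fewer commission errors correspond to better inhibitory control, which is why the task serves as a laboratory index of self-control.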

Crucially, Tullett and Inzlicht also had the participants perform a secondary task at the same time - either repeating the word 'computer' with their inner voice, or drawing circles with their free hand. The central finding was that participants made significantly more errors on the Go/No Go task (i.e. pressing the button at the wrong times) when they also had to repeat the word 'computer' to themselves, compared with when they had the additional task of drawing circles. This difference was exacerbated during a more difficult version of the Go/No Go task in which the command symbols were periodically switched (so that the Go command became the No Go command and vice versa). It seems that the participants' self-control was particularly compromised when their inner voice was kept busy saying 'computer' so that it couldn't be used to aid self-control.

'By examining performance on a classic self-control task, this study provides evidence that when we tell ourselves to "keep going" on the treadmill, or when we count to ten during an argument, we may be helping ourselves to successfully overcome our impulses in favour of goals like keeping fit, and preserving a relationship,' the researchers said.

Tullett, A.M., and Inzlicht, M. (2010). The voice of self-control: Blocking the inner voice increases impulsive responding. Acta Psychologica, 135 (2), 252-256. PMID: 20692639

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Friday, 22 October 2010

Freelance blogging opportunity

Can you turn scientific research findings into engaging copy suitable for a non-specialist audience? Do you have an interest in occupational psychology?

The British Psychological Society, the learned body that’s represented UK psychology and psychologists since 1901, is seeking a talented writer to edit and compile an Occupational Psychology Research Digest blog and monthly email newsletter. This new project will build on the success of our internationally renowned, award-winning Research Digest blog and email.

The appointment will be made on a freelance basis, initially for twelve months. Candidates should have a background in occupational psychology and at least two years’ experience as a writer, preferably online. For a formal job description and further information, contact christianjarrett [@]

Application is by CV and three samples of writing, in print or online, to be sent to christianjarrett [@] Deadline 12 Nov.

Asch's "conformity study" without the confederates

With the help of five to eight 'confederates' (research assistants posing as naive participants), Solomon Asch in the 1950s found that when it came to making public judgments about the relative lengths of lines, some people were willing to agree with a majority view that was clearly wrong.

Asch's finding was hugely influential, but a key criticism has been his use of confederates who pretended to believe unanimously that a line was a different length than it really was. They might well have behaved in a stilted, unnatural manner. And attempts to replicate the study could be confounded by the fact that some confederates will be more convincing than others. To solve these problems, Kazuo Mori and Miho Arai adapted the MORI technique (Manipulation of Overlapping Rivalrous Images by polarizing filters; pdf), used previously in eye-witness research. By donning filter glasses similar to those used for watching 3-D movies, participants can view the same display and yet see different things.

Mori and Arai replicated Asch's line comparison task with 104 participants tested in groups of four at a time (on successive trials participants said aloud which of three comparison lines matched a single target line). In each group, three participants wore identical glasses and one wore a different set, causing that participant to see a different comparison line as matching the target. As in Asch's studies, the participants stated their answers publicly, with the minority participant always going third.

Whereas Asch used male participants only, the new study involved both men and women. For women only, the new findings closely matched the seminal research, with the minority participant being swayed by the majority on an average of 4.41 times out of 12 key trials (compared with 3.44 times in the original). However, the male participants in the new study were not swayed by the majority view.

There are many possible reasons why men in the new study were not swayed by the majority as they were in Asch's studies, including cultural differences (the current study was conducted in Japan) and generational changes. Mori and Arai highlighted another reason - the fact that the minority and majority participants in their study knew each other, whereas participants in Asch's study did not know the confederates. The researchers argue that this is a strength of their new approach: 'Conforming behaviour among acquaintances is more important as a psychological research topic than conforming among strangers,' they said. 'Conformity generally takes place among acquainted persons, such as family members, friends or colleagues, and in daily life we seldom experience a situation like the Asch experiment in which we make decisions among total strangers.'

Looking ahead, Mori and Arai believe their approach will provide a powerful means of re-examining Asch's classic work, including in situations - for example, with young children - in which the use of confederates would not be practical.

Mori, K., and Arai, M. (2010). No need to fake it: Reproduction of the Asch experiment without confederates. International Journal of Psychology, 45 (5), 390-397. DOI: 10.1080/00207591003774485

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Thursday, 21 October 2010


Eye-catching studies that didn't make the final cut:

How and why people get lost in buildings.

The sound of music makes time fly.

Do monkeys think in metaphors?

East Asians and Westerners responded differently to the news of the Swine Flu outbreak of 2009. A new study suggests this is because the greater historical threat of pathogens in the Asian region has led to a 'culturally adapted' behavioural response in that area [pdf].

Does speaking a different language change your personality? [see earlier]

Children with autism were more easily tricked by the vanishing ball illusion, not less as the researchers predicted.

The psychological effects of going to a music festival.

Cordelia Fine, author of the widely publicised new book Delusions of Gender, also has a journal article out: From Scanner to Sound Bite: Issues in Interpreting and Reporting Sex Differences in the Brain.

Past research has shown that observers can accurately judge a person's sexuality from their face within a fraction of a second. A new study suggests the judgment is made based on the gendered appearance of the face - for example a more feminine male face is more likely to be perceived as belonging to a homosexual man.

Wednesday, 20 October 2010

Speakers with a foreign accent are perceived as less credible - and not just because of prejudice

Speakers with a foreign accent are perceived as less believable than native speakers. A new study shows this isn't just because of prejudice towards 'outsiders'. It also has to do with the fluency effect, one manifestation of which is our tendency to assume that how easily a message is processed is a mark of its truthfulness. The effort required to understand an accented utterance means that the same fact is judged as less credible when uttered by an accented speaker, compared with a native speaker. This remains true even if the accented speaker is merely passing on a message from a native speaker.

Shiri Lev-Ari and Boaz Keysar recruited 9 speakers to utter 45 trivia facts, such as 'A giraffe can go without water longer than a camel'. Three of the speakers were native (American) English speakers; three had mild foreign accents and originated from Poland, Turkey or Austria/Germany; and three had strong accents and were from either Korea, Turkey or Italy.

Twenty-eight undergrad participants rated the veracity of each of the spoken facts (which speakers uttered which facts varied from participant to participant in a balanced design). Crucially, participants were led to believe that the study was really about using intuition to judge facts. Also, it was made clear to them that the facts had been penned by the researchers - that the speakers were merely acting as messengers. To drive this idea home, the researchers also had the participants go through the charade of themselves uttering a few facts, ostensibly to be presented to other participants.
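A balanced design of this kind can be achieved with a simple rotation, so that across participants every fact is spoken equally often by each speaker. The 45 facts and 9 speakers match the study, but the rotation scheme below is only an illustrative sketch, not the authors' actual procedure:

```python
N_FACTS, N_SPEAKERS = 45, 9  # as in the study: 45 trivia facts, 9 speakers

def assignment(participant_id):
    """Map each fact index to a speaker index for one participant,
    rotating the pairing with each successive participant."""
    return {fact: (fact + participant_id) % N_SPEAKERS
            for fact in range(N_FACTS)}

# Across any 9 consecutive participants, each fact cycles through
# all 9 speakers exactly once.
print(sorted({assignment(p)[0] for p in range(N_SPEAKERS)}))
# → [0, 1, 2, 3, 4, 5, 6, 7, 8]
```

Rotations like this ensure that any difference in believability ratings reflects the speakers' accents rather than particular fact-speaker pairings.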

On a 0-14cm scale from 'definitely false' at one end to 'definitely true' at the other, the participants rated facts spoken by mild and heavily accented speakers as less believable than facts uttered by native English speakers (the mean ratings were 6.95, 6.84 and 7.59, respectively - a statistically significant difference).

What if participants are made aware that the difficulty they have processing a foreign accent could be interfering with their judgements? A second study with another 27 undergrads tested this very idea. It was similar to the first but this time participants were told the explicit aim of the study. Now, facts spoken by a speaker with a mild accent were judged to be just as credible as facts spoken by a native English speaker. However, facts spoken by a heavily accented speaker were still judged to be less true. It seems we can override our bias for assuming easily processed utterances are more truthful - but only up to a point. Also, it's worth remembering that in real life, prejudice towards foreign speakers is likely to augment the effects observed here.

'These results have important implications for how people perceive non-native speakers of a language, particularly as mobility increases in the modern world, leading millions of people to be non-native speakers of the language they use daily,' the researchers concluded. 'Accent might reduce the credibility of non-native job seekers, eye-witnesses, reporters or news anchors.'

Lev-Ari, S., and Keysar, B. (2010). Why don't we believe non-native speakers? The influence of accent on credibility. Journal of Experimental Social Psychology, 46 (6), 1093-1096. DOI: 10.1016/j.jesp.2010.05.025

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Link to related open-access feature article in The Psychologist magazine [Hyunjin Song and Norbert Schwarz describe some fascinating findings on how fluency affects judgement, choice and processing style].

Monday, 18 October 2010

Mothers who attend baby signing classes are more stressed

A survey of 178 mothers has found that those who take their children to Baby Signing classes are more stressed than those who don't. Baby Signing involves using gestures in an attempt to communicate with pre-verbal or minimally lingual infants. The idea is hugely popular. Tiny Talk, a UK company, runs over 400 classes each week.

One claim made for Baby Signing classes is that they benefit children's language development. The evidence for this is equivocal. Another claim is that by improving child-parent communication, the classes help relieve parental stress. It's this latter claim that Neil Howlett and his colleagues have examined in their study of mothers recruited via signing classes, internet sites, toddler groups and community organisations in the south east of England. Eighty-nine mothers who attended Baby Signing classes with their infants were compared with 89 mothers who did not.

Howlett's team used the 120-item self-report Parenting Stress Index (PSI) to measure the mothers' stress levels. Although mothers who attended signing classes reported being more stressed than those who didn't, the researchers didn't obtain baseline stress measures (prior to class attendance) so they have no way of knowing if the classes caused the increased stress or if stressed mothers are simply more likely to attend the classes. No evidence was found that more months spent signing with one's child was associated with even greater stress, so the idea that signing causes the stress looks unlikely.

Howlett's team think the signing mothers were probably more stressed in the first place and that's why they took their children to signing classes (a plausible suggestion given that the classes claim to help reduce stress). Consistent with this, the signing mothers recorded particularly high scores on the 'child domain' of the PSI, which indicates they were stressed about their child's behaviour. Moreover, the finding chimes with past research showing that mothers who enrol their preschool children in academically focused activities also have heightened anxiety.

'Gesture classes claim to reduce stress and create a better bond between child and mother,' the researchers concluded. 'Our results find no evidence for this and even suggest that the effect may be detrimental.'

Howlett, N., Kirk, E., and Pine, K. (2010). Does ‘Wanting the Best’ create more stress? The link between baby sign classes and maternal anxiety. Infant and Child Development. DOI: 10.1002/icd.705

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Link to Psychologist magazine article: 'The great baby signing debate'.

Wednesday, 13 October 2010

Don't touch! On the mixed effects of avoidant instructions

What happens if you tell a golfer not to overshoot a putt? Does it make them more likely to overshoot (an ironic effect, like the way suppressing thoughts of white bears actually leads to bear-based thoughts) or does it provoke over-compensation - putts that fall particularly short? The same question could be asked of similar situations in other sports, and of movement instructions in the psychology lab.

Past research has produced mixed results - sometimes ironic effects are observed, other times over-compensation. To find out what's going on, Christopher Russell and colleagues conducted a lab experiment in which 40 undergrads repeatedly traced an imaginary straight line between two on-screen dots with a computer mouse. On key trials, they were given an additional instruction not to move to the left (or right, etc) of the line.

The effect of this additional instruction was far from uniform across the participants. The largest subgroup, 26 participants, over-compensated: if the additional command was 'do not move to the left', they traced a movement that veered further to the right of the straight line than when no such instruction was given. However, another group of participants demonstrated ironic effects - the instruction to avoid going left made them more likely to do just that. Although the ten participants in this latter group scored higher on trait and state anxiety, their errors were actually smaller than those of the over-compensators.

What about the effect of an extra distraction task? When participants had to trace straight lines while simultaneously holding a 7-digit number in mind, the effect of an additional 'do not go left/right' instruction was reversed for about half the participants - if they'd shown ironic effects in the absence of the memory task, they now over-compensated, and vice versa. Curiously, for a minority of participants, the extra burden of the memory task actually had a beneficial effect - cancelling out their earlier ironic response to the additional instruction, so that their mouse movements ended up straighter.

So, where does all this leave us? Although the theoretical implications are beyond the scope of this item, there are some take-home practical lessons. 'The avoidant instructions used here deliberately resembled those used by coaches to direct their athletes and by athletes in their self-talk,' Russell and his colleagues said, 'to show how ironic outcomes and over-compensations can be unwittingly inflicted in sporting contexts.'

Researchers should also take note that the avoidant instructions they give to their participants could have important, unpredictable effects that vary from one participant to another, and from one condition to another, depending on concurrent task demands. 'What experimenters often assume to be random error in their data may actually be explained by ironic and overcompensatory responding styles,' Russell's team said. 'In any event, experimenters may minimise their biasing influence by emphasising to participants what is to be achieved while neglecting to specify what should be avoided.'

Russell, C., and Grealy, M. (2010). Avoidant instructions induce ironic and overcompensatory movement errors differently between and within individuals. The Quarterly Journal of Experimental Psychology, 63 (9), 1671-1682. DOI: 10.1080/17470210903572022

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Tuesday, 12 October 2010

The Special Issue Spotter

We trawl the world's journals so you don't have to:

Deep brain stimulation (Neurosurgical Focus).

Decision neuroscience (Journal of Economic Psychology).

Identity, place, and environmental behaviour (Journal of Environmental Psychology).

The natural and the normal in the history of sexuality (Psychology and Sexuality).

Integrating traditional healing practices into counselling and psychotherapy (Counselling Psychology Quarterly).

Monday, 11 October 2010

Cultivating little scientists from the age of two

Young children are little scientists. They instinctively stretch, prod, observe and categorise the world's offerings. This natural inquisitiveness can be cultivated even before school, and several studies have shown the benefits, in terms of general learning ability and specific maths and science skills. But just how early can this 'sciencing', as it's known, start? A new study by Tessa van Schijndel and colleagues claims that a six-week sciencing programme for two- to three-year-olds boosted their exploratory 'science-like' play.

Thirty-five two- to three-year-olds at an Amsterdam day-care centre were assigned to the six-week sciencing programme. This involved a specialist science teacher encouraging the children to play two kinds of games in their sandpit: 'sorting and sets', which had a cake-baking theme, and 'slope and speed', which had an 'on top of the mountain' theme. The children were free to join in or leave the sandpit games as often as they wanted, but were encouraged to take part at least once a week. The games involved toys of different colours and materials, as well as plastic tubes and balls. The key elements of the guided play were manipulating the objects, repeatedly sorting them into various combinations, and observing the effects of these manipulations. The regular teachers complemented this play by reading from books that matched the cake and mountain themes.

Twelve age-matched kids at another day-care centre run by the same organisation acted as controls. They were provided with exactly the same sandpit toys but weren't guided in how to interact with them.

The researchers devised a scale for rating the sophistication of spontaneous exploratory play and, using videos of the children's unguided sandpit play during the five weeks preceding and following the sciencing programme, they were able to see if the programme had made any difference. Coding of the videos showed that the sciencing programme children's spontaneous exploratory play had become more sophisticated (including more manipulation, re-combining, observation, and more symbolic play) - especially among those whose initial exploratory play levels were lower. By contrast, the control children's play had actually become slightly less exploratory, probably as a result of their having grown bored with the same sandpit toys.

van Schijndel's team acknowledged that more research is needed to identify the effective aspects of their sciencing intervention. Indeed, they admitted that the programme may have worked by altering the practices of the day-care centre's regular teaching staff, an outcome they said should also be considered a success.

'...[W]e plead for more attention in the initial and in-service training of teachers for science-related subjects,' the researchers concluded. 'Our study shows that the curiosity of young children in natural phenomena and in how things work, needs to be supported by playful and scaffolding teachers. Probably, this is especially true for children with a low level of exploratory play.'

Their plea comes at a time when primary school teachers in the UK with a science degree are a rare breed. Speaking to the Independent recently, Sir Martin Rees, outgoing head of the Royal Society, said there is just one such teacher for every three primary schools. 'It is depressing that a tiny, tiny fraction of primary school teachers have any higher education qualification with a scientific component,' he lamented.

van Schijndel, T., Singer, E., van der Maas, H., and Raijmakers, M. (2010). A sciencing programme and young children's exploratory play in the sandpit. European Journal of Developmental Psychology, 7 (5), 603-617. DOI: 10.1080/17405620903412344

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

--Further reading--
How children learn scientific thinking from their parents

Friday, 8 October 2010

The evolutionary roots of laughter

To evolutionary psychologists, the noise made by gorillas, chimps and bonobos when you tickle their feet is no laughing matter. These distinctive vocalisations suggest that rather than evolving separately, laughter evolved in a shared common ancestor before becoming tailored in each primate species, including humans.

To find support for this idea, Diana Szameitat and her colleagues scanned the brains of 18 men and women whilst they listened to the sound of human tickle-induced laughter as well as laughter prompted by joy and taunting. The researchers found a 'double-dissociation' - the tickle laughter provoked extra activity in the secondary auditory cortex, likely reflecting the acoustical complexity of this kind of laughter, whereas the joy and taunting laughter prompted more activity in the medial frontal cortex, a region associated with social and emotional processing. These differences were observed whether the participants were tasked with categorising the laughter they heard, or merely with counting the number of laughs. The finding suggests that humans produce and process an evolutionarily 'old' form of tickle-based laughter, which is shared with non-human primates, as well as a newer, more emotionally sophisticated variant.

The laughter stimuli were provided by a team of eight professional actors using 'auto induction' techniques. This means they used their imagination, memories, and body movements to provoke the required emotions and bodily sensations in themselves as far as they could. The researchers said they only selected laughter samples that had been accurately categorised (as joy, taunting, or tickle laughter) in pilot work at well above chance levels by naive listeners. The dependence on acted laughter does seem to be a weakness of the study, however, especially as it's a well-documented fact that people are unable to tickle themselves.

'Our study provides suggestive evidence that laughter, in the form of a reflex-like reaction to touch, has been adopted into human social behaviour from animal behaviour,' the researchers said. 'Through the differentiation of human social interaction over time this "simple" form of laughter may have diversified to become a spectrum of different laughter variants in order to accommodate increased complexity of human social interaction.'

Szameitat, D., Kreifelts, B., Alter, K., Szameitat, A., Sterr, A., Grodd, W., and Wildgruber, D. (2010). It is not always tickling: Distinct cerebral responses during perception of different laughter types. NeuroImage, 53 (4), 1264-1271. DOI: 10.1016/j.neuroimage.2010.06.028

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Wednesday, 6 October 2010

How to form a habit

This has nothing to do with nuns' clothing. Habits are those behaviours that have become automatic, triggered by a cue in the environment rather than by conscious will. Health psychologists are interested for obvious reasons - they want to assist people in breaking unhealthy habits, while helping them adopt healthy ones. Remarkably, although there are plenty of habit-formation theories, no-one before now had actually studied habits systematically as they form.

Phillippa Lally and her team recruited 96 undergrads (mean age 27) and asked them to adopt a new health-related behaviour, to be repeated once a day for the next 84 days. The new behaviour had to be linked to a daily cue. Examples chosen by the participants included going for a 15 minute run before dinner; eating a piece of fruit with lunch; and doing 50 sit-ups after morning coffee. The participants also logged onto a website each day, to report whether they'd performed the behaviour on the previous day, and to fill out a self-report measure of the behaviour's automaticity. Example items included 'I do it automatically', 'I do it without thinking' and 'I'd find it hard not to do'.

Of the 82 participants who saw the study through to the end, the most common pattern of habit formation was for early repetitions of the chosen behaviour to produce the largest increases in its automaticity. Over time, further increases in automaticity dwindled until a plateau was reached beyond which extra repetitions made no difference to the automaticity achieved.

The average time to reach maximum automaticity was 66 days, although this varied greatly between participants, from 18 days to a predicted 254 days (a projection based on the assumption that the still-rising rate of change in automaticity at the end of the study would continue beyond its 84 days). This is much longer than most previous estimates of the time taken to acquire a new habit - for example, a 1988 book claimed a behaviour is habitual once it's been performed at least twice a month, at least ten times. In fact, even after 84 days, about half of the current study's participants had failed to achieve a high enough automaticity score for their new behaviour to be considered a habit.

Unsurprisingly perhaps, more complex behaviours were found to take longer to become habits. Participants who'd chosen an exercise behaviour took about one and a half times as long to reach their automaticity plateau compared with the participants who adopted new eating or drinking behaviours.

What about the effect of having a day off from the behaviour? Writing in 1890, William James said that a behaviour must be repeated without omission for it to become a habit. The new results found that a single missed day had little impact on later automaticity gains, either early in the study or later on, suggesting James may have overestimated the effect of a missed repetition. However, there was some evidence that too many missed repeats of the behaviour, even if spread out over time, had a cumulative effect, reducing the maximum automaticity level that was ultimately reached.

It seems the message of this research for those seeking to establish a new habit is to repeat the behaviour every day if you can, but don't worry excessively if you miss a day or two. Also be prepared for the long haul - remember the average time to reach peak automaticity was 66 days.

This research has a serious shortcoming, acknowledged by the researchers, which is that it depended entirely on participants' ability to report the automaticity of their own behaviour. Also, the amount of missing data made it hard to form clear conclusions about the need for consistency in building a habit. However, the study provides an exciting new approach for exploring habit formation, and future research could easily remedy these shortcomings.

Lally, P., van Jaarsveld, C., Potts, H., and Wardle, J. (2010). How are habits formed: Modelling habit formation in the real world. European Journal of Social Psychology. DOI: 10.1002/ejsp.674

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Monday, 4 October 2010

The allure of the lady (and man) in red

When female chimps are nearing ovulation they display red on their bodies. Male chimps respond by masturbating and attempting to mount them. A new study claims we humans have moved on from this, but not a lot. Daniela Kayser's team found that when a lady wears red it prompts men to ask her more intimate questions and to sit closer to her. Surprisingly, this is the first time that the effect of colour on human sexual attraction behaviour has been studied. Past research has relied on asking participants to report their attraction rather than measuring their actual behaviour.

Twenty-three heterosexual or bisexual male undergrads were shown a photo of a blonde-haired, blue-eyed woman rated in pilot work by men as moderately attractive. Half the participants saw a version in which she wore a red shirt; the other half saw an identical version except that the shirt was green. Next the participants were asked to select five questions from a choice of 24 to ask the woman (these ranged from 'Where are you from?' to 'How could a guy get your attention at a bar?'). The key finding was that men who'd viewed the woman wearing red opted to ask more intimate questions.

In a second study, another 22 male undergrads were shown a photo of a moderately attractive brown-haired, brown-eyed woman wearing either a red shirt or a blue shirt. The men were tricked into thinking they were about to have a conversation with the woman in an adjacent room. They were shown to the room, which contained two chairs - one at a table and one at the side. The men were told the woman would sit in the chair by the table and were instructed to grab the other chair so as to sit across from her. The men who'd seen the photo in which the woman wore red placed their chair nearer to where they thought she was about to sit. This difference wasn't explained by differences in mood.

Kayser and her colleagues said their findings are consistent with evolutionary accounts of human attraction and have obvious practical implications. 'It appears that women would do well to wear a red shirt or dress when preparing for a date with a desirable man, and women may be particularly successful in online dating when they post a picture of themselves in red apparel. More generally, our findings should be of considerable interest to fashion consultants and product designers, as well as marketers and advertisers.'

Were these recommendations to be heeded widely, it raises the comical prospect of city bars and night-clubs being filled entirely with red-clad women and men, like rival sports teams arriving for a match only to discover they're both wearing the same strip. Yes, the men in red too, because another recent study by the same research team found that men wearing red were rated as more attractive and high-status by women.

Niesta Kayser, D., Elliot, A., and Feltman, R. (2010). Red and romantic behavior in men viewing women. European Journal of Social Psychology. DOI: 10.1002/ejsp.757

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Friday, 1 October 2010

Cross-cultural reflections on the mirror self-recognition test

The performance of young children on the 'mirror self-recognition test' varies hugely across cultures, a new study has shown. This is the test that involves surreptitiously putting a mark on a child's forehead and then seeing how they react when presented with their mirror image. Attempts by the child to touch or remove the mark are taken as a sign that he or she recognises themselves in the mirror. Studies in the West suggest that around half of all 18-month-olds pass the test, rising to 70 per cent by 24 months. Chimps, orangutans, dolphins and elephants have also been shown to pass the test, and there's recent debate over whether monkeys can too.

Tanya Broesch and her colleagues began by taking a simplified version of the mirror self-recognition test to Kenya, where they administered it to 82 children aged between 18 and 72 months. This version of the test involved a small, yellow post-it note rather than a red splodge, and children weren't given the usual verbal prompts such as 'who's that in the mirror?'. Amazingly, just two of the children 'passed' the test by touching or removing the post-it note. The other 80 children 'froze' when they saw their reflection - that is, they stared at themselves but didn't react to the post-it note.

Next, Broesch and her team took their test to Fiji, Saint Lucia, Grenada, Peru, Canada and the USA, where they tested 133 children aged between 36 and 55 months. The performance of the North American children was in line with past research, with 88 per cent of the US kids and 77 per cent of the Canadians 'passing' the test. Rates of passing in Saint Lucia (58 per cent), Peru (52 per cent) and Grenada (51 per cent) were significantly lower. In Fiji, none of the children 'passed' the test.

So, what's going on? Are children in these non-Western nations seriously delayed in their mirror self-recognition? The researchers don't think so. First of all, they deliberately tested a wide age range - in Kenya up to age six - and they think it's highly unlikely mirror self-recognition could be delayed that far. 'Our impression,' the researchers said, 'was that they [the children] understood that it was themselves in the mirror, that the mark was unexpected, but that they were unsure of an acceptable response and therefore dared not touch or remove it.'

Inspired in part by past research conducted in Cameroon, in which children who failed the mirror test tended to be the most compliant and obedient, Broesch and her colleagues speculated that the performance in the non-Western, more interdependent cultures may have been affected by the fact that children in these societies are often discouraged from asking questions (they're expected to learn by watching). 'This is in sharp contrast with the independence and self-initiative that tends to be encouraged and nurtured in the Industrial West,' the researchers said. Another factor could be the non-Western children's relative lack of familiarity with mirrors.

More research is needed to test the truth of these assertions. Meanwhile, this study provides a compelling example of why we must be cautious when extrapolating from Western psychology research. 'Negative results (whether in monkeys or humans) must be examined more closely and results remind us that transporting culture-specific tests among diverse human populations has the potential to lead to flawed interpretations of cognitive differences and developmental processes,' the researchers said.

Broesch, T., Callaghan, T., Henrich, J., Murphy, C., and Rochat, P. (2010). Cultural Variations in Children's Mirror Self-Recognition. Journal of Cross-Cultural Psychology. DOI: 10.1177/0022022110381114

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Link to Carl Zimmer post on mirror self-recognition in monkeys.
Link to recent journal special issue on psychology's over-dependence on WEIRD (Western, Educated, Industrialized, Rich and Democratic) participants.