Tuesday, 31 August 2010

How good are we at estimating other people's drunkenness?

Sloshed, trollied, hammered, plastered. We've done a sterling job of inventing words for the inebriated state, but when it comes to judging from their behaviour how much a person has drunk, we could do (a lot) better. That's according to a review of the literature by US psychologist Steve Rubenzer.

We all have our trusted indices for judging other people's drunkenness. Perhaps it's when the eyeballs start floating about as if under the control of a clumsy puppeteer. Or maybe the effusive 'you know I love you' delivered with a trickle of dribble. However, the vast majority of studies find that lay people, police officers and bartenders are in fact hopeless at distinguishing a drunk person from a sober one, at least at moderate levels of intoxication. To take just one example, after watching drunk and sober people being interviewed and negotiating a staircase, bartenders rated them as slightly, moderately or very drunk with an accuracy of just 25 per cent.

It's a similar story when participants are equipped with more structured means of detecting drunkenness. One 1958 study, for example, found no relation between doctors' assessments of people's intoxication (based on pulse rate, general appearance, gait and mental status) and the subsequent performance of those people on a driving course.

Rubenzer also looked at the evidence for specific indicators of intoxication. Alcohol causes reddening of the eyes, the literature shows, but the association between intoxication level and onset or amount of redness is unreliable. Another indicator is smell. The more a person has drunk, the more likely that their breath will be judged by observers to smell of alcohol. However, this indicator is hampered by the lack of a scientific explanation (alcohol has no odour), not to mention the risk of contamination by food smells. Speech slowing and slurring is another sign of intoxication but people are only modestly accurate at using this as a measure. Predictably enough, impaired walking, the last of the specific indicators, tends to increase the more a person has drunk but it only becomes reliable at very high intoxication levels.

The review finishes by looking at established 'sobriety tests': Nystagmus (jerky eye movements when following a moving target); the Romberg (whether a person sways or falls when they stand, eyes closed, with their feet together, arms at their sides); the Finger to Nose; the Finger to Finger; Saying the Alphabet; and the Hand Pat (alternating between clapping with the palms and backs of hands). In summary, performance on these tests does tend to decline as alcohol intake increases but the evidence for this at lower levels of intoxication is mixed and false positives (sober people categorised as drunk) are a frequent occurrence.

'...[J]udging low to moderate levels of intoxication in strangers is a difficult task,' Rubenzer concludes. 'A variety of professions that might be expected to show substantial skill assessing intoxication do not. [And] no behavioural or physical sign has emerged that is consistently related to a specific level of blood alcohol concentration level without large variation among individuals, with the possible exception of nystagmus.'
_________________________________

Rubenzer, S. (2010). Judging intoxication. Behavioral Sciences & the Law DOI: 10.1002/bsl.935

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Friday, 27 August 2010

Feeling clean makes us harsher moral judges

As the dirt and germs are wiped away, we're left feeling not just bodily but also morally cleansed - a kind of metaphorical virtue that leads us to judge others more harshly. That's according to Chen-Bo Zhong's team, who invited 58 undergrads to a lab filled with spotless new equipment. Half the students were asked to clean their hands with an antiseptic wipe so as not to soil the shiny surfaces. Afterwards all the students rated the morality of six societal issues including pornography and littering. Those who'd wiped their hands made far harsher judgments than those who didn't.

It was a similar story in a follow-up study with hundreds of participants recruited via a nation-wide database. Those primed to feel clean by reading a short passage that began 'My hair feels clean and light. My breath is fresh ...' made far harsher moral judgements about 16 social issues compared with those primed to feel dirty by a passage beginning, 'My hair feels oily and heavy. My breath stinks ...'

A third study was identical to the second, except that after reading either the dirty or clean passage of text the 136 undergrad participants also ranked themselves against their peers on several factors including intelligence, attractiveness and moral character. As before, those primed with the clean text made harsher moral judgements on social issues. Crucially, this association was entirely mediated by their having an inflated sense of moral virtue compared with their peers (by contrast, reading the clean vs. dirty text made no difference to self rankings on the other factors).

'Acts of cleanliness have not only the potential to shift our moral pendulum to a more virtuous self, but also license harsher moral judgement of others,' Zhong and his team concluded.
_________________________________

Zhong, C., Strejcek, B., & Sivanathan, N. (2010). A clean self can render harsh moral judgment. Journal of Experimental Social Psychology, 46 (5), 859-862 DOI: 10.1016/j.jesp.2010.04.003

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Link to earlier related post: Your conscience really can be wiped clean.

Thursday, 26 August 2010

The Special Issue Spotter

We trawl the world's journals so you don't have to:

Diagnosis, diagnosis, diagnosis: towards DSM-5 (Mental Health Journal). Open Access.

First-year maternal employment and child development in the first seven years (Monographs of the Society for Research in Child Development).

Culture and psychological science (Perspectives on Psychological Science).

Special Issue on Language Development (Brain and Language).

Play, Talk, Learn: Promising Practices in Youth Mentoring (New Directions in Youth Development).

Special Issue: European Personality Reviews 2010 (European Journal of Personality).

DSM-5 and the 'Psychosis Risk Syndrome' (Psychosis).

Approaches to cognitive modelling (Trends in Cognitive Sciences).

Wednesday, 25 August 2010

What clients think CBT will be like and how it really is

Some people expect cognitive behavioural therapy (CBT) to be more prescriptive than it is, and therapists to be more controlling than they really are. That's according to a series of interviews with 18 clients who undertook 8 sessions (14 hours) of CBT to help with their diagnosis of generalised anxiety disorder.

Henny Westra and colleagues selected for interview nine clients whose therapy had ended positively and nine whose therapy had ended poorly. Four of the clients were male. There were four CBT therapists - two men and two women. One was PhD-qualified, two were senior clinical psychology grad students, and one was a junior grad student.

The vast majority of client comments (84 per cent) relating to expectations were that the CBT was not what they'd anticipated. Clients whose outcome was good tended to say they'd been pleasantly surprised - the therapist was collaborative and non-judgmental, and they'd had the opportunity to direct the therapy and choose what to talk about. Of the therapeutic process, the positive outcome clients felt, to their surprise, that they could trust the process, felt comfortable, and that they learned more than they expected. Both good and poor outcome clients worked harder in therapy than they anticipated.

Unsurprisingly, the poor outcome clients tended to say they'd been disappointed by the therapeutic process. In the majority of cases, they took pains not to blame their therapist, instead attributing their lack of progress to time constraints, poor health, their own unrealistic expectations, or their failure to remember the techniques. Direct criticism of the therapist was rare (even though interviewees were reassured their comments were confidential). One person said it would have been better not to have waited until session seven to discuss a key subject from their past.

Sixteen per cent of expectation-related comments conveyed that therapy was just as had been expected. One good outcome client in this category said they thought the therapist would get to the root of their problems, and he did. Poor outcome clients, by contrast, tended to make superficial remarks: 'it was fairly similar to what I expected, I guess'.

The broader context for this research is that client expectations are one of several factors that are known to be associated with therapeutic success (with positive expectations tending to precede good outcomes). However, very little research until now has looked at expectancy violations - that is, when therapy isn't what was expected, for good or bad.

'The findings ... suggest that expectancy disconfirmation in CBT, particularly negative expectations for the therapist and the therapy process, is a common and potentially powerful phenomenon in the experiences of CBT clients with good outcomes,' the researchers said.

A major shortcoming of this research is that the interviews weren't conducted until after the final therapy session, so it's possible that clients recalled their earlier expectations in light of their positive or negative experiences in therapy.
_________________________________

Westra, H., Aviram, A., Barnes, M., & Angus, L. (2010). Therapy was not what I expected: A preliminary qualitative analysis of concordance between client expectations and experience of cognitive-behavioural therapy. Psychotherapy Research, 20 (4), 436-446 DOI: 10.1080/10503301003657395

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Monday, 23 August 2010

Flynn effect for memory could invalidate neuropsychologists' tests

In Western countries, scores on IQ tests have been rising for several decades - the Flynn effect, named after the political scientist James Flynn. Now Sallie Baxendale at the Institute of Neurology has provided evidence that a similar effect has occurred for the standardised memory tests that are used by clinical neuropsychologists, a finding with implications for the diagnosis of memory problems in contemporary patients.

Baxendale focused on the Adult Memory and Information Processing Battery (AMIPB) - 'the most commonly used memory battery amongst clinical neuropsychologists in the UK' - published in 1985, and its successor, the Brain Injury Rehabilitation Trust Memory and Information Processing Battery (BMIPB), published in 2007. The two tests feature different wording and design but they both make equivalent demands: learning and recalling lists of words, and learning and recalling abstract line drawings.

Baxendale compared the performance of the two participant samples that provided the original normative data (the 'norms') for the two tests. These are the healthy participants, spanning four age ranges, whose average performance provides the benchmark for assessing patients. The normative data for the AMIPB was provided in 1985, or thereabouts, by 184 British people aged 18 to 75; the normative data for the BMIPB was collected in 2007 or thereabouts from 300 British people aged 16 to 89.

On one hand, there was little evidence of any difference in average performance on verbal learning and recall between the 1985 and 2007 samples (the exceptions were verbal learning in the 31-45 years age range and verbal recall in the oldest age range, both of which were superior in the 2007 sample). By contrast, visual learning and recall were both superior in the 2007 sample compared with the 1985 sample, at all four age ranges: 16-30; 31-45; 46-60; and 61-75. This is consistent with the traditional Flynn effect, which is most pronounced for non-verbal intelligence tests.

Baxendale said her findings have implications for diagnosis because present-day patients may, pre-trauma or pre-illness, have had elevated non-verbal learning and recall scores in comparison to the old normative data. Therefore, such patients could be impaired relative to their own healthy baseline, and yet appear unaffected compared with the out-of-date normative data. 'This may present a confound for neuropsychologists concerned with the lateralising and localising significance of memory test profiles,' Baxendale said.
_________________________________

Baxendale, S. (2010). The Flynn effect and memory function. Journal of Clinical and Experimental Neuropsychology, 32 (7), 699-703 DOI: 10.1080/13803390903493515

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Friday, 20 August 2010

Video protects girls from the negative effects of looking at ultra-thin models

'No wonder our perception of beauty is distorted' - that's the concluding catchphrase of a one-minute video called 'evolution' made by Dove a few years ago to show how cosmetics and computer trickery are used to create the unrealistic portrayals of female models on advertising billboards. Now a team of researchers at the University of the West of England, led by Emma Halliwell, have tested whether viewing this short video can buffer young girls against the negative effects of looking at images of ultra-thin female models. Past research found such a benefit when adult women viewed a similar video but this is the first time the idea has been investigated with young girls.

One hundred and twenty-seven girls, aged ten to thirteen, from two schools in the South of England, were recruited for what they thought was an evaluation of 'attitudes to health, appearance and magazines'. In keeping with the cover story, tests of body satisfaction and esteem were embedded among other questionnaires to try to conceal the true purpose of the study.

Consistent with past research, girls who looked at thin models subsequently reported lower body satisfaction and confidence compared with girls who looked at pictures of landscapes (in turn, prior research has linked lower body self-esteem with increased risk of developing an eating disorder). The key finding was that this negative effect was not seen among the girls who watched the Dove video first, before looking at the ultra-thin models. The body self-esteem and confidence of these girls were just the same as among girls who watched the video and then looked at pictures of landscapes.

'Theoretically, we assume that the intervention disrupted the upward social comparisons that many young girls make when viewing idealised media images,' the researchers concluded. 'Moreover, we propose that the comparison is avoided because the media models have been construed as artificial and, therefore, an inappropriate comparison target.' Halliwell and her team added that future research will be needed to test the truth of this reasoning and whether the benefits of watching the evolution video, or others like it, can be sustained over time.
_________________________________

Halliwell E, Easun A, & Harcourt D (2010). Body dissatisfaction: Can a short media literacy message reduce negative media exposure effects amongst adolescent girls? British Journal of Health Psychology PMID: 20687976

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Link to Dove's Evolution video.

Wednesday, 18 August 2010

Hunting the successful psychopath

Put aside the dramatic Hollywood portrayals. Suited, married, high achieving, some of them walk among us. No, not vampires or super-heroes but 'successful psychopaths'. Like their criminally violent cousins - the standard psychopaths - these people are ruthless, callous, fearless and arrogant. But thanks to their superior self-control and conscientiousness, rather than landing in prison, they end up as company chief executives, university chancellors and Queen's Counsel barristers. Well, that's the idea anyway. But it's an idea that's proven difficult for psychologists to investigate. After all, if you advertise for volunteers for a study of successful people who are psychopathic, you're not likely to get many responses.

Stephanie Mullins-Sweatt and her collaborators tried a different tack. They surveyed hundreds of members of the American Psychological Association's Division 41 (psychology and law), criminal attorneys and professors of clinical psychology about whether they'd ever known personally an individual who was successful in their endeavours and who also matched Hare's definition of a psychopath: 'social predators who charm, manipulate and ruthlessly plow their way through life ... completely lacking in conscience and feeling for others, they selfishly take what they want and do as they please, violating social norms and expectations without the slightest sense of guilt or regret.'

Of the 118 APA members, 31 attorneys and 58 psychology professors who replied, 81, 25 and 41, respectively, said they'd previously known a successful psychopath. The examples given were predominantly male and included current or former students, colleagues, clients, and friends (sample descriptions here). The survey respondents were asked to rate the personality of the successful psychopath they'd known and to complete a psychopathy measure of that person. These ratings were then compared with the typical profile for a standard (unsuccessful) psychopath.

The key difference between successful and standard psychopaths seemed to be in conscientiousness. Providing some rare, concrete support for the 'successful psychopath' concept, the individuals described by the survey respondents were the same as prototypical psychopaths in all regards except they lacked the irresponsibility, impulsivity and negligence and instead scored highly on competence, order, achievement striving and self-discipline.

'The current study used informant descriptions to provide information about successful psychopaths,' the researchers concluded. 'Such persons have been described in papers and texts on psychopathy but only anecdotally. This was the first study to conduct a systematic, quantitative analysis of such persons.'
_________________________________

Mullins-Sweatt, S., Glover, N., Derefinko, K., Miller, J., & Widiger, T. (2010). The search for the successful psychopath. Journal of Research in Personality, 44 (4), 554-558 DOI: 10.1016/j.jrp.2010.05.010

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Tuesday, 17 August 2010

How to apologise

Whether it's a company like BP apologising for causing environmental catastrophe or a political leader expressing regret for her country's prior misdemeanours, it seems barely a day goes by without the media watching hawkishly to find out just how the contrite words will be delivered and what effect they'll have on the aggrieved.

Surprisingly, psychology has, until now, paid little attention to what makes for an effective apology. Past studies have tended to focus instead simply on whether or not an apology was given. Now Ryan Fehr and Michele Gelfand at the University of Maryland have drawn on research in other disciplines, including sociology and law, to explore the idea that apologies come in three forms and that their impact varies according to the character of the victim.

The three apology types or components are: compensation (e.g. I'm sorry I broke your window, I'll pay to have it repaired); empathy (e.g. I'm sorry I slept with your best friend, you must feel like you can't trust either of us ever again); and acknowledgement of violated rules/norms (e.g. I'm sorry I advised the CIA how to torture people, I've broken our profession's pledge to do no harm).

Fehr and Gelfand's hypothesis was that the effectiveness of these different styles of apology depends on how the aggrieved person sees themselves (known as 'self-construal' in the psychological jargon). To test this, the researchers measured the way that 175 undergrad students see themselves and then had them rate different forms of apology. In a follow-up study, 171 more undergrads reported how they see themselves and then they rated their forgiveness of a fictional student who offered different forms of apology after accidentally wiping her friend's laptop hard-drive.

The researchers found that a focus on compensation was most appreciated by people who are more individualistic (e.g. those who agree with statements like 'I have a strong need to know how I stand in comparison to my classmates or coworkers'); that empathy-based apologies are judged more effective by people who see themselves in terms of their relations with others (e.g. they agree with statements like 'Caring deeply about another person such as a close friend is very important to me'); and finally, that the rule violation kind of apology was deemed most effective by people who see themselves as part of a larger group or collective (e.g. they agree with 'I feel great pride when my team or work group does well' and similar statements). These patterns held regardless of the severity of the misdemeanour, as tested by using different versions of the disk-wipe scenario in which either an hour's or several weeks' worth of data were lost.

The message, the researchers said, is that when apologising you should consider your audience. 'This need to meta-cognize about what a victim is looking for in an apology is particularly important when victims' and offenders' worldviews diverge,' they added. Of course, if in doubt about the character of your victim or victims, the researchers said that 'detailed apologies with multiple components are in general more likely to touch upon what is important to a victim than brief, perfunctory apologies. Offenders should therefore offer apologies with multiple components whenever possible.'

Fehr and Gelfand acknowledge their study has limitations, including their reliance on participants imagining fictional scenarios - future research should test out these ideas in the real world. 'By integrating theories of self-construal and apology,' they concluded, 'the current study has shown how the tailoring of apologies to individuals' self-construals can result in increased victim forgiveness.'
_________________________________

Fehr, R., & Gelfand, M. (2010). When apologies work: How matching apology components to victims’ self-construals facilitates forgiveness. Organizational Behavior and Human Decision Processes, 113 (1), 37-50 DOI: 10.1016/j.obhdp.2010.04.002

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Monday, 16 August 2010

Statistical significance explained in plain English

Warren Davies, a positive psychology MSc student at UEL, provides the latest in our ongoing series of guest features for students. Warren has just released a Psychology Study Guide, which covers information on statistics, research methods and study skills for psychology students.

Today I'm delighted to discuss an absolutely fascinating topic in psychology - statistical significance. I know you're as excited about this as I am!

Why is psychology a science? Why bother with complicated research methods and statistical analyses? The answer is that we want to be as sure as possible that our theories about the mind and behaviour are correct. These theories are important - many decisions in areas like psychotherapy, business and social policy depend on what psychologists say.

Despite the myriad rules and procedures of science, some research findings are pure flukes. Perhaps you're testing a new drug, and by chance alone, a large number of people spontaneously get better. The better your study is conducted, the lower the chance that your result was a fluke - but still, there is always a certain probability that it was.

In science we're always testing hypotheses. We never conduct a study to 'see what happens', because there's always at least one way to make any useless set of data look important. We take a risk; we put our idea on the line and expose it to potential refutation. Therefore, all statistical tests in psychology test the probability of obtaining your given set of results (and all those that are even more extreme) if the hypothesis were incorrect - i.e. the null hypothesis were true.

Say I create a loaded die that I believe will always roll a six. I’ve invited you round to my house tonight for a nice cup of tea and a spot of gambling. I plan to hustle you out of lots of money (don’t worry, we’re good friends and always playing tricks like this on each other). Before you arrive I want to test my hypothesis that the die is loaded against my null hypothesis that it isn't.

I roll the die. A six. Success! But wait... there’s actually a 1-in-6 chance that I would have gotten this result even if the null hypothesis were correct. Not good enough. Better roll again. Another six! That’s more like it; there’s a 1-in-36 chance of getting two sixes, assuming the null hypothesis is correct.

The more sixes I roll, the lower the probability that my results came about by chance, and therefore the more confident I could be in rejecting the null hypothesis.

This is what statistical significance testing tells you - the probability that the result (and all those that are even more extreme) would have come about if the null hypothesis were true (in this case, if the die were truly random and not loaded). It's given as a value between 0 and 1, and labelled p. So p = .01 means a 1% chance of getting the results if the null hypothesis were true; p = .5 means 50% chance, p = .99 means 99%, and so on.
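The dice logic above can be sketched in a few lines of Python (my addition, not part of the original post). The p value for rolling a given number of sixes is just the binomial probability of a result at least that extreme under the null hypothesis of a fair die:

```python
from math import comb

def p_value(sixes_rolled: int, rolls: int, p_six: float = 1/6) -> float:
    """Probability of rolling at least `sixes_rolled` sixes in `rolls`
    fair rolls - i.e. the chance of a result this extreme (or more so)
    if the null hypothesis (the die is fair) were true."""
    return sum(
        comb(rolls, k) * p_six**k * (1 - p_six)**(rolls - k)
        for k in range(sixes_rolled, rolls + 1)
    )

# One six in one roll: p = 1/6, not convincing.
print(round(p_value(1, 1), 3))   # 0.167
# Two sixes in two rolls: p = 1/36, below the usual .05 cut-off.
print(round(p_value(2, 2), 3))   # 0.028
```

Note that the function sums over all outcomes at least as extreme as the one observed, which is exactly the '(and all those that are even more extreme)' clause in the definition above.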

In psychology we usually look for p values lower than .05, or 5%. That's what you should look out for when reading journal papers. If there's less than a 5% chance of getting the result if the null hypothesis were true, a psychologist will be happy with that, and the result is more likely to get published.

Significance testing is not perfect, though. Remember this: 'Statistical significance is not psychological significance.' You must look at other things too; the effect size, the power, the theoretical underpinnings. Combined, they tell a story about how important the results are, and with time you'll get better and better at interpreting this story.

And that, in a nutshell, is what statistical significance is. Enthralling, isn't it?

--
Editor's note (07/09/2010): This post has been edited to correct for the fact that statistical significance pertains to the likelihood of a given set of results (and those even more extreme) being obtained if the null hypothesis were true, not to the probability that the hypothesis is correct, as was erroneously stated before. Sincere apologies for any confusion caused.

Thursday, 12 August 2010

Left hemisphere already specialised for language by two months of age

It's widely known that in the majority of people the left hemisphere is dominant for language. But how early does this lateralisation of function emerge? An obvious way to find out is to put babies in a brain scanner and see if their brains show the same left-sided preference for language, compared with other auditory stimuli, as is observed in adults. Of course, from a practical perspective, that's easier said than done.

Ghislaine Dehaene-Lambertz and her colleagues scanned the brains of 24 infants, aged approximately two and a half months, using fMRI. The researchers didn't cheat - no sedatives were used - although an experimenter did show the babies toys, visible via a mirror, to help keep them calm. Data from just seven of the babies was usable. As Dehaene-Lambertz and her colleagues explained: 'This high attrition rate underscores the fact that fMRI remains a challenge at this age.'

The basic paradigm involved playing the babies sentences spoken by their mother and by a stranger and comparing the activity this triggered against the activity triggered by music composed by Mozart.

Speech, but not music, triggered more activity in the left versus the right hemisphere of the babies' brains. Obviously babies can't yet understand speech. A possibility is that the left hemisphere starts out with a bias for rapidly changing stimuli - 'a bias', the researchers explained, 'that would be rapidly extended through learning to other properties of the speech signal...'.

Another finding was that a mother's voice triggered significantly greater activity in language regions than did a stranger's voice. Dehaene-Lambertz and her co-workers said this shows the mother's voice 'plays a special role in the early shaping of posterior language areas.' A further differential effect of the mother's voice is that it led to reduced activity in emotion-related regions. Perhaps, the researchers surmised, this was the neural basis of a 'soothing effect'.

Also notable was that, as in adults, the ventral (lower) portion of the left temporal lobe, but not dorsal (upper) half, showed what's known as a 'repetition effect' when the same four-second snippets of speech were replayed several times in succession. The 'repetition effect' is a reduction in activity with repetition, betraying a kind of memory for the repeated stimulus. The fact that one region of the temporal lobe showed this effect and another region didn't suggests that by two months of age the left temporal lobe is already made up of different functional sub-regions.

'A small but growing infant neuroimaging literature points to the existence, in the first few months of life, of a well-structured cortical organisation,' the researchers concluded. However, they also cautioned that 'acknowledging the existence of strong genetic constraints' on the early organisation of language-related brain regions 'does not preclude environmental influences'. Indeed, they added that: 'The present results show clearly that learning also plays a major role in structuring the infant's brain networks, inasmuch as the mother's voice has a strong impact on several brain regions involved in emotion and communication ...'.
_________________________________

Dehaene-Lambertz, G., Montavont, A., Jobert, A., Allirol, L., Dubois, J., Hertz-Pannier, L., & Dehaene, S. (2010). Language or music, mother or Mozart? Structural and environmental influences on infants’ language networks. Brain and Language, 114 (2), 53-65 DOI: 10.1016/j.bandl.2009.09.003

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Wednesday, 11 August 2010

Are children from collectivist cultures more likely to say it's okay to lie for the group?

Would you lie for the sake of your team? Perhaps it depends on the culture you come from. Monica Sweet at the University of California and her co-researchers reasoned that children from collectivist cultures, such as China, which emphasise the importance of group ties, might be more inclined to say it's okay to lie for your team than children from individualistic cultures, such as the US, which place more value on self-interest.

Nearly four hundred children aged seven to eleven, approximately half from a city in Eastern China and half from the US, were presented with fictional scenarios in which a protagonist either lied or told the truth about a transgression by his or her team. The transgression related either to a tug-of-war team cheating by getting extra friends to help or a drawing competition team cheating by getting older children to help.

The surprising finding was that the children from China actually found lying to protect one's team less acceptable than did the children from the US. 'This is not to suggest that Chinese children were acting in an individualistic manner,' the researchers said, 'but rather that they were acting based on what they believed to be a more salient moral aspect of the situation.'

Moreover, children from both the US and China tended to refer to the protagonist's concern for him or herself (e.g. 'she wanted to win'), rather than concern for the team, when asked to explain the protagonist's motivation to lie or truth-tell. Also, asked to justify their own evaluation of the protagonist's lies or truth-telling, few Chinese or American children mentioned concern for others (e.g. 'she did the right thing by standing by her group'). '...[I]t is somewhat surprising,' the researchers said, 'that more children from China, the collectivist culture, did not mention the impact of the protagonist's decision on others.'

'Taken together,' the researchers concluded, 'the findings suggest that collectivist ideals do not necessarily equate to a greater focus on the group, and that situational context matters.' However, they acknowledged that the results might have been different if they'd used a sample of children from rural China as opposed to urban China, where Western influences are on the increase.
_________________________________

Sweet, M., Heyman, G., Fu, G., & Lee, K. (2010). Are there limits to collectivism? Culture and children's reasoning about lying to conceal a group transgression. Infant and Child Development. DOI: 10.1002/icd.669

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Monday, 9 August 2010

Predicting when a crime is about to take place on CCTV

Are experienced CCTV operators better than naive participants at judging from an unfolding scene on CCTV whether or not a crime is about to be committed? The short answer is no, they aren't. Presented with 24 real-life 15-second CCTV clips, and asked to predict which half ended just before a crime was about to be committed (examples included violence and vandalism) and which half were innocuous, 12 experienced CCTV operators managed just 55.5 per cent accuracy - no better than if they'd just been guessing. Twelve naive controls achieved an accuracy of just 46.5 per cent - statistically no different from the CCTV operators.
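The paper's own statistics aren't reproduced here, but a rough illustration shows why 55.5 per cent on 24 clips is indistinguishable from chance. Assuming each observer's 24 judgements are independent coin-flips at 50 per cent (an assumption of this sketch, not a detail from the study), an exact two-sided binomial test on roughly 13 correct out of 24 gives a p-value far above any conventional significance threshold:

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test: sum the probabilities of all
    outcomes no more likely than the observed count k."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = probs[k]
    return min(1.0, sum(pr for pr in probs if pr <= observed + 1e-12))

# An operator scoring 55.5 per cent on 24 clips got roughly 13 right
p_value = binom_two_sided_p(13, 24)
print(p_value > 0.05)  # True: well above 0.05, indistinguishable from guessing
```

With only 24 clips per observer, an accuracy of around 70 per cent would be needed before chance could confidently be ruled out, which puts both groups' scores in perspective.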

Another purpose of the research was to find out if certain viewing tactics lead to more accurate predictions of criminality. To do this, Dawn Grant and David Williams of the University of Hertfordshire recorded the eye movements of the CCTV operators and control participants as they watched the brief clips. Also, for a subset of the clips, they asked the participants to talk through their thought processes regarding what was taking place.

The key to successful predictions seemed to be to pay attention to the social context. Specifically, when participants spent more time focused on the face or head of single individuals not engaged in any social interaction, or looking at the bodies of those in a social interaction, they tended to more accurately predict whether a crime was about to occur. Grant and Williams think this might be because the former allowed the participants to notice when a lone person in the scene was staring at other people, which could betray their plans to commit a criminal act. Meanwhile, viewing the bodies, rather than faces, of those in a social interaction, might have allowed the participants to notice aggressive body language and the spatial proximity of people in a group.

This speculation was backed up by the participants' spoken accounts of how they were appraising the scenes. For example, participants who made accurate comments about which people in a scene belonged to which social group tended to also make accurate predictions about when a crime was about to occur. Accurate predictions also tended to be preceded by comments about body language and the social proximity of people in the CCTV footage.

'For certain types of crime, it may be that understanding the social context and the relations between those in the CCTV image is the first step towards obtaining reliable indicators of criminal intent,' the researchers concluded. Of course, whether we actually want CCTV operators, accurate or otherwise, watching our movements and forecasting crimes is another question altogether.
_________________________________

Grant, D., & Williams, D. (2010). The importance of perceiving social contexts when predicting crime and antisocial behaviour in CCTV images. Legal and Criminological Psychology. DOI: 10.1348/135532510X512665

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Friday, 6 August 2010

Stubbing out thoughts of smoking leads smokers to end up smoking more

Try not to think of a white bear and what happens? You end up thinking of a white bear. This idea that suppressing thoughts makes them rebound stronger is well-established in psychology [pdf]. Now James Erskine and his co-workers have shown that the same or a similar process can lead behaviours to rebound too.

Eighty-five smokers (average age 31), none of whom were currently trying to quit, were divided into three groups for three weeks. One group was instructed to spend the middle week avoiding and suppressing all smoking-related thoughts. The second group were to think about smoking as much as they could during that second week; the third group acted as controls and didn't suppress or encourage smoking-related thoughts. Participants in all groups kept daily diaries of how much they smoked, their stress levels and how much they'd attempted to suppress smoking-related thoughts.

The main finding was that smokers in the suppression group smoked less than others during the middle week while they were suppressing smoking-related thoughts, but ended up smoking significantly more than the other smokers in the final week. In other words, trying to avoid thinking about smoking had a short term benefit but ultimately led to more smoking later on.

Erskine and his colleagues said this short-term benefit of thought suppression was 'troublesome' and could lead smokers to believe mistakenly that the strategy was beneficial.

Another finding to emerge was that smokers from all three groups who suppressed more smoking-related thoughts (as recorded in their evening diaries) tended to have a history of more failed attempts to quit smoking.

'Thought suppression may be more harmful than previously believed,' the researchers concluded. 'Our findings are especially relevant to populations that seek to control behaviours on an ongoing basis (e.g. addicts), but are also relevant to any individuals attempting to control their desires, thoughts, and behaviours.'

This new study comes after an earlier report by James Erskine, in which suppressing thoughts of chocolate led participants to eat more chocolate.
_________________________________

Erskine, J.A., Georgiou, G.J., & Kvavilashvili, L. (2010). I suppress, therefore I smoke: Effects of thought suppression on smoking behavior. Psychological Science. PMID: 20660892

Thanks to George Georgiou at the University of Hertfordshire who tipped the Digest off about this new research.

Wednesday, 4 August 2010

Floral arrangement as a cognitive training tool for schizophrenia

It's the hallucinations and delusions associated with schizophrenia that typically attract discussion and research. However, patients with a diagnosis of schizophrenia also exhibit deficits in memory and perception and, importantly, the severity of these is predictive of quality of life, social functioning and autonomy. How can these cognitive deficits be helped? Researchers have found some success with computer-based training but patient motivation can be a problem. Now a team of researchers led by Hiroko Mochizuki-Kawai at the delightfully named National Institute of Floricultural Science in Japan has tested out the benefits of floral arranging. 'The use of natural materials may reduce tension and anxiety,' they predicted.

Ten patients (six men) with a diagnosis of schizophrenia or schizoaffective disorder agreed to undertake four one-hour sessions of flower arranging, supported by staff, over two weeks. The arranging involved following simple written instructions, holding them in memory one at a time, and placing flowers and leaves into the correct slots in an absorbent sponge. Two patients failed to attend; average attendance for the remainder was 3.1 sessions.

Before the intervention, the flower arranging patients' performance on the 'block-tapping' measure of non-verbal working memory was the same as that displayed by ten controls. After two weeks' flower arranging, however, the flower arrangers' performance had improved and was now superior to the controls. The block tapping task involves observing blocks being touched one at a time and then reproducing that same order from memory. On another test, which involved copying a complex figure from memory, the flower arranging patients were again no better than controls at the study outset but were superior to controls after the two weeks of training (although this was because the controls had deteriorated at the task rather than because the flower arrangers had improved).

This was only a pilot study and it has obvious shortcomings including the small sample sizes, the lack of any comparison intervention for the control group, and no way of measuring the impact of cognitive gains on quality of life. However, the researchers were upbeat in their conclusion: 'We believe that the findings of the present study may contribute to the improvement of cognitive rehabilitation in schizophrenic patients'.
_________________________________

Mochizuki-Kawai, H., Yamakawa, Y., Mochizuki, S., Anzai, S., & Arai, M. (2010). Structured floral arrangement programme for improving visuospatial working memory in schizophrenia. Neuropsychological Rehabilitation, 20(4), 624-636. DOI: 10.1080/09602011003715141

Monday, 2 August 2010

That's not a poker face, this is a poker face

What does your poker face look like? If it's the traditional, stern, emotionless expression, you may want to consider practising a new one. Erik Schlicht and colleagues report that a friendly, trustworthy face is more likely to influence your opponents, leading them to think that you've got a good hand - that you're not bluffing.

Schlicht's team had 14 relative novices play hundreds of one-shot rounds of a simplified version of Texas Hold'em poker against hundreds of different 'opponents'. Each round the participants received a two-card hand and their opponent had bet 5000 chips. They had to decide whether to 'fold' or 'call'. Folding meant they would lose 100 chips guaranteed. By calling, they would win 5000 chips if their hand was stronger than their opponent's, or lose the same amount if their hand was weaker. To boost their motivation, participants had the chance to win a small amount of money based on the outcome of one randomly chosen hand out of the 300 that they played.
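The payoff structure makes the fold-or-call trade-off easy to state as expected values (this is a sketch of the arithmetic, not code from the study): folding costs 100 chips for certain, while calling is worth 5000(2w - 1) chips when the participant's subjective probability of holding the stronger hand is w. Calling is therefore the better gamble whenever w exceeds 0.49:

```python
def call_ev(win_prob, stake=5000):
    """Expected value of calling: win the stake with probability
    win_prob, lose the stake otherwise."""
    return stake * win_prob - stake * (1 - win_prob)

FOLD_EV = -100  # folding loses 100 chips for certain

def optimal_action(win_prob):
    """Pick whichever action has the higher expected value."""
    return 'call' if call_ev(win_prob) > FOLD_EV else 'fold'

print(optimal_action(0.50))  # call: EV of 0 beats a certain -100
print(optimal_action(0.45))  # fold: EV of calling is -500
```

The 0.49 break-even point is why a trustworthy-looking opponent matters: anything that nudges a player's estimate of w just below the threshold flips the optimal decision from call to fold.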

Each round, before making their decision, the participants saw a picture of their opponent's face. These were morphed to appear either untrustworthy, neutral or trustworthy (see picture). Participants were told that, as in real poker, the different opponents could have different styles of play (but no mention was made of faces providing a clue to style).

Because participants played just one round against each opponent there was no opportunity to use past behaviour to make judgments about their style. This meant the only information participants had to go on was the cards in their own hand and any inferences they'd made about their current opponent's playing style based on his face. They didn't receive any feedback during play on whether they'd won a round or not.

On each round, there was an optimal decision for participants to make considering the cards in their hand and the stakes involved in folding or calling. The researchers were careful to ensure that participants' hands were of equal value across the different categories of opponent face - trustworthy, neutral, untrustworthy. Unbeknown to the participants, their opponents' hands bore no relation to their facial expression.

The key finding was that faces with neutral or untrustworthy expressions made no difference to the decisions the participants made. By contrast, if an opponent had a trustworthy face, the participants took longer to decide what to do and they made less optimal decisions. Effectively, they were behaving as if their opponent had a better hand.

'Contrary to the popular belief that the optimal face is neutral in appearance,' the researchers said, 'poker players who bluff frequently may actually benefit from appearing trustworthy, since the natural tendency seems to be inferring that a trustworthy-looking player bluffs less.' Before you try this out at your local poker den, remember the findings apply when you're up against new opposition and there's little other information to go on.
_________________________________

Schlicht, E.J., Shimojo, S., Camerer, C.F., Battaglia, P., & Nakayama, K. (2010). Human wagering behavior depends on opponents' faces. PLoS ONE, 5(7). PMID: 20657772

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

For further info, lead author Erik Schlicht has created a webpage where he answers frequently asked questions about this research.