Showing posts with label Argument. Show all posts

Tuesday, 23 February 2016

Why is it so hard to persuade people with facts?

An effective way to correct people’s falsely held beliefs is to address them directly with evidence. However, such rebuttals can sometimes backfire, leading people to double-down on their original position. A new paper published in Discourse Processes suggests why: when people read information that undermines their identity, this triggers feelings of anger and dismay that make it difficult for them to take the new facts on board.

Past research had suggested that one reason changing minds is so challenging is that exposing someone to a new perspective on an issue inevitably arouses in their minds the network of information justifying their current perspective. An arms race ensues: when the new complex of information overwhelms the old, often by integrating some of the existing information (yes, yoghurt contains bacteria, but bacteria can be helpful), persuasion is possible. If not, the attempt fails, or even backfires, as the old perspective is now burning even more fiercely in the person’s consciousness.

However, the new research led by Gregory Trevors was motivated by the idea that the backfire effect may not be about which side is winning that mental arms race at all. Instead, these researchers believe the problem occurs when new information threatens the recipient’s sense of identity. This triggers negative emotions, which are known to impair the understanding and digestion of written information.

Trevors’ team tested their theory with a study on genetically modified foods – a subject rife with misconceptions, such as that hormones are involved in making them. The researchers assessed 120 student participants for their prior knowledge of and attitudes to genetically modified organisms (GMOs) and their need for dietary purity, measured by items like “I often think about the lasting effects of the foods I eat.” This was the key variable of interest because it was intended to tap into how important food purity was to the participants’ sense of identity. The researchers specifically wanted to find out whether this identity factor would influence how people felt when their beliefs were challenged, and whether they would comply with, or resist, the challenge.

After the researchers gave participants scientific information worded to directly challenge anti-GMO beliefs, those with higher scores in dietary purity rated themselves as experiencing more negative emotions while reading the text, and in a later follow-up task, they more often criticised GMOs. Crucially, at the end of the study these participants were actually more likely to be anti-GMO than a control group who were given scientific information that didn’t challenge beliefs: in other words, the attempt to change minds with factual information had backfired.

In further analysis, the researchers directly tested the claim that the identity factor had disrupted the learning of new pro-GMO information, but there was no evidence for this. Although negative emotions were weakly associated with lower post-test learning on a short quiz, participants at all levels of dietary purity performed at a similar (poor) level.

So we can reasonably conclude from this study that threats to a person’s identity do cause resistance to taking new factual arguments on board, and we know negative emotions seem to play a part, but we need more research to fully understand why this leads to a backfire effect.

If persuasion is most at risk of backfiring when identity is threatened, we may wish to frame arguments so they don’t strongly activate that identity concept, but rather others. And if, as this research suggests, the identity threat causes problems by agitating emotion, we may want to delay this disruption: rather than telling someone (to paraphrase the example in the study) "you are wrong to think that GMOs are only made in labs because…", arguments could first describe cross-pollination and other natural processes, giving time for this raw information to be assimilated, before drawing attention to how it is incompatible with the person's existing belief – a stealth bomber rather than a whizz-bang, so to speak.
_________________________________

Trevors, G., Muis, K., Pekrun, R., Sinatra, G., & Winne, P. (2016). Identity and Epistemic Emotions during Knowledge Revision: A Potential Account for the Backfire Effect. Discourse Processes. DOI: 10.1080/0163853X.2015.1136507

--further reading--
The “Backfire Effect”: Correcting false beliefs about vaccines can be surprisingly counterproductive
Researchers say they've found a way to combat anti-vaccine attitudes, but is it premature to celebrate?
How to win an argument (podcast)

Post written by Alex Fradera (@alexfradera) for the BPS Research Digest.

Our free fortnightly email will keep you up-to-date with all the psychology research we digest: Sign up!

Monday, 11 January 2016

How to evaluate an argument like a scientist

From the pontifications of the politician on the nightly news, to the latest tabloid health scare, we're constantly bombarded by other people's arguments – their attempts to make a particular claim based on some kind of evidence. How best to evaluate all these assertions and counter-assertions? Some insights come from a new study in the journal Thinking and Reasoning that's compared the argument evaluation strategies of scientists (advanced doctoral students and post-docs in psychology) with those used by first-year undergrad psych students.

Sarah von der Mühlen and her colleagues presented the 20 undergrads and 20 psychologists with two passages of text, approximately 400 words long, about smoking and addiction, each containing a mix of plausible and implausible arguments (note that the surface meaning and grammar of the implausible arguments were not at fault).

There were several elements to the task: All the participants were asked to identify the different components of the arguments and to judge the plausibility of the arguments. They were specifically told to evaluate the arguments based on their internal consistency and quality, not based on their own prior knowledge or opinion. The participants were also interviewed afterwards about what they'd thought of the task, the strategies they'd used to evaluate the arguments, and whether the arguments contained any of a list of fallacies, such as being circular. For one of the texts, the participants were asked to speak their thoughts out loud as they evaluated the arguments, granting the researchers immediate insight into their evaluation strategies.

As you might expect, the psychologists were better than the students at judging the plausibility of the arguments (achieving roughly 80 per cent vs. 70 per cent accuracy). The psychologists were especially superior at spotting weak or implausible arguments (they spotted nearly 80 per cent of these vs. 60 per cent spotted by the students). The psychologists, who took more time to judge plausibility, were also better at breaking down the structure of the arguments, especially at recognising what's known as the argument "warrant" – this is the link made between the claim and the evidence cited to support that claim.

From analysing the participants' out-loud thoughts and their comments at interview, the researchers established that at least part of the reason the psychologists were better at evaluating the arguments was that they far more often than the students (over 40 per cent of the time vs. around 12 per cent of the time) actually followed the instructions and made their judgments by considering the internal consistency of the arguments and whether the arguments contained any logical fallacies, including: being circular in nature; containing a contradiction; using a wrong example; citing a false dichotomy; or overgeneralising (see box for examples). By contrast, the students more often (approximately 43 vs. 27 per cent of the time) relied on their intuition (as revealed by comments like "I don't know why, but that just doesn't sound plausible to me") and on their prior opinions or knowledge.

Psychologists and other scientists aren't usually given formal training in argument logic and analysis, but the researchers think they probably pick up a lot of relevant analytical skills through their training and the social aspects of being a scientist. Further analysis suggested that a greater awareness of the formal structure of arguments (check out the Toulmin model of argumentation for more on this), and the range of argument fallacies, helped the psychologists better evaluate the arguments used in this study. However, we need to be aware that the study was cross-sectional so we don't know that this knowledge caused their better performance – for example, perhaps being the kind of person to take on post-doctoral science studies makes you better at judging arguments and/or maybe the psychologists were more motivated to excel at the task and follow the instructions.

Another limitation of this research is that the students and psychologists were assessing arguments in a context that was at least partly related to their domain of expertise or study (but note that no prior knowledge was required to judge the plausibility of the arguments). It would be interesting to know how well the psychologists' argument evaluation skills would extend to other topics. For now though, what this research reveals is that when it comes to evaluating arguments, people find it very difficult to put aside their gut instincts and their prior opinions and knowledge, and to judge the arguments in a logical way, based on their actual quality and coherence. Although we think of scientists as highly knowledgeable experts, their greater skill at evaluating arguments actually seems to come from their ability to forget what they know and to judge an argument on its merits.
_________________________________

von der Mühlen, S., Richter, T., Schmid, S., Schmidt, E., & Berthold, K. (2015). Judging the plausibility of arguments in scientific texts: A student–scientist comparison. Thinking & Reasoning, 1-29. DOI: 10.1080/13546783.2015.1127289

--Further reading and listening--
PsychCrunch Episode 3: How To Win An Argument
When our beliefs are threatened by facts, we turn to unfalsifiable justifications
Conspiracy theorists are more focused on discrediting official accounts than proposing their own
Five minutes with the discoverer of the Scientific Impotence Excuse, Geoffrey Munro

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.


Monday, 30 November 2015

The secret to a conflict-proof relationship? Feeling like your partner understands you

A relationship under strain can be helped by a dose of understanding. In itself, this is no new insight, and it makes sense that understanding your partner and looking for mutual solutions is healthier than looking to win the argument or change them. But new research published in the Journal of Personality and Social Psychology suggests that quite aside from any practical value of understanding, simply feeling understood can nullify conflict’s impact – or even allow it to improve relationships.

Amie Gordon and Serena Chen at the University of California conducted several studies with participants recruited through online research pools, most of whom were in their twenties and thirties; all were in an ongoing relationship with at least six months behind it. The first survey-based study showed that the higher the frequency of conflicts in a relationship, the less satisfying it was to participants – unless they felt that their partner understood them well (feeling understood was measured by agreement with statements like  “My partner nearly always knows exactly what I mean”).

In the second study, participants who wrote about a specific conflict that had occurred in their current relationship reported lower satisfaction with that relationship immediately afterwards, compared with a control group who wrote about a neutral event – but only if they were asked to consider a conflict in which their partner didn’t understand them. This suggests that conflict needn’t be harmful to a relationship if it occurs in the context of feeling understood.

A diary study over a fortnight showed this effect of feeling understood (this time measured by items like: "Today, how much do you think your partner was able to accurately understand what you were thinking and feeling?") wasn’t merely a product of laboratory context: here too, as participants went about their lives, conflicts accompanied by feeling understood by one’s partner didn't appear to harm people’s satisfaction with the relationship.

All these results point to the beneficial effect of feeling understood – yes, there are some alternative explanations for the results, but the studies addressed these. For example, it’s true that when participants felt understood, conflicts were more likely to be resolved, and often (although not always) involved the participants believing they themselves were understanding. Thanks to a key experiment involving video, described below, we also know that these conflicts are typically conducted in a more positive tone. But when these factors were accounted for, the beneficial effect of feeling understood remained. Nor were these conflicts over lower stakes, and the diary study, by controlling for previous-day relationship satisfaction, ruled out prior satisfaction as a third variable driving both higher satisfaction scores and feeling understood.

One of the most compelling experiments in the new research involved both partners in a relationship being invited into the lab to discuss, while being videotaped, a topic that was a source of conflict within their relationship. Afterwards, they completed similar surveys to the other studies, and analysis of this replicated the prior finding, as well as producing two other intriguing findings (note, these do warrant retesting and replication before we put too much faith in them). The pre- and post-conflict measures of satisfaction available here showed that participants who felt understood during the conflict left the session more satisfied than when they began. In other words, when participants felt their partner understood them, the conflict apparently wasn’t just less harmful, it was actually beneficial. And when one partner felt understood, the other felt happier, even after controlling for how understood they felt themselves. It seems there’s a virtuous circle at play: when you feel understood, that increases your partner’s faith in the relationship.

Regardless of whether we get what we want, how soft and fluffy the encounter is, or what’s at stake, when we feel understood it seems our relationships can withstand, even flourish from, conflict. Some of that is about feeling cared for, and some is faith that your partner genuinely considers what you have to be a partnership: the researchers identified both motivations as playing a part. But beyond these, feeling understood remained worthwhile as an end in itself. However simple such a need might seem to fulfil, beware: Gordon and Chen point to two ways we can fall short and allow conflicts to rankle. One is not understanding – not being willing to see the other’s concerns. The other is concluding you are being misunderstood or ignored even when your partner is doing their best to understand, because you can’t get past your historically founded conclusions about what they are capable of.

_________________________________

Gordon, A. M., & Chen, S. (2015). Do You Get Where I'm Coming From?: Perceived Understanding Buffers Against the Negative Impact of Conflict on Relationship Satisfaction. Journal of Personality and Social Psychology. PMID: 26523997

Post written by Alex Fradera (@alexfradera) for the BPS Research Digest.


Monday, 19 October 2015

Episode Three: How To Win An Argument



This is Episode Three of PsychCrunch, the new podcast from the British Psychological Society's Research Digest. In this episode we explore whether psychology can help you to win an argument.

After our presenter Christian Jarrett tries his luck with an argument about Michael Jackson's legacy, we find out why convincing people of your point of view is so difficult, and we hear about a paradoxical technique that's encouraging people to change their own minds about one of the most serious arguments in the world – the Israeli-Palestinian conflict. We also touch on why neurobabble appears to be so convincing.

Our guests are Dr Jon Sutton (Editor, The Psychologist); Dr Tom Stafford (University of Sheffield); Boaz Hameiri (Tel Aviv University); and Dr Sara Hodges (University of Oregon).

Some of the research discussed by our guests has been covered previously on the Research Digest blog, including how superfluous neuroscience can be so persuasive, and other relevant research is in our archive. Boaz Hameiri's research on the paradoxical thinking intervention was published last year in PNAS. Tom Stafford's ebook, For Argument's Sake: Evidence That Reason Can Change Minds, is available on Amazon. Further reading from The Psychologist magazine: The truth is out there – a look at belief in conspiracy theories; Are conspiracy theories just harmless fun?; Looking back: Every believer is also a disbeliever; Falling on deaf ears – when people believe psychology is not science.

Episode credits: Presenter/editor Dr Christian Jarrett. Producer Dr Lorna Stewart. Music and mixing Dr Catherine Loveday and Jeff Knowler. Art work Tim Grimshaw.

Subscribe and download via iTunes.
Subscribe and download via Stitcher.

Bonus material:
Listen to the entire Michael Jackson argument between Christian Jarrett and Jon Sutton!

Previous episodes:
Episode one: Dating and Attraction.
Episode two: Breaking Bad Habits.

---
PsychCrunch is sponsored by Taylor and Francis

Tuesday, 4 August 2015

Researchers say they've found a way to combat anti-vaccine attitudes, but is it premature to celebrate?

By guest blogger Simon Oxenham

Over recent years, measles has once again become a public health crisis in the western world as a result of growing anti-vaccination movements in the UK and the US. This is an enormous problem because the success of vaccination depends on herd immunity: unless the vast majority of the population is protected, vaccines cannot halt the rapid spread of infectious diseases, leaving vulnerable people at risk, such as those with immune disorders, the elderly and newborn babies.

Efforts to encourage vaccine uptake have been hampered by a curious psychological phenomenon: the Backfire Effect. Research by Brendan Nyhan and Jason Reifler published this year demonstrated that efforts to persuade people to vaccinate their children can backfire, and actually reduce intentions to vaccinate. Now a new study, published yesterday in PNAS, has investigated further, adapting Nyhan and Reifler’s methodology.

A core problem in combating anti-vaccine views is that it is very hard to prove absence of risk, much in the same way that it is impossible to prove there isn't a cosmic teapot orbiting the sun. The authors of the new study, led by Zachary Horne, hypothesised that a better approach would therefore be to focus on convincing parents that without vaccinating, the risk of contracting vaccine-preventable diseases is high and the consequences severe.

First, a quick recap of an earlier 2014 study by Nyhan and colleagues that provided the inspiration for the new research. Nyhan’s group compared the effects of four different types of evidence: 1) Correcting misinformation about the MMR vaccine; 2) Information about the risks of measles, mumps and rubella; 3) Parents’ reports of their children being infected with these diseases; and 4) Images of children infected with the diseases.

Nyhan's team found that none of these forms of evidence alone increased parents’ intentions to vaccinate their children. The effects of the information about measles, mumps and rubella were surprisingly neutral, while the images of infected children and the mother’s narrative about her hospitalised child both had the unintended effect of increasing beliefs in vaccine side-effects. The images also managed to inadvertently increase false beliefs that vaccines cause autism. The material that refuted the MMR-autism link successfully reduced false beliefs, but also reduced the intent to vaccinate in parents with the most anti-vaccine beliefs (i.e. there was a backfire effect). The overall picture was complex to say the least.

For the new study, Horne and his team combined the different information presented in the 2014 research to create a “disease risks condition” that included a mother's narrative of her child experiencing infection, alongside information about the effects of the diseases, as well as the images of the infected children. The comparison condition provided evidence contradicting the myth that MMR causes autism, and a control group read irrelevant information. Participants were hundreds of parents and non-parents surveyed online.

Contrary to the 2014 study, Horne’s team found that informing participants of the risks of failing to vaccinate did have a meaningful effect on parents’ beliefs about vaccines. This compared favourably to the information that debunked false beliefs about autism and the MMR vaccine, the effect of which did not differ significantly from the control condition. In this study neither intervention led to a backfire effect.

The results are encouraging, and indicate that a backfire effect is not unavoidable when communicating information about vaccine safety. The study is, however, not without its limitations. It didn't directly evaluate intention to vaccinate – an important factor, as Nyhan’s 2014 study found that debunking false beliefs successfully reduced misperceptions about vaccines, but at the same time had the effect of reducing intentions to vaccinate in the very same people! In their 2014 paper they even remarked "if we had not measured intent, we might have missed a potentially dangerous backfire effect". The same researchers replicated this finding earlier this year in the context of flu. It seems a design flaw of the new Horne et al. study that this effect wasn't evaluated.

In response to the paper, Nyhan and Reifler remarked: "We’re excited to see the Horne et al. PNAS article on the effect of messages promoting vaccines and hope to see more experimental studies like this in the future. In addition, we’re encouraged that they find messages about the dangers of communicable diseases are effective in improving vaccine attitudes. Future research should consider the extent to which design differences between the two studies help explain the somewhat different conclusions the two studies reach, which will help us reach our shared goal of better understanding vaccine attitudes and behavior."

While the new research sounds like good news all round, celebration may be premature. The findings don't even get close to the bottom of the mystery that is the backfire effect; they don't explain the complexities teased out in earlier research, and they don't show that behaviour to vaccinate or not vaccinate would have changed. At a time when anti-vaccine beliefs are making a comeback, understanding and avoiding the backfire effect is more important than ever.

The next step for researchers will likely involve working out how we can make a powerful enough argument to convince the hard core of people with fixed anti-vaccine beliefs to vaccinate their children. The final step, and perhaps most difficult of all, will be ensuring that the change in beliefs stands the test of time.
_________________________________

Horne, Z., Powell, D., Hummel, J., & Holyoak, K. (2015). Countering antivaccination attitudes. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1504019112

--further reading--
The “Backfire Effect”: Correcting false beliefs about vaccines can be surprisingly counterproductive

Post written by Simon Oxenham for the BPS Research Digest. Simon Oxenham covers the best and the worst of the world of psychology and neuroscience on his Neurobonkers blog at the Big Think. Follow @Neurobonkers on Twitter, Facebook, Google+, RSS or join the mailing list.


Friday, 17 April 2015

Psychology students are seduced by superfluous neuroscience

It seems as though neuroscience is particularly popular and seductive. Not only is the discipline enjoying some eye-spinningly massive new grants, there are also ever more brain-branded products (like brain games and brain drinks), there are new disciplines like neuroleadership, and there's a growing obsession about the brain among many journalists, many of whom invoke brain science in odd contexts (check out "The neuroscience of ISIS" for a recent example).

This atmosphere has led to a near-consensus among commentators that there is something distinctly persuasive about neuroscience. In fact, besides anecdotal argument, there is little solid evidence to suggest this is true (and some that it's not). A landmark paper from 2008 showed that images of the brain are particularly compelling, but this effect has failed to replicate.

Another key study, also from 2008, demonstrated the seductive allure of neuroscience – participants found circular explanations for psychological phenomena more convincing when they contained superfluous written neuroscience information. Unfortunately, this study had issues. For example, it's possible the addition of the neuroscience information simply acted to conceal the circularity of the explanations.

Enter Diego Fernandez-Duque and his colleagues. Across four studies, they asked dozens of US psychology students to rate the quality of short explanations (some were sound, others were circular) for psychological phenomena such as "face recognition" and "emotional states". The main take-away is that when superfluous neuroscience information (i.e. information that offered no further insight) was added to the end of these explanations, the students rated the explanations more highly. The students with superior analytical skills were just as prone to this effect. The students' religious and other philosophical beliefs (such as their endorsement of mind-body dualism) also made no difference.

Fernandez-Duque found the convincing influence of superfluous neuroscience information applied both to good quality and circular explanations. However, the additional presence of brain imagery did not add to the appeal of the explanations, thus confirming recent failures to replicate the allure of brain pictures.

It's not just that extra, spurious neuroscience information made psychological explanations more convincing by making them longer. The addition of superfluous social science information did not increase the students' ratings of the explanations. Neither is it simply that neuroscience is seen as a "hard science" adding weight to purely psychological explanation. When the researchers tested the addition of superfluous chemistry, maths, genetics or physics information (i.e. science disciplines also considered "hard" or prestigious), this did not lead the students to rate the explanations of the psychological phenomena more highly (this despite the fact that, on their own, these extra superfluous snippets were considered just as high quality as the extra neuroscience information).

The researchers say all this suggests there is something uniquely convincing about neuroscience in the context of psychological phenomena. They believe the most plausible reason is that psychology students endorse a "brain-as-engine-of-mind" hypothesis – that is, they "assign to neuroscience a privileged role in explaining psychological phenomena not just because neuroscience is a 'real' science but because it is the most pertinent science for explaining the mind." That the students who endorsed dualist beliefs (seeing the mind as separate from the brain) were just as wooed by superfluous neuroscience information somewhat undermines this interpretation.

It will be interesting to test whether these findings hold true for the general public, and for people in other cultures for whom the brain might be considered less important. If the allure of neuroscience is found more widely, it's a worrying situation. As the researchers explain: few, if any, mental phenomena have single causes. "As such, infatuation with any single source explanation – whether it is the brain or something else – may impede humans' progress to find and accept more complete explanations."

_________________________________

Fernandez-Duque, D., Evans, J., Christian, C., & Hodges, S. (2015). Superfluous Neuroscience Information Makes Explanations of Psychological Phenomena More Appealing. Journal of Cognitive Neuroscience, 27(5), 926-944. DOI: 10.1162/jocn_a_00750

--further reading--
People are quicker to dismiss evidence from psychology than neuroscience

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Tuesday, 20 January 2015

When our beliefs are threatened by facts, we turn to unfalsifiable justifications

On being told physics could undermine religious claims, believers said faith was more about living a moral life
It's great to have facts on your side. The fundamentalist is delighted by the archaeological find that tallies with scripture, just as the atheist seizes on the evidence that contradicts it. But when the evidence goes against us, we're less likely to change a belief than to criticise the validity or provenance of the evidence. Now, research suggests that the mere prospect of a factual threat leads us to downplay how much our belief depends on such evidence at all. We become attracted to other, less falsifiable reasons for believing.

Justin Friesen and his colleagues conducted a series of studies each with a hundred or more participants. The first presented participants with a summary statement from a conference on science and God. When it suggested that science could one day settle the question of God's existence, religious participants wavered in their religious conviction, rating it significantly lower than those told that science was not armed to answer such questions. The very possibility that the religious belief was falsifiable made it vulnerable.

A subsequent study presented the discovery of the Higgs Boson as either a threat to or unlikely to affect matters of religion. Asked what reasons underpinned their belief, religious participants gave more importance to unfalsifiable statements such as "living a moral life would be impossible without God" when told the particle was a threat, and relatively less to evidence-linked statements such as "historical and archaeological evidence shows how God intervened in the world."

This effect wasn't restricted to religious belief. In another study, supporters and opponents of same-sex marriage were shown data on life outcomes of children raised by same-sex couples; by presenting these outcomes as either positive or troubled, participants were exposed to data that either supported or undermined their position. When the facts were on their side, they rated the issues of same-sex marriage and child-rearing as a matter for evidence to decide; when the facts were against them, they saw it as more a matter of opinion.

The authors speculate that this tendency to retreat to unfalsifiable justifications may mean that many beliefs, over time, shed their evidential component and become increasingly unchallengeable. But they also note that unfalsifiability may have important psychological value, for instance in making beliefs such as "love is real" or "genocide is wrong" inviolable, when compromising them could be deeply distressing and disorientating. Whether we cherish this or bemoan it, our belief systems are laced with unfalsifiable aspects that won't be budged by evidence alone.

_________________________________

Friesen, J., Campbell, T., & Kay, A. (2014). The Psychological Advantage of Unfalsifiability: The Appeal of Untestable Religious and Political Ideologies. Journal of Personality and Social Psychology DOI: 10.1037/pspp0000018

--further reading--
Five minutes with the discoverer of the "Scientific Impotence Excuse"
The unscientific thinking that forever lingers in the minds of physics professors
Paranormal believers and religious people are more prone to seeing faces that aren't really there
Can psychology help combat pseudoscience?

Post written by Alex Fradera (@alexfradera) for the BPS Research Digest.

Thursday, 6 March 2014

Three-year-olds show greater suspicion of circular arguments than adults

Children aren't as gullible as you might think. Early in life they display a discernment that psychologists call "epistemic vigilance". They are more likely to trust information from experts compared with novices, from kind people rather than meanies, and from those they are familiar with, as opposed to strangers. Now a study shows that even by age three, children are sceptical about circular arguments; in some cases even more than adults.

Hugo Mercier and his team presented 84 children aged 3 to 5 (and a control group of adults) with three illustrated vignettes in which a girl was looking for her dog. For each story, one character advised the girl of the dog's whereabouts with an argument based on what they'd seen: "The dog went this way because I've seen him go in this direction," (this is known as an "argument from perception" and it was spoken in a neutral voice played through speakers). A second character said the dog had gone in the other direction and gave a circular argument, "The dog went this way because he went in this direction" (also heard through speakers).

Children from age three and up, and the adults, more often chose to believe the character who based their testimony on what they'd seen rather than on a circular argument. This supports the idea that children from three and upwards have epistemic vigilance. "These results point to the existence of basic skills of argument evaluation that children would possess from at least three years of age onwards," the researchers said.

One developmental trend was that older children grew more consistent in their preferences: as the children got older, they more often favoured either the argument from perception on every occasion, or (in a minority of cases) the circular argument on every occasion. Focusing on just those participants who always made the same choice, an intriguing pattern emerged. A minority of the four- and five-year-olds, and of the adults, always favoured the circular arguments, but none of the three-year-olds showed this pattern. In a sense, then, some older children and adults were less sophisticated in their judgment of arguments than the three-year-olds.

How could this be? Mercier and his team think that as they get older, some children and adults become dependent on a rule of thumb that mistakes circular arguments for a sign of dominance or authority. When a person says that "the dog went this way because he went in this direction" this is interpreted as equivalent to an authoritative person saying, "this is the case because I say so."

To test this idea, the same children were tested on a similar task to before, but this time one character used a circular argument for a cat's location, while the other character provided no argument (i.e. they just said "The cat went this way"). Preference for circular arguments would be evidence that they are interpreted as having value beyond no argument at all. In this case, three-year-olds were equally likely to trust either form of advice, while a large number of four- and five-year-olds consistently chose to trust the circular arguments. That is, older children, but not the three-year-olds, saw more value in a circular argument than in no argument at all.

Many children display a distrust of circular arguments from a very early age. However, the findings also reveal an intriguing developmental trend, in which a minority of slightly older children begin to be seduced by circular arguments (a weakness that also persists in a minority of adults). This is likely due to them interpreting such arguments as a sign of authority. Such an inference requires a complexity of social thinking that is beyond three-year-olds. Ironically, this means that three-year-olds end up being more canny in their distrust of circular arguments than even some adults.

_________________________________

Mercier, H., Bernard, S., & Clément, F. (2014). Early sensitivity to arguments: How preschoolers weight circular arguments. Journal of Experimental Child Psychology PMID: 24485755

--Further reading--
Young children trust kindness over expertise
Lying is common at age two, becomes the norm by three
Kids experience schadenfreude by age four, maybe earlier

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Thursday, 2 January 2014

Activists have an image problem, say social psychologists

An activist shouts at the Power Shift '09 rally on the West Lawn of the U.S. Capitol in 2009 | Getty Images.
When you picture a feminist or an environmental campaigner, what kind of a person do you think of? If you're like the US and Canadian participants in this new paper, then you'll have in mind an eccentric, militant, unhygienic person. Nadia Bashir and her colleagues say this commonly held stereotype of an activist is partly responsible for the sluggishness of social change. Large sections of the public agree with activists' messages, but are put off by not wanting to affiliate themselves with the kind of person they think makes an activist.

Bashir's team conducted five main studies and three pilot investigations. The pilot work involved Canadian students and US participants recruited online, and was used to establish the characteristics - militant, eccentric and so on - that people tend to associate with a typical feminist or environmentalist.

For one of the main studies, undergrads read about either a "typical" feminist, who took part in rallies, or an atypical feminist, who used less abrasive techniques, such as holding social events to raise money for feminist causes. Next, all the students read an article, ostensibly written by the aforementioned feminist, about the unfair obstacles that women continue to face. Finally, the students declared their intentions to adopt pro-feminist behaviours, such as getting involved in pro-women's rights initiatives.

The students who read about a typical feminist tended to assume she had more negative stereotypical traits, such as being militant and eccentric. What's more, after reading her article, these same students tended to report fewer intentions to engage in pro-feminist behaviours themselves, as compared with students who'd encountered the atypical feminist and her article. These two things were linked - mediation analysis suggested students who encountered the typical feminist and her article had lower pro-feminist intentions because they saw the feminist as having stereotypical activist traits.

The gist of these findings was replicated in another study with a sample of 140 US participants recruited online, and with the focus on an environmentalist rather than a feminist. This study also showed that participants were less inspired by the arguments of a more typical militant environmentalist, not just because of seeing him as having more negative stereotypical traits, but also because of not wanting to affiliate with him.

Past research on people's advocacy for social change has tended to focus on their beliefs about the issue at hand, or on the personality characteristics of people who tend to favour or oppose social change. This study is novel in that it focuses instead on people's perceptions of those who campaign for social change. The findings have obvious real-life implications for activists. "… [S]eemingly zealous dedication to a social cause may backfire and elicit unfavourable reactions from others," the researchers said. "… [T]he very individuals who are most actively engaged in promoting social change may inadvertently alienate members of the public and reduce pro-change motivation."
_________________________________

Nadia Y. Bashir, Penelope Lockwood, Alison L. Chasteen, Daniel Nadolny and Indra Noyes (2013). The ironic impact of activists: Negative stereotypes reduce social change influence. European Journal of Social Psychology DOI: 10.1002/ejsp.1983

--Further reading--
Political activism is good for you
How weak arguments can make a more effective call to arms than strong arguments

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Thursday, 11 July 2013

How weak arguments can make a more effective call to arms than strong arguments

We often think of persuasion in terms of converting people to our side of an argument. Just as important in many contexts is the need to inspire supporters to do more to help a cause they already believe in. In a new paper, Omair Akhtar and his colleagues provide evidence for a counter-intuitive principle: presenting people with weak arguments for a cause they already believe in makes for a more powerful call to arms than presenting them with strong arguments.

In an initial study, 165 US citizens were presented with either weak or strong arguments made by other voters in favour of Barack Obama's re-election as President. Among the participants who were already supportive of Obama, exposure to the weak rather than strong arguments led them to say they had more intention to persuade other people to vote for Obama. This association was mediated by their feeling more confident about their own persuasive powers. It's as if seeing the poor quality of arguments made by other pro-Obama voters had inspired them to feel they had a valuable contribution to make to the cause. The intentions of anti-Obama participants were unaffected by the strength of the pro-Obama arguments.

Further studies sought to test the limits of this counter-intuitive principle. In one, pro-Obama participants were given false feedback on an earlier writing challenge, indicating either that they were skilled debaters or only average. Exposure to weak pro-Obama arguments subsequently acted as a powerful call to arms, but only for participants who'd earlier been told they were merely average at debating. This reinforces the idea that weak arguments can inspire people to feel they have something to contribute to a cause they believe in, but only if their self-belief is relatively low. If a person is already confident in their debating skills, seeing the weak arguments doesn't seem to have the same inspirational effect.

Another study was based around arguments against a proposal to make a school cafeteria entirely vegetarian. For people who already agreed with the cause (i.e. they too were against the vegetarian proposal), exposure to weak arguments increased their determination to join the campaign, but only if they were initially uncertain about their attitudes toward the issue. Again the implication is that weak arguments can be inspiring for people who are initially less sure of themselves. Seeing the weakness of the existing arguments prompts them to see the valuable contribution they can make.

This same principle was replicated in a fourth study that tested people's actual advocacy behaviour rather than just their intentions. Uncertain participants already against the cafeteria proposal who were shown weak (anti all-vegetarian menu) arguments tended to write longer messages of their own against the plans to create a vegetarian-only cafeteria.

Omair Akhtar and his colleagues emphasised they aren't claiming weak arguments are more persuasive than strong arguments. This isn't about attitude change. Rather, they're saying that weak arguments can more effectively inspire "pro-advocacy" intentions and behaviour in people who are already in agreement with the cause.

"Counter-intuitively it might sometimes behoove advocacy groups to expose their supporters to weak arguments from others - especially if those supporters are initially uncertain about their attitudes or about their ability to make the case for them," the researchers said. "This suggestion might be somewhat radical," they added, "but the current results indicate that it is at least worth considering the potential benefits of weak arguments in advocacy contexts."
_________________________________

Akhtar, O., Paunesku, D., & Tormala, Z. L. (2013). Weak > Strong: The Ironic Effect of Argument Strength on Supportive Advocacy. Personality and Social Psychology Bulletin PMID: 23798375

--Further reading--
Strong reassurances about vaccines can backfire
Scary health messages can backfire

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Wednesday, 27 October 2010

Five minutes with the discoverer of the Scientific Impotence Excuse, Geoffrey Munro

When attempting to change people’s behaviour – for example, encouraging them to eat more healthily or recycle more – a common tactic is to present scientific findings that justify the behaviour change. A problem with this approach, according to recent research by Geoffrey Munro at Towson University in America, is that when people are faced with scientific research that clashes with their personal view, they invoke a range of strategies to discount the findings.

Perhaps the most common of these is to challenge the methodological soundness of the research. However, with newspaper reports and other brief summaries of science findings, that’s often not possible because of lack of detail. In this case, Munro's research suggests that people will often judge that the topic at hand is not amenable to scientific enquiry. What’s more, he’s found that, having come to this conclusion about the specific topic at hand, the sceptic will then generalise their belief about scientific impotence to other topics as well. Munro says that by embracing the general idea that some topics are beyond the reach of science, such people are able to maintain belief in their own intellectual credibility, rather than feeling that they’ve selectively dismissed unpalatable findings.

The Digest caught up with Professor Munro to ask him, first of all, whether he thinks there are any ways to combat the scientific impotence excuse or reduce the likelihood of it being deployed.
"One of the most difficult things to do is to admit that you are wrong. In cases where a person is exposed to scientific conclusions that contradict her or his existing beliefs, one option would be to accept the scientific conclusions and change one’s beliefs. It sounds simple enough, and, for many topics, it is that simple. However, some of our beliefs are much more resistant to change. These are the ones that are important to us. They may be linked to other important aspects of our identity or self-concept (e.g., “I’m an environmentalist ”) or relevant to values that are central to who we are (e.g., “I believe in the sanctity of human life”) or meaningful to the social groups to which we align ourselves (e.g., “I’m a union man like my father and grandfather before him”) or associated with deeply-held emotions (e.g., “Homosexuality disgusts me”). When scientific conclusions challenge these kinds of beliefs, it’s much harder to admit that we were wrong because it might require a rethinking of our sense of who we are, what values are important to us, who we align ourselves with, and what our gut feelings tell us. Thus, a cognitively easier solution might be to not admit our beliefs have been defeated but to question the validity of the scientific conclusions. We might question the methodological quality of the scientific evidence, the researcher’s impartiality, or even the ability of scientific methods to provide us with useful information about this topic (and other topics as well). This final resistance technique is what I called “scientific impotence”.

So, how can strongly-held beliefs be changed? How can scientific evidence break through the defensive tenacity of these beliefs? Well, I hope the paragraph above illustrates how scientific evidence can be threatening when it challenges an important belief. It makes you feel anxious, upset, and/or embarrassed. It makes you question your own intelligence, moral standing, and group alliances. Therefore, the most effective ways to break the resistance to belief-challenging scientific conclusions is to present such conclusions in non-threatening ways. For example, Cohen and his colleagues have shown that affirming a person’s values prior to presenting belief-challenging scientific conclusions breaks down the usual resistance. In other words, the science is not so threatening when you’ve had a chance to bolster your value system. Relatedly, framing scientific conclusions in a way that is consistent with the values of the audience is more effective than challenging those values. Research from my own laboratory shows that reducing the negative emotional reactions people feel in response to belief-challenging scientific evidence can make people more accepting of the evidence. We achieved this by giving participants another source (something other than the scientific conclusions they read) to which they could attribute their negative emotional reactions. While this might be difficult to implement outside of the laboratory, we believe that other factors can affect the degree to which negative emotional reactions occur. For example, a source who speaks with humility is less upsetting than a sarcastic and arrogant pundit. Similarly, the use of discovery-type scientific words and phrases (e.g., “we learned that…” or “the studies revealed that…”) might be less emotionally provocative than debate-type scientific words and phrases (e.g., “we argue that…” or “we disagree with so-and-so and contend that…”). 
In fact, anything that draws the ingroup-outgroup line in the sand is likely to lead to defensive resistance if it appears that the science or its source is the outgroup. So, avoiding culture war symbols is crucial. Finally, as a college professor, I believe that frequent exposure to critical thinking skills, practice with critical thinking situations, and quality feedback about critical thinking allows people to understand how their own biases can affect their analysis of information and result in open-minded thinkers who are skeptical yet not defensive."
Next, the Digest asked Prof Munro whether he thinks psychology findings are particularly prone to provoke scientific discounting cognitions - and if so, should we as a discipline make extra effort to combat this?
"Yes, I believe psychological research (and probably social science research in general) is prone to provoke scientific discounting. The term “soft science” illustrates how social sciences are perceived differently than the “hard sciences”. There are a number of reasons why this might be true. First, much psychological research is conducted without the use of technologically-sophisticated laboratories containing the fancy equipment that comes to many people’s minds when the word science is used. In other words, psychological research doesn’t always resemble the science prototype. Supporting this position, psychological research that is conducted in high-tech labs (e.g., neuroscience imaging studies) is, in my opinion, perceived with less skepticism by the general public. Second, psychological research often investigates topics about which people already have subjective opinions or, at least, can easily call to mind experiences from their own lives that serve as a comparison to the research conclusions. In other words, people often believe that they already have knowledge and expertise about human thought and behavior. When their opinions run counter to psychological research conclusions, then scientific discounting is likely. For example, there is a common belief that cathartic behaviors (e.g., punching a punching bag) can reduce the frustrations that sometimes lead to aggression. Psychological research, however, has contradicted the catharsis hypothesis, yet the belief remains entrenched, possibly because it has such a strong intuitive appeal. In contrast, people will quickly reveal their lack of expertise on topics in physics or chemistry and have a harder time calling to mind examples from their own lives. 
Third, there is likely some belief that people’s thoughts and behaviors are less predictable, more mysterious, and affected by more variables than are inanimate objects like chemical molecules, planets in motion, or even the functioning of some parts of the human body (e.g., the kidneys). Furthermore, psychological conclusions are based on probability (e.g., the presence of a particular variable makes a behavior more likely to happen), and probability introduces the kind of ambiguity that makes the conclusions easy to discount. Fourth, some psychological research is perceived to be derived from and possibly biased by a sociopolitical ideology. That is, there is the belief that some psychologists conduct their research with the goal of providing support for some political viewpoint. This is somewhat less common among the “hard sciences” although the controversy over climate change and the researchers who investigate it suggest that if the topic is one that elicits the ingroup-outgroup nature of the cultural divide, then the “hard sciences” are also not immune to the problem of scientific discounting.

I think that the discipline of psychology has already made vast improvements in managing its public impression and is probably held in higher esteem than it was 50 or even 20 years ago. However, continued vigilance is essential against those (both within and outside of the discipline) who contribute to the perception of psychology as something less than science. The field of psychology has much to offer – it can generate important knowledge that can inform public policy and increase people’s health and happiness, but it cannot do so if its scientific conclusions fall on deaf ears."
_________________________________

Munro, G. (2010). The Scientific Impotence Excuse: Discounting Belief-Threatening Scientific Abstracts. Journal of Applied Social Psychology, 40 (3), 579-600 DOI: 10.1111/j.1559-1816.2010.00588.x

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Tuesday, 25 May 2010

The psychological barriers facing MMR promotion campaigns

A focus group study of parents' attitudes towards interventions promoting uptake of the MMR vaccine suggests it is better for health advice to be seen as independent from government.

The findings come after the General Medical Council ruled yesterday that Andrew Wakefield, the doctor who first suggested a link between the MMR vaccine and autism, was guilty of serious professional misconduct.

The MMR vaccine protects children against measles, mumps and rubella. Unfortunately the number of UK parents vaccinating their children plummeted in the wake of Wakefield's 1998 Lancet study, since discredited and never replicated, which purported to show a link between the MMR vaccine and autism. Today vaccination rates remain at around 85 per cent, short of the 90 to 95 per cent required for herd immunity (whereby even the unvaccinated are safe).

For the new study, Benjamin Gardner and colleagues analysed five focus group interviews they held with 28 parents in London. The parents were asked for their responses to three 'motivation-based' interventions (a website; an information pack; and parent-led group discussions) and three 'organisational interventions' (health care workers acting as immunisation champions; mobile vaccination units; legislation to penalise non-compliers).

Five key themes emerged. Parents felt they didn't have enough information, especially in relation to the dangers associated with not vaccinating. Government sources were not trusted. By contrast, other parents were trusted: 'Parents trust advice from other parents,' one mother said. '[You] take it on board. You listen to them.' Parents also revealed they were biased towards risk-related information. And they misunderstood balance, believing that pro- and anti-MMR arguments should be given equal weight even though the scientific evidence overwhelmingly favours MMR vaccination.

Gardner's team said a number of practical implications emerged from their findings. In particular, promotional MMR campaigns are likely to be better received if they appear to be independent of government and if they are fronted by parents. More information is needed about the risks of non-vaccination. And care should be taken when highlighting the small risks associated with vaccination - parents are likely to zoom in on these.

The researchers acknowledged their study has some limitations, most notably that the majority of the parents involved had actually vaccinated their children. Nonetheless, they said their results 'highlight important psychological barriers and facilitators that may determine whether MMR promotion interventions are effective.'
_________________________________

Gardner, B., Davies, A., McAteer, J., & Michie, S. (2010). Beliefs underlying UK parents' views towards MMR promotion interventions: a qualitative study. Psychology, Health & Medicine, 15 (2), 220-30 PMID: 20391239

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Also on the Digest: How to promote the MMR vaccine.

Tuesday, 6 April 2010

People have an intuitive understanding of the science of persuasion

Psychologists have devoted entire careers to finding out how people can be persuaded, but far less time investigating what people know intuitively about persuasion.

Now Karen Douglas and colleagues at Kent University have bucked this trend with a paper which they say shows people have an intuitive understanding of how a person's thinking style affects their vulnerability to persuasion, known formally as 'the elaboration likelihood model'. This is the idea, supported by research findings, that people who have a greater inclination for thinking things through tend to be less swayed by adverts that use superficial tricks like beautiful models and slick graphics, but are more persuaded by adverts that make an intelligent argument. The jargon for the character trait in question is 'need for cognition'.

Douglas' team asked 132 non-psychology undergrad students to rate either themselves or 'other students in their class' on their weak-mindedness, their strong-mindedness and their 'need for cognition'. Next the students looked at six colour advertisements that used style rather than intelligent argument to promote things like food and mobile phones, and their task was to say how much either they or typical undergrads in their class would be persuaded by those adverts.

The key finding was that the participants' judgments about their own or other people's vulnerability to the adverts were strongly related to the scores they gave on 'need for cognition', above and beyond the relation to strong- and weak-mindedness. In other words, if they rated themselves or other students as low on this measure, they also tended to say that they or the other students would be swayed by the ads. It's as if they were applying the rules of psychology's 'elaboration likelihood model' even though it's highly unlikely they'd ever heard of such a thing.

Another finding to come out of the research was that the students tended to think other people would be swayed by the adverts far more than they would be themselves - a well-established phenomenon in persuasion research. Past studies have suggested that this tendency to think other people will be more prone to persuasion is just another expression of our egotistical tendency to see ourselves as better than average. However, the current study suggested instead that we think other people will be more prone to persuasion (by superficial ads) because we think they have less 'need for cognition'. We probably make this assumption, the researchers said, not for self-serving, egotistical reasons but simply because we 'have greater access to our own thoughts, and therefore to occasions in which we were personally motivated to think.'

Concluding their paper, Douglas's team said: 'This research provides the first evidence that people do indeed use their intuitive understanding of persuasion and the personal characteristics associated with persuasion, to judge the extent to which persuasive attempts will be successful.'

In as yet unpublished research Douglas has further shown how people are able to make effective use of their lay theories of persuasion. In one study participants tailored mobile phone adverts appropriately according to an audience who were described as being either high or low in need for cognition. For example, for consumers who think more, the participants chose an ad with more detail on technical specifications.

'We and Tobias Vogel at the University of Heidelberg have a lot of data on this topic ... it's very interesting because no research (to our knowledge) to date investigates people's lay theories of persuasion and certainly not how people use these theories to persuade people,' Douglas told the Digest.

'Our findings suggest that people do have some kind of awareness of how persuasion works and can use their knowledge to attempt to persuade people. It's just the beginning really - while people seem to have an intuitive understanding of how thinking style relates to persuadability, it could plausibly extend to other aspects of persuasion and persuasive techniques such as social norms and the foot-in-the-door technique.'
_________________________________

Douglas, K., Sutton, R., & Stathi, S. (2010). Why I am less persuaded than you: People's intuitive understanding of the psychology of persuasion. Social Influence, 5 (2), 133-148 DOI: 10.1080/15534511003597423

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.