
Thursday, 28 July 2016

Neuro Harlow: The effect of a mother's touch on her child's developing brain

In the 1950s, the American psychologist Harry Harlow famously showed that infant rhesus monkeys would rather cling to a surrogate wire mother covered in cosy cloth than to one that provided milk. A loving touch is more important even than food, the findings seemed to show. Around the same time, the British psychoanalyst John Bowlby documented how human children deprived of motherly contact often go on to develop psychological problems. Now this line of research has entered the neuroscience era with a study in Cerebral Cortex claiming that children with more tactile mothers tend to have more developed social brains.

Jens Brauer and his colleagues videoed 43 mum-child dyads as they sat together on a couch and played with a Playmobil Farm. The mothers knew they were being filmed but didn't know the aims of the study. There were 24 boys and 19 girls and their average age was 5.5 years. Coders then watched the videos back and counted every instance in which the mothers touched their child or vice versa. Finally, within the next two weeks, the researchers scanned each child's brain while they lay as still as possible looking at a lava lamp screensaver (a brain imaging technique known as a resting-state scan).

The researchers were particularly interested in levels of resting activity in the children's brains in a network of areas known to be involved in functions such as empathy and thinking about other people's mental states – sometimes referred to as the "social brain". They found that the children who were touched more by their mother in the ten-minute play session tended to have more resting activity in the social brain, especially the right superior temporal sulcus (STS). Children who received more touch also showed more resting connectivity between different functional nodes within their social brain, such as between the STS and the inferior frontal gyrus and the left insula.

Children touched more by their mother also usually touched their mothers more, but the links between mothers' touch and the children's neural activity were still significant after factoring this out.

Previous research has found that greater resting activity in a person's social brain is linked with their social and emotional abilities, such as being able to take other people's perspective. Based on this, the researchers said "one may speculate that children with more touch more readily engage the mentalizing component of the 'social brain' and that, perhaps, their interest in others' mental states is greater than that of children with less touch."

The research has some serious limitations, most obviously – and as the researchers acknowledged – that the results are correlational, so it's possible unknown factors are driving differences in amounts of motherly touch and in the children's brain development. For example, perhaps some mothers are more engaged on many levels, including talking to their children more. Such mothers might be more tactile, but it could be, for instance, the way they talk to their children that is responsible for the brain differences. Another major factor, not mentioned by the researchers, is potential genetic effects. The same genes driving tactile behaviour in mothers might be passed down to their children, influencing their brain development. It's also worth noting that it remains to be seen whether similar results would be found for levels of touch from a father or other caregiver.

These issues aside, Brauer and his colleagues ask us to consider their results in light of animal research that is able to experimentally control how much motherly touch different individual animals are exposed to. This has shown that greater maternal touch is associated with important brain changes in rats, for example in the way their brains respond to stress, and that rats raised with more touch go on to be more tactile towards their own offspring. "On the backdrop of this work then, it is not unreasonable to suspect a potential causal role of touch for human development," the researchers said.

_________________________________

Brauer, J., Xiao, Y., Poulain, T., Friederici, A., & Schirmer, A. (2016). Frequency of Maternal Touch Predicts Resting Activity and Connectivity of the Developing Social Brain. Cerebral Cortex, 26(8), 3544-3552. DOI: 10.1093/cercor/bhw137

--further reading--
Neuro Milgram – Your brain takes less ownership of actions that you perform under coercion
Babies' anxiety levels are related to their fathers' nervousness, not their mothers'
It's thanks to Dad that girls are more cautious than boys

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Our free weekly email will keep you up-to-date with all the psychology research we digest: Sign up!

Tuesday, 24 May 2016

Study of firefighters shows our body schema isn't always as flexible as we need it to be

The results could help explain some of the many injuries incurred by firefighters each year
Your brain has a representation of where your body extends in space. It's how you know whether you can fit through a doorway or not, among other things. This representation – the "body schema" as some scientists call it – is flexible. For example, if you're using a grabbing tool or swinging a tennis racquet, your sense of how far you can reach is updated accordingly. But there are limits to the accuracy and speed with which the body schema can be adjusted, as shown by an intriguing new study in Ecological Psychology about the inability of firefighters to adapt to their protective clothing.

Indeed, the researchers at the University of Illinois at Urbana-Champaign and the Illinois Fire Service Institute believe their findings may help explain some of the many injuries sustained by firefighters (of which there were over 65,000 in 2013 alone), and that they could have implications for training.

The participants were 24 firefighters (23 of them men) with an average age of 29 and an average of six years' experience in the job, all of whom were recruited through the University of Illinois Fire Service. The researchers, led by Matthew Petrucci, asked the participants to don the full protective kit, including a bunker-style coat, helmet and breathing apparatus. As well as the weight and bulk of the gear affecting the participants' ability to move freely, it also changed their physical dimensions – for instance, the helmet added 21cm to their height, and the breathing apparatus added 21cm of depth to their body.

The researchers created three main obstacles designed to simulate situations in a real-life fire: a horizontal bar that the firefighters had to go under, a bar that they had to go over, and a vertical gap between a mock door and wall that they had to squeeze through. All of these were adjustable, and the participants' first task was to estimate what height bar they could manoeuvre over, what height they could manoeuvre under, and what width gap they could squeeze through. To make these judgments, the researchers adjusted the obstacles in height or width, and for each setting the firefighters said whether they thought they could safely pass the obstacle.

For the next stage, the firefighters actually attempted to manoeuvre over, under or through the different obstacles, which were adjusted to make them progressively harder to complete. The idea was to find the lowest, highest and narrowest settings that the firefighters could pass through safely and quickly. To count as a safe passage, the firefighters had to avoid knocking off the delicately balanced horizontal bar for the over and under obstacles, and avoid touching their hands to the floor, or dumping their gear.

Despite the firefighters having many years' experience wearing protective gear and breathing apparatus, the results showed that there was little correspondence between their judgments about the dimensions of the obstacles they could safely pass under, over or through, and their actual physical performance. In psychological jargon, the firefighters made repeated "affordance judgment errors", misperceiving the movements "afforded" to them by different environments.

The participants' judgments were most awry for passing under a horizontal bar – on average they thought they could pass under a bar that was 15cm lower than the height they could actually go under. Errors related to the over obstacle were a mix of over- and underestimations, and for the through obstacle 80 per cent of participants underestimated their ability by four to five cm – in other words, they thought they couldn't pass through when actually they could. In a real-life situation, this could lead to wasted time or unnecessary danger as they sought a more circuitous route.

The results suggest that the firefighters struggled to adjust their body schemas to account for their gear, and it's easy to see how this problem could lead to accidents in a burning building. It seems strange that they hadn't learnt to take account of their gear through experience, but in fact the converse was true – the more experienced firefighters made more errors. The researchers propose several explanations for this, including that specific experiences may be needed to recalibrate the body schema to specific obstacles. Also, the firefighters' training in manoeuvring in their gear mostly comes at the start of their careers, and the benefits may have faded. Refresher training may be helpful, especially for learning one's changing capabilities with ageing.

The researchers said that their results were important because "affordance judgment errors made on a fireground could contribute to injuries attributed to contact with ceilings, doors, structural components of buildings, and other objects with slips, trips, and falls."

_________________________________

Petrucci, M., Horn, G., Rosengren, K., & Hsiao-Wecksler, E. (2016). Inaccuracy of Affordance Judgments for Firefighters Wearing Personal Protective Equipment. Ecological Psychology, 28(2), 108-126. DOI: 10.1080/

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.


Friday, 6 May 2016

Smartphone study reveals the world's sleeping habits

Middle-aged men get the least sleep, the research found
Researchers in the USA have used a smartphone app to see how people's sleep habits vary around the world. More specifically, they've investigated how much the timing of sunrise and sunset affects people's sleep times, or whether social and cultural factors are more important. "Quantifying these social effects is the next frontier in sleep research," they write in the paper in Science Advances.

The study involved the ENTRAIN smartphone app which helps people recover from jet lag by recommending ideal levels of light exposure based on a user's typical sleep routine. Users have the option to make their information available for research. Olivia Walch and her colleagues at the University of Michigan began collecting data from the app in 2014 and the new analysis is based on information sent in by 8070 users around the world during the first year.

Overall the data showed that a later sunrise goes hand in hand with later waking-up times, and that a later sunset is associated with people going to bed later, just as predicted based on how light affects the suprachiasmatic nucleus – the bundle of neurons behind the eyes that controls our sleep cycle, also known as the circadian rhythm. But crucially, the link between sunset and bedtime was weaker than biological explanations would predict.

Put differently, the time we get up is strongly influenced by the timing of sunrise, but the time we go to bed is not as strongly influenced by sunset, suggesting other social and cultural factors are involved. Consistent with this account, most of the cross-cultural differences in sleep – for example, the Dutch reported the most sleep and Singaporeans the least – were explained by later bed times in the countries getting less sleep.

The difference between the countries with the most and least sleep wasn't huge: just under 7.5 hours for Singapore and just over 8.1 hours for The Netherlands. But the researchers emphasised that even a 30-minute difference is meaningful, especially when you consider that sleep debt can have a cumulative effect over time.

Users of the app from the UK averaged about eight hours' sleep (a healthy amount), with an average wake time just after 7am and an average bedtime just before 11.15pm.

The researchers were also able to use the smartphone data to compare sleep habits by age, gender, and time spent exposed to natural light. Age was the most important factor, with older people tending to go to sleep earlier. There was also much less variability in the sleep times of older users, which could be because of biological mechanisms that narrow the window of opportunity during which it's easy for older people to fall asleep.

This age-related finding could have everyday relevance – "being careful about how much light affects your circadian clock could be more and more important to sleep as you get older," the researchers said. If your body's only willing to sleep between fairly limited hours, you're best off listening to it and switching off that TV.

Meanwhile, women were found to get more sleep than men – 30 minutes more, on average – thanks both to going to bed earlier and waking up later. The gender difference was greatest in mid-life, so that middle-aged men are the demographic group getting the least sleep, on average.

In terms of exposure to outdoor, natural light, app users who had more of this tended to report going to sleep earlier and sleeping more, which is as you'd expect based on the effect of daylight hours on the brain's circadian clock.

The researchers concluded that their results "point to the suppression of circadian signaling at bedtime as an important target for clinical sleep intervention; and suggest that age-related differences in the window during which sleep can occur are evidenced on a global scale". Aside from these specific insights into sleep, the group also said their findings show the power of modern smartphone technologies as a research tool. "This is a cool triumph of citizen science," said co-author Daniel Forger in a press release.

--A global quantification of "normal" sleep schedules using smartphone data

_________________________________
Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.


Thursday, 5 May 2016

Literally "knowing by heart": Signals from heart to brain prompt feelings of familiarity

The idea that the body affects the mind is not new. The eminent American psychologist William James famously proposed that it is actually the physical sensation of fear that causes us to feel afraid. In more recent years, researchers have extended this principle, exploring the possibility that physical sensations play an important role in moral decisions and other processes usually seen as more purely cognitive or cerebral, such as memory.

It's already known that physical markers of arousal such as dilated pupils correlate with feelings of familiarity, but this could be because of the mental effort of remembering, rather than because physical arousal triggers the feeling of familiarity. Now a pioneering study in Journal of Experimental Psychology: General gets around this problem by testing people's memory for faces presented at specific phases of the heart beat. The results provide compelling evidence that our feelings and judgments about familiarity are influenced by signals arising from the heart.

Chris Fiacconi at Western University and his colleagues began by asking 37 undergrads to look at 68 fearful faces, presented for 1.5 seconds each. Next, the participants looked at 136 more faces – half had appeared previously and half were new – and their task was to say whether they had seen them before. Crucially, the participants were wired up to a heart monitor and, during the memory test, some of the faces were presented at the precise moment that the heart had just pumped a burst of blood into the arteries – the so-called systole phase – while the other faces were presented while the heart was relaxing, known as the diastole phase.

This is important because the systole phase increases blood pressure, which is detected by baroreceptors in the heart's arteries, and in turn the baroreceptors signal this change in pressure to various regions in the brain, including the brain stem but also higher brain areas involved in cognition. The amazing revelation from this first study was that participants were significantly more likely to say that a face was familiar if it was presented during the systole phase. This was true for faces that were old and also for those that were actually new. A follow-up study using neutral faces produced the same findings.

A final study made things a little more elaborate. During the memory test for the faces, whenever participants thought they'd seen a face before, they were asked to clarify whether they actually recollected seeing it, or if it just felt familiar but they did not actually remember seeing it. Intriguingly, presenting a face during the heart's systole phase increased participants' tendency to report a sense of familiarity, but did not affect their claims to actually recollect having seen it.

These are dramatic results because they suggest that when making memory judgments, we don't necessarily rely only on our actual memory traces in the brain, but that we also interpret our physiological sensations. By presenting faces at a specific moment in the heart beat cycle, the current research effectively hacks into this system to trick participants into thinking they've seen new faces before. As the researchers state – in such cases "participants may interpret the transient increase in arousal that results from baroreceptor mediated feedback as owing to the familiarity of the stimulus probe."

Where next for this line of research? Fiacconi and his colleagues said that an important future goal is "to determine whether other epistemic feelings, such as feelings of knowing, tip-of-the-tongue-states, and deja-vu experiences are also shaped by this type of visceral feedback."

_________________________________

Fiacconi, C., Peter, E., Owais, S., & Köhler, S. (2016). Knowing by heart: Visceral feedback shapes recognition memory judgments. Journal of Experimental Psychology: General, 145(5), 559-572. DOI: 10.1037/xge0000164

--further reading--
People make more moral decisions when they think their heart is racing
Neuroscience lessons in body awareness from the man with two hearts
Does your heart rate hold the secret to getting in “the zone”?

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.


Tuesday, 3 May 2016

A laughing crowd changes the way your brain processes insults

We usually think of laughter as a sound of joy and mirth, but in certain contexts, such as when it accompanies an insult, it takes on a negative meaning, signaling contempt and derision, especially in a group situation. Most of us probably know from experience that this makes insults sting more; now a study in Social Neuroscience has revealed the neural correlates of this effect. Within a fraction of a second, the presence of a laughing crowd changes the way that the brain processes an insult.

Marte Otten and her colleagues asked 46 participants to read 60 insults and 60 compliments presented on-screen one word at a time. Half these insults (e.g. "You are antisocial and annoying") and compliments (e.g. "You are strong and independent") featured the silhouette of a crowd of people at the bottom of each screen, and the end of the insult or compliment was followed immediately by a final screen showing the phrase "and they feel the same way" together with the sound of laughter lasting two seconds. Throughout this entire process, the researchers recorded the participants' brainwaves using EEG.

Otten's team were particularly interested in the N400 – a negative spike of brain activity that tends to be larger when people hear something unexpected or incongruent with the context – and in the so-called "Late Positive Potential (LPP)" which is a positive spike of brain activity that can occur 300ms to 1 second after a stimulus and is usually taken as a sign of emotional processing.

The participants' brains appeared to register the difference between insults and compliments incredibly quickly. Within 300 to 400ms after the onset of the first insulting or complimentary word, the participants showed a larger LPP in response to insults, and a more widespread N400.

Moreover, when there was the sound of laughter, the size of the LPP was even greater in the insults condition, whereas the compliments condition was unchanged. In other words, insults almost immediately prompt more emotional processing in the brain than compliments, and this more intense processing is accentuated rapidly by a public context and the sound of laughter.

The researchers said their findings are "highly relevant for research that focuses on negative interpersonal interactions such as bullying, or interpersonal and intergroup conflict." They added: "While the insulted is still busy reading the unfolding insult, the extra sting of publicity is already encoded and integrated in the brain."

A problem with interpreting the specifics of the study arises from the way that it combined a visual signal of a public context (the silhouette of a crowd) and the sound of laughter, with the image of the crowd preceding the start of the laughter. This makes it tricky to untangle the effects of a public context from the specific effects of hearing laughter. Indeed, the brainwave data showed that, at a neural level, participants were already responding differently to public insults before they could have registered the sound of the laughter.

This issue aside, the researchers said their findings show that "the presence of a laughing crowd ... leads to stronger and more elongated emotional processing. In short, it seems that public insults are no laughing matter, at least not for the insulted."

_________________________________

Otten, M., Mann, L., van Berkum, J., & Jonas, K. (2016). No laughing matter: How the presence of laughing witnesses changes the perception of insults. Social Neuroscience, 1-12. DOI: 10.1080/17470919.2016.1162194

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.


Monday, 25 April 2016

Neuro Milgram – Your brain takes less ownership of actions that you perform under coercion

The new findings help explain why many people can be coerced so easily
By guest blogger Mo Costandi

In a series of classic experiments performed in the early 1960s, Stanley Milgram created a situation in which a scientist instructed volunteers to deliver what they believed to be painful and deadly electric shocks to other people. Although this now infamous research has been criticised at length, people continue to be unsettled by its main finding – that most of the participants were quite willing to harm others when ordered to do so.

These findings have since been used to explain why some people can commit heinous crimes – the fact that they are “only obeying orders” handed down by others may make it easier for them to deny responsibility for their actions.

Milgram’s study was an investigation of people’s readiness to bow down to authority and obey coercive instructions to inflict harm, but it did not explore how the participants felt during the experiments, nor what was happening in their brains.

A new study in Current Biology that used brain wave recordings shows that when a person is coerced into performing an action, his or her brain processes the outcomes of that action differently from how it processes equivalent actions carried out intentionally, suggesting that coercion does indeed diminish our sense of agency, or the sense that we are in control of our actions.

The new research is based on a phenomenon called temporal binding, first described in 2002 by neuroscientist Patrick Haggard of UCL. It refers to the observation that the brain compresses time during voluntary actions, but not involuntary ones, so that our actions and their consequences are perceived to occur more closely together, enhancing our sense of agency.

Haggard and his collaborators performed two experiments to determine whether coercion alters perception of the time interval between an action and its outcome.

In the first, each of 30 pairs of female volunteers took turns at being the “agent” and the “victim”. In the coercive condition, a researcher stood over the table at which pairs of participants sat, stared intensely at the agent, and ordered her to either take money from the victim or give her a mild electric shock, outcomes which the participant initiated by pressing one of two computer keys. In the free-choice condition, the researcher was more detached, and told the agents that they could inflict a shock on the victim in order to earn themselves some money, or take money from the victim, or refrain from such actions, and that they were totally free to choose – again, the agents initiated the outcomes they wanted by pressing different keyboard keys.

In both the coercive and voluntary conditions, the agents’ key presses caused an audible tone to occur, with a variable random delay of up to one second, and the participants had to estimate the interval between the two.

Under the coercive, but not the free-choice condition, the participants estimated the intervals to be significantly longer than they actually were – in other words they showed a reduced temporal binding effect, suggesting that they had a reduced sense of agency over their actions. This was the case for both the harmful and the non-harmful outcomes, showing that the effect was not related to whether or not the agents inflicted any harm, but was due instead to the coercive instructions they were given.

Before the experiment, all the participants had filled out a questionnaire measuring empathy and various personality traits, and the more empathetic ones experienced a more dramatic reduction in agency when their actions had a harmful outcome compared to the less harmful ones. But most of them acted somewhat vindictively, giving roughly the same number of electric shocks when they played the role of agent as they had received when they were the victim.

In a second experiment, the researchers recruited 22 more volunteers and used electroencephalography (EEG) to examine whether coercion alters the brain wave patterns associated with action outcomes. Consistent with earlier work showing that one particular type of brain wave, called N1, is far bigger for outcomes of voluntary actions than for those of actions performed under (non-coercive) instruction, they found that the outcomes of coerced actions produced smaller N1 waves than the outcomes of actions performed in the free-choice conditions.

A questionnaire administered after the experiments further revealed that the participants felt more responsible for their actions during the free-choice than the coerced trials.

Thus, being coerced into doing something seems to reduce our sense of agency, not just psychologically, but also at the level of basic brain function – the neural processing of the outcomes of coerced actions resembles the outcomes of passive movements more closely than the outcomes of voluntary or intentional actions.

Fifty years ago, Milgram reported that ordinary people usually comply with coercive instructions, even if it means inflicting real or apparent harm on others. These new findings show that the effects of coercion on the sense of agency are universal, as opposed to being associated with any particular characteristic of personality.

They also suggest a reason why people can be coerced so easily – coercion may automatically reduce the link between an action and its outcome, emotionally distancing people from distasteful consequences and diminishing their sense of moral responsibility.

Haggard and his colleagues believe that their findings could have profound implications for legal responsibility and the criminal justice system. They do not legitimise the notorious strategy used by defendants at the Nuremberg war crimes trials, that they were just obeying orders. But the researchers argue that the law would do well to shift the focus from people who obey orders to those who give them out.

_________________________________

Caspar, E., Christensen, J., Cleeremans, A., & Haggard, P. (2016). Coercion Changes the Sense of Agency in the Human Brain. Current Biology, 26(5), 585-592. DOI: 10.1016/j.cub.2015.12.067

--further reading--
Social psychology textbooks ignore all modern criticisms of Milgram's "obedience experiments"
In search of the conscious will

Post written by Mo Costandi (@Mocost) for the BPS Research Digest. Mo trained as a developmental neurobiologist and now works as a freelance writer specialising in neuroscience. He writes the Neurophilosophy blog, which is hosted by The Guardian, and is the author of 50 Human Brain Ideas You Really Need to Know, published by Quercus in 2013. His second book, Neuroplasticity, is due to be published by the MIT Press later this year.


Friday, 15 April 2016

Experienced meditators have brains that are physically 7 years younger than non-meditators

If you want to keep your brain young, you could do a lot worse than taking up meditation. That's if you believe the results of a new study in NeuroImage that's found experienced meditators have brains that appear 7.5 years younger, on average, than non-meditators.

The researchers used a computer programme that they created previously – it was trained on brain scans taken from hundreds of people to recognise what brains of different ages typically look like, in terms of amounts of grey matter, white matter, and cerebrospinal fluid. For the new study, the same programme analysed the brains of 50 experienced meditators (average age 51, with an average of 20 years' meditation experience) and the brains of 50 healthy non-meditators (also average age 51), and it produced a figure for each person saying how old their brain was in terms of its physical condition, as compared with the actual age of the person. Using this approach, the group of meditators had brains that were 7.5 years younger than the control group, on average.

Moreover, among the controls, the gap between their "brain age" and chronological age didn't vary with greater age, but among the meditators it did: it was the older meditators who had brains that seemed particularly well preserved, suggesting that meditation provides protection against the brain cell loss associated with aging.

Should you believe these findings? Prior research has shown that meditation appears to increase brain volume. But some issues to bear in mind include the fact that meditation might not preserve the brain; rather, people with more age-resistant brains might be more likely to take up meditation. Similarly, we don't know if people who meditate do other healthy things that non-meditators don't do. Another caveat: this study looked only at the physical characteristics of the participants' brains; there was no test of their mental functioning. As a final aside, the researchers also noted that their female participants had more youthful brains than the men – at age 50, they appeared three years younger, on average.

--Estimating brain age using high-resolution pattern recognition: Younger brains in long-term meditation practitioners

_________________________________

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Our free weekly email will keep you up-to-date with all the psychology research we digest: Sign up!

Tuesday, 29 March 2016

Mind wide open – brain activity reveals motives behind people’s altruism

By guest blogger Sofia Deleniv

We often want to know what’s driving other people’s actions. Does the politician who visited a refugee camp on the eve of elections truly care for the poverty-stricken? In reality of course, our mind reading skills are pretty limited and something as complex as an apparent act of altruism can disguise a huge diversity of motives. Most of the time, these motives remain entirely private to the individual – a driving force in a black box.

For a new paper published in Science, however, researchers have prised open the box by decoding patterns of brain activity to reveal the hidden motives underlying people’s altruistic decisions. Participants engaged in a financial game that required them to make choices about how to allocate money between themselves and their experimental partner. A decision could either be selfish, in that it primarily benefitted the participant (e.g. £10 for me, £2 for my partner), or altruistic, in that it maximised the partner’s payoff at a cost to the participant (e.g. £4 for me, £10 for my partner). Occasionally, participants made self-sacrificial decisions – but why?

The researchers pinned down the participants’ motives for altruism by creating these motives themselves. To do this, they put the participants through two different conditions. Half of the participants were made to feel a sense of compassion towards one of their partners, by having them repeatedly observe this partner receive aversive electric shocks – the researchers reasoned that this would encourage the participants to be generous to this partner out of empathy. The other half of the participants were provoked into feeling a desire to reciprocate their partner’s kindness – they observed one of their experimental partners sacrificing their own profit on several trials in order to prevent the participant him or herself from receiving painful electric shocks. The researchers anticipated that this would encourage these participants to make altruistic decisions towards this partner as a way of repaying the kindness.

Indeed, both motive inductions pulled at the heartstrings – participants playing with a partner towards whom they felt empathetic or indebted made more altruistic decisions than when they had to allocate money to a neutral partner. As the participants made these decisions, the research team examined their brain activity using functional magnetic resonance imaging (fMRI). The scans revealed that there was no brain region in particular that became more or less active under the influence of a particular motive. Thus, a quick glance at brain activity couldn’t tell the researchers whether a person’s altruistic decision was rooted in empathy or a desire to reciprocate.

Instead, what did appear to be critically different between the two motives was how various brain regions communicated with each other. However, the ultimate test of the consistency and usefulness of this finding lay with this question: could a computer be trained to use information about these connections to judge whether a person made a selfless choice due to a state of empathy or a wish to reciprocate?

The researchers investigated this by providing a computer programme (a kind of “learning algorithm”) with multiple "learning experiences" – this involved showing it examples of the kind of brain connection patterns that were associated with each type of motive. Crucially, the researchers then measured how often the algorithm was able to accurately identify a person’s motive when it was given a brain scan it had never been exposed to. Using this approach, the computer could predict individuals’ hidden motives with an accuracy of 68 per cent.
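The train-then-test logic here can be sketched with a toy example. The code below is entirely hypothetical – a tiny nearest-centroid classifier and made-up "connectivity" vectors, not the researchers' actual algorithm or data – but it shows the crucial safeguard the paper used: the classifier is always tested on a scan it has never seen (leave-one-out cross-validation), so any above-chance accuracy reflects genuine pattern differences rather than memorisation.

```python
import random

random.seed(1)

# Hypothetical stand-in for the real analysis: each "scan" is a short vector
# of connectivity strengths, and the two motives shift those strengths in
# slightly different (noisy, overlapping) directions.
def make_scan(motive):
    base = [0.2, 0.5, 0.3, 0.4] if motive == "empathy" else [0.4, 0.3, 0.5, 0.2]
    return [b + random.gauss(0, 0.15) for b in base]

scans = [(make_scan(m), m) for m in ["empathy", "reciprocity"] * 30]

def classify(train, test_vec):
    # Nearest-centroid: assign the scan to whichever motive's average
    # training pattern it is closest to (squared Euclidean distance).
    centroids = {}
    for motive in ("empathy", "reciprocity"):
        vecs = [v for v, m in train if m == motive]
        centroids[motive] = [sum(col) / len(vecs) for col in zip(*vecs)]
    return min(centroids,
               key=lambda m: sum((a - b) ** 2
                                 for a, b in zip(test_vec, centroids[m])))

# Leave-one-out cross-validation: the classifier never sees the scan it is
# asked to label, mirroring the paper's "never been exposed to" test.
hits = sum(classify(scans[:i] + scans[i + 1:], vec) == motive
           for i, (vec, motive) in enumerate(scans))
accuracy = hits / len(scans)
print(f"{accuracy:.0%}")  # well above the 50% two-choice chance level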

Interestingly, the researchers found a remarkable similarity between connection patterns that characterised altruism driven by empathy, and altruism that participants occasionally displayed towards a completely neutral partner. This raises the intriguing possibility that what the authors call "home-grown altruism" – our intrinsic impetus for kind behaviour – is primarily rooted in a sense of compassion.

Now, these findings appear to show that machines can have insight into the richness of human motivation, even when behaviour itself lends us no clues. But let’s temper that excitement! While we might find it intuitively impressive that a machine accurately judged a person’s hidden motive in 68 per cent of cases, we should keep in mind that if it were 50 per cent accurate, it would be no better than random guessing (after all, because of the way the researchers designed things, the computer programme only needed to choose between two possible motives).

As we delve into the realm of real social interactions, hidden motives gain a far greater complexity. People may lend a helping hand due to a sense of compassion, because they expect the return of good karma, as a means of boosting their public image, or perhaps for some other more obscure reason. Faced with such intricate thought processes, no doubt rooted in incredibly complex sets of connections in the brain, a machine might guess no better than we do! But that remains to be seen.

_________________________________

Hein G, Morishima Y, Leiberg S, Sul S, & Fehr E (2016). The brain's functional network architecture reveals human motives. Science (New York, N.Y.), 351 (6277), 1074-8 PMID: 26941317

--further reading--
How to cheat a brain-scan-based lie detector
First brain scan study to feature THAT dress

Post written by Sofia Deleniv for the BPS Research Digest. Sofia holds a degree in Experimental Psychology and is currently working towards a PhD in Neuroscience at the University of Oxford, where she investigates multisensory processing in the mouse brain. In 2015, she decided to try her hand at science writing by starting her blog 'The Neurosphere'. You can visit her Facebook page or Twitter feed for updates on her written work and other exciting bits of science.

Our free fortnightly email will keep you up-to-date with all the psychology research we digest: Sign up!

Thursday, 14 January 2016

What does fear do to our vision?

By guest blogger Melissa Hogenboom

Consider the following scenario. A policeman is on patrol, maybe he's quite new to working in the field. He sees a suspicious young man and decides to follow him.

He turns the corner and sees that the man has drawn a gun from his pocket. In a split second – almost too fast to think twice – he takes out his own gun and shoots the man dead.

Only the man didn't have a gun at all, it was a mobile phone.

Sadly, it's a familiar story. An incident exactly like it occurred only last week (January 2016) and a quick trawl through more newspaper reports shows how commonly it occurs.

When people make snap decisions in situations like this, they are often under intense momentary stress. This can provoke a host of automatic mental and physical effects that some psychologists refer to as "freezing behaviour". We usually think of this kind of reaction as occurring in animals – a mouse paralysed with fear or a deer trapped motionless in the headlights (resulting in much road kill).

In other words, it's the moment before an animal decides what to do, whether to "fight or flee". This is believed to be an innate response to a predator, to avoid being seen or heard. Research has shown that an animal's heart rate actually decreases when in this state.

Although we hear about it less in humans, our physiological response can be similar. For instance, one 2005 study found that in response to pictures of mutilated bodies, participants' physical movements reduced and their heart rates slowed. The same effect was found in a 2010 study in response to pictures of threatening faces.

However, there’s still much more we need to learn about the effects of the human freezing response – for example, what effect does it have on visual perception, and could any effects help explain some of the tragic instances when police have mistaken phones and other harmless objects for guns? A new study published in the Journal of Experimental Psychology aimed to find out.

Maria Lojowska of Radboud University and colleagues in the Netherlands tested 34 participants between the ages of 18 and 30. To create a situation that elicited freezing behaviour, the researchers occasionally gave their participants a mild electric shock, which was always preceded by a red dot. Participants were told they were taking part in a visual perception task and were fully informed about the nature of the shocks before the experiment started.

It was not the shock itself that made the participants show "freezing behaviour" (as measured by their heart rate), rather it was the anticipation of the shock. When participants saw a green dot (which did not presage a shock), they relaxed, but when they saw a red dot they felt more scared, regardless of whether a shock was actually given or not.

The participants’ task was to judge as accurately as possible the orientation of the lines inside small squares, which appeared on a computer screen to the left or right of their visual fields. The squares contained either several lines (high detail) or few lines (low detail), as you can see below. Crucially, the researchers found that the participants’ visual performance was affected by whether or not they were stressed and showing physiological signs of freezing. When they were afraid and stressed, their performance at judging the squares with high detail was impaired but their ability to judge the squares with coarse visual detail actually improved.

The square on the left features high detail and the one on the right low detail. Stimuli from Lojowska et al 2015. When scared, participants were better at perceiving low detail. 
The researchers said that previous research in animals had suggested that the freezing response leads to an overall improvement in vision, but their new findings suggest a more nuanced situation – it seems that when we’re afraid, we perceive some aspects of the world more clearly, but at the cost of ignoring much of the detail.

Intuitively, it makes sense that an animal or human only sees the most basic detail of a potentially threatening object. It would take too much time to take in all the detail of a scene. Our brain has a clever way of quickly reconstructing what every object is likely to be using its memory of similar events and situations, rather than analysing each new thing afresh, in depth. It is these shortcuts that can result in errors and visual illusions.

Despite these potential flaws in our visual perception, it's important for us to be able to perceive things quickly. If you are walking in a desert and glimpse a shape that could be a snake (but is more likely a stick), it's better to show caution and stop than assume it's a stick and walk right into danger.

Now that we better understand how our visual perception changes when we feel fear, Maria Lojowska and her team plan to discover exactly what's going on in the brain when this happens. Meanwhile, the researchers hope their findings might help inform training programmes to improve a person's performance when they are in a stressful environment. Many police forces in the US already train their officers to overcome their implicit bias towards race and sex. It would be helpful to add the limits of our visual perception to the list.
_________________________________


Lojowska, M., Gladwin, T., Hermans, E., & Roelofs, K. (2015). Freezing promotes perception of coarse visual features. Journal of Experimental Psychology: General, 144 (6), 1080-1088 DOI: 10.1037/xge0000117

--further reading--
When humans play dead
Why do we get goose-bumps?
How science is helping America tackle police racism
The smell of fear more powerful than previously realised

Post written by Melissa Hogenboom (@melissasuzanneh) for the BPS Research Digest. Melissa is BBC Earth's feature writer.

Our free fortnightly email will keep you up-to-date with all the psychology research we digest: Sign up!

Tuesday, 3 November 2015

Attention training can wire your brain to be less scaredy-cat

After the training, participants were less distracted by scary pictures
A simple computer training task, which involves ignoring irrelevant information, can change the brain's wiring to make it less responsive to threatening pictures. That's according to a new study published in Neuroimage. The authors say they are the first to demonstrate that neutral (as in, non-emotional) attention training can change the brain's emotional reactivity.

Twenty-six healthy participants completed the "executive control" training, which required them to identify the direction of a central arrow on a computer screen as fast as possible, while ignoring the arrows flanking it on either side (e.g., →→←→→). Half the participants completed a more intense version of the training in which 80 per cent of the trials were incongruent, with the distracting arrows facing the opposite way to the target arrow. The other participants completed an easier version of the training in which only 20 per cent of the trials were incongruent. The participants completed this training three times a day (for about 15 minutes each session) for six days.
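To illustrate how such a "flanker" training session is put together, here is a minimal Python sketch. It is a hypothetical reconstruction, not the researchers' software: it just generates trials where the flanking arrows disagree with the central target on a chosen proportion of trials – 80 per cent for the high-intensity group, 20 per cent for the easy version.

```python
import random

random.seed(2)

DIRECTIONS = ("left", "right")

def make_trial(incongruent_prob):
    # One flanker trial: a central target arrow plus flanking arrows that
    # either match it (congruent) or point the other way (incongruent).
    target = random.choice(DIRECTIONS)
    if random.random() < incongruent_prob:
        flanker = "right" if target == "left" else "left"
    else:
        flanker = target
    arrows = {"left": "←", "right": "→"}
    display = arrows[flanker] * 2 + arrows[target] + arrows[flanker] * 2
    return display, target, flanker != target

# High-intensity version: 80% of trials incongruent (the easy version
# would simply pass 0.2 instead).
session = [make_trial(0.8) for _ in range(1000)]
prop_incongruent = sum(is_incongruent for _, _, is_incongruent in session) / len(session)
print(round(prop_incongruent, 2))  # close to 0.80
```

The participant's job on every trial is the same – report `target` as fast as possible – so the intense version simply forces the executive control system to suppress conflicting information far more often.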

The test of the participants' emotional reactivity involved them indicating the colour of squares flashed on a screen, with each square preceded either by a neutral picture or a scary picture, such as a vicious snarling dog. Typically, people's colour judgments are slowed down after a scary picture.

To see how the training affected the brain, the researchers, led by Noga Cohen at Ben-Gurion University, first scanned the participants' brains by fMRI during the emotional task, then they conducted a resting-state scan which reveals communication patterns between brain areas. Next, the participants did their six days of training. Finally, the participants returned for another brain scan while they did the emotional task again, and they undertook another resting-state scan.

The participants who completed the more intense version of the training (but not the other participants) showed reduced activation in their amygdala – a brain region involved in emotions, including anxiety and fear – during the second emotional task, as compared with the start of the study. This reduction in amygdala reactivity also correlated with their performance on the emotional task. That is, the more their amygdala was calmed, the less their responses were slowed by scary pictures. There was also some evidence that, after the training, the high-intensity training group showed increased connectivity between their right amygdala and frontal cortex.

This is very preliminary evidence that exercises that improve people's basic attentional skills (specifically their ability to ignore irrelevant information) can alter brain networks involved in emotional processing, with the consequence that the person becomes less reactive to frightening imagery. The sample size was very small, the participants were all healthy with no mental health problems, and we know nothing of the long-term effects of the training. Acknowledging these limitations, the researchers said their findings suggest non-emotional executive control training "can suppress emotional reactions, and thus might serve as a short-term and easy-to-implement treatment for individuals suffering from disorders characterised by emotion dysregulation."

_________________________________

Cohen N, Margulies DS, Ashkenazi S, Schaefer A, Taubert M, Henik A, Villringer A, & Okon-Singer H (2015). Using Executive Control Training to Suppress Amygdala Reactivity to Aversive Information. NeuroImage PMID: 26520770

--further reading--
Can cognitive training boost self-control?
Attention training for young children has noticeable benefit
Driving video game reverses age-related mental decline

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Our free fortnightly email will keep you up-to-date with all the psychology research we digest: Sign up!

Tuesday, 27 October 2015

First brain scan study to feature THAT dress

Figure and caption from Schlaffke et al., 2015.
Earlier this year a dress nearly broke the internet. A photo of the striped frock (which is actually blue and black) was posted on Tumblr and it quickly became apparent that it looked very different to different people, spawning furious arguments and lively scientific commentary.

Specifically, people disagreed vehemently over whether it was white and gold (that's my perception) or blue and black. Now, writing in the journal Cortex, researchers in Germany have published the first study to scan people's brains while they look at the dress, and the neural findings appear to support earlier, psychological explanations of the phenomenon.

When the dress story went viral, psychologists were quick to explain that this dress provided a striking example of how our perception of the world arises from a combination of incoming sensory information and our interpretation of that information. In the case of colour perception, when light bounces off an object and hits your retina, its mix of wavelengths is determined by the colour of the object and the nature of the light source illuminating it. Your brain has to disentangle the two. Usually it does this very well allowing for something called "colour constancy" – the way that objects of the same colour are perceived the same even under different illumination conditions. However, the mental processing involved in colour perception does leave room for interpretation and ambiguity, especially when the nature of the background illumination is unclear as is the case with the photo of the dress (another illusion that hacks the limitations of this aspect of our visual system is the checker shadow illusion).

For the new study, Lara Schlaffke and her colleagues scanned the brains of 28 people with normal vision while they looked at the photo of the dress. Fourteen of the participants see the dress as white and gold and 14 see it as blue and black. The key finding is that the people who see the dress as white and gold showed extra activation in a raft of brain areas, including in frontal, parietal (near the crown of the head) and temporal (near the ears) regions. Yet, no group differences emerged in a control condition when the participants simply looked at large coloured squares that matched two of the colours that feature in the dress, but without any contextual information also visible (see figure, above).

These results are broadly consistent with the idea that the white/gold perceivers were engaged in more interpretative mental processing when looking at the dress. To oversimplify, their perceptual experience of the dress is based less purely on the "bottom up", raw sensory information arriving at their eyes, and is distorted more by their own assumptions and expectations about the background illumination. The extra activity in their brains during the dress viewing is likely, at least in part, a neural correlate of all this interpretative, "top down" processing.

What the new study can't answer is whether this extra neural processing (or which aspects of it) in the white/gold group is the cause of their perceptual experience of the dress, or the consequence. However, the researchers describe some future approaches that could help address this quasi-philosophical conundrum: for example, by using transcranial magnetic stimulation (TMS) to temporarily disrupt the extra localised neural activity seen in the people who experience the dress as white and gold, we could ask: will they still experience the illusion?

Meanwhile, as someone who's firmly in the white/gold camp, I take satisfaction from this study: I might see the dress as the "wrong" colours, but at least this isn't due to simple-mindedness, but rather it's because my brain's working overtime, doing clever tricks in the background. I'm pretty sure that must be an advantage in at least some situations.

_________________________________

Schlaffke, L., Golisch, A., Haag, L., Lenz, M., Heba, S., Lissek, S., Schmidt-Wilcke, T., Eysel, U., & Tegenthoff, M. (2015). The brain's dress code: How The Dress allows to decode the neuronal pathway of an optical illusion Cortex, 73, 271-275 DOI: 10.1016/j.cortex.2015.08.017

--further reading--
Visual illusions foster open-mindedness


Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Our free fortnightly email will keep you up-to-date with all the psychology research we digest: Sign up!

Friday, 23 October 2015

The man who saw a stranger in the mirror

The man was diagnosed with
Capgras syndrome
Mirrors easily call forth the uncanny: the vampire that casts no reflection; the figure who seems to appear in your periphery. Overtired or in an odd mood, I sometimes find myself scrutinising my own reflection, momentarily toying with the idea that it’s something independent, alive in its mirror space. So I was fascinated to read a short account published in the journal Neurocase of a 78-year-old man, referred to as Mr B to protect his privacy, who over the course of ten days had repeatedly encountered, in place of his own reflection, a doppelganger in the mirror: a stranger who looked just like him, and knew all about him, but a stranger nonetheless. Eventually this figure "became aggressive" – the article doesn’t share any details – and it was presumably this change in tone that triggered Mr B’s admission to hospital.

Mr B, who had no history of psychiatric illness, was diagnosed with a form of Capgras syndrome. People with Capgras believe that one or more familiar people have been replaced by identical strangers, and it falls within the delusional misidentification syndromes, together with related conditions such as the Fregoli delusion: the belief that many people you encounter are actually a single deceiver in disguise. It’s likely that such delusions involve some degree of impairment in face processing, specifically the ability to process the familiarity of other people’s faces: in Capgras the familiar feel somehow not (an experience undergirded by skin conductance response data); in Fregoli the many feel somehow the same. If this hypothesis bears out, these delusions could be seen as the complement – or mirror image – of prosopagnosia (also known as face-blindness), where explicit recognition of faces is impaired, but sufferers still retain their implicit feelings about faces, correctly guessing which belong to people they know.

What’s interesting about Mr B’s mirrored-Capgras, also called mirrored self-misidentification, is that the delusional judgment is applied to one’s own face. Evidence suggests that our brains process own-face information in a special way, which implies that this particular experience reflects a specific brain impairment, rather than the delusion arbitrarily settling on one target rather than another. The authors of the Neurocase article don’t report too deeply on Mr B’s neurological symptoms, mentioning only some protein indicators consistent with Alzheimer’s Disease, and atrophy in chiefly posterior brain regions. We would also expect some damage within the dorsolateral prefrontal cortex, as this is typical in patients experiencing such delusions. This area is involved in evaluating beliefs, stepping in to question extreme or incoherent ideas.

Mr B was given antidepressant and antipsychotic medication in hospital, and three months later, he had recovered from his delusion: “Mr. B. explained that his double had gone.” It’s nearly Halloween, when we get to play tricks with the uncanny. Maybe spare a thought for the tricks that the uncanny can play on us.

_________________________________

Diard-Detoeuf, C., Desmidt, T., Mondon, K., & Graux, J. (2015). A case of Capgras syndrome with one’s own reflected image in a mirror Neurocase, 1-2 DOI: 10.1080/13554794.2015.1080847

Note: photograph is a stock image and does not portray Mr B. 

--further reading--
Who replaced all my things?
The stroke patient for whom strangers look normal whilst family look strange
The woman who mistook her daughters for her sisters

Post written by Alex Fradera (@alexfradera) for the BPS Research Digest.

Our free fortnightly email will keep you up-to-date with all the psychology research we digest: Sign up!
