Showing posts with label Language. Show all posts

Thursday, 19 November 2015

Why do people find some nonsense words like "finglam" funnier than others like "sersice"?

Calm down, it's not that funny! 
When you're trying to understand a complex phenomenon, a sensible approach is to pare things back as far as possible. For a new study, published recently in the Journal of Memory and Language, psychologists have applied that very principle to test a popular theory of humour.

The theory states that, fundamentally, we are most often amused when we are surprised by, and then resolve, an apparent incongruity: a word that didn't mean what we originally thought, say, or a person not being who we first expected and so on (also known as expectancy violation). It can be difficult to test this theory because in real life jokes and funny situations so many other factors come into play (such as cultural knowledge or people's reputations), layered atop this fundamental mechanism. To test the theory in its purest terms, Chris Westbury and his colleagues have explored the possibility that some nonsense words are inherently funnier than others at least in part because they are simply just less expected.

The researchers first established that some nonsense words are consistently rated as funnier than others. To do this, they used a computer programme to generate thousands of random nonsense words and then asked nearly a thousand students to rate them for funniness. To make sure the nonsense words were viable and pronounceable, the programme checked that every three-letter string in each nonsense word also appeared in a real English word. Any words that sounded the same as actual real words (but were spelt differently) were removed.
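The pronounceability check described above can be sketched as follows. This is an illustrative reconstruction, not the researchers' actual code, and the tiny lexicon stands in for a real English word list:

```python
def trigrams(word):
    """All overlapping three-letter substrings of a word."""
    return {word[i:i + 3] for i in range(len(word) - 2)}

def is_pronounceable(candidate, lexicon):
    """A nonsense word passes if every one of its three-letter
    substrings occurs somewhere in at least one real English word."""
    real_trigrams = set()
    for w in lexicon:
        real_trigrams |= trigrams(w)
    return trigrams(candidate) <= real_trigrams

# Toy lexicon standing in for a full English word list
lexicon = {"single", "glamour", "finish", "clamp"}
print(is_pronounceable("finglam", lexicon))  # -> True: every trigram is found
print(is_pronounceable("xqzt", lexicon))     # -> False: no word contains "xqz"
```

With a full dictionary in place of the toy lexicon, a generator can churn out random letter strings and keep only those that pass this filter.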

The first key finding was that there was a significant amount of consistency in the students' ratings – that is, some nonsense words were consistently rated as funny (such as blablesoc), while others were consistently rated as unfunny (such as exthe). This was true even after all the rude-sounding nonsense words were removed, an important step since the researchers didn't want implied meanings to contaminate the results. Among the rude-sounding words were whong, dongl, focky, and clunt, which consistently attracted the highest humour ratings before being removed.

Next, to specifically test the theory that humour is often based on resolved incongruities, the researchers created a new list of nonsense words and calculated the entropy of each – this essentially means quantifying how unlikely each word was; that is, how far removed it is from being a real word. The researchers predicted that the lower the entropy of a nonsense word (i.e. the less "wordy" it was), the funnier it would be, because it would more strongly challenge people's expectations of what counts as a real word. Among the lowest-entropy words used in the study were subvick, quingel, and probble, while among the highest-entropy words were tatinse, retsits and tessina (rude-sounding words were again removed).
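One common way to score a string's entropy from letter probabilities is sketched below. This is a simplified illustration of the general idea, not the paper's exact computation, and the probability values are rough approximations of English letter frequencies rather than the corpus figures the authors used. Strings built from rare letters (q, v, b, k) accumulate little Shannon weight and so score lower, i.e. they are less word-like:

```python
import math

# Approximate English letter probabilities (illustrative values only)
LETTER_P = {
    'e': 0.127, 't': 0.091, 'a': 0.082, 'o': 0.075, 'i': 0.070,
    'n': 0.067, 's': 0.063, 'h': 0.061, 'r': 0.060, 'd': 0.043,
    'l': 0.040, 'c': 0.028, 'u': 0.028, 'm': 0.024, 'g': 0.020,
    'p': 0.019, 'b': 0.015, 'v': 0.010, 'k': 0.008, 'q': 0.001,
}

def string_entropy(word):
    """Sum each letter's Shannon weight p * log2(1/p).
    Rare letters contribute little, so strings made of unlikely
    letters score lower overall -- i.e. they are less word-like."""
    return sum(p * math.log2(1 / p)
               for p in (LETTER_P[ch] for ch in word))

# The prediction: lower-entropy nonsense words are rated funnier
print(string_entropy("quingel") < string_entropy("tessina"))  # -> True
```

Under this toy scoring, "quingel" (with its rare q and g) comes out lower in entropy than the more word-like "tessina", matching the ordering reported in the study.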

Two experiments supported the researchers' predictions: when comparing the humorousness of pairs of nonsense words, 56 participants consistently gave higher funniness ratings to the lower-entropy word; and when simply rating the nonsense words for their humour value, lower-entropy words tended to receive higher ratings. The researchers said these results are entirely in line with expectancy violation theory: "Nonwords [are sometimes] funny because they violate our expectations of what a word is." As to why we find unexpected events, including nonsense words, funny, perhaps even chuckling a little out loud, Westbury and his team said their findings can be interpreted in line with a recent evolutionary account of humour:

"... it has proven adaptive across evolutionary time for us to be structured in a way that makes us involuntarily let conspecifics [friends and family] know about anomalies that we have recognised are not at all dangerous, since anomalies are generally experienced as frightening."

The researchers added that as well as supporting the resolved incongruity theory of humour, their results also have some potential applied uses. For example, testing patients' reactions to nonsense words could provide a very sensitive and subtle measure of their sense of humour, which can be impaired by brain damage or illness. "The effect may also have practical effects in product naming," they said. "If it can be shown that the computable funniness of a name is a relevant factor in consumer behaviour. We predict that consumers will strongly prefer (funny nonsense words) 'whook' or 'mamessa' to (unfunny nonsense words) 'turth' or 'suppect' for a new product name."

_________________________________ ResearchBlogging.org

Westbury, C., Shaoul, C., Moroschan, G., & Ramscar, M. (2016). Telling the world’s least funny jokes: On the quantification of humor as entropy. Journal of Memory and Language, 86, 141-156. DOI: 10.1016/j.jml.2015.09.001

--further reading--
How many psychologists does it take to explain a joke? Psychologist magazine feature.
Why it's apt - psycho-acoustically speaking - that Darth Vader wasn't called Barth Vaber

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Our free fortnightly email will keep you up-to-date with all the psychology research we digest: Sign up!

Wednesday, 21 October 2015

Life is different for people who think in metaphors

Some people are literal minded – they think in black and white – whereas others colour their worlds with metaphor. A new paper published recently in the Journal of Personality and Social Psychology reports on the first standardised measure of this difference, and it shows that having a proclivity for metaphors has real consequences, affecting how people respond to the world around them and even how they interact with others.

A metaphor uses a concrete concept, often sensory (e.g. "light") or location-based (e.g. "forward"), to illuminate a nebulous one, such as emotion or time. While this colouring-in can be useful – and can endure, and transform language – it may not be to everyone’s taste, or necessary for the demands of their day-to-day life. To systematically investigate this, an international research team led by Adam Fetterman developed a measure – which asks people to choose their preference for various metaphors or equivalent literal phrases, for example: “She uses her head” vs. “She makes rational decisions” – and administered it to 132 student participants.

The researchers found a good deal of variability between the students in how they responded, including some who only ever selected the metaphor option, and others only the literal alternative. Scores on the measure correlated with a preference for mental imagery, and they correlated with the amount that participants used metaphoric phrases in a free writing exercise, confirming the test predicts actual behaviour. Conversely, the test scores were not associated with personality factors, intellectual ability, or the ability to visualise, suggesting the test measures a mental style rather than a capacity. So, then, a metaphoric thinking style is an actual thing you can measure. But does it matter?

It does. Take the way metaphors can affect our feelings (known as the “metaphor transfer effect”). In a classic example, people rate neutral words as more pleasant when they are printed in a white font rather than a black one – "light" being associated metaphorically with "good". Until now, researchers didn’t agree on whether this kind of effect is truly down to reliance on metaphorical representations – a counter-explanation is that these effects reflect fundamental, non-conceptual associations between different stimuli that were formed early in life, e.g. through repeated pairings of warmth and affection. However, when Fetterman and his team recruited a further 132 students, they found that those who scored higher on having a metaphoric thinking style also tended to show a greater preference for white-font over black-font words, thus providing good evidence that the metaphor transfer effect is aptly named, after all.

Another study took this out into the real world, tracking 136 people over a fortnight to see whether the amount of sweet food consumed on a given day influenced how agreeable they were in their interactions with other people. I would have imagined that if there were any effect, it might be simply due to a glucose buzz. But no. The link between sweet food consumption and people’s behaviour that the researchers found was mostly down to thinking style. That is, the effect was much stronger among the highly metaphorical participants: when they were sweet in tooth, they were also sweet in nature (thus adding a nuance to previous research on this link).

Remember, too, that metaphors are supposed to illuminate, particularly when it comes to abstract concepts that can be hard to pin down, like the subtleties of emotions. In another experiment, Fetterman’s team measured participants’ ability to correctly judge most people’s typical emotional response in different situations, such as when something unpleasant was happening that couldn’t be stopped. In this example, the correct response was “distressed”. Crucially, people who scored highly in metaphoric thinking style tended to perform better at this task. This suggests their colourful thinking style actually gave them greater insight into emotions.

In a final experiment, 50 participants spent five minutes each day for a week writing about their negative emotions, and they were encouraged to be either literal ("I felt anxious or confused") or metaphorical ("I felt like a leaf in the wind"). The participants’ depression symptoms and negative emotion ratings, which were recorded at the start and end of the week, dropped in the metaphorical condition only. Although this experiment didn’t measure participants’ metaphoric style, it suggests that being encouraged to adopt this style is more effective than literal description at alleviating negative feelings about troubling topics.

From an experimenter’s perspective, it’s interesting to note that in the font-colour study, participants who were well below average in metaphor usage didn’t show any significant preference for white words: those transfer effects don’t work on me, Jedi. In fact, if the researchers had just looked at the average scores for the participants as a whole, the metaphor effect would have been undetectable. This suggests the new measure of metaphorical thinking style can help us to investigate meaning-related effects that might be elusive. For example, it might have value in pinning down findings in the contested area of social priming, helping to identity those people likely to be influenced by such effects. Furthermore, we’ve seen that a metaphorical thinking style has emotional benefits, but could it also be useful in non-emotional domains, for instance in the extent to which fishy smells activate sceptical thinking?

One thing’s for sure – whether we prefer a crystal-clear monochrome take on the world, or to ladle on the technicolour, it’s clear that metaphor usage filters how we take in the world, for good and ill.

_________________________________ ResearchBlogging.org

Fetterman, A., Bair, J., Werth, M., Landkammer, F., & Robinson, M. (2015). The Scope and Consequences of Metaphoric Thinking: Using Individual Differences in Metaphor Usage to Understand How Metaphor Functions. Journal of Personality and Social Psychology. DOI: 10.1037/pspp0000067

--further reading--
Shining a light on why sensory metaphors are so popular

Post written by Alex Fradera (@alexfradera) for the BPS Research Digest.


Thursday, 6 August 2015

What does a person's writing style say about their risk of suicide?


Suicidal thoughts are relatively common, whereas acts of suicide are, thankfully, far rarer. But this creates a dilemma – how to judge the risk of thoughts turning into action? A new study claims that an objective way is to use a computer programme to analyse a person's writing style. People who are having suicidal thoughts and who use more pronouns relating to the self (I, me, myself) than pronouns relating to others are likely to take more time to recover, meaning they will be at risk for longer.

Mira Brancu and her colleagues investigated 114 US students who'd referred themselves to an outpatient counselling centre at their university, and all of whom said they were having suicidal thoughts. When the students started therapy they completed a measure of suicidal risk which involved them writing about what they found most painful, pressing, agitating, and if and why they felt hopeless and self-hating. They also wrote about their reasons for living and dying and about "one thing that would help me no longer feel suicidal".

A computer programme analysed the students' answers to these questions, counting the relative number of mentions of first-person pronouns compared with mentions of other people, including friends and family and people's names. Based on this, the students were categorised as either self-focused or other-focused. The important finding was that this categorisation was related to how the students progressed through therapy, based on their session-by-session self-ratings of their frequency of suicidal thoughts and their assessment of their own suicide risk. Students who were more self-focused at the study start took longer to recover – their suicidal thoughts resolved, on average, in 17 to 18 sessions, compared with 6 to 7 sessions for the students who were categorised as other-focused.
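A crude sketch of this kind of text classification is below. The pronoun and other-person word lists here are illustrative assumptions; the study's actual dictionaries, which also covered friends' and family members' names, were richer than this:

```python
import re

# Illustrative word lists -- placeholders for the study's fuller dictionaries
SELF_PRONOUNS = {"i", "me", "my", "myself", "mine"}
OTHER_WORDS = {"he", "she", "they", "them", "we", "us",
               "friend", "friends", "family", "mother", "father"}

def focus_category(text):
    """Classify a writing sample as 'self'- or 'other'-focused by
    comparing counts of first-person pronouns with references to
    other people -- a sketch of the kind of count the programme made."""
    tokens = re.findall(r"[a-z']+", text.lower())
    self_n = sum(t in SELF_PRONOUNS for t in tokens)
    other_n = sum(t in OTHER_WORDS for t in tokens)
    return "self" if self_n > other_n else "other"

print(focus_category("I can't stop thinking about myself and my failures"))  # -> self
```

In the study, a "self" classification at intake was the marker associated with a slower resolution of suicidal thoughts over the course of therapy.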

This finding does build on past research linking first-person pronoun use with personal distress, but as a diagnostic tool for suicidal risk it definitely needs replicating in other contexts – these were students who'd self-referred for treatment and all of them recovered, so it's not clear whether the same results would apply with other, potentially more at-risk groups (but note, a prior study of suicidal poets [pdf] found that those who used more first-person pronouns were more likely to die by suicide).

Despite its limitations, this is an intriguing result that shows the possibility of a relatively quick, efficient and objective way to estimate the likely persistence of suicidal thoughts. The measure would be "bias-free" which is important because it's known that clinicians' own subjective judgments can be off-target. Indeed, Brancu and her colleagues note that an informal survey of clinicians at conferences found that they thought more self-focused versus other-focused writings would be a positive sign, indicating that patients were better able to articulate their feelings.

_________________________________ ResearchBlogging.org

Brancu, M., Jobes, D., Wagner, B., Greene, J., & Fratto, T. (2015). Are There Linguistic Markers of Suicidal Writing That Can Predict the Course of Treatment? A Repeated Measures Longitudinal Analysis. Archives of Suicide Research. DOI: 10.1080/13811118.2015.1040935

--further reading--
Greater use of "I" and "me" as a mark of interpersonal distress
What's different about those who attempt suicide rather than just thinking about it?
A study of suicide notes left by children and young teens

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.


Tuesday, 28 July 2015

Why words get stuck on the tip of your tongue, and how to stop it recurring

Someone in a tip-of-the-tongue state will invariably writhe about as if in some physical discomfort. "I know it, I know it, hang on ..." they will say. Finger snapping and glances to the ceiling might follow, before a final grunt of frustrated submission – "No, it's gone".

Psychologists studying this phenomenon say it occurs when there is a disconnect between a word's concept and its lexical representation. A successful utterance requires that these two be bridged, but in the tip-of-the-tongue state, only the concept is activated (and possibly a letter or two) while the complete translation into letters and sounds fails. What's more, new research shows the very act of being in this state makes it more likely that it will recur.

Maria D'Angelo and Karin Humphreys provoked their participants into experiencing tip-of-the-tongue states by presenting them with the definitions for rare words (e.g. "What do you call an instrument for performing calculations by sliding beads along rods or grooves?"). Sometimes the students knew the word straight-off, other times they said they simply didn't know, but occasionally – and these were the important trials – they said they definitely knew the word, but couldn't quite spit it out.

The researchers quickly (after 10 or 30 seconds) put the students out of this last, uncomfortable tip-of-the-tongue state by telling them the answer. However, a key finding was that being in a tip-of-the-tongue state for a particular word on one occasion increased the likelihood of being in that state again for the same word on later re-testing, whether that second test came five minutes, 48 hours or one week later (thus replicating and extending previous research by the same lab). This recurrence happened despite participants having been told the word after the initial tip-of-the-tongue state.

This suggests the state involves an unhelpful learning process. Imagine a hiker who is lost en route to his destination – this is your brain trying to find the path between word concept and letters and sounds. The findings suggest that walking the wrong route once actually makes it more likely you'll get lost again as you unintentionally come to learn the wrong way to your destination.

Consistent with this account, spending more time deliberately but unsuccessfully attempting to resolve a tip-of-the-tongue state made it even more likely that it will recur (but note, contrary to the researchers' prior work, this time this effect was only found when participants put a lot of unsuccessful effort into resolving the tip-of-the-tongue state).

In real life, this means that if you're hopping about in a frustrated tip-of-the-tongue state and I tell you the word you're hunting for, I won't have done you any favours – next time you need that word, you're likely to get stuck again. The researchers believe this is because although I've told you the word, you haven't arrived at it through your own word-searching processes. To follow the hiking analogy, it's a bit like I've picked you up by car and fast-tracked you to your destination – by doing so, I will have done nothing to teach you the correct route.

So, is there anything you can do to help a person in a tip-of-the-tongue state? A clue comes from the fact that when the students in these experiments spontaneously resolved a tip-of-the-tongue state (i.e. they finally managed to find the word before the researchers told it to them), they were subsequently far less likely to get stuck again. Such spontaneous resolutions suggest that the word-search process has managed to resolve itself and when this happens, the correct concept-word connection is usually remembered. This is like the lost hiker managing to find his own way to the destination and remembering the route for future use.

The way to help someone in a tip-of-the-tongue state, then, is to nudge them towards a spontaneous resolution. When the researchers helped their student participants resolve a tip-of-tongue state by giving them the first few letters of the solution, this prevented the state from recurring on later testing. Point the hiker in the right direction and if he finds the right way himself, he will remember the correct route in future. This nicely complements an established phenomenon from research on word learning known as the generation effect: that is, generating words from clues (such as a word stem) leads to better memory for those words than being told them whole.

"These findings may have potential applications for both educational, and therapeutic settings, in which a student or a patient with neurological damage is trying to retrieve a difficult item," the researchers concluded.

_________________________________ ResearchBlogging.org

D’Angelo, M., & Humphreys, K. (2015). Tip-of-the-tongue states reoccur because of implicit learning, but resolving them helps. Cognition, 142, 166-190. DOI: 10.1016/j.cognition.2015.05.019

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.


Tuesday, 14 July 2015

What is the correct way to talk about autism? There isn't one

Image: National Autistic Society
The language we use reflects our attitudes but, perhaps more importantly, it can also shape those attitudes. A new study considers this power in the context of autism. Lorcan Kenny and his colleagues have conducted a UK survey of hundreds of autistic people; parents, relatives and carers of autistic adults and children; and professionals in the field, about their preferences for the language used to discuss autism. The research was conducted online with the help of the National Autistic Society.

The main finding is that there is no consensus about the preferred terms to use when talking about autism and people with the diagnosis. A key disagreement between and within the surveyed groups is whether the language we use should put the "person first", as in "people with autism", or put the diagnosis first, as in "autistic person". Overall, researchers and other professionals expressed a strong preference for the former. One professional said:
"I don’t like phrases which describe a person as their condition, so would always go for 'person' first, because that’s what we all are regardless of what conditions we have. I would never describe myself as a thyroidy, for example."
In contrast, autistic people showed a clear preference for autism-first terminology. One of the autistic adults in the survey said:
"Separating the person from their autism is damaging, as it reinforces opinions about autism being a ‘thing’ that can be removed, something that may be unpleasant and unwanted, and something that is not just another aspect of a whole, complete and perfect individual human being. Describing oneself as autistic is an extremely important and positive assertion about oneself, it means that one feels complete and whole as one is."
Related to this disagreement is the issue of whether autism is viewed as a "disorder" or a "difference", and whether any disability associated with autism is seen as located purely in the individual or as arising from society's failure to adapt to the needs of people with autism. Another adult with autism said:
"Autism is just another way of thinking, not some sort of disease that one can catch."
Yet some parents and carers were wary of downplaying the impact of autism, often because they are the ones championing their children's needs. One of them said:
"I prefer 'disorder' to 'condition' because I think it conveys better the seriousness and the need for support and intervention."
There was also disagreement about the appropriateness and value of the term Asperger's Syndrome (a diagnosis dropped recently by US psychiatry) or "Aspie". Some people felt it was an important part of their identity. Yet others believed continued use of the term undermined efforts to build a united autism community.

Another contentious issue is the idea of autism being a spectrum upon which everyone is located to some degree. This terminology was more popular among professionals and family members than among autistic people, some of whom felt that it trivialises the difficulties faced by those who are "truly autistic".

A notable point of agreement across the different groups who completed the survey was the dislike for the terms "high-functioning autism" (it downplays the everyday difficulties experienced even by autistic people who have good verbal and intellectual skills) and "low-functioning autism" (it undermines people's potential).

The researchers said the "fundamental finding" of their research was that "there are reasonable and rational disagreements between members of the autism community as to which terms should be used to describe autism." They said this "plurality" of views was likely to persist and evolve with time and that for anyone involved in autism, choosing the right language will be difficult and require care, reflection and "practical wisdom". They added: "The overriding principle for those who are unclear about appropriate terminology should therefore be to inquire of the people with whom they are working or describing for clarification."

_________________________________ ResearchBlogging.org

Kenny, L., Hattersley, C., Molins, B., Buckley, C., Povey, C., & Pellicano, E. (2015). Which terms should be used to describe autism? Perspectives from the UK autism community. Autism. DOI: 10.1177/1362361315588200

--further reading--
Advice from the National Autistic Society on how to talk about autism.
Autism journal podcast about the new survey findings.
"Watch your language when talking about autism" co-author Liz Pellicano reflects on the new findings at The Conversation.
Autism – Myth and Reality

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Thursday, 9 July 2015

Shining a light on why sensory metaphors are so popular

A warm welcome to the latest Digest post, dear reader. You won’t find it hard work – my editor made some small changes, eliminating any sour notes to ensure a light read.

Did you notice how the metaphor phrases scattered through my previous sentences each relate to a sense – touch, sight, taste? This is common to many popular phrases, and to understand why, a new article draws on a combination of the Google Books dataset and a series of lab experiments. The research reveals that sensory metaphors owe their cultural success to the fact that we find sensory information easier to process and recall.

The first study by Ezgi Akpinar and Jonah Berger identified a set of 32 sensory metaphors in the adjective + noun structure I used above, each matched to a further three non-metaphorical equivalent phrases (e.g. "warm welcome" versus "friendly welcome", "kind welcome", "sincere welcome"). The researchers used Google Books’ frequency data on 5 million books to track the popularity of all these phrases since 1800, finding sensory metaphors enjoyed a steeper rise in popularity than their non-metaphorical equivalents.

To explore why, the researchers gave a subset of the metaphorical phrases together with their non-metaphorical equivalent phrases to 229 students. The students then rated each phrase on two criteria: How strongly did it relate to the senses? And did it have many or few associations with other ideas? After a filler task, the students tried to recall the phrases, and were able to remember only 18 per cent of the non-metaphorical phrases, but 28 per cent of the sensory metaphors, which also received higher ratings in sensory quality and associations. The higher its ratings, the better a phrase was remembered, and, critically, the steeper the increase in the popularity of that phrase in the Google Books dataset.

This fits Akpinar and Berger’s hypothesis that cultural success stories are indebted to their psychological convenience. In their account, we will favour well-remembered concepts and phrases: those that are processed more automatically – as sensory knowledge is known to be – and that come to mind more easily. Sensory metaphors can be triggered by real-world phenomena: for example, "bright future" by the sight of a morning, torch or star. These small gains in popularity then snowball, as we lean on better-known phrases rather than the obscure, so that our listeners can understand us.

One note of caution is that the researchers may simply be backing winners, as sensory metaphors that were true failures would be unknown, and wouldn’t make it into their set of stimuli. To address this, the next study included each and every sensory metaphor found in the corpus of Jane Austen – 226 in all – including such gems as "clamorous happiness" and "delicious harmony", to see what characterised the phrases that succeeded versus those that did not. One hundred and thirty-five students studied, rated and recalled these metaphors, and those that enjoyed a rise to popularity in the Google dataset were, again, those rated higher in sensory quality and associations, and more easily recalled by participants.

It’s great to see research leveraging big cultural data and marrying it with experimental technique. “We study how the senses shape language,” the authors begin their general discussion, and given the clear evidence they present, it’s hard to disagree.

_________________________________ ResearchBlogging.org

Akpinar, E., & Berger, J. (2015). Drivers of cultural success: The case of sensory metaphors. Journal of Personality and Social Psychology, 109 (1), 20-34. DOI: 10.1037/pspa0000025

Post written by Alex Fradera (@alexfradera) for the BPS Research Digest.

Tuesday, 2 June 2015

Why do children stick their tongues out when they're concentrating?

Have you ever watched a young child perform a delicate task with their hands and noticed how they stick out their tongue at the same time? A new study is the first to systematically investigate this behaviour in four-year-olds. This isn't just a cute quirk of childhood, the findings suggest; rather, the behaviour fits the theory that spoken language originally evolved from gestures.

Gillian Forrester and Alina Rodriguez videoed fourteen 4-year-olds (8 boys), all right-handed, as they completed a number of tasks in their own homes. The tasks were designed to involve either very fine hand control (e.g. playing with miniature dolls or opening padlocks with keys), less fine control (e.g. a game of knock and tap, in which the child does the opposite to the researcher, be that knocking or tapping the table with their right hand), or no hand control (remembering a story).

The researchers studied the videos looking for how often the children stuck out their tongues during these different games, and whether they stuck them out towards the left or right side of their mouths.

All the children stuck out their tongues during the games and tasks, which supports past research with 5- to 8-year-olds that suggested this is a common behaviour. But crucially, the children stuck out their tongues more during some tasks than others, and most of all in the knock and tap game. This goes against expectations (the researchers thought the fine motor control games would provoke the most tongue protrusions) but Forrester and Rodriguez argue their surprise finding makes sense in terms of the evolutionary history of language. They explain the knock and tap game involves rapid turn-taking, hand gesturing and structure rules – what you could think of as "the foundational components of a communication system" or the rudiments of language.

This fits with another result, which is that most of the kids' tongue protrusions tended to be biased to the right, suggestive of control by the left brain hemisphere. The left side of the brain is the side that's more dominant for language in nearly all right-handers, so again we have a suggestion that children's gestural activities are accompanied by tongue protrusions because of the tongue and hands sharing a link with language and communication. The researchers think that adults (presumably excluding Miley Cyrus) suppress their own tongue protrusions because of the cultural connotations of sticking out your tongue.

Taken together with past research that's shown an overlap in the brain areas involved in speech and hand control, the researchers propose their new findings support the idea that the same communication system involves both the hand and the mouth, and that "hand and tongue actions possess a reciprocal relationship such that when structured sequences of hand actions are performed they are accompanied by spontaneous and synchronous tongue action".

_________________________________ ResearchBlogging.org

Forrester, G., & Rodriguez, A. (2015). Slip of the tongue: Implications for evolution and language development. Cognition, 141, 103-111. DOI: 10.1016/j.cognition.2015.04.012

--further reading--
Why do children hide by covering their eyes?

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Friday, 1 May 2015

Children use time words like "seconds" and "hours" long before they know what they mean

For adults, let alone children, time is a tricky concept to comprehend. In our culture, we carve it up into somewhat arbitrary chunks and attribute words to those durations: 60 seconds in a minute, and 60 of those in an hour and so on. We also have a sense of what these durations feel like. Children start using these time-related words at around the age of two or three years, even though they won't master clocks until eight or nine. This raises the question – what do young children really understand about duration words?

Katharine Tillman and David Barner began by asking dozens of three- to six-year-olds to compare several pairs of durations (e.g. Farmer Brown jumped for a minute. Captain Blue jumped for an hour. Who jumped more?). As well as minutes and hours, the other durations used were seconds, days, weeks, months and years. This test showed that by age four, the children tended to get more of these questions right than would be expected if they were just guessing, and they got better at the task with increasing age. In other words, from age four and up, children have a sense of the rank order of different duration terms.

What young children don't have, according to the findings from further experiments, is a sense of the actual lengths of time that these terms refer to. When the comparison test was repeated, but with different amounts of each duration, the children were flummoxed. Take, for example, the question "Farmer Brown jumped for three minutes. Captain Blue jumped for two hours. Who jumped more?" As adults, we aren't thrown by the minutes outnumbering the hours by three to two, because we know that an hour feels much longer than a minute, and is by definition 60 times longer. However, even five-year-olds, who know well the principle that an hour is longer than a minute, were thrown by these kinds of comparisons. This suggests they don't yet have a good understanding of the formal definitions of duration words, or of what the different durations feel like.
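The adult strategy the children lack can be sketched as simple unit conversion: translate both durations into a common unit before comparing their magnitudes. A minimal illustration (the unit table and function names are hypothetical, with calendar approximations for months and years):

```python
# Convert each duration to seconds, then compare. The table below is an
# assumption for illustration; months and years use calendar approximations.
SECONDS_PER = {
    "second": 1,
    "minute": 60,
    "hour": 60 * 60,
    "day": 24 * 60 * 60,
    "week": 7 * 24 * 60 * 60,
    "month": 30 * 24 * 60 * 60,   # approximate
    "year": 365 * 24 * 60 * 60,   # approximate
}

def longer(amount_a, unit_a, amount_b, unit_b):
    """Return which of two (amount, unit) durations is longer."""
    a = amount_a * SECONDS_PER[unit_a]
    b = amount_b * SECONDS_PER[unit_b]
    return "first" if a > b else "second" if b > a else "equal"

# Farmer Brown jumped for three minutes; Captain Blue jumped for two hours.
print(longer(3, "minute", 2, "hour"))  # → second
```

The point of the sketch is that the comparison only works once the words are mapped to magnitudes on a common scale, which is exactly the mapping the younger children haven't yet made.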

In another experiment, five- to seven-year-old children were asked to place different duration words along a horizontal line after the far-left end had been described to them as the location for "something very short, like blinking" and the far-right end as "something very long, like the time from waking up in the morning to going to bed at night". Again, before age six or seven, the children really struggled with this – even when they got the order correct, they tended to space the words out inappropriately, compared with how an adult would do it. Six- and seven-year-olds who knew the formal definitions of the duration words tended to perform better.

These findings mirror what's been found for the way children use words for other concepts like numbers and colours. Before they map the words onto actual perceptual experiences, they understand that words in a given domain are related, and (in the case of numbers and time), they have a sense of the relative magnitude of the concepts. But it's only after using such words for some years, and learning their formal definitions, that they fully connect the experience of the concept (such as the length of an hour, or the physical magnitude of a number) with its corresponding word.

"Our results indicate that proficiency in estimating the absolute time encoded by duration words emerges relatively late," the researchers said, "and may even rely on formal instruction in [primary] school."

_________________________________ ResearchBlogging.org

Tillman, K., & Barner, D. (2015). Learning the language of time: Children's acquisition of duration words. Cognitive Psychology, 78, 57-77. DOI: 10.1016/j.cogpsych.2015.03.001

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.


Friday, 12 September 2014

Psychologists compare the mental abilities of Scrabble and crossword champions

Completed Scrabble (left) and crossword grids (image from Toma et al 2014).
Every year, hundreds of word lovers arrive from across the US to compete in the American Crossword Puzzle tournament. They solve clues (e.g. "caught some Z's") and place the answers (e.g. "slept") in a grid. Meanwhile, a separate group of wordsmiths gather regularly to compete at Scrabble, the game that involves forming words out of letter tiles and finding a suitable place for them on the board.

Both sets of players have exceptional abilities, but how exactly do they differ from each other and from non-players of matched academic ability? Some answers are provided by Michael Toma and his colleagues, who have performed the first detailed comparison of the mental skills of the most elite crossword and Scrabble players in the US. Previous studies on gaming expertise have mostly involved chess players, so this is a refreshing new research angle.

Toma's team recruited 26 elite Scrabble players (in the top two per cent of competitive players, on average; 20 men) and 31 elite crossword players (in the top seven per cent, on average; 22 men) to complete several tests of working memory - the kind of memory that we use to juggle and use information over short time-scales.

For example, there was a visuospatial task that involved judging whether images were symmetrical, while also remembering highlighted locations in a series of grids that always appeared after each symmetry image. Another challenge was the reading span task (a test of verbal working memory), which involved judging the grammatical sense of sentences, while also remembering the order of individual letters that were flashed on-screen after each grammatical challenge.

The researchers anticipated that the Scrabble players would outperform the crossworders on visuospatial working memory, whereas they thought the crossword players might be superior on verbal working memory. These predictions were based on the contrasting skills demanded by the two games. Scrabble players often spend hours learning lists of words that are legal in the game, but unlike crossword players, they don't need to know their meaning. In fact many Scrabble players admit to not knowing the meaning of many of the words they play. On the other hand, Scrabble players need skills to rearrange letters and to find a place for their words on the board (a visuospatial skill), whereas crossword players do not need these skills so much because the grid is prearranged for them.

The researchers actually uncovered no group differences on any of the measures of visuospatial and verbal working memory. However, in line with predictions, the crossword competitors outperformed the Scrabble players on an analogies-based word task - identifying a pair of words that have the same relation to each other as a target pair - and the crossworders also had higher (self-reported) verbal SAT scores than the Scrabble players (SAT is a standardised school test used in the US). The two groups also differed drastically in the most important strategies they said they used during game play - for instance, mental flexibility was far more important for crossworders, whereas anagramming was important for Scrabble players but not mentioned by crossworders.

Both expert groups far outperformed a control group of high-achieving students on all measures of verbal and visuospatial working memory. This was despite the fact the students had similar verbal SAT levels to the expert players. So it seems the elite players of both games have highly superior working memory relative to controls, but this enhancement is not tailored to their different games.

Toma and his team said that by looking beyond chess and studying experts in cognitively demanding verbal games, their research "helps to build a more general understanding of the cognitive mechanisms that underlie elite performance." From a theoretical perspective, their finding of no working memory differences between Scrabble and crossword competitors is supportive of a domain general account of working memory - the idea that there exists a single mechanism that fulfils the processing of verbal and visuospatial information.

_________________________________ ResearchBlogging.org

Toma, M., Halpern, D., & Berger, D. (2014). Cognitive Abilities of Elite Nationally Ranked SCRABBLE and Crossword Experts. Applied Cognitive Psychology. DOI: 10.1002/acp.3059

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Monday, 2 June 2014

I mean, you know, I'm a conscientious person: Links between use of "speech fillers" and personality

Few people are capable of speaking spontaneously without, er, you know, pausing and using filler words every now and again. However, we all differ in the extent to which we do this, and now a study by US researchers has examined how use of filler words varies according to age, gender and personality.

Charlyn Laserna and her colleagues used recordings of everyday speech collected from hundreds of participants in earlier studies performed between 2003 and 2013. They specifically looked at utterances of uh, um (known as "filled pauses") and I mean, you know, and like (known as "discourse markers").

The purpose of these kinds of words is not straightforward - they can be a sign of being tongue tied, but they can also be a way to keep hold of one's turn in a conversation, to form a bridge between phrases or sections of conversation, to seek consensus, or convey uncertainty.

Use of discourse markers was more frequent among younger people, and among women versus men. However, the gender difference was only present in teen and student participants, and had disappeared by age 23. Discourse markers were also used more frequently by people with a more conscientious personality. Uhs and ums became less common with age, but their use was not related to gender or personality. This last point is somewhat surprising, since such hesitations are often assumed to be a sign of anxiety.

Why should use of phrases such as "like" and "you know" be related to conscientiousness? One possibility is that this is a false positive result - the researchers performed multiple comparisons looking for links between personality and word use, and this is known to increase the risk of spurious findings. However, assuming the finding is reliable, the researchers believe the explanation is that "conscientious people are generally more thoughtful and aware of themselves and their surroundings," and their use of discourse markers shows they have a "desire to share or rephrase opinions to recipients."

Stated slightly differently, discourse fillers are a sign of more considered speech, and so it makes sense that conscientious people use them more often. This is a result that may surprise some, including the veteran actress Miriam Margolyes, who publicly castigated the pop star will.i.am for his overuse of "like". The researchers didn't propose any explanation for why age and gender are related to use of discourse fillers.

Laserna and her team believe their findings are useful because they suggest that people's habits of speech can be used to make inferences about their personality, age and gender. "From a methodological standpoint, the use of discourse markers can provide a quick behavioural measure of personality traits," they said. So, you know, don't be put off next time you hear someone, like, using discourse fillers. I mean, it could actually be a sign that they're conscientious.

_________________________________ ResearchBlogging.org

Laserna, C., Seih, Y., & Pennebaker, J. (2014). Um . . . Who Like Says You Know: Filler word use as a function of age, gender, and personality. Journal of Language and Social Psychology, 33(3), 328-338. DOI: 10.1177/0261927X14526993

--further reading/watching--
The effect of, er, hesitations in speech
Greater use of "I" and "me" as a mark of interpersonal distress
Miriam Margolyes castigates will.i.am for his use of the discourse filler "like"

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Tuesday, 6 May 2014

The clinical reality of neurodevelopmental disorders

Some years after I had completed my clinical training, I remember meeting a ten-year-old boy, Peter. Peter's parents wanted my opinion on his language abilities, and I was able to assess him and discover he did have poor language comprehension, as well as some difficulties formulating sentences. It seemed to me that he would fit the picture of having a developmental language disorder.

However, when I discussed his history with his parents, it turned out that Peter had all sorts of other problems. A paediatrician had noted that he was overactive and inattentive, and had diagnosed attention deficit hyperactivity disorder (ADHD). He'd later seen an educational psychologist who thought he was dyslexic. An occupational therapist who visited the school had noticed he was clumsy and suggested he had developmental dyspraxia, and more recently, as his rather obsessive interest in dinosaurs had become more pronounced, his teacher had wondered if he should be assessed for autistic spectrum disorder. His parents, needless to say, were utterly baffled and were continuing to seek out specialists in the hope that they might find someone who would give them the right diagnosis.

In fact, it is likely that the experts were all right in some regards and wrong in others. The so-called neurodevelopmental disorders tend to co-occur, so that a child who has symptoms of one condition often also has symptoms of another. Thus an expert looking at the child through the lens of their own specialism may well find evidence of problems in that area. However, the experts were all wrong insofar as they assumed the child's difficulties could be accounted for by a single diagnostic label. It is easy to be misled by the diagnostic frameworks exemplified in the World Health Organisation's International Classification of Diseases (ICD) and the American Psychiatric Association's Diagnostic and Statistical Manual (DSM) into thinking that children's neurodevelopmental problems divide neatly into discrete, clearcut disorders.

The Swedish psychiatrist, Christopher Gillberg, has for many years been interested in the overlaps between neurodevelopmental disorders, and in 2010 he proposed the term ESSENCE (Early Symptomatic Syndromes Eliciting Neurodevelopmental Clinical Examinations) to cover the whole group of conditions affecting attention, inhibition, social behaviour, language and learning, motor skills and perception. In a recent paper from his group, Pettersson et al have gone a step further and attempted to find out more about the causes of the symptom overlap.

They were particularly interested in the idea that there might be a general genetic factor that put the child at risk for the entire range of neurodevelopmental disorders. They were able to take advantage of a giant nationwide study of Swedish twins, from which they could use data from around 6000 same-sex twin pairs. If genes play a part in determining who gets a neurodevelopmental disorder, then monozygotic twins – formed by the splitting of a single fertilised egg – should be more similar to one another than dizygotic twins, formed when two eggs are fertilised at the same time. And, of particular interest for this study, it is possible to extend the logic to see if there is evidence for genetic influence that operates across different symptoms. For instance, if one twin has poor attention, is their co-twin at increased risk of tics, and if so, does the twin similarity depend on their genetic relationship?
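The logic of comparing twin types can be sketched with the textbook ACE decomposition (Falconer's formulas). To be clear, this is a classroom simplification for illustration, not the structural equation model the paper actually used, and the correlation values below are invented:

```python
# A textbook sketch of classic twin logic (Falconer's formulas), not the
# paper's actual analysis. MZ twins share ~100% of their genes, DZ twins
# ~50%, so comparing their trait correlations splits the variance.

def ace_estimates(r_mz, r_dz):
    """Decompose trait variance into additive genetic (A), shared
    environment (C) and non-shared environment (E) components from
    monozygotic and dizygotic twin correlations."""
    a = 2 * (r_mz - r_dz)   # additive genetic influence (heritability)
    c = 2 * r_dz - r_mz     # shared (e.g. home) environment
    e = 1 - r_mz            # non-shared environment plus measurement error
    return a, c, e

# Invented values: MZ twins twice as similar as DZ twins, the pattern
# expected when familial resemblance is entirely genetic.
a, c, e = ace_estimates(r_mz=0.6, r_dz=0.3)
print(a, c, e)  # → 0.6 0.0 0.4
```

On this toy pattern the shared-environment term comes out at zero, which mirrors the paper's striking result that nothing made twins more similar regardless of zygosity.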

This study used fiendishly complex analytic methods that combine factor analysis and structural equation modelling to address this issue for 53 questionnaire items that spanned the range of ESSENCE symptoms. The results, however, seemed extremely clearcut. The pattern of results agreed with a model in which there was a single genetic factor that increased the risk for the whole gamut of symptoms. There were other genetic factors that conferred more specific risks relating to symptom clusters of impulsivity, learning problems, and tics/autism, but these had smaller effects. Remarkably, there was no evidence of factors that made twins more similar regardless of their zygosity, as would be expected if neurodevelopmental disorders were affected by aspects of the home environment.

The implications of this work for clinicians are spelled out by the authors thus: "When children display problems in one area, it might be more important to, as early as possible, set up a strategy for helping with all related symptoms rather than trying to help only with a specific diagnosis (which often will change over time)." We may feel comfortable with our domain-specific labels for neurodevelopmental disorders, but they do not capture the clinical reality.

_________________________________ ResearchBlogging.org
Pettersson, E., Anckarsäter, H., Gillberg, C., & Lichtenstein, P. (2013). Different neurodevelopmental symptoms have a common genetic etiology. Journal of Child Psychology and Psychiatry, 54(12), 1356-65. PMID: 24127638

Post written for the BPS Research Digest by guest host Dorothy Bishop, Professor of Developmental Neuropsychology and a Wellcome Principal Research Fellow at the Department of Experimental Psychology in Oxford, Adjunct Professor at The University of Western Australia, Perth, and a runner up in the 2012 UK Science Blogging Prize for BishopBlog.

Tuesday, 18 March 2014

How thinking in a foreign language makes you more rational in some ways but not others

Back in 2012, US researchers showed that when people used their second, non-native language, they were less prone to a mental bias known as loss aversion. This bias means we judge the same outcome differently depending on whether it's framed in terms of what's to be lost or what's to be gained. For example, a vaccine is more appealing if it's stated that it will save 200,000 out of 600,000 people; far less appealing if the same outcome is described as 400,000 people dying. In a sense, then, the US research suggested that using a second language makes our thinking more rational.

Now Albert Costa and his colleagues have investigated the limits of this "foreign language effect". "… [M]any people in today's world interact and make decisions in a foreign language making it crucial to understand how decisions are affected by language," they said.

In all, the researchers tested over 700 people. Most were Spanish students whose first language was Spanish, but who also spoke English learned in a classroom. Some native Arab speakers were also tested (second language Hebrew), and some native English speakers (second language Spanish).

Costa's team first tested their participants on decision-making tasks that involved loss aversion and other forms of uncertainty. For example, they used the Ellsberg Paradox, in which participants were offered the chance of rewards if they picked the right coloured token (black or red) out of a jar where they knew the balance of colours (50-50), or out of a jar where they didn't know the balance. The Ellsberg Paradox describes the fact that people usually prefer to make successive gambles from the jar where they know the balance, even though this behaviour implies that they believe the unknown jar contains less than half red tokens when betting on red, and less than half black tokens when betting on black - a logical impossibility, since the jar contains only those two colours.
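The inconsistency can be made concrete. Each choice of the known jar places a bound on your implied belief about the unknown jar's red proportion; preferring it for both colours leaves no belief that satisfies both bounds. A minimal sketch (the function is hypothetical, for illustration only):

```python
# Why the Ellsberg pattern is incoherent: let p be your subjective
# probability that a token drawn from the unknown jar is red (so black
# is 1 - p). Preferring the known 50-50 jar when betting on a colour
# implies you think that colour is under-represented in the unknown jar.

def implied_beliefs_consistent(prefers_known_for_red, prefers_known_for_black):
    """Check whether a pair of jar choices can come from one coherent
    belief p about the unknown jar's proportion of red tokens."""
    upper = 0.5 if prefers_known_for_red else 1.0    # betting red: p < 0.5
    lower = 0.5 if prefers_known_for_black else 0.0  # betting black: p > 0.5
    # Some p must fit strictly between the bounds for the choices to cohere.
    return lower < upper

print(implied_beliefs_consistent(True, True))   # → False  (the paradox)
print(implied_beliefs_consistent(True, False))  # → True   (coherent)
```

Preferring the known jar for red alone is perfectly coherent (you simply think the unknown jar is mostly black); it is preferring it for both bets that admits no consistent belief.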

The main result here was that the participants tended to show less irrationality when completing these decision-making tasks in a foreign language. A feasible explanation, the researchers said, is that the emotional element of these kinds of tasks - ones based on uncertainty and fear of loss - usually encourages more intuitive, less rational thinking. But when a second language is used, this dulls the emotional intensity of the decisions, thereby encouraging deeper, more rational thought.

If this explanation is correct, then we'd expect the foreign language effect to be absent or reduced for decision-making tasks that are emotionally neutral. That's exactly what Costa and his colleagues found. For example, the participants completed several tests of cognitive reflection. These involved questions like "If it takes 5 machines 5 minutes to make 5 keyboards, how long would it take 100 machines to make 100 keyboards?" Answering these questions correctly involves deliberate reflection, and answers based on intuition tend to be wrong. This time, there was no evidence of a foreign language effect - participants tended to answer these kinds of questions wrongly even when using their second language.
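The deliberate route to the machines question is rate arithmetic: each machine makes one keyboard per five minutes, so 100 machines working in parallel make 100 keyboards in those same five minutes, not the intuitive 100 minutes. As a sketch (a hypothetical helper that generalises the question's base rate):

```python
# The reflective answer via rate arithmetic: the base scenario fixes a
# cost in machine-minutes per keyboard; scale that by the target job.

def minutes_needed(machines, keyboards,
                   base_machines=5, base_keyboards=5, base_minutes=5):
    """Time for `machines` machines to make `keyboards` keyboards,
    given the base rate stated in the question."""
    # 5 machines * 5 minutes / 5 keyboards = 5 machine-minutes per keyboard.
    machine_minutes_per_keyboard = base_machines * base_minutes / base_keyboards
    return keyboards * machine_minutes_per_keyboard / machines

print(minutes_needed(100, 100))  # → 5.0, not the intuitive 100
```

The intuitive but wrong answer of 100 comes from pattern-matching the surface numbers rather than working through the rate, which is exactly the shortcut the cognitive reflection test is designed to catch.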

Costa and his colleagues acknowledged there's lots more research to be completed on the foreign language effect. Other possible factors at play include effects of cognitive fluency and cognitive load - the mental effort of speaking a foreign tongue. For now, however, the results are consistent with the idea that using a second language can make us more rational on decision tasks that have an emotional element. If you're bilingual and confronted with such a decision, it could be worth thinking it through in your second tongue. Bonne chance!
_________________________________ ResearchBlogging.org

Costa, A., Foucart, A., Arnon, I., Aparici, M., & Apesteguia, J. (2014). "Piensa" twice: On the foreign language effect in decision making. Cognition, 130(2), 236-54. PMID: 24334107

--further reading--
We think more rationally in a foreign language.

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.