A fascinating study has shown that we're unable to glean insights into ourselves from watching a video of our own body language. It's as if we have an egocentric blind spot. Outside observers, by contrast, can watch the same video and draw revealing insights about our personality.
The premise of the new study is the tip-of-the-iceberg idea that what we know consciously about ourselves is fairly limited, with much of our self-knowledge lying beyond conscious access. The researchers wondered whether people would be able to form a truer picture of themselves when presented with a video of their own body language.
In an initial study, Wilhelm Hofmann and colleagues first had dozens of undergrad students rate how much of an extravert they were, using both explicit and implicit measures. The explicit measure simply required the students to say whether they agreed that they were talkative, shy and so on. The implicit measure was the Implicit Association Test (IAT), intended to tap into subconscious self-knowledge. Briefly, this test reveals how strongly people associate ideas in their mind (such as 'self' and 'shy'), by seeing whether they are quicker or slower to respond when two ideas are allocated the same response key on a keyboard.
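To make the latency-comparison idea concrete, here is a minimal sketch of how an IAT-style score could be computed from response times. The function name, the example latencies, and the simple scaled-difference formula are all illustrative assumptions (loosely modelled on common IAT scoring practice), not details taken from the study itself.

```python
# Hypothetical sketch of IAT-style scoring: compare mean response times
# between the two key pairings, scaled by the spread of all latencies.
# All numbers below are made up for illustration.
from statistics import mean, stdev

def iat_d_score(congruent_ms, incongruent_ms):
    """Mean latency difference between pairings, divided by the
    pooled standard deviation of all latencies."""
    pooled_sd = stdev(congruent_ms + incongruent_ms)
    return (mean(incongruent_ms) - mean(congruent_ms)) / pooled_sd

# A participant who responds faster when 'self' and 'shy' share a key
# than when 'self' and 'talkative' do:
congruent = [650, 700, 620, 680, 710]      # 'self' + 'shy' share a key
incongruent = [820, 790, 860, 800, 840]    # 'self' + 'talkative' share a key

score = iat_d_score(congruent, incongruent)
print(round(score, 2))  # a positive score suggests a stronger self-shy association
```

The intuition is simply that well-practised associations are retrieved faster, so the pairing that slows a person down reveals which association is weaker in their mind.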
Next, the participants recorded a one-minute television commercial for a beauty product (they'd been told the study was about personality and advertising). The participants then watched back the video of themselves, having been given guidance on non-verbal cues that can reveal how extraverted or introverted a person is. Based on their observation of the video, they were then asked to rate their own personality again, using the explicit measure.
The key question was whether seeing their non-verbal behaviour on video would allow the participants to rate their personality in a way that was consistent with their earlier scores on the implicit test.
Long story short - they weren't able to. The participants' extraversion scores on the implicit test showed no association with their subsequent explicit ratings of themselves, and there was no evidence either that they'd used their non-verbal behaviours (such as amount of eye contact with the camera) to inform their self-ratings.
In striking contrast, outside observers who watched the videos made ratings of the participants' personalities that did correlate with those same participants' implicit personality scores, and it was clear that it was the participants' non-verbal behaviours that mediated this correlation (that is, the observers had used the participants' non-verbal behaviours to inform their judgements about the participants' personalities).
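For readers unfamiliar with the term, "mediated" here has a specific statistical meaning: the link between implicit scores and observer ratings runs through the non-verbal behaviours. The toy simulation below (not the authors' analysis; all data and coefficients are invented) shows the classic signature of mediation: the predictor's effect shrinks toward zero once the mediating variable is controlled for.

```python
# Toy illustration of statistical mediation with simulated data:
# implicit score -> nonverbal cue -> observer rating.
import numpy as np

rng = np.random.default_rng(0)
n = 500
implicit = rng.normal(size=n)                          # implicit extraversion score
cue = 0.8 * implicit + rng.normal(scale=0.5, size=n)   # nonverbal behaviour (e.g. eye contact)
observer = 0.9 * cue + rng.normal(scale=0.5, size=n)   # observer rating, driven only by the cue

def slope_on_first(y, *predictors):
    """Least-squares coefficient on the first predictor (with intercept)."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

total = slope_on_first(observer, implicit)        # total effect of implicit score
direct = slope_on_first(observer, implicit, cue)  # effect after controlling for the cue

print(f"total={total:.2f}, direct={direct:.2f}")  # direct effect shrinks toward zero
```

Because the observer ratings in this simulation depend on the implicit score only via the cue, controlling for the cue leaves almost no direct effect, which is the pattern the researchers report for their observers.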
Two further experiments showed that this general pattern of findings held even when participants were given a financial incentive to rate their own personality accurately, as if from an outside observer's perspective, and also when the task involved rating their own anxiety after delivering a short speech.
What was going on? Why can't we use a video of ourselves to improve the accuracy of our self-perception? One answer could lie in cognitive dissonance - the discomfort we feel when holding inconsistent beliefs about ourselves. People may well be extremely reluctant to revise their self-perceptions, even in the face of powerful objective evidence. A detail in the final experiment supports this idea: participants seemed able to use the videos to inform their ratings of their "state" anxiety (their anxiety "in the moment") even while leaving their scores for their "trait" anxiety unchanged.
"When applied to the question of how people may gain knowledge about their unconscious self, the present set of studies demonstrates that self-perceivers do not appear to pay as much attention to and make as much use of available behavioural information as neutral observers," the researchers said.
Hofmann, W., Gschwendner, T., & Schmitt, M. (2009). The road to the unconscious self not taken: Discrepancies between self- and observer-inferences about implicit dispositions from nonverbal behavioural cues. European Journal of Personality, 23(4), 343-366. DOI: 10.1002/per.722