Tuesday, 14 September 2010

What are participants really up to when they complete an online questionnaire?

Internet surveys are an increasingly popular way of collecting data in psychology, not least because they are cheap and can reach large samples quickly, but they have some serious shortcomings. How do you know whether a participant read the instructions properly? What if they clicked through randomly, completed it drunk, or maybe their cat walked across the keyboard? Now a possible solution has arrived in the form of a tool called the UserActionTracer (UAT), developed by Stefan Stieger and Ulf-Dietrich Reips.

The UAT is a piece of code that tells the participant's web browser to record, with timings, every single and double mouse click, drop-down menu choice, radio button selection, piece of inserted text, key press and movement of the mouse pointer. Stieger and Reips tested this out with a survey of 1,046 participants on the subject of instant messaging. The new tool revealed that 31 participants changed their reported age; 5.9 per cent made suspicious changes to opinions they'd given; 46 per cent clicked through at least some parts of the questionnaire at a suspiciously fast rate (mainly for so-called 'semantic differential' items, in which the participant must choose a position between two contrasting adjectives); 3.6 per cent of participants left the questionnaire inactive for long periods; 6.3 per cent displayed excessive clicking; and 11 per cent showed excessive mouse movements (it's that cat again).
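To give a flavour of how this kind of paradata capture can be done in an ordinary web page, here is a minimal TypeScript sketch using standard browser events. It is not the authors' UAT code; the event list, the throttling interval and the server endpoint mentioned in the final comment are assumptions made purely for illustration.

```typescript
// A minimal sketch of browser-side paradata logging (not the authors' UAT code).

interface ParadataEvent {
  type: string;     // e.g. 'click', 'keydown', 'mousemove'
  target: string;   // id (or tag name) of the element involved
  value?: string;   // new value for text fields, radio buttons, drop-downs
  timeMs: number;   // milliseconds since the page loaded
}

const paradataLog: ParadataEvent[] = [];

function describeTarget(el: EventTarget | null): string {
  return el instanceof HTMLElement ? (el.id || el.tagName.toLowerCase()) : 'unknown';
}

function record(type: string, target: EventTarget | null, value?: string): void {
  paradataLog.push({ type, target: describeTarget(target), value, timeMs: performance.now() });
}

// Single clicks, double clicks and key presses
['click', 'dblclick', 'keydown'].forEach(type =>
  document.addEventListener(type, e => record(type, e.target))
);

// Changes to text fields, radio buttons and drop-down menus
document.addEventListener('change', e => {
  const el = e.target as HTMLInputElement | HTMLSelectElement;
  record('change', e.target, el.value);
});

// Mouse position, throttled so the log does not grow with every pixel of movement
let lastMove = 0;
document.addEventListener('mousemove', e => {
  const now = performance.now();
  if (now - lastMove > 250) {
    record('mousemove', e.target, `${e.clientX},${e.clientY}`);
    lastMove = now;
  }
});

// On submission the log would be serialised and sent to the server alongside the
// answers, e.g. navigator.sendBeacon('/paradata', JSON.stringify(paradataLog));
```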

As a way of checking the usefulness of this extra behavioural data, the researchers concentrated on the fraction of participants for whom they had access to a secondary source of information that could be used to verify the questionnaire answers. This showed that participants who'd displayed more suspicious behaviour while filling out the questionnaire also tended to provide answers that didn't match up with the other information source.

'Our study shows that the UAT was successful in collecting highly detailed information about individual answering processes in online questionnaires,' Stieger and Reips said. Another application of the tool is in pre-testing of online questionnaires. Researchers could use the tool to test which items tend to prompt corrections or inappropriate click-throughs before rolling out a questionnaire to a larger sample.
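As an illustration of what such a pre-test check could look like, here is a small sketch that scans collected paradata for items attracting many answer revisions or suspiciously fast first answers. The event format and the one-second threshold are assumptions made for the example, not details taken from the paper.

```typescript
// Sketch of an automated pre-test check over collected paradata.
// The event format and threshold are illustrative assumptions.

interface AnswerEvent {
  participant: string;  // anonymous participant id
  item: string;         // questionnaire item the answer belongs to
  timeMs: number;       // ms since the participant loaded the page
}

interface PreTestReport {
  revisions: Map<string, number>;    // item -> number of answer revisions
  fastAnswers: Map<string, number>;  // item -> first answers given very quickly
}

function preTestReport(events: AnswerEvent[], fastThresholdMs = 1000): PreTestReport {
  // Group answer events by participant so timings are compared per person.
  const byParticipant = new Map<string, AnswerEvent[]>();
  for (const ev of events) {
    const list = byParticipant.get(ev.participant) ?? [];
    list.push(ev);
    byParticipant.set(ev.participant, list);
  }

  const revisions = new Map<string, number>();
  const fastAnswers = new Map<string, number>();
  const bump = (m: Map<string, number>, key: string) =>
    m.set(key, (m.get(key) ?? 0) + 1);

  for (const list of byParticipant.values()) {
    list.sort((a, b) => a.timeMs - b.timeMs);
    const answered = new Set<string>();
    let prevTime = 0;
    for (const ev of list) {
      if (answered.has(ev.item)) {
        bump(revisions, ev.item);      // answer changed after first being set
      } else {
        answered.add(ev.item);
        if (ev.timeMs - prevTime < fastThresholdMs) {
          bump(fastAnswers, ev.item);  // first answer given within 1s of the previous action
        }
      }
      prevTime = ev.timeMs;
    }
  }
  return { revisions, fastAnswers };
}
```

Items that top either list would be candidates for rewording before the questionnaire goes out to the full sample.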
_________________________________

Stieger, S., & Reips, U.-D. (2010). What are participants doing while filling in an online questionnaire: A paradata collection tool and an empirical study. Computers in Human Behavior, 26(6), 1488-1495. DOI: 10.1016/j.chb.2010.05.013

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

5 comments:

  1. It is worth noting that not all of the "excessive mouse movements" need be suspicious. I tend to circle my mouse around the screen whenever I read something, so it's possible that this is something others do also.

    Interesting tool though, something I may try in the future.

  2. Yet another reason for never responding to questionnaires - Big Brother is watching your mouse...

  3. Cavalle,

    No psychologist working in academia will either know or care who you are. We're interested in overall trends amongst large groups of people rather than figuring out anything about an individual. Please don't let this article stop you from responding to online questionnaires for research, as it is something that is very important to take part in.

  4. Julien:

    The major reason for collecting data through the internet is its speed, and data can be collected while you work (or not) on something else.
    Recording all behaviors during the completion of questionnaires might be interesting, but if you need to "replay" all of the subjects' mouse movements to check for "strange" or "suspicious" behaviors, web experiments aren't that appealing anymore...
    Recording the data is pretty simple, but the real challenge would be to automate the checking process...

  5. I would think it's important to have information about what people scroll through, as this can tell you how the respondent feels about the test. I have found that the kind of survey that has endless rating questions gets really really tedious, especially if you're meant to go through a set of ratings that you might not feel applies. I scroll through large blocks of ratings too. Maybe survey makers can learn from this info to make more user-friendly surveys.
