If you have participated in a survey online, you have probably used sliders at some point. Survey designers often include sliders to enhance respondent engagement, or to make a larger scale (e.g. 1 – 100) seem more approachable and natural than asking people to type in a number.

Even if you’ve used sliders as a participant, I’m hoping you haven’t done so as a researcher. A new report from Grey Matter Research shows that if you have, your sliders may very well have biased your data.

In an online survey of 1,700 adults (a demographically representative general population sample from an online panel, conducted in English and Spanish), Grey Matter included a couple of question sets with sliders. One used a seven-point scale and the other a five-point scale.

The problem with sliders is that, unlike radio buttons, they require a starting position on the screen. The slider button respondents move must start somewhere on the scale – at the low point, at the mid-point, at the high point, or somewhere else.

This creates a couple of problems. First, let’s say you decide to start your slider at the mid-point of a seven-point scale (a 4). What do you do if the respondent wants to select a 4 as her answer? You can accept the lack of movement of the slider as a legitimate response, but then you can’t differentiate between people who purposely wanted to choose a 4 and those who just didn’t bother to move the slider. Or you can force movement of the slider in order for the response to be recorded, but then someone who wants to choose a 4 must move it off the 4 and back on again. (This is the approach we took in our study.)
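
For illustration, here is a rough sketch of that design choice, assuming a plain HTML range input; the type, the function, and the requireMovement flag are hypothetical names, not the actual instrument used in the study.

```typescript
// Track whether the respondent ever moved the slider so a value equal to
// the default starting position can be told apart from "never touched".
type SliderResponse = {
  value: number | null;   // null until an answer is accepted
  touched: boolean;       // did the respondent move the control at all?
};

function attachSlider(input: HTMLInputElement, requireMovement: boolean): SliderResponse {
  const response: SliderResponse = { value: null, touched: false };

  input.addEventListener("input", () => {
    response.touched = true;              // any movement counts, even back onto the default
    response.value = Number(input.value);
  });

  input.form?.addEventListener("submit", () => {
    if (!response.touched && !requireMovement) {
      // Option 1: accept the untouched default as a legitimate answer
      // (but then a deliberate mid-point choice and a skipped slider look identical).
      response.value = Number(input.value);
    }
    // Option 2 (requireMovement = true): an untouched slider stays null,
    // forcing anyone who wants the default value to move it off and back on.
  });

  return response;
}
```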

But a far bigger problem is that our research found people’s answers depended significantly on where the slider started. We randomized the starting point (one-third of respondents saw it at the bottom of the scale, one-third in the middle, and one-third at the top) and evaluated the data after about 500 completes.
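
As a concrete illustration of that randomization, here is a minimal sketch; the function names and the mapping from condition to starting value are assumptions based on the description above, not the study’s actual code.

```typescript
// Randomly assign each respondent one of three starting-point conditions,
// so roughly one-third see the slider start at the bottom, middle, or top.
type StartCondition = "bottom" | "middle" | "top";

function assignStartCondition(): StartCondition {
  const conditions: StartCondition[] = ["bottom", "middle", "top"];
  return conditions[Math.floor(Math.random() * conditions.length)];
}

// Map the assigned condition to an initial slider value on a 1..scaleMax scale.
function startValue(condition: StartCondition, scaleMax: number): number {
  switch (condition) {
    case "bottom": return 1;
    case "middle": return Math.ceil(scaleMax / 2); // e.g. 4 on a seven-point scale
    case "top":    return scaleMax;
  }
}
```

The condition each respondent was assigned is stored alongside their answers, which is what makes it possible to measure the starting-point bias rather than have it silently baked into the results.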

Across five questions on the five-point scale and nine questions on the seven-point scale, we found pervasive bias tied to the starting point. People whose slider started in the middle of the scale were more likely to choose a mid-point answer. Those whose slider started at the top were more likely to choose a higher number.

But the effect was particularly strong at the bottom end of the scale. People who saw their slider start at the bottom were strongly biased to choose a low number on the scale. Up to three times more likely, in fact, than people who started elsewhere. It doesn’t take a research genius to see the problem here, nor to realize how much worse it would be if we hadn’t randomized the starting points on our sliders.

All of the nasty details are in the report How Sliders Bias Survey Data, which is available upon request from Grey Matter Research. You’ll read why Grey Matter no longer uses sliders in any survey.

Although our latest work focused specifically on sliders, there’s a bigger issue here – how much are attempts at respondent engagement corrupting the data we get? When we move away from tried-and-true questionnaire design in quantitative studies and start using things such as drag-and-drop, gamification, cartoon icons that “guide” respondents through the questions, thermometer-style graphic measures, and other approaches, are we sure that we’re getting the benefits of respondent engagement without the downside of simply getting wrong data?

And even bigger than that is the issue of why we need respondent engagement in the first place – is it because people are bombarded with too many extremely long questionnaires, surveys that don’t really apply to them, repetitive question sets, lengthy and boring grids, and other things that are making participation tedious and causing respondents to lose interest?

How much better would it be if we simply designed a good, simple, relatively brief questionnaire that respects our respondents and doesn’t require us to resort to tricks and gimmicks to keep them engaged?

Sliders may bring intense brand loyalty to White Castle, but they’re probably best left to the fast food industry rather than the research world.
