We need it now!
Business solutions in 48 hours! Get your survey data overnight! Do agile research! Fast, faster, fastest!
Yes, it seems the insights world is moving faster and faster every day. Many companies are promising turnaround times that would have seemed absurd just a few years ago. Shorter questionnaires, automation, and DIY solutions all offer speed and more speed.
But there’s one big question with this race to be faster than everyone else: what’s getting sacrificed?
Fast and inaccurate
No matter how a questionnaire is designed or how data processing or reporting are automated, there’s still an important component to any quantitative study: respondents. And while online research panels can give you access to thousands of respondents in just hours, panel quality ain’t gettin’ any better, folks.
As regular users of panels, we are also regular recipients of bad respondents mixed in with the good ones:
- Research bots
- Duplicate respondents
- Click farms
- Straightliners
- Speeders
- Other kinds of obvious cheaters
You can’t leave it to algorithms
But aren’t panel companies and field agencies screening out the bad respondents for you? Well, some are trying, but many of their solutions are automated (again, in the interests of being cheaper and faster). For example, they’ll employ an algorithm that automatically tosses any respondent who completes the questionnaire in less than 50% of the average completion time, or one that catches straightliners in all your grids (that is, if you’re still using lots of grids).
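To see how thin these screens are, here’s a minimal Python sketch of that kind of automated logic. The column names (`duration_seconds` and the grid columns) are hypothetical, and this illustrates the general approach, not any vendor’s actual code:

```python
import pandas as pd

def automated_screen(df: pd.DataFrame, grid_cols: list[str]) -> pd.DataFrame:
    """Naive automated checks of the kind many panels rely on."""
    # Speeder rule: drop anyone who finished in under 50% of the average time.
    cutoff = 0.5 * df["duration_seconds"].mean()
    speeders = df["duration_seconds"] < cutoff

    # Straightliner rule: drop anyone who gave one identical answer
    # across every item in the grid.
    straightliners = df[grid_cols].nunique(axis=1) == 1

    return df[~(speeders | straightliners)]
```

A bot that paces itself and varies its grid answers sails through both rules without breaking a sweat.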
Frankly, they just miss a lot.
What needs to be done
Panel quality is atrocious today. Grey Matter Research has adopted the position that every respondent we get is a bad respondent until we can demonstrate otherwise. This takes a lot more than digital fingerprinting or pre-programmed algorithms. Usually, it requires going line-by-line through the data to find and remove problem respondents. Just a few ways we do this (with a rough sketch of what such checks can look like after the list):
- We review every response to every open-end. Even after the field agency or panel has done its quality control checks, we regularly receive verbatims that just say “great,” give answers that have nothing to do with the question, or are outright copies of the question itself, which a bot picked up from the questionnaire and pasted back as the answer.
- We look hard for duplicates. Despite claims that digital fingerprinting solves this problem, we regularly find dozens of duplicates in a sample. The chances that a survey database of 600 respondents contains two 43-year-old Hispanic women from Iowa? The chances that both are football fans who spelled their favorite team as the Pittsbergh Stellers? And that they just happened to complete the questionnaire 15 minutes apart? Not a chance.
- We search for logical anomalies, which are different in every questionnaire. In various recent studies, we’ve thrown out people who claimed to have been in both Boy Scouts and Girl Scouts as kids, those who make under $30,000 annually but reported giving $40,000 to charity last year, those who supposedly live one mile away from four different local hospitals that are 75 miles apart, and those who belong to a non-existent organization (with a name that couldn’t be confused with a real one).
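None of these checks is exotic; the work is in running them on every study and tailoring them to every questionnaire. Here is a rough Python sketch of the kinds of checks described above (every column name is hypothetical, and the rules are invented stand-ins, not our actual procedures):

```python
import pandas as pd

def flag_suspects(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative duplicate and logical-anomaly flags; real checks vary by study."""
    out = df.copy()

    # Possible duplicates: identical demographics plus an identical verbatim.
    # Two respondents who both typed "Pittsbergh Stellers" are the tell.
    dup_key = ["age", "ethnicity", "gender", "state", "favorite_team_verbatim"]
    out["dup_flag"] = out.duplicated(subset=dup_key, keep=False)

    # Logical anomaly: charitable giving that exceeds stated annual income.
    out["giving_flag"] = out["charity_given_last_year"] > out["annual_income"]

    # Open-end sanity: an "answer" that is just the question text pasted back.
    out["parrot_flag"] = out.apply(
        lambda r: r["openend_answer"].strip().lower()
        in r["openend_question"].lower(),
        axis=1,
    )
    return out
```

The point isn’t the code; it’s that each rule has to be invented fresh from the logic of the specific questionnaire, which no pre-programmed algorithm will do for you.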
Of course, respondents do make mistakes or misread questions, so the decision to toss a respondent usually comes from a combination of factors. They straightlined the one short grid we included? Mark ‘em yellow. They also completed the 12-minute questionnaire in 8 minutes? Downgrade to orange. Then answered the question “What are the main reasons you are not at all interested in learning more about this product?” with “I like this advertisement the best”? Buh-bye.
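In other words, it works less like a single pass/fail test and more like an accumulating score. A minimal sketch of that idea (the flag names and thresholds are invented for illustration; in practice the weighting is a judgment call made per study, not a fixed formula):

```python
def disposition(flags: dict[str, bool]) -> str:
    """Combine individual warning signs into a keep/yellow/orange/remove call."""
    score = sum(flags.values())  # one point per warning sign
    if score >= 3:
        return "remove"  # buh-bye
    if score == 2:
        return "orange"  # suspicious; scrutinize everything else they said
    if score == 1:
        return "yellow"  # noted, but one slip alone isn't disqualifying
    return "keep"

# The respondent from the paragraph above: straightlined the grid,
# rushed the 12-minute questionnaire, and gave a nonsense open-end.
print(disposition({
    "straightlined_grid": True,
    "speeder": True,
    "nonsense_openend": True,
}))  # -> "remove"
```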
How speed hurts
So what does any of this have to do with speed? (Or with brownies…but I’ll get to that in a moment.) Simple: this cleaning process is not a fast one. It doesn’t have to take days, but it won’t be done in minutes, either. In the quest to get your data faster, how many of the respondents you’re getting are bots, duplicates, satisficers, or people who just weren’t paying attention to the questions you asked?
Do you have any idea how many bad respondents had to be replaced on your last study? Or what criteria your vendor used to identify fraudulent or poor-quality respondents?
Most importantly: Did your vendor even do anything beyond some basic, automated checks to ensure you got real, quality respondents?
Make no mistake: this is not just a problem with quick-turnaround surveys. I’ve seen plenty of databases delivered in no particular hurry that still lacked proper quality control. But going all-out for speed dramatically increases the chances that your data includes some bad respondents, because putting everything on a rush basis leaves far less time for quality control.
So…brownies?
In a qualitative interview last month, I had a respondent object to a product concept, because she felt one small part of the statement was not true. When I probed for why this undermined the whole concept, she earthily explained, “Even a little bit of poop in the brownie batter means I’m not going to eat the brownies.”
So what proportion of bad respondents are you willing to accept in order to get your data faster: 2%? 5%? 10%? 20%?
Or, to paraphrase my favorite respondent of the year so far: How much poop will you accept in your batter in order to get your research brownies baked faster?