I try to stay up on what’s happening within the research industry by participating as a research respondent whenever I can. I just spent about an hour today responding to a few survey opportunities through a couple of online panels of which I am a member.
OMG.
It fascinates me how many intense debates there are about the future of the marketing research industry, and about fine points of the research process. Is big data the future? Will microsurveys revolutionize research? Will mobile make online panels obsolete? Is gamification of surveys good or bad for the industry?
Step away from these high-level debates for a moment and try being a participant for a few projects. You’ll quickly forget the big picture and begin to wonder whether anyone can design something as basic as a moderately decent questionnaire anymore. The big picture starts to become moot when you realize how poor the little picture is in many instances.
Follow along with me as I attempt to respond to a few surveys. First of all, once again my inbox was filled with requests for survey participation – five, six, seven a day from some panel companies. Not a great start (and possibly not great panel companies to rely on for sample – but that’s another topic for another day’s rant).
Then, of course, I hit the survey invitation sent just a few hours ago, where I give them my gender and age (information the panel company already has), and promptly get told that the survey is now closed. After four hours? Really?
Let’s not forget the variety of studies for which I don’t qualify: my time as a respondent is wasted, with no compensation for my effort, as I answer the same five or six demographic questions over and over. The panel company keeps redirecting me to another survey, and I have to answer those questions yet again. Just how many times do I need to give my age and race? (And yes, as a researcher I understand why this happens – but respondents won’t.)
So I finally qualify for a study. But before I do, the panel company sends me through its portal, where it asks me pre-qualifying questions for a bunch of different projects. One of those questions is how often I drink vodka, gin, whiskey, and rum.
Now I’m answering a full questionnaire, and I find it’s about alcohol. One of the first questions is what types of alcohol I can think of. Hmmm…I read a specific question about vodka, gin, whiskey, and rum in that portal just two minutes ago. What are the chances I can immediately think of vodka, gin, whiskey, and rum? Amazing!
Half of this survey’s response options are in all capitals, and the other half are in upper and lower case. Oh, and the questionnaire was obviously written for the phone and simply programmed into an online version. How do I know? The box for “don’t know” actually says “Doesn’t know/Is not aware of any,” and one of the questions starts with, “Now I’m going to read you a few things other people have said after seeing this advertisement.” I was expecting some type of voice reader to read me the options, until I clicked “next” and realized that someone had been too lazy or incompetent to note that phone surveys and online surveys need different wording.
I finish that survey, and click on an invitation from a different panel. After a few attempts, I again qualify for a study, but I’m quickly shaking my head when I’m faced with response categories for a simple demographic question that are neither mutually exclusive nor exhaustive. I can say that I have no children of any age, that I have children under 18 in the household, or that I have children 18 or older no longer in the household. It’s not multiple response. What do I answer if I have adult children living in my household (as so many people do today)? Or if I have a teenager at home, but also a 20-year-old away at college? Fortunately, I only have a ten-year-old, so I, at least, can answer accurately and move on.
It’s a brief questionnaire about flossing my teeth. First, I’m asked how often I floss (you’ll be so pleased to know it’s daily). Then, I’m given a number of questions that ignore that answer. I’m asked where I floss, and one of the options is that the question is irrelevant to me because I don’t floss at all. I’m also asked why I don’t floss, and one of the options is that the question is irrelevant to me because I actually do. At this point, I’m wondering whether I’ll get a question about whether I flossed when I was pregnant.
I would love to sit down with the survey designer and introduce him or her to this fabulous new development called “skip patterns” – they mean that respondents don’t have to see questions or options that don’t apply to them! (What wonders we now have available to us in research!)
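For anyone who actually programs surveys, a skip pattern is nothing more exotic than branching on an earlier answer. Here is a minimal sketch in Python (the question names and answer values are made up for illustration, not taken from any real survey platform) of how the flossing questionnaire above could have routed respondents:

```python
# Hypothetical sketch of skip-pattern logic; question names are invented.
def next_questions(answers):
    """Return only the follow-up questions this respondent should see."""
    if answers.get("floss_frequency") == "never":
        # Non-flossers see "why don't you floss?" and skip the usage questions,
        # so no question ever needs an "I actually do floss" escape option.
        return ["why_dont_you_floss"]
    # Flossers see the usage questions and never see "why don't you floss?",
    # so those questions never need an "I don't floss at all" option.
    return ["where_do_you_floss", "which_brand_do_you_use"]

print(next_questions({"floss_frequency": "daily"}))   # usage questions only
print(next_questions({"floss_frequency": "never"}))   # the "why not" question only
```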
Oh, I almost forgot – on one questionnaire, I was asked to report what state I live in. I had a lot of trouble with this one, because it was very difficult to find “Arizona.” You see, someone had apparently read a research primer and learned that randomization of responses can be a good thing, so they actually randomized the order of the states. I finally found Arizona about three-quarters of the way down the list, right between Oregon and North Dakota.
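To be clear, randomizing response order is a perfectly good tool; the failure is applying it blindly. A quick illustrative sketch in Python (the question IDs and option lists are hypothetical, not any real platform’s syntax): rotate lists where order bias matters, and leave alphabetical or otherwise naturally ordered lists alone.

```python
import random

BRANDS = ["Brand A", "Brand B", "Brand C", "Brand D"]   # made-up brand list
STATES = ["Alabama", "Alaska", "Arizona", "Arkansas"]   # alphabetical, abridged

def display_order(question_id, options):
    # Questions flagged for rotation get a fresh shuffle for each respondent.
    if question_id in {"brand_awareness", "ad_recall"}:  # hypothetical IDs
        return random.sample(options, k=len(options))
    # Everything else (states, age ranges, frequency scales) keeps its natural order.
    return list(options)

print(display_order("brand_awareness", BRANDS))     # shuffled each time
print(display_order("state_of_residence", STATES))  # always alphabetical
```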
At this point, I should probably apologize for my sarcasm and snarkiness. It’s just that I’ve spent enough years in the consumer insights world that I actually care about the industry, and it genuinely hurts me to see such a lack of competence on the very basics of survey design. My hour spent trying to respond to surveys in a thoughtful, accurate way felt like a massive waste of time. These aren’t fine points of whether a questionnaire should use a five-point or a seven-point scale, or whether the methodology should incorporate discrete choice – these are mistakes that shouldn’t be made by anyone who has taken a single market research class in college, let alone by a professional researcher…yet they are what I see all too frequently in research.
I might even be willing to chalk this up to inexperienced people trying their hands at DIY research, except that I have personally experienced and reviewed too many questionnaires and projects that I know were conducted by “professional” researchers.
I often recall the long-time corporate research manager who thought focus group respondents were all employees of the focus group facility, simply assigned to each night’s groups, or the other corporate research manager who, after 18 months of handling primary research for his company, asked a colleague what a focus group was.
I think of the car rental companies and hotels that give me satisfaction surveys and tell me I’m supposed to give them the highest marks possible.
I think of the major research report released by a consulting company claiming that young adults are far more educated than previously thought, only to learn later that the sample frame was alumni association lists from eight specific Midwestern colleges. I think of another consulting company releasing a big report about how senior adults give to charities online much more than previously thought, only to learn later that the entire study was done online.
I think of being asked to tabulate the screeners from a set of focus groups. I think of the company for which we did a major study (to the tune of about $750,000) that couldn’t believe we had forgotten to put percent signs on all the numbers in the presentation, so they added them and presented the results to their customers (even though we clearly explained in the report that those “percentages” were actually R-squared figures). These are not just mistakes (which everyone makes), but serious competence issues.
In all honesty, I have trouble feeling excited about the “new frontier” of research when so much is being done so poorly with the methods that have been around for decades. The fundamentals of good research still apply to the new methods just as they did to the old: knowing how to choose a methodology, how to ask a question, what people can and cannot reasonably answer, which statistical methods to use on a data set, how the data was gathered and how valid it is, that qualitative research is not statistically projectable, and what a good sample looks like.
Maybe it’s time to take a step back for a moment in the debate about the future of the industry. Maybe we need to discuss issues of basic quality and competence a little bit more, with less of a focus on whether something is new and exciting. Because if a researcher doesn’t grasp the difference between writing questions for an online survey and writing them for a phone survey, what are the chances that researcher will handle facial analysis or galvanic skin response competently?