With all of the attention being paid to emerging research methods, a point that is too often missed is that these new approaches still require the same basic research skills as the old techniques.
For instance, bulletin board qualitative, mobile MR, Google Surveys, and other less traditional approaches still mean we’re asking questions of people, even if we’re now doing it in different ways. So it’s important that we continue to ask relevant questions that people can actually answer.
Unfortunately, no matter what technique is involved in asking questions, there are still a lot of bad questions being asked.
Example: I just completed a questionnaire (as a respondent) in which I was asked to name my “primary financial institution” (with no further definition of what that means). Trouble was, the questionnaire had already asked about a variety of financial services: loans, credit cards, investments, checking and savings, etc. I use a major national bank for day-to-day transactional needs, but most of my investments (and therefore most of my money) are with a different set of financial services companies, and my personal and business credit cards (which get a heavy workout as daily transactional tools) are with two entirely different firms.
Which one is my “primary” financial institution? Is it the one where I have most of my money, the one where I have my basic checking account, or the ones I use every day for transactions? In answering the survey question, I might define it one way, while another respondent uses a very different definition. The result is inconsistent data, because respondents are effectively answering different questions.
Similarly, the questionnaire asked me how likely I would be to consider a number of financial institutions if I wanted to “open a new account or take out a new loan.” Again, for me there’s a big problem trying to answer this. If I were to refinance my mortgage, I would shop for the best rate and not particularly care which financial institution provided it (since I figure it’ll just get sold anyway). If I were to open a new checking account, I would only consider a major national bank that has ATMs all over, because of how much I travel. If I were to open a new investment account, I would not consider a bank at all (not being a novice investor).
My answer to that question doesn’t fit into a nice, convenient box like the researcher wanted. I simply cannot give a blanket answer to this question, because my answer would be very different for different types of financial service products that the questionnaire has lumped together.
In another questionnaire, I was asked whether I consider clothing made out of cotton to be better quality than clothing made out of other materials. Well, it depends – I don’t wear a lot of cotton suits or cotton ties, but I certainly want cotton socks and cotton jeans. And what is meant by “better quality”? Does that mean durability, how it feels against my skin, how others perceive my wardrobe, or something else? Further, what happens if I consider cotton to be better quality than rayon and polyester, but lower quality than silk and wool?
What the researcher obviously wanted was the ability to have one nice, neat number that shows how many people think cotton is superior (or inferior) to other materials. But sometimes you can’t just ask one question and learn everything you need to learn. People don’t work that way. And if people don’t work that way, neither should research.
I see this type of question all the time, and quite frankly, I’ve probably written a few of them in my career. It’s easy to do. But it’s also important to understand that how respondents think, and the lives they live, won’t always conform to the neatly wrapped parameters we desire in order to simplify research. And that fact won’t change whether the respondent is participating in a telephone survey or a Google Survey on a tablet.
This becomes particularly important in our current industry situation. It’s easy to become enamored with a new approach and forget that many of the same rules and standards still need to apply. Good probing is good probing, whether the respondents are gathered around a conference table, doing laundry as you watch, or staring at you through their webcams. A survey conducted by tablets and smart phones still loses value and relevance if respondents are not quite sure what you’re asking, just as it did when interviewers were marching door to door.
And this all becomes even more important when you consider that some of the people now designing the questions have expertise in data mining or technology rather than in traditional research techniques.
In this respect, research is a bit like medicine. When doctors were making house calls in their Model A’s, they didn’t have CAT scans, MRIs, genetic testing, antibiotics, or many of the wonderful tools available to today’s practitioners. But today’s doctors still have to know basic skills they’ve used for decades: things like diagnosing a condition, setting a broken bone, stitching up a wound, and dealing with a scared eight-year-old (or eighty-year-old). The tools are different, but many of the basic skills are still the same.
No matter the research method chosen, a biased sample is still a biased sample. A meaningless but statistically significant correlation is still meaningless. A bad question is still a bad question. A bored, disengaged respondent is still failing to give you useful insights. And using bright, shiny, cool new research tools doesn’t change any of these facts.