This post actually started as a reply to Scott Weinberg’s terrific GreenBook Blog post Is Online Sample Quality A Pure Oxymoron?  After doing a little writing in the reply box, I realized my comments were lengthy enough to warrant an actual blog post of their own rather than a reply.

Blogs, articles, and reader comments I’ve seen regarding research quality often take the perspective that quality in the consumer insights industry is worse today than at any other time in the sector’s history.  Whether writers blame DIY research, online panels, new methodologies, lack of training, or something else, the view that quality has declined is a fairly common one.

Having been in the industry for more years than I care to admit, I have a somewhat different view.  Yes, quality is often pretty bad today, and I shudder to read lengthy lists of transgressions that Scott and others have personally witnessed.  But I question whether things are worse today than they were in past years.

First, as human beings we have a tendency to focus on recent events and situations and forget what came before.  This was really brought home to me by reactions to the New England Patriots’ recent Super Bowl victory over the Seattle Seahawks.  For those who aren’t NFL fans, Seattle was three feet away from the winning touchdown with less than half a minute left.  Seattle has one of the most dominant running backs in the game in Marshawn Lynch and a terrific running quarterback in Russell Wilson, so everyone expected them to use one of those two to score the winning touchdown by running the ball in.

Instead, Seattle called a passing play, and the ball was intercepted at the goal line to preserve an unexpected win for the Patriots.  After the game, a lot of the talk by pundits and fans alike focused on two opinions:

  • That was the worst play call in the history of the Super Bowl (and even that it was the worst play call in the history of sports).
  • That was the best defensive play in the history of the Super Bowl.

Now, it was a pretty bad call and a pretty great defensive play, but was it really the worst/greatest across all 49 Super Bowls?  I won’t get into details, but without much effort I can think of two other plays that would give it a run for the “best defensive play ever” title.  But because it’s what we just witnessed a few days ago, and because many people haven’t seen a single play from the Super Bowls of the ’70s or ’80s, it’s considered the best/worst ever.

We see the same things when Americans are surveyed about who is the greatest president ever.  Modern names such as Ronald Reagan and Bill Clinton generally outpoll historical greats such as Thomas Jefferson, James K. Polk, or Theodore Roosevelt.  But most respondents experienced Clinton’s presidency, while for most people Polk is just another name they might have heard of briefly in high school history.

So as bad as things are in consumer insights, are they really worse than they were 10, 20, or 30 years ago?  There’s still the problem of unqualified people doing bad research; they’re just using different methodologies.  We still have decision makers cutting corners to get the lowest possible cost.  I’m guessing we’ll soon have some of the same issues with galvanic skin response, eye tracking, and other newer methodologies as they become more popular.

Back in the days when the phone survey was king, I worked for a boss who ordered 70% listed sample and 30% RDD sample for most studies because listed sample was much cheaper to work in the phone room.  His reasoning?  Only 30% of phone numbers (at that time) were unlisted, so he was using the RDD portion to represent the unlisted numbers.  What he failed to grasp was that if 70% of phone numbers were listed, then roughly 70% of the RDD numbers would also be listed, so in effect he was running with 9% unlisted and 91% listed sample.  Oh, and clients were never informed of his sample decisions, so they were unaware of the possible quality implications.
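To spell out that arithmetic (a quick back-of-the-envelope check, using his own assumption that 70% of phone numbers were listed at the time):

Listed sample = 70% purchased listed + (70% of the 30% RDD) = 70% + 21% = 91%
Unlisted sample = 30% of the 30% RDD = 9%

So instead of a blend that reflected the population’s roughly 70/30 listed/unlisted split, the combined sample worked out to about 91% listed and only 9% unlisted.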

I also remember fielding a tracking study by phone in about 1988.  It had ridiculous demographic quotas and could be over an hour long for some people.  In getting it programmed, I came upon a question that made absolutely no sense to me – I didn’t even understand what it was asking.  When I questioned the client, he also had no clue and said it was worthless.  When I asked if we could change or eliminate it, he was shocked – “Absolutely not – it’s a tracking study!”  So we continued to track meaningless data for them.

I remember being a respondent for an in-person interview.  The study was about oil company advertising, and I got to listen to a variety of radio commercials with the name of each company bleeped out to see if I could identify the sponsor.  The audio editing was terrible; the bleeping generally consisted of things such as “Texa-beep” to try to hide the Texaco brand.

At the end of the survey, the interviewer asked my occupation, and I told her I was a project director at a market research company.  She looked at me and said, “I can’t put that.”  I told her that the screener had not included a security question or asked my occupation, and she informed me that “They just know they’re supposed to ask that.”  I told her I also did some media work for the company, so she lied on the questionnaire and put me down as a media liaison so that she could get credit for the interview.

One client asked me to falsify data to make sure their intended advertising campaign would look good in the findings.  Another client told me to change a question so they could get the answers they wanted, adding that I needed to learn that “Sometimes you want real answers and sometimes you want to make sure you get the answers you want.”  Both of these happened back when fax machines were considered high tech.

When I took a corporate research job in 1993, the first thing I did was visit all of our vendors.  I monitored survey calls at one phone room and heard interviewers going completely off script and getting into conversations with respondents.  When I raised the point with the field supervisor, she was totally comfortable with what they were doing and saw no problems (needless to say, under my watch they were never used again).

We also subscribed to a number of syndicated reports, including a Hispanic tracker.  When I started digging into the data, I found that the research company was regularly reporting and graphing quantitative data from subsets of fewer than 20 people (without noting the sample sizes anywhere).  When I objected, they admitted they “probably shouldn’t do that,” but no changes were made in future waves (which is why we stopped subscribing to it).

Back on the vendor side, I took over a telephone brand tracker for a bank in 1998.  The previous vendor had first asked people an aided question about where they banked.  Only after naming about six different local banks did they ask the “unaided” brand awareness question.  No bias there, of course!

I could relate many other horror stories from 20 years ago as well as from last year, but you get the idea.  Are things really worse than they were in the past?  I have no quantitative way to prove or disprove that hypothesis, but I truly question whether consumer insights quality is worse today.  We had plenty of multi-paragraph concepts we had to read to people by phone, plenty of 30-minute phone questionnaires with lengthy and repetitive grids, plenty of incomprehensible questions, and plenty of shoddy sampling and fieldwork back then.  We have many of the same problems today, just with different methodologies and technologies.

Is this even a relevant issue?  I would contend that it is.  For one thing, it is easy to become discouraged when we believe things are headed downhill and figure there’s nothing we can do about the trend.  Can you change the whole industry?  Maybe not.  But you can darn well make certain that what you do in the industry is done properly, and you can work to point out quality problems to those who fail to understand their importance.  If you feel the battle is already lost, it becomes much easier to throw in the towel.

For another thing, it becomes easy to blame particular methodologies for the problem, rather than human greed, sloth, or incompetence.  We have tremendous government waste in our republic, but then again so do countries under monarchies, dictatorships, socialist governments, and communist governments.  Is government waste a function of our form of government or of government in general?  Only by answering that question can we attack the real problems rather than the symptoms.

Yes, online panel research is often atrocious, but so was a lot of the phone, intercept, and mail research that went on in the past, and so is much of the big data analysis and social media monitoring that goes on today.  We need to attack the root causes rather than the symptoms.

Finally, I want to be very clear that this post is not any sort of attack on what Scott or others have written on this topic.  Scott’s post is what got me thinking about today versus the past, and it was outstanding; I agree with the points he made.  I just wanted to bring a slightly different perspective to the discussion of research quality than we often hear, because I believe it is an important nuance that deserves consideration.  This is an ongoing battle, not a recent development.
