This post from the Harvard Business Review’s Daily Stat shows a surprising lack of insight into how consumers actually respond to research questionnaires. It reports a McKinsey survey where consumers were asked what they were more interested in: core benefits or bells and whistles.
If you believe the results, consumers are more interested in core benefits.
I don’t believe the results.
The issue here is the way the survey was designed. In this case, asking customers what they want is unlikely to give you an accurate answer. Not because they will lie to you, but because they are unlikely to act, in practice, the way they say they will act, in theory.
If you really want to know what consumers want then, in this case, it’s best to look at what they actually do. On this issue (core benefits or bells and whistles) it would be a relatively easy piece of analysis to do: compare sales of the upmarket “bells and whistles” models with the downmarket “core benefits” models. That way you can see what customers actually do rather than what they say they do.
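That comparison is straightforward to run once you have sales data by model. Here is a minimal sketch in Python; the model names and unit-sales figures are invented purely for illustration:

```python
# Revealed-preference check: compare actual unit sales of
# "bells and whistles" models against "core benefits" models.
# All model names and sales figures below are hypothetical.

sales = {
    "bells_and_whistles": {"Model X Deluxe": 1200, "Model Y Pro": 800},
    "core_benefits": {"Model X Basic": 4100, "Model Y Standard": 2900},
}

def segment_share(sales):
    """Return each segment's share of total unit sales."""
    totals = {seg: sum(models.values()) for seg, models in sales.items()}
    grand_total = sum(totals.values())
    return {seg: total / grand_total for seg, total in totals.items()}

shares = segment_share(sales)
for segment, share in sorted(shares.items()):
    print(f"{segment}: {share:.1%} of units sold")
```

With these made-up numbers the "core benefits" models dominate, but the point is the method: the shares come from what customers actually bought, not from what they said in a questionnaire.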
Poor research design like this is a common problem.
For instance, we’ve seen loyalty program research that, when asking customers if they want points or discounts, found they overwhelmingly want discounts. Then those same customers go on to collect points, respond to points promotions, etc.
And there was the supermarket study that asked customers whether they would buy “store brands”. When actual purchase data was compared with the survey responses, many customers with baskets full of store-brand products claimed they would never lower their standards that far.
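Checks like this boil down to joining survey answers to behavioural data and flagging the contradictions. A minimal sketch, with invented customer IDs, answers, and basket shares:

```python
# Hypothetical example: flag customers whose stated preference
# ("I would never buy store brands") contradicts their basket data.

survey = {  # customer_id -> stated willingness to buy store brands
    "C1": "never", "C2": "sometimes", "C3": "never", "C4": "often",
}
store_brand_share = {  # customer_id -> share of basket that is store brand
    "C1": 0.45, "C2": 0.30, "C3": 0.05, "C4": 0.60,
}

def contradictions(survey, share, threshold=0.25):
    """Customers who say 'never' but whose store-brand share exceeds the threshold."""
    return [c for c, answer in survey.items()
            if answer == "never" and share.get(c, 0.0) > threshold]

print(contradictions(survey, store_brand_share))  # → ['C1']
```

In a real project the basket shares would come from transaction data and the threshold would need justifying, but the structure of the comparison is the same.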
If you naively believed what customers told you in either of these circumstances you could easily build the completely wrong customer experience.
Consumer research is fraught with this kind of problem, and when designing your research approach you need to act defensively to guard against incorrect or inaccurate results. If you don’t, you will end up with a chart that looks pretty in the report but insights that are just plain wrong.
Another example of this problem, where you should not simply believe what customers say, is the “how important is feature x?” type of question. Understanding how important a service feature is to the customer is critical in designing good service processes, so questionnaires are often used to determine which features matter most.
The problem comes if the surveys are poorly implemented. You’ve probably seen the bad ones; the wording looks a little like this (paraphrasing):
Q1: How good is our price?
Q2: How important is price?
Q3: How good is our service responsiveness?
Q4: How important is responsiveness?
Q5: How good is our feature x?
Q6: How important is feature x?
Even before I see the results I know what the answers will be. Everything is important: 9s or 10s out of 10.
So what do you know now? Nothing more than you knew before, because everything has the same high importance. You’re back to square one.
Even if the results are not all 9s and 10s they will be skewed by what customers want you to think. For instance, no customer is going to tell you that price is unimportant, lest you decide to put up your price.
There are at least two better approaches than this:
- Some type of forced ranking: In this approach you force the customer to trade features off against each other, using a points allocation, a straight ranking, or a best-worst (MaxDiff) approach.
- Infer importance: If you design the survey in the right way you can infer what is important based on the answers you get from other questions or actual customer behaviour. This takes a bit of additional analysis but is well worth the extra effort.
Both of these alternatives will deliver a much more accurate outcome.
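To make the first alternative concrete, here is a sketch of simple count-based best-worst (MaxDiff) scoring: each respondent picks the most and least important feature from a set, and a feature’s score is its “best” picks minus its “worst” picks. The responses below are invented for illustration:

```python
# Simple count-based best-worst (MaxDiff) scoring.
# Each response records the feature a respondent picked as most
# important ("best") and least important ("worst"). Data is hypothetical.
from collections import Counter

responses = [
    {"best": "price", "worst": "packaging"},
    {"best": "responsiveness", "worst": "packaging"},
    {"best": "price", "worst": "responsiveness"},
]

def best_worst_scores(responses):
    """Score each feature as (# times picked best) - (# times picked worst)."""
    best = Counter(r["best"] for r in responses)
    worst = Counter(r["worst"] for r in responses)
    features = set(best) | set(worst)
    return {f: best[f] - worst[f] for f in features}

scores = best_worst_scores(responses)
for feature, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(feature, score)
```

Because respondents must give something up on every choice, the scores spread out instead of clustering at 9s and 10s. Real MaxDiff studies use rotating subsets of features and more sophisticated estimation, but simple counts already separate the features cleanly.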
So, when you next look at the survey questionnaire your agency has provided, act defensively, and think about whether the answers you get will be accurate.
Have you seen other poor research approaches? Leave a comment and let me know.