This post from the Harvard Business Review’s Daily Stat shows a surprising lack of insight into how consumers actually respond to research questionnaires. It reports a McKinsey survey where consumers were asked what they were more interested in: core benefits or bells and whistles.
If you believe the results, consumers are more interested in core benefits.
I don’t believe the results.
The issue here is the way the survey was undertaken. In this case, asking customers what they want is unlikely to give you an accurate answer. Not because they won’t tell you the truth, but because the way they say they will act, in theory, is rarely the way they actually act in practice.
If you really want to know what consumers want then, in this case, it’s better to look at what they actually do. On this question (core benefits or bells and whistles) the analysis is relatively straightforward: compare sales of the upmarket “bells and whistles” models with the downmarket “core benefits” models. That way you see what customers actually do rather than what they say they do.
Poor research design like this is a common problem.
For instance, we’ve seen loyalty program research that asked customers whether they wanted points or discounts and found they overwhelmingly wanted discounts. Those same customers then went on to collect points, respond to points promotions, and so on.
And there was the supermarket research that asked customers whether they would buy “store brands”. When the responses were compared with actual purchase data, many customers with baskets full of store brand products said they would never lower their standards that far.
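If you have both the survey responses and the transaction data, the comparison itself is straightforward. Here is a minimal sketch in Python with pandas; the customers, the column names and the 20% basket-share cut-off are made-up assumptions for illustration, not figures from the research described above.

```python
import pandas as pd

# Hypothetical data only: each row is one customer, pairing their survey answer
# ("Would you ever buy store brands?") with the share of store brand items
# observed in their actual transaction history.
customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5, 6],
    "says_would_buy_store_brand": ["No", "No", "Yes", "No", "Yes", "Yes"],
    "store_brand_share_of_basket": [0.42, 0.05, 0.31, 0.55, 0.02, 0.60],
})

# Classify actual behaviour: treat anyone with more than 20% of their basket
# in store brands as a store brand buyer (the 20% cut-off is illustrative).
customers["actually_buys_store_brand"] = (
    customers["store_brand_share_of_basket"] > 0.20
).map({True: "Yes", False: "No"})

# Cross-tabulate stated intention against observed behaviour.
print(pd.crosstab(
    customers["says_would_buy_store_brand"],
    customers["actually_buys_store_brand"],
    rownames=["Says they would buy"],
    colnames=["Actually buys"],
))
```

The “says No but actually buys” cell of that table is exactly the mismatch described above: customers whose stated preferences and actual behaviour disagree.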
If you naively believed what customers told you in either of these circumstances you could easily build the completely wrong customer experience.
Consumer research is fraught with this kind of problem, and when designing your research approach you need to act defensively to guard against incorrect or inaccurate results. If you don’t, you will get a chart that looks pretty in the report but insights that are just plain wrong.
Another example of why you shouldn’t simply believe what customers say is the “how important is feature x” type of question. Understanding how important a service feature is to the customer is critical in designing good service processes. So questionnaires are often used to determine which features are most important.
The problem comes if the surveys are poorly implemented. You’ve probably seen the bad ones; the wording looks a little like this (paraphrasing):
Q1: How good is our price?
Q2: How important is price?
Q3: How good is our service responsiveness?
Q4: How important is responsiveness?
Q5: How good is our feature x?
Q6: How important is feature x?
Even before I see the results I know what the answers will be. Everything is important: 9s or 10s out of 10.
So what do you know now? Nothing more than you knew before, because everything has the same high importance. You’re back to square one.
Even if the results are not all 9s and 10s they will be skewed by what customers want you to think. For instance, no customer is going to tell you that price is unimportant, lest you decide to put up your price.
There are at least two better approaches than this:
- Some type of forced ranking: force the customer to rank the importance of each feature, for example by allocating a fixed pool of points, straight ranking, or a best-worst exercise.
- Infer importance: If you design the survey in the right way you can infer what is important based on the answers you get from other questions or actual customer behaviour. This takes a bit of additional analysis but is well worth the extra effort (a rough sketch of this approach follows below).
Both of these alternatives will deliver a much more accurate outcome.
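To make the second alternative a little more concrete: one common way to infer (or “derive”) importance is to ask only performance questions plus an overall satisfaction question, then see which feature ratings move most closely with overall satisfaction. The sketch below shows the idea in Python with pandas; the data, the column names and the use of a simple correlation are assumptions made for this illustration, not a prescription of any particular method.

```python
import pandas as pd

# Hypothetical responses only: performance ratings for each feature plus an
# overall satisfaction score, all on a 0-10 scale. Column names and values
# are invented for the sketch.
responses = pd.DataFrame({
    "price_rating":          [7, 5, 8, 6, 9, 4, 7, 8, 5, 6],
    "responsiveness_rating": [6, 4, 9, 5, 8, 3, 7, 9, 4, 5],
    "feature_x_rating":      [8, 7, 6, 9, 7, 8, 6, 7, 9, 8],
    "overall_satisfaction":  [7, 4, 9, 5, 8, 3, 7, 9, 4, 5],
})

feature_columns = ["price_rating", "responsiveness_rating", "feature_x_rating"]

# Derived importance: how closely does each feature's rating move with overall
# satisfaction? A simple correlation is used here; regression or relative
# weights analysis are common, more robust alternatives.
derived_importance = (
    responses[feature_columns]
    .corrwith(responses["overall_satisfaction"])
    .sort_values(ascending=False)
)

print(derived_importance)
```

Features whose ratings track overall satisfaction most strongly are treated as the more important ones, regardless of what respondents would say if asked directly.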
So, when you next look at the survey questionnaire your agency has provided, act defensively, and think about whether the answers you get will be accurate.
Have you seen other poor research approaches? Leave a comment and let me know.

Adam Ramshaw has been helping companies to improve their Net Promoter® and Customer Feedback systems for more than 15 years. He is on a mission to stamp out ineffective processes and bad surveys.
YOU CLEARLY DESCRIBED THE PROBLEM:
This very intelligent article led us in the right direction, and up to the punchline: approach 2. “Infer importance: If you design the survey in the right way you can infer what is important based on the answers you get from other questions or actual customer behaviour.”
COULD YOU OUTLINE THE SOLUTION:
Could you please provide a template and/or examples of how to design the survey in order to infer importance…
THANK YOU!
mk
Moshe,
You make a good point — give me a few days and I’ll post a second blog that answers your question about how to design the survey to infer importance.
Stay tuned.
Adam
Moshe,
As promised I have created a post that explains how to infer importance in a customer survey.
See this link:
https://www.genroe.com/blog/how-do-you-determine-what-is-important-to-a-customer
Regards,
Adam
This problem is not confined to consumer research. Many people assume that B2B purchasers will answer survey questions entirely rationally and will be able to use rating scales to articulate what is important to them. Not so. We advocate extensive qualitative interview programmes to gain an understanding of the business drivers for each sector and each type of decision maker prior to quantitative survey work. When we have sufficiently large samples we generate derived importance measures as well as stated importance because interesting inflection points emerge when you look at the differences between the two. Above all, we ask respondents to comment on the ratings they have given because analysis of the comments provides “meat on the bones” of the data and helps drive management action for improvement.
Francis,
I couldn’t agree more — with pretty much everything you have said. The last part is very important: qualitative information is critical for organisations in determining “what to do now”.