How do you determine what is important to a customer?

I realised recently that I have touched on the subject of determining what is important to customers a few times in recent blogs, but never given a full account of the different methods that you can use to do this.

This post completes the picture.

There are basically two approaches you can take:

1. Ask them (“Stated Importance”)

Under this heading there are several methods that you can use, but they all tend to suffer from the same drawbacks.

  • Customers don’t tell the truth (intentionally and unintentionally): in many situations customers either cannot or will not be honest with you.  A classic example of this problem is the question “How important is price?”  Very few customers will answer anything but “very important” for this aspect of your product or service, lest you raise the price.
  • Socially or ego-acceptable answers often arise: few people want to answer questions in an anti-social way, even in a confidential survey.
  • Customers don’t really know: often a customer simply doesn’t know how important an element of the product or service is to them.  They purchase based on an ill-defined group of attributes, and assigning a specific importance to any one attribute is very difficult.
  • “Industry requirements” can be misleading: basic requirements, industry expectations and hygiene factors all fall into a category of attributes that simply must be delivered for your product to be viable.  Think of bank statement accuracy: it’s not rated as important until the bank makes a mistake.

Approach 1: Ask for a rating

Simply ask the customer to rate how important a particular feature is to their purchase decision.  You’ve seen this sort of question before and it looks like this:

“From 10 to 1, how important is responsiveness to you?”

It almost always comes straight after a question asking how well the organisation performs the task.

Overall, this is the worst approach you can take.

It often leads to “ice skating” scores: 9.9, 9.9, 9.9, 9.9, 9.8, where everything is rated as equally important, so you learn nothing new.

While it is quick for customers to enter data, i.e. the survey doesn’t take long to do, there is also little to force respondents to take care in their evaluation.

Approach 2: Simple Ranking

You can also ask the customer to rank a list of attributes, forcing them to trade-off between each of the attributes.  For example:

Please rank the following in order of importance from highest to lowest

  • Delivering against your needs
  • Price
  • The accuracy and completeness of the documentation
  • Technical competence of operational staff
  • Responsiveness in returning your call/email
  • Responsiveness in resolving your problem
  • Responsiveness in closing the loop after problem resolution

This is a better approach than the first.  For one, respondents cannot rank all of the attributes equally so you start to get some real information on importance.

However, it does take longer for the respondent to complete because they must think more carefully about their response.  Mind you, that is generally a good thing, up to a point.

It also assumes that there is a difference in importance between all the different attributes and there may not be.

Lastly, this works well for short lists of maybe 4-6 items.  Beyond that it becomes difficult for the respondent to rank the items effectively.  In the worst cases I’ve seen lists of 20 and 30 attributes, which are clearly impossible to rank effectively in this format.

If you have more than 6 items and enough respondents, you can get around the problem by asking each respondent to rank a sub-set of the attributes.  Then use some fancy maths to combine all of the answers into one large ranking, as in the sketch below.
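For the curious, here is a minimal sketch of one way that combination can work.  It uses a simple Borda-style scoring of each sub-set ranking (one common option; the original doesn’t specify a method), and the attribute names and responses are made up for illustration.

```python
from collections import defaultdict

# Each response ranks a sub-set of attributes from most important
# (first) to least important (last). Hypothetical data for illustration.
responses = [
    ["price", "responsiveness", "documentation"],
    ["technical competence", "price", "closing the loop"],
    ["responsiveness", "documentation", "closing the loop"],
]

points = defaultdict(float)
appearances = defaultdict(int)

for ranking in responses:
    n = len(ranking)
    for position, attribute in enumerate(ranking):
        # Borda-style score, normalised so sub-set size doesn't matter:
        # top of a sub-set earns 1.0, bottom earns 0.0.
        points[attribute] += (n - 1 - position) / (n - 1)
        appearances[attribute] += 1

# Average score per appearance combines all sub-sets into one ranking.
combined = {a: points[a] / appearances[a] for a in points}
for attribute, score in sorted(combined.items(), key=lambda kv: -kv[1]):
    print(f"{score:.2f}  {attribute}")
```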

Approach 3: Best-Worst Ranking

This is a variation on the ranking idea above.  In this case you can have a large number of attributes (20 or more) and present them to respondents in groups of 5.  You then ask them to select the most important and least important attribute in each group.

This is a very powerful approach that can provide an accurately weighted and ranked list of key attributes.

It is relatively easy for the respondent to select just the most and least important item in each group of five.

There is one downside, however: customers must answer 20 or more very similar questions, each with a slightly different group of five items.  This can cause survey fatigue, and the dropout rate can be quite high, so you need a larger pool of potential respondents to reach the required sample size.
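This design is commonly known as MaxDiff or best-worst scaling, and a standard first pass at the data is a simple counting analysis: times chosen as most important, minus times chosen as least important, divided by times shown.  Here is a minimal sketch with made-up responses (the full analysis can use more sophisticated choice models, which this doesn’t attempt):

```python
from collections import Counter

# Hypothetical best-worst tasks: each shows five attributes and records
# the respondent's "most important" and "least important" picks.
attributes = ["price", "responsiveness", "documentation",
              "technical competence", "closing the loop"]
tasks = [
    {"shown": attributes, "best": "responsiveness", "worst": "documentation"},
    {"shown": attributes, "best": "price", "worst": "closing the loop"},
    {"shown": attributes, "best": "responsiveness", "worst": "price"},
]

best = Counter(t["best"] for t in tasks)
worst = Counter(t["worst"] for t in tasks)
shown = Counter(a for t in tasks for a in t["shown"])

# Count score ranges from -1 (always picked least important)
# to +1 (always picked most important).
for attribute in attributes:
    score = (best[attribute] - worst[attribute]) / shown[attribute]
    print(f"{score:+.2f}  {attribute}")
```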

Approach 4: Constant-Sum Allocation

This is also a better approach; it asks the respondent to allocate points across different attributes.  For instance:

“Please allocate 100 points between the following items, where the more important an item is to you, the more points it receives:

  • Delivering against your needs
  • Price
  • The accuracy and completeness of the documentation
  • Technical competence of operational staff
  • Responsiveness in returning your call/email
  • Responsiveness in resolving your problem
  • Responsiveness in closing the loop after problem resolution”

This ensures that respondents weigh each attribute against the overall set, and it allows them to give equal weighting to multiple attributes.  On the downside, this approach can often take the longest for respondents to complete.

You can also use this with quite large sets of attributes.  Respondents will tend to give the points to the really important items and ignore the other attributes in the list.
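The analysis of constant-sum data is refreshingly simple: each respondent’s allocation is already a weight out of 100, so you can just average the allocations across respondents.  A minimal sketch with hypothetical responses, including a basic quality check for allocations that don’t sum to 100:

```python
# Hypothetical constant-sum responses: each respondent spreads
# 100 points across the same set of attributes.
responses = [
    {"price": 40, "responsiveness": 30, "documentation": 10,
     "technical competence": 20, "closing the loop": 0},
    {"price": 25, "responsiveness": 25, "documentation": 20,
     "technical competence": 20, "closing the loop": 10},
]

# Discard any response that doesn't sum to 100 (a basic quality check).
valid = [r for r in responses if sum(r.values()) == 100]

# The mean allocation per attribute is a directly interpretable weight.
for attribute in valid[0]:
    mean = sum(r[attribute] for r in valid) / len(valid)
    print(f"{mean:5.1f}  {attribute}")
```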

2. Derive it (“Derived Importance”)

This is the second key type of approach.  Instead of asking the respondent, you use statistics to infer what is important to them.  This gets around a lot of the issues cited above that occur when you ask outright.

The approach requires a “key outcome” measure: a customer attribute, or attributes, that you want to influence.  You can use customer satisfaction, but we would suggest Net Promoter Score.  If you can tie responses to customer data (revenue, revenue growth, gross margin, gross margin growth, etc.) then that is even better.

In this approach you ask the respondent about the organisation’s performance on each attribute that you are investigating, e.g.:

“How responsive are Company X in returning your call/email?”

Then you use statistical analysis to calculate the relationship between the attribute and the outcome.
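As a minimal sketch of the idea, the snippet below fits an ordinary least squares regression of a hypothetical 0-10 likelihood-to-recommend score on made-up 1-10 attribute performance ratings; the fitted coefficients serve as derived importance weights.  (Real driver analyses need more care, e.g. attributes that are correlated with each other, but the principle is the same.)

```python
import numpy as np

# Hypothetical survey data: each row holds one respondent's 1-10
# performance ratings for three attributes; y is their 0-10
# likelihood-to-recommend (the "key outcome" measure).
X = np.array([  # responsiveness, price, documentation
    [9, 6, 7],
    [4, 8, 5],
    [8, 7, 8],
    [3, 5, 4],
    [7, 9, 6],
])
y = np.array([9, 5, 8, 3, 7])

# Ordinary least squares with an intercept column; the fitted
# coefficients act as derived importance weights.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

for name, weight in zip(["responsiveness", "price", "documentation"],
                        coef[1:]):
    print(f"{weight:+.2f}  {name}")
```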

Using this approach you can infer the underlying drivers of whatever outcome measure you are trying to achieve without asking directly.  This is powerful because it can get at the importance of a basket of key attributes at the unconscious level.

The downside is that it requires higher levels of statistical competence than just graphing the numbers.

Adding this level of rigour to the process is not necessarily a bad thing: I’ve reviewed many customer survey reports over the years that make statements the numbers really can’t support, all as a result of an incomplete understanding of the statistics being examined.

I've created a step-by-step guide to implementing an effective Customer Feedback process: Download it Here