
The Perfect Net Promoter® Survey Design


“The one number you need to grow”: who’d have thought such a simple statement could cause such a fuss.

Net Promoter Score is, of course, famous for being that one number, and this fame has led some people to think that you only need one question in your Net Promoter Survey.

Unfortunately, that doesn't work, so let's review the anatomy of a perfect Net Promoter Survey.

Your Survey Design Goals

A lack of design goals is why so many surveys seem to have a split personality. They ask a few market research questions, then a few customer feedback questions, and even toss in a couple of marketing pushes.

So what should the design goals be for your Net Promoter Survey?

Short

You want the survey to be as short as possible to drive up the response rate and lower the impact on your customers. This also allows you to run it in transactional survey mode, which is very useful.

Consistent

The factors that can skew the response to a Net Promoter Score® survey are many and varied; see below for more details.

To reduce changes in these factors as much as possible, you want to ensure that the survey is consistent from one customer to the next and from one wave or transaction to the next.

Resist the urge to re-order questions, add new ones, change the wording of existing questions, etc.

Enable driver identification

Asking one question, as some NPS® surveys do, is pretty much a waste of time unless you have an extraordinarily deep and wide customer behaviour dataset and the tools to do some very sophisticated data analysis. For most organisations, you will need to add questions to understand what drives the score.

Understand how to improve

You know what drives the score, but you also need to know how to move it.

Collect NPS

You need, of course, to collect the NPS.

That’s it. No more, no less.

We don’t care if the customer might like to purchase that new service Marketing has been pushing or whether they follow us on Twitter.

Those questions are not relevant to the customer experience improvement process and have no place in our customer feedback survey.

Now that we have a good set of design goals the actual survey part should flow pretty easily.

1. The Net Promoter Question

Starting with the “Would Recommend” or “likelihood to recommend” question seems like a pretty good idea, and it is.

Putting this question at the start of the survey is generally considered best practice because it allows you to change later questions, if needed, without affecting the score from the first question.

The easiest approach is to just use the question as designed:

How likely is it that you would recommend [your company] to a friend or colleague? Where 0 is not likely at all and 10 is very likely.
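
As a quick refresher on how the score itself is calculated from that 0-10 scale: respondents scoring 9-10 are Promoters, 7-8 are Passives and 0-6 are Detractors, and NPS is the percentage of Promoters minus the percentage of Detractors. A minimal Python sketch of that arithmetic (the response data is made up for illustration):

def net_promoter_score(responses):
    # Promoters score 9-10, Passives 7-8, Detractors 0-6.
    # NPS = % Promoters - % Detractors, usually quoted as a whole number.
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100.0 * (promoters - detractors) / len(responses)

# Made-up responses: 4 Promoters, 3 Passives, 3 Detractors -> NPS = 10
scores = [10, 9, 9, 10, 8, 7, 7, 6, 3, 0]
print(round(net_promoter_score(scores)))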

Can We Change the Response Scale?

Yes, but why would you? Okay, maybe that's a bit harsh, but it's close to the truth.

Typically I hear this question presented like this:

Our Market Research/Customer Insights/Analytics Department has standardised on the 0-5/1-7/Alpha-Omega response scale and we need to conform to their requirements.

Sure, you can change the response scale, but all of the analysis and published literature is based on the 0-10 scale, so all of that benchmark data becomes useless for comparison if you change it. 0-10 works and has been proven to work.

If you can push back against the Market Research/Customer Insights/Analytics Department, then do so.

If you can't, it's not the end of the world, but it does make things a bit tricky.

Can We Change the Wording?

As long as you are consistent, it is generally agreed that you can change the wording in a minor way. This works quite well for transactional surveys.

For instance:

Based on your recent branch visit, how likely would you be to recommend Megabank to a friend or colleague?

2. The Qualitative Question(s) (Free text to you and me)

The qualitative or free text question is a staple of a Net Promoter Survey because it provides the "how to improve" to match the "what" we discover later.

This is an open response question that, if the survey is well designed, will receive a 40-60% completion rate.

Please tell us why you gave that score.

There are nuances you can introduce, such as varying the question depending on whether the score on the "would recommend" question was high or low, e.g.

Please tell us how we can improve.

Please tell us what you liked the most about us.

In general this is not necessary, but there is no harm in doing so.

Enabling Driver Identification

Now you know the score and what customers want improved; you also need to know how to drive change, so you need a way to identify what drives the NPS. This is the task of the driver identification part of the survey.

There are three main ways to run this section of the survey. Which you select depends on a variety of factors, but let's introduce each and then summarise the applications.

1. Driver Identification By Tagging The Free Text

The first method for driver analysis is to tag each of the free text comments with a theme or themes, then use these tags to analyse the data and identify the most important tags.

This requires you to tag the data, and there are several ways to achieve that task.

Manual Tagging

If you don’t have many responses, maybe just one or two hundred, you can use the manual tagging approach. This works well if the same person does all of the tagging and you limit the number of tags to just 10 or 20.

Of course, this is quite subjective, which is why you should have the same person do the tagging to keep it consistent. As you can imagine, though, this gets old pretty quickly.
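
Once the comments are tagged, the analysis is largely a matter of counting how often each theme shows up among Detractors versus Promoters. A small Python sketch of that counting step (the scores, tags and themes below are hypothetical):

from collections import Counter

# Hypothetical manually tagged responses: (recommend score, tags assigned by the reviewer)
tagged = [
    (2, ["wait time", "staff attitude"]),
    (9, ["staff attitude"]),
    (5, ["wait time"]),
    (10, ["branch location"]),
    (6, ["fees", "wait time"]),
]

detractor_tags = Counter(t for score, tags in tagged if score <= 6 for t in tags)
promoter_tags = Counter(t for score, tags in tagged if score >= 9 for t in tags)

print("Top Detractor themes:", detractor_tags.most_common(3))
print("Top Promoter themes:", promoter_tags.most_common(3))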

Automated Text Analytics (Passive Tagging)

Computers can do a lot of things, and analysing the text from your free text question is one of them. There are a variety of platforms that will take the text and parse it to provide context and tag the response.

However, text analytics tools are not as point-and-shoot as they seem, or as advertised. These systems require training, technical skills and ongoing management to match them to your business.

They can also be expensive, but they can be effective for large numbers of responses.
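
As a very rough illustration of the idea, and nothing more, here is a toy keyword-matching tagger in Python. Real text analytics platforms are far more sophisticated (they parse context and need training); the themes and keywords below are hypothetical:

# Toy illustration only: real text analytics engines parse context and
# require training; this simply matches hypothetical keywords to themes.
THEME_KEYWORDS = {
    "wait time": ["queue", "wait", "slow", "hold"],
    "staff attitude": ["rude", "friendly", "helpful", "staff"],
    "fees": ["fee", "charge", "expensive"],
}

def tag_comment(comment):
    text = comment.lower()
    themes = [theme for theme, words in THEME_KEYWORDS.items()
              if any(word in text for word in words)]
    return themes or ["untagged"]

print(tag_comment("The staff were friendly but the queue was very slow"))
# -> ['wait time', 'staff attitude']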

2. Customer Tagging (Active Tagging)

The last way to tag data is to have the customer complete the task. Of course you don’t call it “tagging” in the survey but that’s what they are doing.

To achieve this you add one more question to the survey, with a series of check-boxes or radio buttons that let the customer tell you what their biggest issue was or what they liked the most.

This works very well and has the added advantage of not needing to manage text analytics tools and never being wrong. After all, by definition, a customer can't tag their data incorrectly.
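
Analysing those selections is then trivial. One useful cut, sketched below in Python with made-up option names and scores, is to compute the NPS within each self-tagged group to see which issues drag the score down the most:

from collections import defaultdict

# Hypothetical (recommend score, selected option) pairs from the extra check-box question
responses = [
    (3, "Wait time"), (6, "Wait time"), (9, "Wait time"),
    (10, "Helpful staff"), (9, "Helpful staff"),
    (5, "Fees and charges"), (2, "Fees and charges"),
]

groups = defaultdict(list)
for score, option in responses:
    groups[option].append(score)

# NPS within each self-tagged group shows which issues pull the score down
for option, scores in groups.items():
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    nps = 100 * (promoters - detractors) / len(scores)
    print(f"{option}: NPS {nps:+.0f} ({len(scores)} responses)")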

3. Attribute Questions

The alternative to tagging the responses is to use a series of attribute questions to score each of the attributes of your product or service. For example:

How responsive were we when you called, from 1 to 7, where 1 is very unresponsive and 7 is very responsive?

Using these scores you can determine which attributes are important to the customer and how well you are delivering them. This helps you to understand where you need to focus your improvement efforts.

This approach has the advantage of being quite clear-cut to implement, but the attribute list needs to be comprehensive.

The other issue with this approach is that it increases the length of the survey (remember our goals). However, done carefully, this works well.

Take care that you only include the attributes that are really essential, otherwise you will grow a Christmas Tree survey: one with all sorts of questions hanging off it.

Attribute questions can make the task of identifying the driver attributes a little easier because the statistical analysis is relatively simple, predominantly linear regression models.
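
For example, an ordinary least squares fit of the recommend score against the attribute scores gives a first-cut view of which attributes have the most leverage. A minimal sketch using numpy; the attribute names and data are invented, and in practice you would want many more responses and proper model checks:

import numpy as np

# Hypothetical survey data: one row per response
# columns: responsiveness, friendliness, value for money (1-7), recommend (0-10)
data = np.array([
    [7, 6, 5, 9],
    [3, 4, 2, 4],
    [6, 7, 6, 10],
    [2, 3, 3, 3],
    [5, 5, 4, 7],
    [4, 6, 3, 6],
])
X = np.column_stack([np.ones(len(data)), data[:, :3]])  # add an intercept column
y = data[:, 3]

# Ordinary least squares: larger positive coefficients suggest attributes
# with more influence on the recommend score.
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, beta in zip(["intercept", "responsiveness", "friendliness", "value"], coeffs):
    print(f"{name}: {beta:+.2f}")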

Unintended Skewing of Results

Care needs to be taken to create a survey instrument and methodology that will reduce the skewing of customer responses as much as possible.

Here is a list of things to consider:

1. Consistent Data Collection Methodology

It has been found that different forms of data collection (face to face, telephone, internet, paper, etc.) result in slightly different Net Promoter Scores for the same survey and customer base. The difference can be up to 40 points of NPS.

So, if possible, it’s best to use a consistent data collection methodology.

2. Consistent Invitation Timing

It has also been found that the timing of the survey invitation can impact the Net Promoter Score.

Generally, the shorter the time between the transaction and the survey, the higher the score. So surveys taken the next day result in higher scores than those taken a week later.

3. Position of “Likelihood to Recommend” Question in Survey

Research shows a significant position effect for the "Recommend" question in surveys. In short, you get higher scores if you ask the question first and lower if you ask it last in the survey.

4. Eliminate Leading Questions

Of course, it is possible to change the response to a survey question based on the questions that lead up to it. 

This classic Yes Minister segment is a great example of this approach. 

However, as long as you don't deliberately lead respondents like this, you should be fine.

The Pick List of Perfect Surveys

Each of the perfect Net Promoter Surveys below meets our original set of goals:

  1. Short
  2. Consistent
  3. Enable driver identification
  4. Understand how to improve
  5. Collect NPS

So let’s review the various versions and highlight when they should be used.

1. NPS Question + Free Text Response + Manual Tagging

Companies with small numbers of customers can benefit from this configuration.

2. NPS Question + Free Text Response + Automated Text Analytics

Suitable for larger B2C and B2B organisations with high volumes of responses and the technical resources to support and manage the text analytics engine.

3. NPS Question + Free Text Response + Customer Tagging

Suitable for all organisations with more than a couple of hundred responses. The customer tagging aspect is easy to implement and manage. It requires no specific technical skills.

4. NPS Question + Free Text Response + Attribute Questions

Again suitable for all organisations. The statistical analysis is relatively easy, which makes this approach widely accessible.

I've created a Business Leaders' Practical Guide to Implementing Net Promoter. Download it here.
