We recently performed some Net Promoter Score comment coding work for nib health insurance, a successful Australian health fund. Nib de-mutualised not long ago and, as a result, has been very focused on providing its customers with the best possible levels of service.
Towards the end of 2009, nib investigated the use of NPS®, and in January 2010 they started to use transactional Net Promoter Score, i.e. they began collecting NPS scores and comments at a variety of customer touch points.
Since then they have collected a large volume of free-format comment data. Our task was to help them identify a set of comment themes for their data and then code the comments against those themes. It was a substantial task.
In the process, though, we uncovered a very interesting insight that they have been kind enough to allow us to share.
The nib survey process uses best-practice transactional Net Promoter Score approaches: they contact the customer shortly after each touch-point interaction and perform a short email survey. What is unique about the survey nib performs is that they ask for a “customer satisfaction” score and comments alongside the “would you recommend” score and comments.
This creates a rather good opportunity to compare what customers take into account when they score customer satisfaction versus Net Promoter Score.
We used the same set of themes to code the data for both the Customer Satisfaction and Would Recommend questions. The chart below looks at how often each theme was coded for each question.
In the coding process we only identified the theme of each comment, not whether it was positive or negative. So the volumes that you see in the chart represent customers’ perceptions of the relevance of that theme to either the customer satisfaction (CSat) or “would recommend” (NPS) question. Each comment could be coded with more than one theme.
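As a minimal sketch of how such coding rates can be tallied, assuming each response carries a set of identified themes (the theme names and data below are hypothetical illustrations, not nib’s actual coding scheme):

```python
from collections import Counter

# Hypothetical coded comments: each response carries the set of themes
# identified in its free-format comment (a comment may have several themes).
csat_comments = [
    {"Speed of Service", "Staff Attitude"},
    {"Staff Attitude"},
    {"Speed of Service"},
    {"Claims Process"},
]
nps_comments = [
    {"Pricing", "Product Range"},
    {"Speed of Service", "Pricing"},
    {"Claims Process"},
    {"Staff Attitude"},
]

def coding_rates(coded_comments):
    """Share of comments in which each theme appears (a comment counts
    towards every theme it was coded with)."""
    counts = Counter(theme for themes in coded_comments for theme in themes)
    n = len(coded_comments)
    return {theme: count / n for theme, count in counts.items()}

csat_rates = coding_rates(csat_comments)
nps_rates = coding_rates(nps_comments)
```

Because a comment can carry several themes, the rates for one question can sum to more than 100%; comparing the two dictionaries theme by theme is what the chart visualises.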
The first area to note is the substantially different theme coding rates for “would recommend” and CSat. Very few themes are coded consistently between the two questions. You will also notice that the NPS themes are coded more evenly. The maximum coding rate for NPS is about 16% but for CSat it peaks at 31%.
Also, almost all of the themes are coded above the 1% rate for NPS, while only 60% of the CSat themes are coded at the same rate. This implies that customers consider a wider array of elements of the overall offering when responding to the “would recommend” question than when they respond to the CSat question.
From this we can say that the NPS question appears to be a more rounded review of the business.
If we dive down into a couple of specific themes, this distinction becomes even more evident. For instance, you can see that Speed of Service and Staff Attitude are top of mind for customers when considering CSat but are not considered as often when answering the NPS question.
On the other hand, pricing attributes are rarely considered when the customer provides feedback on CSat but for NPS they are coded relatively often.
As I say, this was an imperfect but relatively unique opportunity to compare the areas that customers consider when providing a “customer satisfaction” or “would recommend” score. However, because the survey was not designed from the outset to test the different themes customers considered when scoring the two questions, we must be careful not to overestimate the significance of these results.
For instance, the order of the questions was always the same, which may bias the responses. If you were designing this as an experiment from the ground up, you would ensure that each question was shown first 50% of the time.
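A balanced ordering like that could be sketched as follows (the question labels are hypothetical, and this is one simple randomisation approach rather than a description of nib’s survey tooling):

```python
import random

# The two survey questions, in hypothetical label form.
QUESTIONS = ["customer_satisfaction", "would_recommend"]

def assign_order(rng=random):
    """Return the two questions in a randomly shuffled order, so that
    across many respondents each question leads roughly 50% of the time."""
    order = QUESTIONS[:]   # copy so the module-level list is untouched
    rng.shuffle(order)
    return order
```

With enough respondents, randomisation removes any systematic order effect; a deterministic alternation (odd/even respondent IDs) would achieve the same balance.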
However, even allowing for its imperfect experimental design, it is a very interesting result. The work undertaken by the originators of NPS (Fred Reichheld et al) included a comparison of how well several potential leading-indicator questions, including customer satisfaction and “would recommend”, correlated with revenue growth. The outcome of that research showed that CSat was a weaker predictor of revenue growth than NPS.
Perhaps in these results we can see part of the reason that CSat is a weaker indicator of future business growth. Our hypothesis is that because customers refer to fewer themes in their responses to the CSat question than to the NPS question, CSat focuses more narrowly on the immediate service experience.
Note, though, that while the immediate service experience may dominate the CSat score, i.e. the score may be skewed towards areas that are important for that experience, it may not cover all of the elements required to drive higher customer loyalty.
Potentially, customers consider a much wider range of areas in the total offering when answering the NPS question. Thus, the NPS response is broader and more holistic, and better aligned with overall customer loyalty.
Once again we would like to acknowledge the support of nib in allowing us to publish these findings.