Focusing on increasing your survey response rate is important, but I’m often asked: “what is a **minimum acceptable** survey response rate for a customer feedback survey: 1%, 15%, 50%?”

For example, this recent blog post comment:

…if I have the NPS of two different cinemas in January.

Cinema 1: Spectators: 18,509. Survey responses: 17. Response rate: 0.1%. NPS: 76.

Cinema 2: Spectators: 44,190. Survey responses: 200. Response rate: 0.5%. NPS: 43.

My first question is: can I do a valid comparison between the two cinemas?

And my second question is: [are the survey response rates statistically valid?]

It’s a good question but in general the focus on response rate is misplaced.

In this post we’ll discover that what matters is not so much the response rate but the total number of responses. Then we’ll use that information to calculate the acceptable response rate for your survey.

**Note**: The approach here works well for standard customer feedback questions but if you are using Net Promoter Score you need to look at this post: How to calculate Margin of Error and other stats for NPS®

## How to Calculate Survey Response Rate

Let’s start by confirming how to calculate the response rate for your survey. There are only two variables in the calculation:

### 1. Number of invites

This is the number of people you invite to take your survey.

It could be a “census” in which you ask everyone in your audience or it could be a “sample” where you only invite a proportion of people.

### 2. Number of responses

This is the number of completed responses you receive for your survey. Note that you might also count partially completed responses if they provide valuable information for your business.

### Response Rate

Then the calculation for response rate is simple:

Response Rate = (Number of responses / Number of invites) × 100%
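As a quick sketch, the response rate calculation looks like this in Python (the figures are illustrative only):

```python
def response_rate(responses: int, invites: int) -> float:
    """Return the survey response rate as a percentage."""
    return responses / invites * 100

# Example: 200 responses from 2,000 invites
print(response_rate(200, 2000))  # 10.0
```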

## Margin of Error and Confidence Intervals

Before we move on we need to be clear on a couple of important concepts.

If you’ve ever tried to understand the statistical terms around error and confidence intervals, it’s easy to become confused. So let me quickly summarise the meaning of each for clarity.

**Standard Error** – a generic term that applies to many types of sample statistics (mean, standard deviation, correlation, etc). When we talk about averages, the Standard Error is the standard deviation of the sample mean: the sample standard deviation divided by the square root of the number of responses.

**Margin of Error** – the maximum amount by which you expect the sample mean to differ from the population mean, at a given level of confidence (e.g. 90%, 95%)

**Confidence Interval** – the range within which you are confident (e.g. 90%, 95%) the population mean lies

Margin of Error = Standard Error × Z (a value set by your level of confidence, e.g. 1.645 for 90%, 1.96 for 95%)

Confidence Interval = Mean +/- Margin of Error
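To make those two formulas concrete, here is a small Python sketch using the normal approximation; the sample figures (mean 4.1, standard deviation 1.2, 150 responses) are made up for illustration:

```python
import math

def margin_of_error(sample_sd: float, n: int, z: float = 1.96) -> float:
    """Margin of error at the confidence level implied by z (1.96 ~ 95%)."""
    standard_error = sample_sd / math.sqrt(n)  # SE of the mean
    return z * standard_error

def confidence_interval(mean: float, sample_sd: float, n: int, z: float = 1.96):
    """Range within which we expect the population mean to lie."""
    moe = margin_of_error(sample_sd, n, z)
    return (mean - moe, mean + moe)

# Example: mean score 4.1, sample SD 1.2, 150 responses
low, high = confidence_interval(4.1, 1.2, 150)
print(f"95% CI: {low:.2f} to {high:.2f}")  # 95% CI: 3.91 to 4.29
```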

## What is “acceptable”?

Now that we have the calculation out of the way, let’s look at what makes a response rate “acceptable”.

In this context an acceptable survey response rate is one that allows you to use the information collected in the survey to make decisions in your business. That can vary dramatically.

While you could, for instance, make important business decisions with just 5 responses using the “rule of 5”, most of the time you will need more responses to answer important questions. You will want to, for instance:

- Decide if the average score has changed from one period to the next; or
- Decide if two attributes are significantly (in the statistical sense) different

In these cases the acceptable response rate is the statistically valid survey response rate. A few things affect that, and here is the calculation for the minimum number of responses required.

Don’t worry too much about the detailed maths. What is important is what drives the number of responses you need.

n = (Z × σ / E)²

where:

- n is the minimum sample size
- Z is related to how confident you want to be in the answer, e.g. 90% or 95%
- σ is related to how much inherent variation there is in the population: does almost everyone give you a 5 or 6, or do you get a wide range of scores all the way from 1s up to 7s.
- E is the Margin of Error you want to be able to detect
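Rearranged as code, the minimum sample size calculation might look like this sketch (rounding up, since you can’t collect a fraction of a response; the example figures are illustrative):

```python
import math

def min_sample_size(sigma: float, margin: float, z: float = 1.96) -> int:
    """Minimum responses needed: n = (Z * sigma / E)^2, rounded up."""
    return math.ceil((z * sigma / margin) ** 2)

# Example: score SD of 1.2, detect a change of +/- 0.3 at 95% confidence
print(min_sample_size(sigma=1.2, margin=0.3))  # 62
```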

So the acceptable survey response rate depends on those variables:

- If you want to be more confident you have detected a change, you will need a higher response rate
- To detect smaller changes, you will need a higher response rate
- If your population has more variation, you will need a higher response rate

As you can see, the number of responses is very important and that, along with the number of invitees, drives the minimum required response rate.

## Small Survey Response Rates are not Always Bad

If you have a very large population then you may have a very small response rate but a large number of actual responses.

Take for example political polling. These polls try to forecast the winners of elections by surveying a very small proportion of overall voters.

You can see in the image below that the response rates are very small, e.g. just 0.0011% (1,690/ 146,311,000) but the data is still useful.

Source: https://ig.ft.com/us-elections/polls

http://www.statisticbrain.com/voting-statistics/

Note: strictly speaking this example doesn’t show response rates as not all voters were invited to respond. In this case I am using opinion polls to make the point that it’s not just the percentage of responses that are important but also the total number of responses.

## Calculating an Acceptable Survey Response Rate

Let’s move on to a practical use of this information. In a recent post I talked about Standard Error. When determining an acceptable response rate this is a very useful idea.

What we need to do now is to find the minimum number of responses that will give us the maximum error that we are happy to accept. That will then give us our acceptable response rate.

Said another way: how many responses do we need so that we are confident the average is in the range x +/- y? We can answer this with the equations above, just re-arranged a bit.

Here is a practical example where we want to be 95% certain that the average is within a range of +/- 0.5 on a 1-5 response scale. Here the acceptable response rate is 9%.
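A rough sketch of that logic in Python. Note the standard deviation (1.18) and number of invitees (250) below are assumed values for illustration, not figures taken from the calculator:

```python
import math

def acceptable_response_rate(sigma: float, margin: float, invites: int,
                             z: float = 1.96) -> tuple[int, float]:
    """Minimum responses for the target margin, and the response rate they imply."""
    n = math.ceil((z * sigma / margin) ** 2)  # minimum responses needed
    return n, n / invites * 100               # rate as a percentage

n, rate = acceptable_response_rate(sigma=1.18, margin=0.5, invites=250)
print(n, round(rate, 1))  # 22 8.8  -- roughly the 9% in the example
```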

Note: This calculator ignores non-response bias, i.e. it assumes responses are evenly spread among the entire population. See below for more information.

## Other Considerations

Above and beyond the strictly mathematical there are other considerations when deciding if your response rate is high enough.

### Skewing of results and non-response bias

Sometimes you may be concerned about the skewing of results because some group of customers does not respond at the same rate as other groups. Typically, for customer surveys, the very happy and very unhappy respond more than the middle group who are neutral.

This can also be termed non-response bias:

Non-response bias occurs when there is a significant difference between those who responded to your survey and those who did not.

In practice addressing this can sometimes be important but doing so is problematic and there is no consistent and practical way to address the issue in this context.

## Typical Survey Response Rates

Now that we know how to determine the minimum response rate, let’s move on to what you can expect in practice. There are some published statistics on the response rates companies actually achieve, and these are probably a good guide for how well you are doing overall.

FluidSurveys published an average response rate for email surveys of 24.8%.

SurveyGizmo also published the following chart

CustomerGauge published the results of their extensive CustomerGauge NPS Benchmarks Survey Part One which showed the following ranges of response rates.

At Genroe our experience, for well-crafted customer feedback surveys, is a response rate of between 10% and 30%, depending on how engaged the audience is with the company.

## Conclusion

The acceptable survey response rate is not one number but a range of numbers – so you first have to decide what you want to know.

Tristan Chua says

Very thorough explanation in regards to survey response rate. Keep it up Adam!

Karen Nel says

Tx Adam – it was very helpful. Could you maybe indicate what the acceptable response rate be per research type. I am particularly interested in the acceptable response rate for e-mailed questionnaires to be completed by participants. Tx

Adam Ramshaw says

Karen,

I’m not sure what you mean “per research type”. The statistics are the same regardless of which approach you take and the calculations are the same for e-mailed questionnaires as for paper or for that matter telephone interviews.

Adam

floxy says

Great lesson Adam. I learnt a lot. Thanks

Erwin says

Very insightful and applicable – thanks, Adam.

Question: in your response rate calculator, the actual number of responses used to measure the STDEV (168) is different than the Number of Responses (20) used in your Response rate calculation. What’s the difference?

When I use your calculator I get a Min Acc Response rate of 0%. Are my responses insignificant? (95%conf, .5 error, stdev 1.18, min sample 21, responses 943, invitation 59460)

Adam Ramshaw says

Erwin,

Thanks for stopping by.

I can see what you mean and it is confusing. I didn’t use a calculated column for number of responses, just hard entered the number and it was not consistent with the other values. I’ve now updated the spreadsheet to automatically count the number of responses to eliminate the confusion.

Thanks.

Adam

Tom Ilvento says

I am a Professor and our Department teaches a survey course. I like what you put out and find it useful, but I have two issues.

The first is that you almost completely ignore nonresponse bias in your discussion. It can be an important factor when response rates are below 50%. It looks like your calculator treats the actual responses as random, such that nonresponse only results in a smaller sample size. That is rarely the case.

One other point. In looking at national opinion polls you indicated the response rate as a percent of the population, not as a percent of the invited audience. That is not the response rate. National opinion polls use substitution and other methods to make it difficult to determine response rates. However, the entire voting population is not the invited audience.

Adam Ramshaw says

Tom,

Thanks for stopping by and commenting.

Non-response bias: I agree this is not addressed and can be important but doing so is problematic. (yes, the calculator does assume the responses are random). I don’t know of any practical way to address this issue in this context so have so make an implicit assumption that the responses are random. I do allude to this issue in the “Skewing of results” section but I’ll add a note making this assumption explicit where it is used.

Invited audience: you are correct. In this case the example of opinion polls used to make the point that it’s not just the percentage of responses that are important but also the total number of responses. I’ll add a note to make that clear as well.

Adam