Focusing on increasing your survey response rate is important, but I’m often asked: “what is a **minimum acceptable** survey response rate for a customer feedback survey: 1%, 15%, 50%?”. The question also comes in the form: what is a statistically valid response rate?

For example this recent blog post comment:

…if I have two NPS of two different cinemas in January.

Cinema 1: Spectators: 18509. Survey responses: 17. Responses percent: 0,1%. NPS: 76.

Cinema 2: Spectators: 44190. Survey responses: 200. Responses percent: 0,5%. NPS: 43.

My first question is: Can I do a valid comparison between the two cinemas?

And my second question is: [are the survey response rates statistically valid?]

These are good questions but, unfortunately, the answer is not a single response rate but a range that depends on what you are trying to determine.

In this post we’ll discover that what is important is not so much the response rate but the total number of responses. Then we’ll use that information to calculate the acceptable response rate for your survey.

**Note**: The approach here works well for standard customer feedback surveys but if you are using Net Promoter Score you need to look at this post: How to calculate Margin of Error and other stats for NPS®

There is a lot of content in this post so here are some quick links to key parts:

- How do you Calculate Survey Response Rate
- What is a good Survey Response rate
- What is a good NPS response rate
- Statistically Valid Response Rates
- What is an Acceptable Response Rate
- Calculating an Acceptable Survey Response Rate
- Low Survey Response Rates are not Always Bad
- Other Considerations

## How do you Calculate Survey Response Rate

Let’s start by confirming how to calculate the response rate for your survey. There are only two variables in the calculation:

### 1. Number of invites

This is the number of people you invite to take your survey.

It could be a “census” in which you ask everyone in your audience or it could be a “sample” where you only invite a proportion of people.

### 2. Number of responses

This is the number of completed responses you receive for your survey. Note that you might also count partially completed responses if they provide valuable information for your business.

### Response Rate

The calculation for response rate is simple: it is the number of responses divided by the number of people you invited to respond. The number is generally reported as a percentage, e.g. a 25% response rate.

In an equation that formula looks like this:

Response Rate (%) = (Number of Responses ÷ Number of Invites) × 100
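As a quick sketch, here is that calculation in Python (the function name is my own illustration, not from the post):

```python
def response_rate(responses: int, invites: int) -> float:
    """Survey response rate: completed responses as a percentage of invitations."""
    return responses / invites * 100

print(response_rate(50, 200))  # → 25.0, i.e. a 25% response rate
```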

## What is a good Survey Response rate

Technically speaking, a good response rate is one that is higher than the minimum acceptable response rate; see more below.

Practically speaking, a good survey response rate is one that is above average and, based on some industry data (see below), that would be anything above 25%, subject to having enough total responses. More on that later.

### Industry response rate data

FluidSurveys published their average response rate for Email Surveys = 24.8%

SurveyGizmo also published the following chart

At Genroe our experience, for well crafted customer feedback surveys, is a response rate of between 10% and 30% depending on how engaged the audience is with the company.

## What is a good NPS response rate

A good NPS response rate is one that is above around 20%, subject to having enough total responses.

See this post for more details on exactly how many responses you need: How to calculate Margin of Error and other stats for NPS®

You can see from this data published by CustomerGauge, using their extensive CustomerGauge NPS Benchmarks Survey, that most organisations (~51%) have response rates of 20% or higher. If your response rate is below that figure you might like to look at ways to improve it.

## Statistically Valid Response Rates

In order to determine statistically valid response rates we first need to have a short discussion of Margin of Error and Confidence Intervals.

If you’ve ever tried to understand the statistical terms around error and confidence intervals it’s easy to become confused, so I want to quickly summarise the meaning of each for clarity.

**Standard Error** – a generic term that applies to many types of sample statistics (mean, standard deviation, correlation, etc.). When we talk about averages, the Standard Error of the mean is the sample standard deviation divided by the square root of the number of responses.

**Margin of Error** – the amount within which you are confident (e.g. 90%, 95%) that the population mean lies above or below the sample mean.

**Confidence Interval** – the range within which you are confident (e.g. 90%, 95%) that the population mean lies.

Margin of Error = Standard Error × Z (a value related to the level of confidence: 90%, 95%, etc.)

Confidence Interval = Mean +/- Margin of Error
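These two equations can be sketched in Python (the function names and the 1.96 default Z value for 95% confidence are my own illustration, not from the post):

```python
import math

def margin_of_error(std_dev: float, n: int, z: float = 1.96) -> float:
    """Margin of Error = Z x Standard Error, where the standard error
    of the mean is the sample standard deviation divided by sqrt(n)."""
    return z * (std_dev / math.sqrt(n))

def confidence_interval(mean: float, std_dev: float, n: int, z: float = 1.96):
    """Confidence Interval = mean +/- Margin of Error."""
    moe = margin_of_error(std_dev, n, z)
    return (mean - moe, mean + moe)

# e.g. a sample mean of 4.0 with standard deviation 1.0 over 100 responses
print(confidence_interval(4.0, 1.0, 100))  # prints the 95% interval around the mean
```

Note how more responses (a larger n) shrink the margin of error, which is exactly why the total number of responses matters more than the response rate.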

## What is an Acceptable Response Rate

Now we have a handle on the basic statistical terms, let’s look at what makes a response rate “acceptable”.

In this context an acceptable survey response rate is one that allows you to use the information collected in the survey to make decisions in your business. That rate can vary dramatically.

While you could make important business decisions with just 5 responses (and a very low response rate) using the “rule of 5”, most of the time you will need more responses so you can answer important questions.

You will most often want to, for instance:

- Decide if the average score has changed from one period to the next; or
- Decide if two attributes are significantly (in the statistical sense) different

In these cases the acceptable response rate will be the **statistically valid survey response rate**. A few things affect that; here is the calculation for the minimum number of responses required:

n = (Z × σ / E)²

Don’t worry too much about the detailed maths as we have a downloadable tool you can use to do the calculation. What is important is understanding what drives the number of responses you need. In the formula:

- n is the minimum sample size
- Z is related to how confident you want to be in the answer, e.g. 90% or 95%
- σ is related to how much inherent variation there is in the population: does almost everyone give you a 5 or 6, or do you get a wide range of scores all the way from 1’s up to 7’s.
- E is the Margin of Error you want to be able to detect
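A sketch of that calculation in Python (the standard deviation used below is purely illustrative):

```python
import math

def minimum_sample_size(z: float, sigma: float, e: float) -> int:
    """n = (Z * sigma / E)^2, rounded up to the next whole response."""
    return math.ceil((z * sigma / e) ** 2)

# 95% confidence (Z = 1.96), an assumed population standard deviation of 1.0,
# and a margin of error of 0.5
print(minimum_sample_size(1.96, 1.0, 0.5))  # → 16 responses
```

Each input pushes n in the direction described above: a higher Z (more confidence), a larger σ (more variation) or a smaller E (a finer change to detect) all increase the minimum number of responses.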

So the acceptable survey response rate depends on those variables:

- If you want to be more confident you have detected a change, you will need a higher response rate
- To detect smaller changes, you will need a higher response rate
- If your population has more variation, you will need a higher response rate

As you can see, the number of responses is very important and that, along with the number of invitees, drives the minimum required response rate.

## Calculating an Acceptable Survey Response Rate

Let’s move on to a practical use of this information. Above I talked about Standard Error. When determining an acceptable response rate this is a very important idea.

What we need to do now is to find the minimum number of responses that will give us the maximum error that we are happy to accept. That will then give us our acceptable response rate.

Said another way:

How many responses do we need so that we are confident the average is in the range x ± y?

We can do this with the equations above, just re-arranged a bit.

Here is a practical example where we want to be 95% certain that the average is in a range of +/- 0.5 on a 1-5 response scale. Here the minimum number of responses we need to be sure we are within that error range is 9, so with 100 invitations the acceptable response rate is 9%.
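That worked example can be reproduced with a short Python sketch (the standard deviation of 0.765 and the 100 invitations are my assumptions to match the numbers above; the post’s downloadable tool is the authoritative version):

```python
import math

def acceptable_response_rate(z: float, sigma: float, e: float, invites: int) -> float:
    """Minimum responses n = (Z * sigma / E)^2 (rounded up),
    expressed as a percentage of the number of invitations."""
    n = math.ceil((z * sigma / e) ** 2)
    return n * 100 / invites

# 95% confidence (Z = 1.96), +/- 0.5 on a 1-5 scale,
# assumed standard deviation 0.765, 100 people invited
print(acceptable_response_rate(1.96, 0.765, 0.5, 100))  # → 9.0, i.e. 9%
```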

Note: This calculator ignores non-response bias, i.e. it assumes responses are evenly spread among the entire population. See below for more information.

## Low Survey Response Rates are not Always Bad

If you have a very large population then you may have a very small response rate but a large number of actual responses.

Take for example political polling. These polls try to forecast the winners of elections by surveying a very small proportion of voters.

You can see in the image below that the response rates are very small, e.g. just 0.0011% (1,690/ 146,311,000) but the data is still useful.

Source: https://ig.ft.com/us-elections/polls

http://www.statisticbrain.com/voting-statistics/

Note: strictly speaking this example doesn’t show response rates as not all voters were invited to respond. In this case I am using opinion polls to make the point that it’s not just the percentage of responses that are important but also the total number of responses.
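To see why such a tiny percentage can still be useful, consider the margin of error for a polled proportion, MoE = Z × √(p(1 − p) / n): it depends only on the number of responses, not on the size of the population (the Python below is my own illustration):

```python
import math

def poll_margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Margin of error for a polled proportion; p = 0.5 is the worst case."""
    return z * math.sqrt(p * (1 - p) / n)

# ~1,690 responses: a minuscule fraction of ~146 million voters, yet the
# margin of error is only about +/- 2.4 percentage points
print(round(poll_margin_of_error(1690) * 100, 1))  # → 2.4
```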

## Other Considerations

Above and beyond the strictly mathematical, there are other considerations when deciding if your response rate is high enough.

### Skewing of results and non-response bias

Sometimes you may be concerned about the skewing of results because one group of customers does not respond at the same rate as other groups. Typically, for customer surveys, the very happy and very unhappy respond more than the middle group who are neutral.

This is termed non-response bias:

Non-response bias occurs when there is a significant difference between those who responded to your survey and those who did not.

In practice, addressing this can sometimes be important, but doing so is problematic: there is no consistent, practical way to address the issue in this context.

## Conclusion

The acceptable survey response rate is not one number but a range of numbers – so you first have to decide what you want to know.

Adam Ramshaw has been helping companies to improve their Net Promoter® and Customer Feedback systems for more than 15 years. He is on a mission to stamp out ineffective processes and bad surveys.

Tristan Chua says

Very thorough explanation in regards to survey response rate. Keep it up Adam!

Karen Nel says

Tx Adam – it was very helpful. Could you maybe indicate what the acceptable response rate be per research type. I am particularly interested in the acceptable response rate for e-mailed questionnaires to be completed by participants. Tx

Adam Ramshaw says

Karen,

I’m not sure what you mean by “per research type”. The statistics are the same regardless of which approach you take and the calculations are the same for e-mailed questionnaires as for paper or, for that matter, telephone interviews.

Adam

floxy says

Great lesson Adam. I learnt a lot. Thanks

Erwin says

Very insightful and applicable – thanks, Adam.

Question: in your response rate calculator, the actual number of responses used to measure the STDEV (168) is different than the Number of Responses (20) used in your Response rate calculation. What’s the difference?

When I use your calculator I get a Min Acc Response rate of 0%. Are my responses insignificant? (95%conf, .5 error, stdev 1.18, min sample 21, responses 943, invitation 59460)

Adam Ramshaw says

Erwin,

Thanks for stopping by.

I can see what you mean and it is confusing. I didn’t use a calculated column for number of responses, just hard entered the number and it was not consistent with the other values. I’ve now updated the spreadsheet to automatically count the number of responses to eliminate the confusion.

Thanks.

Adam

Tom Ilvento says

I am a Professor and our Department teaches a survey course. I like what you put out and find it useful, but I have two issues.

The first is that you almost completely ignore nonresponse bias in your discussion. It can be an important factor when response rates are below 50%. It looks like your calculator treats the actual responses as random, such that nonresponse only results in a smaller sample size. That is rarely the case.

One other point. In looking at national opinion polls you indicated the response rate as a percent of the population, not as a percent of the invited audience. That is not the response rate. National opinion polls use substitution and other methods to make it difficult to determine response rates. However, the entire voting population is not the invited audience.

Adam Ramshaw says

Tom,

Thanks for stopping by and commenting.

Non-response bias: I agree this is not addressed and can be important but doing so is problematic. (Yes, the calculator does assume the responses are random.) I don’t know of any practical way to address this issue in this context, so have to make an implicit assumption that the responses are random. I do allude to this issue in the “Skewing of results” section but I’ll add a note making this assumption explicit where it is used.

Invited audience: you are correct. In this case the example of opinion polls used to make the point that it’s not just the percentage of responses that are important but also the total number of responses. I’ll add a note to make that clear as well.

Adam