
Enhancing Response Rates and Minimising Nonresponse Bias

High response rates are usually considered an indicator of good survey quality.

The ESS Specification outlines a minimum target response rate of 70% in each country: “… the minimum target response rate […] should be 70%. All countries are expected to aim for the 70% response rate or – where this is considered highly unlikely – plan for a higher response rate than in the previous round.”

A series of measures can be taken to pursue high response rates, such as training interviewers in obtaining cooperation, remunerating interviewers sufficiently, sending advance letters and offering incentives to sampled persons, allowing time for multiple visits at different times of the day and on different days of the week, and monitoring fieldwork closely.

Enhancing response rates is useful because high response rates reduce the risk of nonresponse bias. However, nonresponse bias is not only a function of the response rate, but also of the differences between respondents and non-respondents. If some groups (e.g. people with a higher education) participate much more than other groups (e.g. people with a lower education), and education level is related to attitudes towards immigrants, there will be nonresponse bias. For that reason, it is recommended not only to strive for a high response rate, but also for a response rate that is similar across different subgroups. This is also called a balanced response rate.
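As a stylised illustration of this point, the bias of the respondent mean can be written as (1 − response rate) × (respondent mean − non-respondent mean). The short sketch below uses purely hypothetical numbers to show that the same response rate can produce very different amounts of bias, depending on how much respondents and non-respondents differ:

```python
def nonresponse_bias(response_rate, mean_respondents, mean_nonrespondents):
    """Bias of the respondent mean relative to the full-sample mean
    (deterministic view): (1 - RR) * (respondent mean - non-respondent mean)."""
    return (1.0 - response_rate) * (mean_respondents - mean_nonrespondents)

# Hypothetical attitude scores, same 60% response rate in both scenarios:
# respondents and non-respondents differ substantially ...
print(nonresponse_bias(0.60, mean_respondents=7.0, mean_nonrespondents=5.0))  # 0.8
# ... or hardly at all, so the same response rate yields much less bias.
print(nonresponse_bias(0.60, mean_respondents=7.0, mean_nonrespondents=6.8))  # ~0.08
```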

At the end of fieldwork, standard response rates are calculated based on information from the contact forms, on which the outcome of each interviewer visit has been recorded. In addition, nonresponse bias analyses are conducted. These can identify which groups are underrepresented and provide guidance for subsequent rounds.
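As a simplified sketch of how such a rate can be derived from recorded visit outcomes (the outcome codes below are illustrative, not the actual ESS contact-form coding), the calculation divides completed interviews by all eligible sample units:

```python
from collections import Counter

# Illustrative final outcome per sampled unit (not the actual ESS contact-form codes).
outcomes = ["interview", "interview", "refusal", "noncontact",
            "interview", "ineligible", "refusal", "other_nonresponse"]

counts = Counter(outcomes)
eligible = sum(n for code, n in counts.items() if code != "ineligible")

# Response rate: completed interviews divided by all eligible sample units.
response_rate = counts["interview"] / eligible
print(f"Response rate: {response_rate:.1%}")  # 3 of 7 eligible units -> 42.9%
```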

To adjust for unequal representation of subgroups, post-stratification weights are provided. These ensure that the survey data represent the national populations aged 15 and older with respect to age, gender, education and region.
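A minimal sketch of the underlying idea, using a single hypothetical education variable (the actual ESS weights combine age, gender, education and region), is to give each cell the ratio of its population share to its sample share:

```python
# Hypothetical population and (unweighted) sample shares for one variable.
population_share = {"lower_education": 0.55, "higher_education": 0.45}
sample_share     = {"lower_education": 0.40, "higher_education": 0.60}

# Post-stratification weight per cell: population share / sample share,
# so overrepresented groups are weighted down and underrepresented groups up.
weights = {cell: population_share[cell] / sample_share[cell]
           for cell in population_share}
print(weights)  # {'lower_education': 1.375, 'higher_education': 0.75}
```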

Research projects

Auxiliary Data Driven nonResponse Bias Analysis (ADDResponse)

One commonly suggested approach to learning more about the extent and sources of survey nonresponse bias is to make use of auxiliary data available for both respondents and non-respondents. The ADDResponse project explored the potential of external auxiliary data to help explain and correct nonresponse bias in the European Social Survey in the UK. Auxiliary data from three sources – small-area administrative data, commercial marketing data, and geocoded information on the physical location of sample units – were appended to the UK sample for ESS Round 6 (2012/13). The project provides a scoping study of the auxiliary data available, as well as a thorough analysis of whether and how these data may be useful in studying survey nonresponse.
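As a general illustration of this kind of analysis (not the project's actual models, variables or data), a first step is often to model response propensity on the linked auxiliary variables, for example with a logistic regression:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical linked file: one row per sampled unit, a 0/1 response indicator
# and two made-up auxiliary variables (not the ADDResponse data).
df = pd.DataFrame({
    "responded":        [1, 0, 1, 1, 0, 1, 0, 1, 0, 1],
    "area_deprivation": [0.2, 0.8, 0.3, 0.7, 0.3, 0.4, 0.9, 0.2, 0.5, 0.6],
    "urban":            [1, 1, 0, 0, 1, 0, 1, 0, 0, 1],
})

# Logistic regression of response propensity on the auxiliary variables;
# the coefficients indicate how each variable is associated with responding.
X = sm.add_constant(df[["area_deprivation", "urban"]])
model = sm.Logit(df["responded"], X).fit(disp=False)
print(model.params)
```

Auxiliary variables are only useful for bias adjustment if they are related both to this response propensity and to the substantive survey variables, which is the test discussed in the findings below.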

The main finding from the project – which provides one of the most thorough tests of the auxiliary data approach to date – is to urge caution over the commonly advocated approach of using external sources of auxiliary data to address the problem of survey nonresponse. Using the auxiliary data linked in this project, it proved difficult to identify variables that are significantly associated with both response propensity and substantive survey variables. This remains the case even after exploring auxiliary data from a range of sources, at different levels of aggregation, covering the major theories of survey response and using a variety of statistical techniques. For more information about the project and the data sources used, please see www.addresponse.org.

The project was funded by the UK Economic and Social Research Council (grant ES/L013118/1).

Related content

Data on this topic

For response rates, see the Survey Documentation Reports in the ESS Data Portal.

ESS documents on this topic

Publications on this topic

Reports

Koch, A., Halbherr, V., Stoop, I.A.L., & Kappelhof, J.W.S. (2014).
Assessing ESS sample quality by using external and internal criteria.

Articles

Billiet, J., Philippens, M., Fitzgerald, R., & Stoop, I. (2007).
Estimation of Response Bias in the European Social Survey: Using Information from Reluctant Respondents in Round One. Journal of Official Statistics, 23(2), 135-162.

Books

Stoop, I., Billiet, J., Koch, A., & Fitzgerald, R. (2010).
Improving Survey Response. Lessons Learned from the European Social Survey. Chichester: Wiley.