Reflections on QDET2


ESS Director, Dr. Rory Fitzgerald, attended QDET2 last month. Here, he reflects on some of the excellent papers presented at the conference and highlights some of the opportunities and challenges facing survey methodologists.

The 2nd International Conference on Questionnaire Design, Development, Evaluation, and Testing (QDET2) was held in Miami last month. The first QDET was held back in 2002, so QDET2 was rather timely. The conference was organised around four key themes.

I particularly enjoyed the two keynotes (by Gordon Willis and Mario Callegaro), which set and reflected the tone of the conference perfectly: highlighting the progress made in our field whilst outlining how far we still have to go.

Callegaro's talk underlined the opportunities provided by new technology, along with the challenges that come with them: for example, the need to engage with and utilise usability testing, and the difficulty of transporting grid questions to online administration modes.

There is, in my view, a slight danger that the opportunities new technology affords for data collection come to dominate our field and are prioritised over whether the words and other stimuli we use in our questions actually meet our measurement aims. It is critical that we do not forget validity.

Various presentations at the conference demonstrated the new set of burdens that self-administered online data collection can place on respondents. We therefore have a duty to do all we can to ensure respondents are not distracted from the questions we are actually asking (or driven to give up on the survey entirely)! This requires us to resolve as many of these challenges as we can and, where possible, to share resources for doing so.

The conference was held at a critical time for the wider field of survey research. Polling is under pressure after repeated failures to predict election and referendum outcomes in many countries. At the same time, much data collection is moving online or to mixed-mode designs in order to save costs, and there is increasing reliance on panels. In many cases these developments are lowering quality in key areas, such as poor coverage of certain population groups and the introduction of mode effects and panel conditioning into the data collected (which, while noted, are often ignored in analysis).

On a more positive note, a number of papers showed how the use of new modes is making us look again at our questionnaires and leading us to redesign them - at last taking account of what we have learnt rather than remaining wedded to our old instruments.

In some ways this forced move to new modes is leading to better measurement through the use of simpler, more direct questions, even if it comes at a cost in other areas. However, there is a real danger that time series might be lost or distortions ignored. Papers by Census Bureau colleagues and by those from other official statistics offices were particularly illuminating in that regard.

From a total survey error (TSE) perspective, instrument design is probably becoming more important in the overall scheme of survey error, and the resources needed to develop an instrument effectively are relatively modest compared with many other elements. In this context, principal investigators (PIs) of surveys should perhaps consider putting more resources into instrument design and pre-testing.

With more traditional interviewer-administered surveys, we have become increasingly aware of the negative impact that interviewer effects can have on our data - again, largely ignored in most analyses. The paper by Thomas and Schnell (City, University of London), using the European Social Survey, reminds us that we cannot ignore this error source in our instrument design - particularly for sensitive and attitudinal items.

As a cross-national researcher I had a particular interest in the cross-national and cross-cultural elements of the conference. There were some excellent contributions on these themes, though perhaps not as many as might have been expected. It was particularly interesting to see the paper by Dorothee Behr (GESIS, Mannheim) on the use of web probing for cross-national survey design and interpretation. This technique clearly has real potential to increase the volume of question testing that is possible and could facilitate testing in a much larger number of countries than have traditionally been included (for both cross-national and country-specific surveys). In a similar way, the opportunities for crowdsourcing pre-testing presented in various papers are very exciting.

In his keynote, Callegaro highlighted how, at the first QDET conference, Norman Bradburn had emphasised that the challenges of multicultural, multi-language research needed more attention, and had stressed the benefits of incorporating more sociolinguistics into questionnaire design. To some extent the CSDI Workshop and the 3MC Conference have picked up on these issues, yet it was somewhat surprising that so little on these themes was included at this conference - especially considering that migration makes them increasingly salient for single-country survey research in more and more countries.

Furthermore, it is notable that cross-national researchers are particularly sensitive to whether construct validity is achieved in the data collected and whether specification error is avoided. This is because, in the cross-national field, we evaluate questionnaire quality by running multitrait-multimethod (MTMM) experiments, by establishing data banks such as SQP (see the paper session by Diana Zavala-Rojas, UPF Barcelona), and by using equivalence testing to examine comparability both between countries and within the same country over time.
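For readers less familiar with MTMM, the true score model that underpins SQP-style quality estimates can be sketched as follows; this is stated from the general literature in the Saris tradition rather than from the conference session itself, so take it as an illustration of the approach rather than a summary of the paper:

    % True score MTMM model: observed response Y_{ij} to trait i with method j
    Y_{ij} = r_{ij}\, T_{ij} + e_{ij}, \qquad
    T_{ij} = v_{ij}\, F_i + m_{ij}\, M_j
    % r_{ij}: reliability coefficient; v_{ij}: validity coefficient;
    % m_{ij}: method effect; F_i: trait factor; M_j: method factor.
    % Measurement quality is then the product
    q_{ij}^2 = r_{ij}^2\, v_{ij}^2

Estimates of reliability and validity from many such experiments are what populate a data bank like SQP, allowing the quality of a new question to be predicted from its characteristics.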

Perhaps greater post-hoc testing of question quality should be built into survey reporting, alongside response rates, non-response bias, processing error and so forth. Furthermore, the need to translate a questionnaire in a cross-national project forces a closer look at the design and wording of the items in the source instrument, and that additional review probably has benefits for the design of all instruments.

On the sociolinguistic theme, there was a very interesting paper by Ana Slavec (University of Ljubljana) examining whether linguistic resources can be used to detect low-frequency wordings - a technique that might serve as a tool for keeping questionnaires simple and so lead to higher quality in future.
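To make the idea concrete, here is a minimal sketch of how such a check might look. It is not Slavec's actual method: it assumes the open-source wordfreq Python library and an illustrative rarity threshold of 3.0 on the Zipf scale.

    # Flag words in a survey item that are rare in general language use.
    # A minimal sketch, not the paper's method; assumes `pip install wordfreq`.
    import re
    from wordfreq import zipf_frequency  # Zipf scale: ~0 (rare) to ~8 ('the')

    def flag_rare_words(item_text: str, lang: str = "en",
                        zipf_threshold: float = 3.0) -> list[str]:
        """Return words in the item rarer than the chosen Zipf threshold."""
        words = re.findall(r"[a-zA-Z']+", item_text.lower())
        return sorted({w for w in words
                       if zipf_frequency(w, lang) < zipf_threshold})

    # Example: a needlessly formal wording a designer might want to simplify.
    item = "How frequently do you utilise remunerated childcare provision?"
    print(flag_rare_words(item))  # likely flags e.g. 'remunerated'

Words flagged in this way would then be candidates for replacement with more common alternatives, in line with the aim of keeping questionnaires simple.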

On pre-testing specifically, there was an excellent paper by Aaron Maitland (Westat) and Stanley Presser (Joint Program in Survey Methodology) which looked at using pre-test results to predict survey question accuracy - more work in this area would be welcome. It could really help survey PIs tailor their design and pre-testing strategies to their measurement aims and available budgets. We need to move beyond simply mapping our tool kits and instead provide real guidance to researchers on which tools are essential and which might be optional.

In his keynote, Willis highlighted how our field needs to find a balance between absolutism and contextualism. In my own work, I sometimes face the difficulty that rules or tools suggest changes to questions that I fear will, in the end, undermine their utility for meeting the measurement aims that prompted the questions in the first place.

At the same time, following the new absolute rules - such as moving away from indirect questions and utilising optimal scale lengths and formats - is clearly moving the field forward. This tension will surely remain a challenge for researchers, highlighting the need for the QDET community to continue sharing its findings in a variety of ways that reach beyond formal academic publications.

With the amount of change our field faces as technology shifts, I hope that we will not have to wait as long for the next QDET. Many thanks to Amanda Wilmot (Westat) and all of the organising committee for a great conference that will help define the next phase of our field.