If you want to learn more about a person’s experiences and gather their feedback, the survey is often the jewel in a researcher’s crown.
Surveys enable researchers to collect data that spans the spectrum from quantitative to qualitative. With the survey as a tool, researchers can administer psychometric scales, present multiple-choice questions, and gather open-ended written responses along with demographics and psychographics, all while offering participants an incentive.
Designing a quality survey takes time. There are best practices and common pitfalls that you should be aware of, especially in the design process. Check out these dos and don’ts—they’ll make your surveys go the extra mile.
Editor’s note: If you are looking for best practices on writing a scale, including item generation and response option formatting, check out our post on scale writing in psychological research. That article covers those processes in greater detail.
A survey itself is essentially a set of questions, but survey research involves the entire process of design, data collection, and analysis. This article focuses on developing self-report (self-administered) assessments for survey research.
Survey research is a process with a purpose, so don’t try to survey without one. You may find that your question is impossible to answer or, even worse, that it has already been adequately answered by prior research (Kelley et al., 2003).
The first step in survey research is to define the aim or purpose (Brace, 2004). You could measure any number of objectives, from customer satisfaction to brand preferences to habits and perceptions. Stating the purpose of the research will help you identify the target population and the samples you will draw from it.
For example, if you plan to study customer satisfaction with a particular product, then your sample would consist of people who have purchased and consumed the product.
Whether it’s business or research, surveys are a versatile tool with pros and cons.
This list has been adapted from Kelley et al. (2003).
To design the best survey that you can, it may be helpful to conceptualize the questionnaire as an indirect conversation between the researcher and the participant (Brace, 2004). In short, the researcher asks and the respondent tells. However, as market researcher Ian Brace noted in his book Questionnaire Design, these aren’t typical conversations.
“We ask them to recall events that to them are often trivial, such as the breakfast cereals that they bought, or the choice of flavours of yoghurt offered in the supermarket. We frequently ask them to analyse and report their emotions and feelings about issues that they have never consciously considered, such as their feelings about different brands of paint. Even if they can recognize their feelings and emotions, can they articulate them? Why should they make any effort to do so?” (2004, p. 3).
Some of the most common pain points in surveys revolve around the user experience.
Users take surveys for different reasons (some want to give feedback, others are genuine volunteers), but all of them are giving you something valuable: their time.
To obtain quality data, the survey needs an efficient design. Quality data comes from participants who are able to pay attention, comprehend the questions, and stay engaged enough to provide honest feedback through to the end of the survey.
For the researcher, that means a few things. You must learn how to ask questions and design a survey to maximize comprehension, ease of use, and participation.
This section will briefly outline some of the question types that you can include in your survey. If you are looking for specifics on how to format items, select a measurement scale, label response options, or write effective questions, please refer to our post on scale writing in psychological research.
There are two main types of questions that you can include in your survey: open-ended and closed. Open-ended questions have an undefined range of responses, and the participant answers the question in their own words. This can provide more flexibility and detail. As one researcher phrased it, open-ended questions “let respondents describe the world as they see it” (Fink, 2003, p. 17).
But that detail comes at a cost. Open-ended questions demand more effort from respondents and add more analysis work to the researcher’s plate.
While more difficult to write, closed questions have several advantages. For starters, they let respondents select from a set of pre-coded response options, which can encourage participation. From the researcher’s end, closed questions are readily available for statistical analysis; they are easier to record and analyze with larger sample sizes; and they tend to be more reliable over time (Fink, 2003; Brace, 2004).
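To make that analysis advantage concrete, here is a minimal sketch in Python using pandas. The column names, the sample data, and the 5-point coding are hypothetical illustrations, not taken from any of the studies cited here.

```python
import pandas as pd

# Hypothetical pre-coded responses to a closed, 5-point Likert item
# (1 = "Strongly disagree" ... 5 = "Strongly agree").
responses = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4, 5],
    "satisfaction": [4, 5, 3, 4, 2],
})

# Because the options were coded before fielding, summary statistics and
# frequency tables fall out directly, with no manual coding of free text.
print(responses["satisfaction"].describe())
print(responses["satisfaction"].value_counts().sort_index())
```

With open-ended answers, by contrast, each response would first have to be read and coded by hand before any of this tabulation could happen.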
Once you have your questions prepared, it’s time to design your survey. The first step is to make sure the survey is not too long. Think of the participant as a person, not a data point. Would you want to sit through a 200-question survey if you did not have to? Probably not.
In terms of the aesthetic layout, Jones et al. (2013) suggested that the visual elements of the survey should be “smooth, simple and symmetrical shapes,” with soft colors and repetitive designs. It is also helpful to indicate where the survey begins and ends.
Another aspect is how the questions are physically presented. Do you display them one at a time, or in a grid/matrix? Grids and matrices display multiple questions on a single page, which can reduce the amount of time and effort needed to complete the survey.
However, this can lead to an effect called straightlining, where the participant provides the same answer to every question in the matrix. To detect straightlining, researchers can include a reverse-coded question; to reduce it, researchers can ask one question at a time (Qualtrics, 2022). You can also shuffle the order of multiple-choice options to combat straightlining, but it is not a best practice to randomize ordinal (i.e., Likert or semantic differential) options. A simple post-collection check is sketched below.
Perhaps most critical is the order of the questions. Some researchers suggest placing the easier questions at the beginning of the survey and the demographics at the end (Jones et al., 2013). Others suggest placing behavioral questions before attitudinal ones (Brace, 2004). Researchers broadly agree, however, that the order of the questions should not lead the participant toward any particular answers.
One concern in survey research is systematic bias introduced by insensitivity to cultural and racial differences. Barreto et al. (2018) highlighted that convenience samples can skew data. To reduce bias, the researchers proposed a set of best practices. Here are a few of the essentials:
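As a rough illustration of how straightlining might be flagged after the data come in, here is a short Python sketch. The grid data and the zero-variance check are illustrative assumptions rather than a method prescribed by the sources above; the reverse-coded item simply makes a uniform row of answers internally inconsistent.

```python
import pandas as pd

# Hypothetical raw answers to a five-item grid on a 1-5 agreement scale.
# "q5_rev" is worded in the opposite direction (reverse-coded), so an
# attentive respondent who picks 5 everywhere is contradicting themselves.
grid = pd.DataFrame({
    "q1":     [5, 3, 4],
    "q2":     [5, 2, 4],
    "q3":     [5, 4, 3],
    "q4":     [5, 3, 4],
    "q5_rev": [5, 2, 3],
})

# Flag respondents who gave the identical answer to every item in the grid,
# including the reverse-coded one -- the classic straightlining pattern.
straightliners = grid.nunique(axis=1) == 1
print(grid[straightliners])
```

In practice, a check like this is usually combined with other signals, such as unusually fast completion times, before a response is set aside.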
In the 21st century, a great number of surveys take place on the web and on mobile devices. To overcome technical hurdles, one researcher suggested that survey invitation emails should:
In terms of digital design on the computer, the choice between a single-item format and grids is not straightforward. Grids (also known as matrices) are suited to testing multiple hypotheses and variables, while single-item formats enable more accurate measurements and reduce bias (Debell et al., 2021).
If you decide to use a matrix, some researchers found that a 5 x 5 format can minimize dropout rates on computers and mobile devices (Grady et al., 2019).
When it comes to mobile devices, researchers found that grids, regardless of their size, reduce survey time but also increase straightlining. This effect is noticeably more pronounced on mobile devices than on computers (Stern et al., 2016).
Privacy and security are user concerns, especially with e-surveys. Placing a statement about the steps the researcher takes to ensure privacy and security can build trust. If your survey generates a report, make sure it is only sent to the participant.
Finally, one way to encourage participation and engagement is through incentives. Whether it be money, a prize, entry into a raffle, or even a note of gratitude, rewarding the participant with something is better than nothing. Thank them for their time.
References:
Baatard, G. (2012). A Technical Guide to Effective and Accessible Web Surveys. The Electronic Journal of Business Research Methods.
Barreto, M. A., Frasure-Yokley, L., Vargas, E. D., & Wong, J. S. (2018). Best practices in collecting online data with Asian, Black, Latino, and White respondents: evidence from the 2016 Collaborative Multiracial Post-election Survey. Politics, Groups, and Identities, 6, pp. 171–180.
Brace, I. (2004). Questionnaire Design: How to Plan, Structure and Write Survey Material for Effective Market Research. United Kingdom: Kogan Page.
Debell, M., Wilson, C., Jackman, S., & Figueroa, L. (2021). Optimal Response Formats for Online Surveys: Branch, Grid, or Single Item? Journal of Survey Statistics and Methodology, 9(1), pp. 1–24.
Fink, A. (2003). The Survey Handbook. United Kingdom: SAGE Publications.
Grady, R. H., Greenspan, R. L., & Liu, M. (2019). What Is the Best Size for Matrix-Style Questions in Online Surveys? Social Science Computer Review, 37(3), pp. 435–445.
Jones, T. L., Baxter, M. A., & Khanduja, V. (2013). A quick guide to survey research. Annals of the Royal College of Surgeons of England, 95(1), pp. 5–7.
Kelley, K., Clark, B., Brown, V., & Sitzia, J. (2003). Good practice in the conduct and reporting of survey research. International Journal for Quality in Health Care, 15(3), pp. 261–266.
Qualtrics. (2022). The Qualtrics handbook of question design.
Stern, M., Sterrett, D., & Bilgen, I. (2016). The Effects of Grids on Web Surveys Completed with Mobile Devices. Social Currents, 3(3), pp. 217–233.