Social science research is key to understanding and predicting compliance with COVID-19 guidelines, and much of this research relies on survey data. While much attention is paid to the wording of survey question stems, less is given to the response alternatives presented, which both constrain respondents' answers and convey information about the survey designers' assumed expectations. The focus here is on the choice of response alternatives for the types of behavioral frequency questions used in many COVID-19 and other health surveys. We examine issues with two types of response alternatives. The first type is vague quantifiers, such as "rarely" and "frequently." Using data from 30 countries from the Imperial COVID data hub, we show that the interpretation of these vague quantifiers (and their translations) depends on the norms of the respondent's country. If the mean amount of hand washing in a country is high, "frequently" likely corresponds to a higher numeric value for hand washing than it does in a country where the mean is low. The second type is precise numeric response alternatives, which can also be problematic. In a US survey, respondents were randomly allocated to one of two sets of response alternatives: one in which most of the scale corresponded to low frequencies and one in which most of the scale corresponded to high frequencies. Those given the low-frequency set provided lower estimates of the health behaviors. Thus, the choice of response alternatives for behavioral frequency questions can affect estimates of health behaviors, and how the response alternatives mold the responses should be taken into account in epidemiological modeling. We conclude with recommendations for choosing response alternatives for behavioral frequency questions in health surveys.