What is the future of survey-based data collection for local government research?

Trends, strategies, and recommendations 

Rachel Krause (University of Kansas), S. Mohsen Fatemi (University of Kansas), Le Anh Nguyen Long (University of Twente), Gwen Arnold (University of California-Davis), and Sarah Hofmeyer (University of Kansas)

Local governments are “where the rubber meets the road” for many policies, which makes them an important focus for the study of policy adoption, diffusion, implementation, and management. However, the data needed to examine them in a manner that enables generalizable results can be hard to come by. As a result, researchers often have to collect their own data about municipal priorities, policies, and decision-making via surveys targeting local government officials.

All of the authors of this blog post and the corresponding article have, in one way or another, been involved in administering such surveys, some of us for more than a decade. During this time, despite not significantly changing either our sampling frame or our approach, we have faced increasing difficulty eliciting responses, particularly when administering surveys online. We wondered whether other scholars shared our experience and decided to study that question. We also conducted an experiment comparing practical strategies for boosting falling response rates.

Response rate trends in published studies

We reviewed articles published in 13 top urban studies and public administration journals between 2010 and 2021 and identified 166 articles that used survey data collected from city or county officials. These articles together employed data from 176 distinct surveys on topics including economic development, sustainability, public service motivation, and performance information use. The surveys were administered across 27 different countries. Over 70% targeted unelected municipal staff, with the remainder targeting elected local leadership or a mix of both elected and unelected officials.

The figure below shows a two-way scatterplot of each survey’s response rate by the year it was administered. The trend line indicates a notable decline in average response rate over time. This downward trend is even more pronounced when considering only the subset of surveys administered online.

Figure 1. Survey response rate by year administered
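For readers who want to run a similar trend analysis on their own collection of published response rates, a minimal sketch follows. The data points are invented placeholders, not the 176 surveys behind Figure 1; the fitted slope is simply the average change in response rate per year.

```python
# Minimal sketch: fitting a linear trend to survey response rates by year.
# The data below are hypothetical placeholders, NOT the dataset in Figure 1.
import numpy as np

years = np.array([2010, 2012, 2014, 2016, 2018, 2020])
response_rates = np.array([0.45, 0.41, 0.38, 0.33, 0.28, 0.24])  # hypothetical

# np.polyfit with deg=1 returns [slope, intercept] of the least-squares line.
slope, intercept = np.polyfit(years, response_rates, deg=1)
print(f"Average change per year: {slope * 100:.1f} percentage points")
```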

An experiment comparing follow-up modes

In late 2021, we administered an online survey to staff from 2,049 US city governments concerning decision-making about environmental hazards. After the initial invitation email and three reminder emails, the response rate for completed surveys was slightly over 13%, much lower than what similarly structured survey efforts had yielded in the past. Following best practice, our next step was to use a non-web-based follow-up to try to increase this rate. However, we had a limited budget and little insight into how to maximize the response from this particular audience. This led us to conduct an experiment. The contacts from the 1,662 cities that neither responded to nor opted out of the emailed survey were randomly assigned to one of three follow-up groups: a hard-copy mailing, a postcard with a QR code, or a personal phone call. We recorded the labor and material costs associated with each approach in order to calculate its cost per contact and per response (see Table 1).
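To make the bookkeeping concrete, here is a minimal sketch of the random assignment to three follow-up arms and the per-mode cost arithmetic. The cost and completion figures below are hypothetical placeholders, not the values reported in Table 1.

```python
# Sketch: random assignment of non-respondents to follow-up arms,
# plus cost-per-contact and cost-per-response arithmetic.
# All dollar amounts and completion counts are hypothetical placeholders.
import random

random.seed(42)  # reproducible assignment

non_respondents = [f"city_{i}" for i in range(1662)]
random.shuffle(non_respondents)

# Deal the shuffled list round-robin into three equal-sized arms.
arm_names = ("mail", "postcard", "phone")
arms = {name: [] for name in arm_names}
for i, city in enumerate(non_respondents):
    arms[arm_names[i % 3]].append(city)

# Hypothetical (total labor + material cost in dollars, completions obtained).
results = {"mail": (4200.0, 160), "postcard": (900.0, 15), "phone": (2500.0, 60)}

for mode, contacts in arms.items():
    total_cost, completions = results[mode]
    print(
        f"{mode}: ${total_cost / len(contacts):.2f} per contact, "
        f"${total_cost / completions:.2f} per response"
    )
```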

Although it was, by a significant margin, the most expensive option, the traditional approach of mailing a paper survey yielded the largest number of completions and the best return on investment. Still, at over $25 per additional completed survey, researchers must think hard about whether even this approach is worth the cost.

Table 1. Cost Comparison of Survey Follow-up Modes

Conclusions and Recommendations

Because of their low cost, online surveys are an accessible way for researchers to collect survey data from local government officials. However, they have been particularly susceptible to falling response rates, meaning that researchers may no longer be able to rely on them alone. In addition to scheduling a longer survey administration period, during which more than the conventional two to three reminders can be sent, using alternate modes may help. Our experiment suggests that, despite their high up-front costs, mailed paper surveys are the most cost-effective follow-up. Along these lines, researchers would be wise to exercise caution when considering new, untested approaches to survey solicitation. For example, based on anecdotes and experience, we thought that postcards containing QR codes linking to the survey had potential as a relatively low-cost way to increase responses, but we were sorely disappointed.
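For researchers who nonetheless want to pilot this mode, producing the code itself is cheap and simple. The snippet below is our illustration using the third-party Python qrcode package; the survey URL is a placeholder, not the link from our experiment.

```python
# Sketch: generating a print-ready QR code that links to an online survey.
# Requires the third-party packages qrcode and pillow (pip install qrcode pillow).
# The URL below is a placeholder, not the survey from our experiment.
import qrcode

img = qrcode.make("https://example.com/your-survey-link")
img.save("postcard_qr.png")  # image file to place on the mailed postcard
```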

Surveys are an important and, to some degree, irreplaceable means of collecting data for urban research. The ease and affordability of online survey software have enabled the proliferation of surveys to the point that some local government officials have expressed survey fatigue. While other issues, including concerns about cybersecurity, are certainly at play, collective action is needed among urban researchers to ensure that the overuse of surveys does not threaten their viability. Urban researchers, ourselves included, should do a better job of sharing survey data with each other in an attempt to minimize the frequency with which the same sets of local officials are tapped. While some researchers may be understandably reluctant to “give away” their laboriously collected data, sharing could be facilitated if researchers communicated with one another before launching surveys to find ways to spread the burden of administration and ensure that the questions asked maximize data usability.

Read the full UAR article here.


Rachel M. Krause is a professor in the School of Public Affairs and Administration at the University of Kansas. Her research focuses on issues of local governance and policy related to urban climate and sustainability. She is a coauthor of the book “Implementing City Sustainability: Overcoming Administrative Silos to Achieve Functional Collective Action.”

S. Mohsen Fatemi is a PhD student in the School of Public Affairs and Administration at the University of Kansas. His research interests include energy policy and justice, collaborative governance, and local government.

Le Anh Nguyen Long is an assistant professor in public administration at the University of Twente. She is a public policy scholar whose main areas of interest are energy, environment, and emergent technologies. She is a coauthor of the book “Co-creation and Smart Cities: Looking Beyond Technology.”

Gwen Arnold is an associate professor in Environmental Science and Policy at University of California, Davis. Her research interests include local government decision-making, policy advocacy, and community resilience. Her non-research interests focus on dogs.

Sarah L. Hofmeyer is a doctoral candidate in the School of Public Affairs and Administration at the University of Kansas. Her research focuses on sustainable community food systems and the policies that shape them.
