Author: Jeffrey L. Bernstein (Department of Political Science, Eastern Michigan University).
We learned again in the 2020 election that polling is difficult. Simply asking voters their preferences and then using those preferences to predict election results is more art than science.
In its post-mortem, the polling industry has already been asking difficult questions about how it constructed samples, weighted respondents, and accounted for Trump voters. These topics will be debated in the lead-up to the 2024 presidential election.
Issues like these matter for understanding how polls work. The exercise I describe here, however, focuses on just one polling challenge: question formulation.
I use Walter Lippmann’s beautiful phrase “pictures in our heads” to explain this topic in my Introductory American Government course.
Would respondents answer differently if asked about “late-term” versus “partial-birth” abortion? Each side puts great effort into defining the issue in its own vocabulary, which suggests they believe more than a few people would respond differently to each version. This is partly because the two formulations paint very different pictures in respondents’ heads.
Questions with Biased Language
In class, I show examples of questions with biased wording. Surveys put out by political party organizations are a good place to start.
A quick Google search reveals many examples of “scientific polling questions” that the parties have used. One of the Trump campaign’s polling questions asks, “Whom do you trust most to protect America against foreign and domestic threats?” It offers two options: (a) President Trump and (b) a corrupt Democrat.
The 2020 DNC survey asks voters to identify the most disturbing aspects of Trump’s presidency, a question that presumes respondents find the presidency disturbing at all: an assumption that doesn’t follow good polling practice.
Questions like these are not meant to be taken seriously as research. This type of survey is known as FRUGing (Fund-Raising Under the Guise of research).
Such surveys serve a political purpose but teach little about the subtleties of question wording. It is far more fun, and more useful, to create legitimately formulated surveys in which even subtle changes make a big difference in responses. Can changing a single word, or the information we give respondents, influence their expressed views on political issues?
An Exercise on Polling Questions
For over twenty years, I have used an exercise that examines polling questions and their wording. The exercise spans two class sessions, and advances in survey technology have made it easier and more efficient to run.
Part 1: Writing and Collecting Questions
I divide students into small groups and ask each group to write four questions, which form two pairs.
Within each pair, I ask students to vary one aspect of the question, either the wording or the information provided. I stress that this variation should not be heavy-handed; for a lesson in what not to do, I remind them of the partisan surveys discussed earlier.
We don’t learn much from discovering that deeply biased questions produce different results. Instead, we focus on subtler forms of bias.
Each group selects two topics and writes a pair of questions on each. The questions can be about any topic; while I encourage students to consider widely discussed national issues, I also allow polling on smaller topics.
The same issues come up in my classes year after year: climate change, marijuana legalization, and racism. This gives me a good window into the politics and lives of my students.
The easiest questions swap one word for another. For example: do you support or oppose “pot” vs. “marijuana”? “Capital punishment” vs. “the death penalty,” “marriage equality” vs. “same-sex marriage,” and so on.
Verb choice offers many variations as well: “protect” vs. “save” the environment, or “restrict” vs. “regulate” gun ownership.
Students may also choose to provide slightly different information in one version of a question than in the other, and then see whether respondents give different answers.
After collecting the questions, I sort through them and choose about a dozen for the official survey, which I set up in SurveyMonkey.
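The logic of the exercise is a classic split-ballot design: each respondent randomly sees one of two wordings, and the class compares response distributions across versions. As a minimal sketch of that design, the Python snippet below randomly assigns respondents to a wording and tallies answers by version. The question pairs, function names, and topics here are illustrative assumptions, not the actual survey items or any SurveyMonkey feature.

```python
import random

# Hypothetical question pairs: each topic has two wordings (version A / version B).
# These examples are illustrative only, not the actual class survey items.
QUESTION_PAIRS = {
    "environment": ("Do you support efforts to protect the environment?",
                    "Do you support efforts to save the environment?"),
    "guns": ("Do you favor laws that regulate gun ownership?",
             "Do you favor laws that restrict gun ownership?"),
}

def assign_wording(respondent_id: int, topic: str) -> str:
    """Randomly assign a respondent to one of the two wordings.

    Seeding with the respondent ID keeps each person's assignment stable
    if the survey logic runs more than once.
    """
    rng = random.Random(respondent_id)
    version_a, version_b = QUESTION_PAIRS[topic]
    return version_a if rng.random() < 0.5 else version_b

def tally(responses):
    """Count answers separately for each wording.

    `responses` is a list of (wording, answer) tuples; the result maps
    each wording to its answer counts, so the two versions can be compared.
    """
    counts = {}
    for wording, answer in responses:
        counts.setdefault(wording, {"support": 0, "oppose": 0})
        counts[wording][answer] += 1
    return counts
```

In practice a survey platform handles the randomization; the point of the sketch is that the only difference between the two groups is the wording itself, so any gap in the tallies can be attributed to the words.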