COMMENTARY: The pitfalls of polls

Public opinion polls drive much of America’s discourse about politics. Yet polls often are misinterpreted because many people in the media and the electorate do not have a clear idea of how they are conducted, how they should be interpreted, and how they should be used.

[Photo caption: David C. Wilson, professor of Political Science and International Relations, University of Delaware. Photo courtesy of University of Delaware.]

Polls are most valuable when they are used as discussion points rather than accepted as facts, as they often are presented in the media. Polls reflect public opinion, not fact. And polls are only one indicator of public opinion, along with voting, campaign contributions, media coverage, endorsements, bumper stickers, and public forums.

It is crucial, therefore, to understand what polls tell us—and what they don’t.

Who paid for the poll, and why?

Some polls are paid for by a media outlet, a political party, or a private organization in hopes of producing information that supports an agenda or goal. Often poll sponsors choose to release only the results they judge to be most newsworthy, or the results that are most favorable to them. Be wary of polls that do not fully disclose population, sample size, how the data were collected, when the data were collected, the margin of error, and a complete report of the questionnaire and raw topline results.

What do the numbers really mean?

Understand, too, that raw poll numbers are not necessarily meaningful. For accurate interpretation, poll results must be viewed in context.

For example, recent media reports have highlighted polls indicating relatively low approval ratings for President Obama. The implication is that the President is not performing well, and the public is unhappy. Such a narrative leaves many questions unasked and unanswered about the President’s rating relative to others. Are the results low in relation to all other presidents, to modern presidents, or to Obama during his presidency? And who is doing better than Obama?

In an August poll of 620 Delaware voters conducted by Public Policy Polling, the President’s approval rating was 50%. The raw number is average. Yet none of the Delaware political figures in the poll had a statistically higher rating than Obama. All scored statistically equal to Obama or lower: U.S. Representative Mike Castle, 51%; Governor Jack Markell, 50%; U.S. Senator Tom Carper, 47%; U.S. Senator Ted Kaufman, 37%; Democratic U.S. House candidate John Carney, 31%; New Castle County Executive and U.S. Senate candidate Chris Coons, 31%; Republican U.S. Senate candidate Christine O’Donnell, 23%; Republican U.S. House candidate Michele Rollins, 18%; Republican U.S. House candidate Glen Urquhart, 15%.

Among respondents identified as “moderates,” Obama received a 58% approval rating, compared with 58% for Markell, 56% for Castle, 55% for Carper, 43% for Kaufman, 35% for Carney, 33% for Coons, 16% for Rollins, 15% for O’Donnell, and 12% for Urquhart.

The point is that all poll results are relative to something else and must be placed in a proper context to be interpreted correctly.

A snapshot, not a crystal ball

As a method for understanding public opinion, polls are designed to provide a snapshot of the moment, not a picture of the future. Political scientists know that using polls to predict voter behaviors and election outcomes is risky.

A poll result can be used in conjunction with other indicators to help predict outcomes, but this is only useful in the final weeks and months of an election season—and even then, the margin of error usually makes races too close to call.

Remember, a poll is a snapshot of opinions at one moment. Those opinions can shift dramatically in a day or a week, depending on both random and deliberate events.

Errors in polling

The larger point about polls is that no two are alike, and they all contain some form of error. Error does not invalidate polls. But the public needs to become savvy consumers of polling data, understanding their limitations and caveats.

Errors in polls come from many sources. When groups of individuals are missing or underrepresented in a population to be sampled in a poll, that is known as a coverage error. College students, individuals without phones, and military personnel are often underrepresented in polls.

Sometimes people who are sampled refuse to participate in a survey. That is a survey non-response error. Or they systematically refuse to answer certain questions, an item non-response error. People who distrust polls tend not to respond, and sensitive or threatening questions tend to elicit item non-response.

A measurement error occurs when poorly written or misleading survey questions lead to problems in interpreting the results. For instance, asking people how satisfied they are with “the government and its leaders” is problematic because it’s not clear whether the question is about government or the leaders of government.

A common but often misunderstood type of polling error is a sampling error. It occurs when only a fraction of an entire population is surveyed. Sampling error involves a tradeoff between the cost of contacting every member of a population vs. contacting a fraction, or sample, of them and accepting some amount of error. Pollsters express sampling error in terms of the margin of error. Most pollsters will tolerate a margin of error of up to 5%.
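The margin of error that pollsters report typically comes from the standard formula for a sample proportion at 95% confidence. As a rough sketch (assuming simple random sampling; real polls weight their samples, so published margins may differ), it can be computed like this:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate margin of error for a sample proportion.

    p: observed proportion (between 0 and 1)
    n: sample size
    z: z-score for the confidence level (1.96 gives roughly 95%)

    Assumes simple random sampling; actual polls apply weighting,
    so their published margins can differ from this figure.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) for a 620-voter sample, as in the
# Public Policy Polling survey cited above:
moe = margin_of_error(0.5, 620)
print(f"{moe:.1%}")  # roughly 3.9%
```

Note that the margin shrinks only with the square root of the sample size: quadrupling the sample merely halves the error, which is why pollsters accept samples of a few hundred to a few thousand respondents.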

Interpreting a Delaware poll

For instance, an August 5 poll by Rasmussen Reports showed that in Delaware’s U.S. Senate race, Representative Mike Castle (R) was favored over New Castle County Executive Chris Coons (D) by 49% to 37%. The poll did not survey every Delawarean, of course, but rather took a sample of the population. Therefore, like most polls it had a sampling error. Rasmussen calculated that margin of error to be 4.5%. That means we can assume the figures to be accurate within a range of plus or minus 4.5%.

What the poll really said, then, is that somewhere between 44.5% and 53.5% of Delawareans supported Castle, while 32.5% to 41.5% supported Coons. So Castle could hold a 3-point advantage, or a 21-point advantage.

The raw numbers in this example do not add up to 100% because 5% supported another candidate and 9% said they were unsure.
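Working the numbers from the Rasmussen example directly (the figures are from the poll cited above; the helper function is my own illustration):

```python
def support_interval(point: float, moe: float) -> tuple[float, float]:
    """Return the (low, high) range implied by a point estimate
    and a margin of error, both in percentage points."""
    return (point - moe, point + moe)

# Rasmussen's August 5 figures with a 4.5-point margin of error:
castle = support_interval(49.0, 4.5)  # (44.5, 53.5)
coons = support_interval(37.0, 4.5)   # (32.5, 41.5)

# The implied lead runs from the narrowest gap to the widest:
print(castle[0] - coons[1], castle[1] - coons[0])  # 3.0 21.0
```

Because the margin of error applies to each candidate's estimate separately, the uncertainty in the gap between them is wider than the uncertainty in either number alone.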

Another reason that polls are not always accurate predictors of elections is that they measure “expressed” support, not actual support. When that 9% of “unsure” respondents in the Rasmussen poll cast their ballots on Election Day, their votes will be distributed among the candidates. And some of the respondents might decide not to vote at all.

Even among supporters of a candidate, approval ratings don’t necessarily translate into votes. Local, national, or global crises, personal scandals, performance in debates, media interviews, political endorsements, and negative advertisements can change a candidate’s fortunes in a heartbeat.

Polling methods affect responses

The way pollsters collect data can influence the results as well. Organizations such as Rasmussen Reports, Public Policy Polling, and SurveyUSA Research use computerized voices (also known as IVR or “interactive voice response”) to ask questions over the phone. The Gallup Poll, the Pew Research Center, and news media polls tend to use human telephone interviewers. Many university-based polling centers, such as Quinnipiac University and Franklin & Marshall College, use student interviewers. The modes of data collection—face-to-face, telephone and IVR, internet, and mail surveys—create different conditions for responding to questions and can therefore influence response and interpretation.

The wording of survey questions and of the possible responses also influences poll results. For instance, to survey presidential approval, some pollsters ask, “Do you approve or disapprove of the President’s performance?” This question offers only two possible responses: approve or disapprove. Other polls ask, “How much do you approve or disapprove of the President’s performance?” They offer four response choices: strongly approve, approve, disapprove, or strongly disapprove. Variations in wording across polls make it difficult to track opinions over time and make comparisons among polls.

Finally, the way a survey is designed can produce errors as well. The ordering of questions and responses, the specific wording of questions, the race, sex, age, education, and experience of the interviewer, and the length of the survey all affect responses to survey questions.

The limitations—and value—of polls

Taking into account all these variables, social scientists understand that surveys are not facts. They are at best reasonable estimates of what the public is willing to express in a conversation with a stranger, or with a computerized voice, at one moment in time.

Polls are not inherently bad or good. They are simply a scientific means of gathering and estimating public opinion. In the end, they are valuable only to the extent that they are used and interpreted appropriately.

David C. Wilson is a professor of Political Science and International Relations at the University of Delaware.