They’re not necessarily untrue, but they’re not entirely factual.
“We don’t need it do we?
It’s fake that’s what it be to ‘ya, dig me?
Don’t believe the hype”
Don’t Believe the Hype
— Public Enemy
As November approaches, there is a noticeable upswing in polling. Worse, media outlets feel compelled to bloviate about the results. Even if you were only marginally attentive in 2016, you might feel some residual PTSD as you hear these polling results. These same “experts” repeatedly said that Hillary Clinton was all but assured of becoming the first female American president.
As we all know, that did not happen.
We Americans are fascinated by, and increasingly reliant on, the pictures that poll results purport to show. But the stories they tell can often be misleading …or worse, fictitious.
HISTORY
The first known poll dates back to the 1824 presidential election. The data collection back then was conducted in “taverns, militia offices and at public meetings,” and the results were then published in the Harrisburg Pennsylvanian in July 1824. The results showed Andrew Jackson leading John Quincy Adams in the race for president.
Whether the publication of the poll results swayed the vote, we don’t know. But Jackson went on to win the election, and thus a new methodology of prediction was born.
What in 1824 was just a cottage industry had, by the 20th century, become a very powerful one.
And then, in 1936, George Gallup created modern polling.
During that election, between incumbent Democratic President Franklin D. Roosevelt and Republican challenger Governor Alf Landon of Kansas, Gallup’s main competitor, the weekly magazine The Literary Digest, conducted a “straw poll,” an informal canvass of popular opinion.
Having successfully called past political contests with this method, The Literary Digest used a simple postcard campaign that drew 2.3 million responses to predict the 1936 presidential contest. So, in 1936, it confidently predicted Landon would win.
As we all know, that did not happen.
Conversely, George Gallup selected a smaller but more demographically representative sample of citizens. As a result of this broader cross-section of people, Gallup’s poll showed a different result. And within two years, both Alf Landon and The Literary Digest had become footnotes in history.
However, George Gallup’s methodology of surveying public opinion through smaller but more representative samples became the industry norm, and his organization led the industry for decades.
The ensuing years have seen an increasing reliance on polling results. Journalists, television news, businesses, politicians, special interest groups, book clubs, social media influencers, and bar patrons now rely almost solely on polling to shape their stories, practices, and opinions.
Polls shape and influence a lot. And if we learned anything from 2016, that’s not necessarily a good thing.
MODERN
Today’s leader in polling is widely considered to be the non-profit Pew Research Center. In 2012, Pew put its own methods under the microscope. What it found was:
“fewer than one in 10 Americans contacted for a Pew survey bothered to respond.”
A question to then consider would be, “Are polls reliable gauges for what we’re thinking?”
There are two primary types of polling, “scientific” and “unscientific.”
1. SCIENTIFIC
Scientific polls can accurately reflect and describe public opinion (Gallup, Pew Research).
2. UNSCIENTIFIC
Unscientific polls report what people reply (internet, most media outlets).
This is not to say that these polls on the internet can’t be scientific. They can, but it’s important to note that internet respondents largely self-select, and, as Pew’s own research above suggests, only about one in ten people contacted even bother to respond.
Polling aggregator Real Clear Politics provides a lengthy and robust look at various pollsters and their (scientific) results.
As time becomes a more precious resource, Americans have become more reliant on the information that polls provide. The more curious among us may wonder which poll to believe. Unfortunately, we live in a less and less inquisitive world, and many people simply believe what they are told.
Without even a cursory look at the data, and the methodology behind the poll, the results can be egregiously misleading.
Can a poll of 1,600 American adults truly reflect the opinions of 327 million Americans?
If you take a brief pause and look into the poll results, there are a series of questions worth considering. Just thinking about five basic questions can help you have a clearer understanding of the poll you are looking at:
Who did the poll?
The most straightforward and basic question. Was it an academic institution, a polling firm, a media outlet, a political campaign, or some other entity or person? Any poll worth examining will note who conducted it. If that essential information isn’t transparent or is being withheld, it’s safe to say it’s not a reliable poll.
Who paid for the poll?
Polls are conducted for particular reasons, and they aren’t free to execute. Knowing who paid for a poll can tell you what the sponsor considers important.
A business may take a poll to test a new marketing campaign.
A special interest group may want to take the temperature of the public on a topic.
A politician may want to better understand where they stand in the run-up to election day.
However, with both political and special-interest polls, knowing who paid for the poll reveals its objective. A political candidate or special interest group may frame questions in such a way as to bias the answers.
For example, a poll asking Americans whether to cut foreign aid spending from the budget may first provoke a hair-raising adverse reaction, because many Americans believe foreign aid makes up over 25% of the budget.
A closer look reveals something different.
According to Oxfam, foreign aid accounts for less than 1% of the American budget. In this hypothetical poll example, if more accurate information were included in the polling preamble, it might yield a different result.
How many people were interviewed?
As George Gallup proved in 1936, and many women will attest, size matters — larger isn’t always better.
One of a poll’s purposes is to provide estimated outcomes. All else being equal, the more people interviewed, the smaller the margin of error. But in pretty much every place imaginable in the world, all else is never equal.
Considering that, it’s safe to say that a smaller but more representative sample can furnish more accurate data.
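The relationship between sample size and margin of error can be sketched with the textbook formula for a polled proportion. This is a minimal illustration, not any pollster’s actual method; the 95%-confidence z-score of 1.96 and the worst-case proportion of 0.5 are conventional assumptions:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a polled proportion.

    Assumes a simple random sample; p = 0.5 is the worst case,
    and z = 1.96 is the conventional 95%-confidence z-score.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Quadrupling the sample only halves the margin of error.
for n in (100, 400, 1600, 6400):
    print(f"n = {n:4d}: margin of error is about {margin_of_error(n) * 100:.1f} points")
```

Notice that the size of the population never appears in the formula; for a large population, only the sample size (and how it was drawn) matters, which is why the diminishing returns set in so quickly.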
How were the people chosen?
Using a random, or probability, sample is the foundation of scientific polling. The principle is that if every person in the target population has an equal chance of being selected, the results gathered will more faithfully reflect that population.
In other words, selecting people at random will result in more representative information — presuming there is no bias in the phrasing of the questions.
This probability sampling is the reason that 1,600 American adults can reflect the opinions of 327 million Americans with only a narrow margin of error (again, presuming there is no bias in the phrasing of the questions).
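To see why random selection does the heavy lifting, here is a small simulation. The 52% support figure is invented for illustration; the point is that a random sample of 1,600 lands close to the true value no matter how large the population is:

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

TRUE_SUPPORT = 0.52  # hypothetical: 52% of the population backs candidate A

def simulate_poll(sample_size):
    """Draw each respondent independently at random from the population.

    Because everyone has an equal chance of selection, we never need to
    enumerate all 327 million people; each draw is simply a coin flip
    weighted by the true level of support.
    """
    hits = sum(random.random() < TRUE_SUPPORT for _ in range(sample_size))
    return hits / sample_size

estimate = simulate_poll(1600)
print(f"true support: {TRUE_SUPPORT:.1%}  poll estimate: {estimate:.1%}")
```

Run it with different seeds and the estimate typically stays within a couple of points of 52%, right in line with the margin of error a sample of 1,600 predicts. Bias in question wording, by contrast, shifts every draw the same way, and no sample size can average that out.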
What region and what group does it represent?
In big elections, roughly 60% of Americans vote. Understanding the regionality of polls is critical in the less substantial primaries or pre-election polls where only about 25% show up to vote.
But it’s this smaller or less substantial polling that can influence a vote in the larger, more impactful elections. Therefore, it’s imperative to know what sample the poll is drawn from.
In purely simplistic terms, a poll with results derived from only veterinarians in Texas will not reflect the view of all Americans …because not all Americans are veterinarians …and we don’t all live in Texas.
To lessen any confusion, misrepresentation, or fabrication, especially in political polls, look for wording like:
“registered voters”
“likely voters”
“among veterinarians in Texas”
Such distinctions are essential to understanding polling but are particularly critical for political polling. For better or worse, over the past 195 years, polling has embedded itself into American life.
As time becomes more valuable and attention spans become shorter, many are relying on polls to help them in their decision making. Pausing to consider these basic questions can help better frame what the information is showing.
Polls are no longer just informative. Since 2016, in America, incorrect polling has proven to have profound sociological, economic, and political consequences.
Bestowing one person, group, or entity with such influence and power should give everyone pause.
In and of themselves, polls should be didactic tools.
At their very best, they’re transparent with their data and reflect the population or constituency they’re serving.
At their very worst, they’re misunderstood, misinterpreted, and can have grave consequences.
If we learned anything from 2016, it should be that a poll’s inherent margin of error or biases can lead to catastrophe.
As we all know, we can’t let that happen …again.
Don’t believe the hype — vote.