“Never trust a poll you didn’t pay for.” Political axiom.
In the run-up to the November election we are bombarded by polls allegedly telling us how the candidates are doing. Results often vary widely, which makes it confusing and difficult to understand the true situation. If you know how polls are done and what the numbers really mean, you can better evaluate the information.
Polls are conducted by asking a sample of people a series of questions and then tallying the results. The variables in this process include the number of people you ask, which people you ask and what questions you ask them.
It is impractical to poll everyone, so you poll a sample of a few hundred individuals, hoping they are representative of the full population. With that hope comes uncertainty, and the smaller the sample the greater the uncertainty. This uncertainty is called the Margin of Error (MOE), and it expresses how far your sample could deviate from the overall population. If your poll says Candidate A has an approval rating of 49% with an MOE of 3%, then there is a 95% certainty that the true rating is between 46% and 52%. The MOE does not account for a non-representative sample.
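For readers who want to see where that number comes from, here is a minimal sketch, assuming the textbook shortcut of a simple random sample at the standard 95% confidence level (z ≈ 1.96); real pollsters typically apply further adjustments:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a simple random sample.
    p is the observed proportion (e.g. 0.49) and n is the sample size."""
    return z * math.sqrt(p * (1 - p) / n)

# About 1,068 respondents gives roughly a 3-point MOE on a 49% rating...
print(round(100 * margin_of_error(0.49, 1068), 1))  # ~3.0
# ...while a sample of only 267 roughly doubles the uncertainty.
print(round(100 * margin_of_error(0.49, 267), 1))   # ~6.0
```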
Imagine your town is split 50-50 between Democrats and Republicans, but your polling sample is 60% Democrats. Your results will be skewed towards Democrats, but that bias will not be reflected in the MOE.
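A toy calculation shows how the makeup of the sample alone moves the headline number; the support figures below are hypothetical, chosen only to illustrate the mechanism:

```python
# Hypothetical town: half Democrats, half Republicans.
# Assume Democrats favor Candidate A 90-10 and Republicans favor A only 10-90.
dem_support_for_a = 0.90
rep_support_for_a = 0.10

representative = 0.5 * dem_support_for_a + 0.5 * rep_support_for_a  # fair 50/50 sample
oversampled = 0.6 * dem_support_for_a + 0.4 * rep_support_for_a     # 60/40 Democratic sample

print(round(representative, 2))  # 0.5  -- Candidate A at 50% in a representative sample
print(round(oversampled, 2))     # 0.58 -- same town, but A appears to lead at 58%
```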
Polls are usually expressed as percentages, such as Candidate A has 45% and Candidate B has 55%. When the polling gets close you will often see tenths of a percentage point shown. For example, Candidate A has 50.2% and Candidate B has 49.8%. People may assume that the extra digits mean the polling is more accurate, because accuracy and precision are often considered to be the same thing. They are not. Accuracy describes how close a measurement is to the truth. Precision describes how finely the measurement is divided.
Imagine you need to measure a window for new blinds. You grab your father’s trusty wooden yardstick and measure the width of the windowsill. You measure carefully. The yardstick has divisions every 1/8th of an inch, so it has a precision of 1/8th, or 0.125, of an inch. You measure 33 and 3/8ths of an inch and order your blinds. When they come they are nearly an inch too big and won’t fit in the sill. Why?
That trusty wooden yardstick has been in the family for decades. It has hung on a nail in the broom closet since you were a child. Over all those years the yardstick’s wood dried and shrank slightly, by only 3%. If you were measuring something an inch long, that shrinkage would amount to less than 1/32 of an inch, but when you measured the windowsill, even though you measured to 1/8 of an inch, the measurement you made was an inch too long because the yardstick had shrunk by an inch end to end. The precision of the yardstick was still 1/8 of an inch, but its accuracy was off by about an inch.
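A quick back-of-the-envelope check of that arithmetic, using the 3% shrinkage from the story:

```python
shrinkage = 0.03        # the yardstick has dried and shrunk 3% end to end
reading = 33 + 3 / 8    # inches read off the shrunken yardstick

# Each marked "inch" on the shrunken stick is only 0.97 of a true inch,
# so the sill is actually narrower than the reading suggests.
true_width = reading * (1 - shrinkage)

print(round(true_width, 2))            # ~32.37 inches
print(round(reading - true_width, 2))  # ~1.0 inch of error, despite 1/8-inch precision
```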
If your news outlet is reporting that a candidate has 48.3% approval with an MOE of 3%, then the numbers to the right of the decimal point are essentially meaningless because they are insignificant compared to the possible error. Reporting results to a tenth of a percent has little to do with accuracy and much more to do with persuasion. Don’t be fooled.
People like winners, and if they know little about the candidates they will tend to like the candidate their friends like. How do people know what their friends like? Polling. Pollsters know this, so if a pollster wants to support a particular candidate they can adjust the variables to achieve the desired poll result. This is called Push Polling: pushing participants toward a particular conclusion.
Adjusting the sample size will not directly change the results, but who is in the sample and what questions are asked will definitely change the results. If you want to produce a poll that favors a particular candidate, all you need to do is oversample people who are likely to support your candidate. You can also adjust the questions asked.
Push Poll questions typically start with a statement followed by the question. For example, “Knowing that Candidate B was convicted of a felony, would you likely vote for Candidate A or Candidate B?” No surprise, Candidate A comes out ahead.
How do we know Push Polling happens? Because early in the campaign season there are polls with widely scattered results, but as the election nears the polls always seem to tighten. Pollsters understand that if their poll varies widely from the results of the election it will negatively affect their credibility. They want the last poll they report before the election to be the most accurate.
Accurate polling does happen. It is the internal polling done by the campaigns and not released to the public. Accurate polling is critical to campaigns because it drives the allocation of resources.
Smart voters ignore polling results unless they have details about how the poll was conducted. Smart voters can infer the true situation by objectively observing the actions of the campaigns because those actions are driven by accurate internal polling. Pay attention to what they do, not to what they say.
It’s just common sense.
2 replies on “Evaluating Polls”
If you want a Margin of Error (MOE) of 3.0 percent from a poll, then you will need a sample size of 1,068 rather than just “a few hundred individuals.” A quarter of that sample size, 267, will instead give you a Margin of Error of 6.0 percent.
Bob,
Please explain what to look for in the behavior of campaigns and what those actions mean. How can voters use that information to learn which candidate is being more honest, revealing more of what to expect from them if elected? For example, are they explaining actual policy actions they promise to take and how they will implement them, or are they just announcing results that sound good without revealing how they would achieve them?