Posted on 10/21/2003 12:23:07 AM PDT by Wolfstar
Ah, the omnipresent poll. The media punditocracy is addicted to using polls to tell us what "the American people feel" (never think) about everything from a president's so-called approval rating, to how a candidate's chances stack up against others in a race, to our "feelings" about various policy and social issues. Although the public has absolutely no way to evaluate the vast majority of polls for accuracy, most of us simply accept them as incontrovertible indicators of truth. Why? Probably because we're told that polls are "scientific" since they use statistical analysis, and most of us tend to equate science with a search for objective truth.
Despite the scientific patina of a margin of error (MOE), how does one incontrovertibly prove a poll to be accurate? For nearly all polls, the answer is that it can't be done. In the political realm, only polls taken closest to an election can be compared for accuracy to the actual results of real voters expressing their real attitudes at the ballot box.
Nevertheless, the public is conditioned not to look too closely at the accuracy of polls. We're told polls are "just a snapshot in time." So we learn to shrug when there are big swings in, say, a president's "approval ratings" from one month to the next even though nothing particularly new happened during that time. Despite the fact that only a handful of polls close to elections can be verified against objective results of real votes, polls affect our society in several ways. The worst is that many politicians, hanging on their every nuance, too often make crucial policy decisions based on polls wielded like clubs by the media and special-interest groups.
So what's the truth about polls? Few in the public have the time or resources to do an in-depth study in an attempt to answer that question, while those who butter their bread on the back of polls have no incentive to do so. Ah, but fate stepped in this year in the form of the California Recall and provided an excellent opportunity to do a case study of multiple polls taken in a highly compressed period of time. Since all poll results in this study were obtained relatively close to Election Day, a comparison to actual election results is not only easier, but also more instructive than, say, an apples-to-oranges comparison of a year's worth of "approval ratings" to an election outcome.
This study encompasses 20 polls taken by nine polling organizations between Aug. 7 and Oct. 5, 2003. It looks at the five most-watched poll questions, those which purported to measure the percentage of the vote for: Yes on the recall, No on the recall, Arnold Schwarzenegger, Cruz Bustamante, and Tom McClintock.
Because there are 20 polls and five questions, this study encompasses 100 individual results. When analyzed by several criteria, such as poll date and accuracy for each of the five questions, an eye-popping picture of polling precision, or lack thereof, comes into sharp focus. Note that "accurate" in this study means a poll result that falls within that poll's MOE as compared to the actual election results. "Inaccurate" or "wrong" means a poll result outside that poll's MOE as compared to the actuals.
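That accuracy test is simple arithmetic. Here is a minimal sketch in Python; the function name is mine, and the example figures are illustrative ones drawn from the tables later in this article:

```python
def within_moe(poll_pct, actual_pct, moe):
    """A poll result counts as "accurate" in this study when it
    falls within the poll's margin of error of the actual vote."""
    return abs(poll_pct - actual_pct) <= moe

# A 57% Yes result with a +/-3.5% MOE vs. the actual 55% Yes vote:
print(within_moe(57, 55, 3.5))   # True: off by 2 points, inside 3.5
# A 63% Yes result with a +/-3% MOE vs. the same 55% actual:
print(within_moe(63, 55, 3.0))   # False: off by 8 points
```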
Q: Are political polls accurate?
A: Based on this study, the answer is: While some results for some questions in a multi-query poll may be accurate, most polls, when taken in their entirety, are not. Here's why: of the 100 individual question results, more than half (57) were wrong (outside their MOEs) as compared to the actual election results.
However, the stunning fact is that only 1 poll in 20 gave accurate results across the board for all five questions. This was the last poll taken by Survey USA from Oct. 3-5. Only 4 of 20 got both the Yes and No on recall questions right, and only that last Survey USA poll gave accurate candidate results.
Looking at accuracy another way, of the 100 individual question results, fewer than half (43) were right within their polls' MOEs. Most (67) underestimated the actual election results, and only seven poll questions called their election result precisely.
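Those totals can be rebuilt from the per-question tallies in the two results tables at the end of this article (the dictionary keys follow the tables' abbreviations for the five questions):

```python
# Per-question counts from the YES/NO and CANDIDATE results tables.
within  = {"YES": 11, "NO": 6,  "AS": 1,  "CB": 11, "TM": 14}
outside = {"YES": 9,  "NO": 14, "AS": 19, "CB": 9,  "TM": 6}
under   = {"YES": 7,  "NO": 18, "AS": 20, "CB": 15, "TM": 7}

print(sum(within.values()))    # 43 results right within their MOEs
print(sum(outside.values()))   # 57 results wrong
print(sum(under.values()))     # 67 results under the actual vote
```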
Q: Does averaging several polls over a period of time give a more accurate picture?
A: Some pollsters, reporters and others who rely on them believe that either a tracking poll or an average of several polls taken over a period of time is the best method of obtaining an accurate picture of public attitudes. Due to the compressed two-month recall campaign, all 20 polls, taken together, constitute a form of tracking poll. Yet, as already noted, their often wildly inaccurate results only contributed to a false week-to-week perception of the race. So would averaging the results of all 20 polls give a more accurate picture? Based on this study, the answer is a qualified yes. Here's why: first, all 20 MOEs were averaged to establish a baseline, which works out to ±3.6%.
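That baseline can be checked directly. The sketch below averages the 20 Yes-on-recall results and the 20 MOEs taken from the YES/NO table at the end of this article (newest poll first):

```python
# Yes-on-recall results and MOEs for all 20 polls, from the
# YES/NO Table Of Results (Survey USA Oct. 3-5 down to Gallup Aug. 7-10).
yes_results = [57, 54, 57, 61, 56, 63, 53, 57, 53, 50,
               62, 55, 52, 62, 64, 50, 58, 58, 54, 69]
moes = [5.6, 3.0, 4.8, 3.7, 3.0, 3.0, 3.5, 3.5, 3.0, 3.0,
        3.7, 4.5, 3.1, 3.4, 3.7, 3.0, 5.0, 2.0, 4.3, 4.0]

avg_yes = sum(yes_results) / len(yes_results)   # 57.25, i.e. "57%"
avg_moe = sum(moes) / len(moes)                 # 3.64, i.e. "±3.6%"
print(f"Average Yes: {avg_yes:.0f}%, average MOE: +/-{avg_moe:.1f}%")
```

Note that the averaged Yes figure (57.25%) lands within the averaged MOE of the actual 55% Yes vote, which is why the article calls averaging a qualified success.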
Q: Is the Margin of Error (MOE) really useful in assessing a poll's accuracy?
A: Based on this study, the answer is a resounding NO! The smallest MOE given was ±2%; the largest ±5.6%. Interestingly, ±2% was for one of the earliest, most inaccurate polls, while ±5.6% was for the last and most accurate.
Q: Do polls become more accurate closer to an election?
A: The broad answer is a qualified yes; qualified because, in this study, the polls were inconsistent on this question. Results for some questions in early polls were quite accurate, while some late poll results were very inaccurate. However, the trend was to become more accurate closer to the election.
As already noted, the last poll was the only one that got all five questions right within its MOE. The following table shows the total poll questions that the first and last seven polls got right within each poll's MOE. An accuracy of 100% in this instance would be 35 questions right (7 polls x 5 questions). Note that just under twice as many question results were right in late polls as in early ones. Nevertheless, even the late polls (last two weeks) got less than half (49%) of the questions right.
| Question | First 7 polls (8/8 to 9/8): # right | Last 7 polls (9/24 to 10/5): # right |
|----------|-------------------------------------|--------------------------------------|
| YES      | 2                                   | 5                                    |
| NO       | 1                                   | 2                                    |
| AS       | 0                                   | 1                                    |
| CB       | 1                                   | 4                                    |
| TC       | 5                                   | 5                                    |
| TOTAL    | 9 (26%)                             | 17 (49%)                             |
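The percentages in that comparison can be recomputed from the raw counts; a quick check in Python (counts taken from the table above):

```python
# Tally first-seven vs. last-seven poll accuracy.
early_right = {"YES": 2, "NO": 1, "AS": 0, "CB": 1, "TC": 5}
late_right  = {"YES": 5, "NO": 2, "AS": 1, "CB": 4, "TC": 5}
total = 7 * 5   # 7 polls x 5 questions each = 35 chances to be right

for label, counts in (("First 7 polls", early_right),
                      ("Last 7 polls", late_right)):
    right = sum(counts.values())
    print(f"{label}: {right}/{total} right ({right / total:.0%})")
```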
Q: Are political polls biased?
A: If any given poll is biased, the hard question to answer is whether it is due to ideology or to methodology. Every expert on polling says that variables such as the way a question is worded, who the respondents are, the order of questions, and even what time of day or week a poll is taken can create a bias. (Many polling organizations do not make their methodology public.) As the following demonstrates, an argument can be made either way for these California recall polls:
So are political polls biased? Whatever the answer, the staggeringly inaccurate polling for Schwarzenegger and moderately inaccurate results for Bustamante as compared to the surprisingly accurate, even slightly inflated results for McClintock certainly should raise a lot of eyebrows. Of the five poll questions in this study, results for Arnold Schwarzenegger were by far the most inaccurate, while those for Tom McClintock were the most accurate. Was there really a mid-to-late September surge for McClintock? Or, as many suspected, were the polling organizations trying to inflate impressions of his strength as compared to that for Schwarzenegger? And did the polls underestimate Bustamante's vote strength in order to boost Gray Davis? One would be tempted to say "yes" to these questions were it not for the fact that 90% of the polls also underestimated the No on recall vote.
Q: Which polling organization was the most accurate?
A: The following table speaks for itself, although the reader is encouraged to take particular note of the poor performance of two big national polls, Time/CNN and Gallup.
| Polling Organization    | # of Polls | Total # of Results | # Correct | Percent |
|-------------------------|------------|--------------------|-----------|---------|
| Knight Ridder           | 1          | 5                  | 3         | 60%     |
| Los Angeles Times       | 3          | 15                 | 9         | 60%     |
| Survey USA              | 5          | 25                 | 13        | 52%     |
| CA Chbr of Commerce     | 2          | 10                 | 5         | 50%     |
| Field Poll              | 3          | 15                 | 7         | 47%     |
| Time/CNN                | 1          | 5                  | 2         | 40%     |
| Public Policy Institute | 2          | 10                 | 3         | 30%     |
| Gallup                  | 2          | 10                 | 1         | 10%     |
| Stanford U.             | 1          | 5                  | 0         | 0%      |
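The "Percent" column follows directly from the raw counts, since each poll contributes five question results. A small sketch recomputing it (counts from the table above):

```python
# (organization, # of polls, # of correct results) from the table.
orgs = [
    ("Knight Ridder", 1, 3), ("Los Angeles Times", 3, 9),
    ("Survey USA", 5, 13), ("CA Chbr of Commerce", 2, 5),
    ("Field Poll", 3, 7), ("Time/CNN", 1, 2),
    ("Public Policy Institute", 2, 3), ("Gallup", 2, 1),
    ("Stanford U.", 1, 0),
]
for name, n_polls, n_correct in orgs:
    total = n_polls * 5   # five question results per poll
    print(f"{name}: {n_correct}/{total} correct ({n_correct / total:.0%})")
```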
Q: What conclusions can be drawn from this study?
A: Even the most accurate polls in this study were wrong 40% of the time overall (based on the accuracy table above). The accuracy of each of their internals was worse. So, when the national media tout polls from Gallup, Time/CNN, Newsweek, Zogby, and the like about what "the American people feel" regarding something insubstantial like "presidential approval," or whether or not they want to re-elect the president, or which issues are most important to them, or how a person whose name is all but unknown nationally suddenly becomes "the frontrunner" for a party's nomination, it's wise to keep three things in mind:
Notes for Tables of Results:
YES/NO Table Of Results:
| Poll                    | Date           | YES | +Over | Within MOE | NO  | +Over | Within MOE | MOE   |
|-------------------------|----------------|-----|-------|------------|-----|-------|------------|-------|
| Final Results           |                | 55% |       |            | 45% |       |            |       |
| Survey USA              | Oct. 3-5       | 57% | +2    | Y          | 43% | -2    | Y          | ±5.6% |
| Knight Ridder           | Oct. 1-4       | 54% | -1    | Y          | 41% | -4    | N          | ±3%   |
| Field Poll              | Sep. 29-Oct. 1 | 57% | +2    | Y          | 39% | -6    | N          | ±4.8% |
| Survey USA              | Sep. 28-30     | 61% | +6    | N          | 39% | -6    | N          | ±3.7% |
| Los Angeles Times       | Sep. 25-29     | 56% | +1    | Y          | 42% | -3    | Y          | ±3%   |
| Gallup                  | Sep. 25-27     | 63% | +8    | N          | 35% | -10   | N          | ±3%   |
| CA Chbr of Commerce     | Sep. 24-25     | 53% | -2    | Y          | 41% | -4    | N          | ±3.5% |
| Survey USA              | Sep. 19-22     | 57% | +2    | Y          | 42% | -3    | Y          | ±3.5% |
| Public Policy Institute | Sep. 8-17      | 53% | -2    | Y          | 42% | -3    | Y          | ±3%   |
| Los Angeles Times       | Sep. 6-10      | 50% | -5    | N          | 47% | +2    | Y          | ±3%   |
| Survey USA              | Sep. 6-8       | 62% | +7    | N          | 37% | -8    | N          | ±3.7% |
| Field Poll              | Sep. 4-7       | 55% | exact | Y          | 40% | -5    | N          | ±4.5% |
| CA Chbr of Commerce     | Sep. 1-4       | 52% | -3    | Y          | 41% | -4    | N          | ±3.1% |
| Stanford U.             | Aug. 29-Sep. 8 | 62% | +7    | N          | 38% | -7    | N          | ±3.4% |
| Survey USA              | Aug. 23-25     | 64% | +9    | N          | 35% | -10   | N          | ±3.7% |
| Los Angeles Times       | Aug. 16-21     | 50% | -5    | N          | 45% | exact | Y          | ±3%   |
| Field Poll              | Aug. 10-13     | 58% | +3    | Y          | 37% | -8    | N          | ±5%   |
| Public Policy Institute | Aug. 8-17      | 58% | +3    | N          | 36% | -9    | N          | ±2%   |
| Time/CNN                | Aug. 8         | 54% | -1    | Y          | 35% | -10   | N          | ±4.3% |
| Gallup                  | Aug. 7-10      | 69% | +14   | N          | 26% | -19   | N          | ±4%   |
| Average of 20 polls     |                | 57% | +2    | Y          | 39% | -6    | N          | ±3.6% |
| # Results within MOE    |                | 11  |       |            | 6   |       |            |       |
| # Results outside MOE   |                | 9   |       |            | 14  |       |            |       |
| # Same as actual        |                | 1   |       |            | 1   |       |            |       |
| # Over actual           |                | 12  |       |            | 1   |       |            |       |
| # Under actual          |                | 7   |       |            | 18  |       |            |       |
CANDIDATE Table Of Results:
| Poll                    | Date           | Arnold S. | AS +Over | Within MOE | Bustamante | CB +Over | Within MOE | McClintock | TM +Over | Within MOE | MOE   |
|-------------------------|----------------|-----------|----------|------------|------------|----------|------------|------------|----------|------------|-------|
| Final Results           |                | 49%       |          |            | 32%        |          |            | 13%        |          |            |       |
| Survey USA              | Oct. 3-5       | 46%       | -3       | Y          | 34%        | +2       | Y          | 13%        | exact    | Y          | ±5.6% |
| Knight Ridder           | Oct. 1-4       | 37%       | -12      | N          | 29%        | -3       | Y          | 15%        | +2       | Y          | ±3%   |
| Field Poll              | Sep. 29-Oct. 1 | 36%       | -13      | N          | 26%        | -6       | N          | 16%        | +3       | Y          | ±4.8% |
| Survey USA              | Sep. 28-30     | 45%       | -4       | N          | 28%        | -4       | N          | 16%        | +3       | Y          | ±3.7% |
| Los Angeles Times       | Sep. 25-29     | 40%       | -9       | N          | 32%        | exact    | Y          | 15%        | +2       | Y          | ±3%   |
| Gallup                  | Sep. 25-27     | 40%       | -9       | N          | 25%        | -7       | N          | 18%        | +5       | N          | ±3%   |
| CA Chbr of Commerce     | Sep. 24-25     | 35%       | -14      | N          | 31%        | -1       | Y          | 17%        | +4       | N          | ±3.5% |
| Survey USA              | Sep. 19-22     | 39%       | -10      | N          | 32%        | exact    | Y          | 18%        | +5       | N          | ±3.5% |
| Public Policy Institute | Sep. 8-17      | 26%       | -23      | N          | 28%        | -4       | N          | 14%        | +1       | Y          | ±3%   |
| Los Angeles Times       | Sep. 6-10      | 25%       | -24      | N          | 30%        | -2       | Y          | 18%        | +5       | N          | ±3%   |
| Survey USA              | Sep. 6-8       | 39%       | -10      | N          | 29%        | -3       | Y          | 16%        | +3       | Y          | ±3.7% |
| Field Poll              | Sep. 4-7       | 25%       | -24      | N          | 30%        | -2       | Y          | 13%        | exact    | Y          | ±4.5% |
| CA Chbr of Commerce     | Sep. 1-4       | 28%       | -21      | N          | 33%        | +1       | Y          | 12%        | -1       | Y          | ±3.1% |
| Stanford U.             | Aug. 29-Sep. 8 | 40%       | -9       | N          | 28%        | -4       | N          | 8%         | -5       | N          | ±3.4% |
| Survey USA              | Aug. 23-25     | 45%       | -4       | N          | 29%        | -3       | Y          | 11%        | -2       | Y          | ±3.7% |
| Los Angeles Times       | Aug. 16-21     | 22%       | -27      | N          | 35%        | +3       | Y          | 12%        | -1       | Y          | ±3%   |
| Field Poll              | Aug. 10-13     | 22%       | -27      | N          | 25%        | -7       | N          | 9%         | -4       | Y          | ±5%   |
| Public Policy Institute | Aug. 8-17      | 23%       | -26      | N          | 18%        | -14      | N          | 5%         | -8       | N          | ±2%   |
| Time/CNN                | Aug. 8         | 25%       | -24      | N          | 15%        | -17      | N          | 9%         | -4       | Y          | ±4.3% |
| Gallup                  | Aug. 7-10      | 42%       | -7       | N          | 22%        | -10      | N          | 13%        | exact    | Y          | ±4%   |
| Average of 20 polls     |                | 34%       | -15      | N          | 28%        | -4       | N          | 13%        | exact    | Y          | ±3.6% |
| # Results within MOE    |                | 1         |          |            | 11         |          |            | 14         |          |            |       |
| # Results outside MOE   |                | 19        |          |            | 9          |          |            | 6          |          |            |       |
| # Same as actual        |                | 0         |          |            | 2          |          |            | 3          |          |            |       |
| # Over actual           |                | 0         |          |            | 3          |          |            | 10         |          |            |       |
| # Under actual          |                | 20        |          |            | 15         |          |            | 7          |          |            |       |
The Editors of RealClearPolitics are usually pretty sharp cookies. But on this unique election, with the evidence in their hands, they are still missing the boat. Big time.
Congressman Billybob
Why is it that between the late 1700s and the mid-1900s, Americans managed to elect all sorts of people to all sorts of positions without polls? Yet these days, we seem more dependent on them than ever. We are like people using a crutch when it isn't necessary. Well, a good challenge every now and again never hurt any institution.
It's worth bearing in mind that this is a CASE study, not a broad, all-encompassing study. One of my concerns about polling is that few pollsters make their methodology public. That goes to the heart of your point about whether or not one is a push poll. Every single one of these organizations claimed to be accurate. As a member of the general public, I decided to use the tools available to me to test their claims.
A potential voter has three choices on any ballot issue: to vote yes, to vote no, or not to vote. They are independent choices, and ALL pollsters give separate percentages for yes and no. Because they give separate predictions, each prediction can be tested against the MOE.
The MOE's are incorrectly described for multiple choice polls
In what way? A pollster says he predicts a vote of 55% for yes and 45% for no, and then gives a range plus or minus those figures within which he can claim to be accurate. If the actual election result falls within his MOE, he's predicted the outcome correctly. If not, he's wrong. What's so complicated?
RealClearPolitics.com did not do this case study, I did. I used the list of polls on their site. They only listed one for Stanford. Please use the links provided to see for yourself. I took those 20 polls as a CASE and tested their accuracy.
I'm just a member of the general public who has no pretensions, but does have a desire to evaluate the quality of the product pollsters sell. After all, many politicians make vital policy decisions only after checking the latest polls.
People lie to others and to themselves; people don't know and won't admit it, or polls don't allow them to; people feel pressure answering questions; people are vain, cynical, flippant, and so on.
The only polls that matter are at the cash register (or the ballot box, as the case may be). It's been demonstrated many times that people lie, consciously or not, about the food they eat, about their driving habits, and so on. I used to participate in focus groups, and one of the more interesting things I noted was that the participants expressed totally different, and seemingly more honest, opinions in the elevator going down after the focus group session from those expressed during the session. Astrology works better and is more fun too!
What I have done in a kind of big way is to reflect exactly what the general public, most in the media, and even pollsters themselves do after every election. Election after election, people routinely look at the last polls that came out, compare them to the actual election results, and make judgments about which polling organization was the most accurate based on whose poll percentages came the closest to the actuals. In other words, they do exactly as I have done in this case study. Except in this case, I looked at 20 polls instead of just a handful.
Maybe the variation is accounted for by the minority who didn't make up their minds early on. Maybe the people who said they made up their minds months ago were lying.
But incorrect. This is not what the MOE means. Neither the press nor the pollsters generally explain things very well.