Free Republic
Browse · Search
News/Activism
Topics · Post Article


Are political polls accurate? CA recall study of 20 polls says...NEVER TRUST POLLS AGAIN!
RealClearPolitics.com & original material ^ | 10/21/03 | Wolfstar

Posted on 10/21/2003 12:23:07 AM PDT by Wolfstar

Ah, the omnipresent poll. The media punditocracy is addicted to using polls to tell us what "the American people feel" (never think) about everything from a president's so-called approval rating, to how a candidate's chances stack up against others in a race, to our "feelings" about various policy and social issues. Although the public has absolutely no way to evaluate the vast majority of polls for accuracy, most of us simply accept them as incontrovertible indicators of truth. Why? Probably because we're told that polls are "scientific" since they use statistical analysis, and most of us tend to equate science with a search for objective truth.

Despite the scientific patina of a Margin of Error (MOE), how does one incontrovertibly prove a poll to be accurate? For nearly all polls, it can't be done. In the political realm, only polls taken closest to an election can be compared for accuracy to the actual results of real voters expressing their real attitudes at the ballot box.

Nevertheless, the public is conditioned not to look too closely at the accuracy of polls. We're told polls are "just a snapshot in time." So we learn to shrug when there are big swings in, say, a president's "approval ratings" from one month to the next even though nothing particularly new happened during that time. Despite the fact that only a handful of polls close to elections can be verified against objective results of real votes, polls affect our society in several ways. The worst is that many politicians, hanging on their every nuance, too often make crucial policy decisions based on polls wielded like clubs by the media and special-interest groups.

So what's the truth about polls? Few in the public have the time or resources to do an in-depth study in an attempt to answer that question, while those who butter their bread on the back of polls have no incentive to do so. Ah, but fate stepped in this year in the form of the California Recall and provided an excellent opportunity to do a case study of multiple polls taken in a highly compressed period of time. Since all poll results in this study were obtained relatively close to Election Day, a comparison to actual election results is not only easier, but also more instructive than, say, an apples-to-oranges comparison of a year's worth of "approval ratings" to an election outcome.

This study encompasses 20 polls taken by nine polling organizations between Aug. 7 and Oct. 5, 2003. It looks at the five most-watched poll questions — those that purported to measure the percent of the vote for:

  1. Yes on the recall
  2. No on the recall
  3. Arnold Schwarzenegger (AS)
  4. Cruz Bustamante (CB)
  5. Tom McClintock (TM)

Because there are 20 polls and five questions, this study encompasses 100 individual results. When analyzed by several criteria, such as poll date and accuracy for each of the five questions, an eye-popping picture of polling precision — or lack thereof — comes into sharp focus. Note that "accurate" in this study means a poll result that falls within that poll's MOE when compared to the actual election results; "inaccurate" or "wrong" means a poll result outside that poll's MOE as compared to the actuals.
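That accuracy test amounts to a one-line comparison. A minimal sketch (hypothetical helper name; it treats the MOE as a simple plus-or-minus band around the reported figure, which is how this study applies it):

```python
def within_moe(poll_pct: float, actual_pct: float, moe: float) -> bool:
    """True if the actual election result falls inside the poll's +/- MOE band."""
    return abs(poll_pct - actual_pct) <= moe

# Two results from the tables below: Gallup's Aug. 7-10 poll had YES at 69%
# (MOE +/-4%) against an actual YES vote of 55%; Survey USA's final poll had
# YES at 57% (MOE +/-5.6%).
print(within_moe(69, 55, 4))     # False: 14 points off, far outside the band
print(within_moe(57, 55, 5.6))   # True: within the band
```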

Q: Are political polls accurate?
A: Based on this study, the answer is: while some results for some questions in a multi-query poll may be accurate, most polls, taken in their entirety, are not. Here's why: of the 100 individual question results, more than half (57) were wrong (outside their MOEs) as compared to the actual election results.

However, the stunning fact is that only one poll in 20 gave accurate results across the board for all five questions. This was the last poll taken, by Survey USA from Oct. 3–5. Only 4 of 20 got both the Yes and No on recall questions right, while only that last Survey USA poll gave accurate candidate results.

Looking at accuracy another way: of the 100 individual question results, fewer than half (43) were right within their polls' MOEs. Most (67) underestimated the actual election results, and only seven poll questions called the election result precisely.

Q: Does averaging several polls over a period of time give a more accurate picture?
A: Some pollsters, reporters and others who rely on them believe that either a tracking poll or an average of several polls taken over a period of time is the best method of obtaining an accurate picture of public attitudes. Due to the compressed two-month recall campaign, all 20 polls, taken together, constitute a form of tracking poll. Yet, as already noted, their often wildly inaccurate results only contributed to a false week-to-week perception of the race. So would averaging the results of all 20 polls give a more accurate picture? The answer based on this study is a qualified yes. Here's why. First, all 20 MOEs were averaged to establish a baseline, which works out to ±3.6%.
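The baseline arithmetic can be reproduced directly (a sketch of the study's own averaging; the two lists copy the 20 MOEs and YES-on-recall figures from the tables further down, in table order):

```python
# MOEs and YES-on-recall results for the 20 polls, in table order.
moes = [5.6, 3, 4.8, 3.7, 3, 3, 3.5, 3.5, 3, 3,
        3.7, 4.5, 3.1, 3.4, 3.7, 3, 5, 2, 4.3, 4]
yes_results = [57, 54, 57, 61, 56, 63, 53, 57, 53, 50,
               62, 55, 52, 62, 64, 50, 58, 58, 54, 69]

avg_moe = sum(moes) / len(moes)
avg_yes = sum(yes_results) / len(yes_results)

print(f"Baseline MOE: +/-{avg_moe:.1f}%")                   # +/-3.6%
print(f"Average YES result: {avg_yes:.0f}% (actual: 55%)")  # 57%
```

The averaged YES figure (57%) lands within the ±3.6% baseline of the actual 55%, which is why the answer above is a qualified yes.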

Q: Is the Margin of Error (MOE) really useful in assessing a poll's accuracy?
A: Based on this study, the answer is a resounding NO! The smallest MOE given was ±2%; the largest ±5.6%. Interestingly, ±2% was for one of the earliest, most inaccurate polls, while ±5.6% was for the last and most accurate.
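For context on where those ± figures come from: under the conventional 95%-confidence formula for a simple random sample (a sketch; real polls use weighting and design adjustments this ignores), the MOE is driven almost entirely by sample size:

```python
import math  # not strictly needed here, but typical for the full formula


def sample_size_for_moe(moe_pct: float, p: float = 0.5) -> float:
    """Respondents needed for a given +/- MOE at 95% confidence,
    assuming a simple random sample and worst-case p = 0.5."""
    moe = moe_pct / 100
    return 1.96 ** 2 * p * (1 - p) / moe ** 2

print(round(sample_size_for_moe(2.0)))   # 2401 respondents for +/-2%
print(round(sample_size_for_moe(5.6)))   # 306 respondents for +/-5.6%
```

A tighter MOE, in other words, only signals a larger sample; it says nothing about question wording or sample selection, which is consistent with this study's observation that the ±2% poll was among the least accurate.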

Q: Do polls become more accurate closer to an election?
A: The broad answer is a qualified yes — qualified because, in this study, the polls were inconsistent on this question. Results for some questions in early polls were quite accurate, while some late poll results were very inaccurate. However, the trend was to become more accurate closer to the election.

As already noted, the last poll was the only one that got all five questions right within its MOE. The following table shows the total poll questions that the first and last seven polls got right within each poll's MOE. An accuracy of 100% in this instance would be 35 questions right (7 polls x 5 questions). Note that just under twice as many question results were right in late polls as in early ones. Nevertheless, even the late polls (last two weeks) got less than half (49%) of the questions right.

Poll questions within each poll's MOE, first seven vs. last seven polls:

Question   First 7 polls (8/8 to 9/8)   Last 7 polls (9/24 to 10/5)
YES                  2                               5
NO                   1                               2
AS                   0                               1
CB                   1                               4
TM                   5                               5
TOTAL             9 (26%)                        17 (49%)
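The hit rates above can be recomputed directly (a minimal sketch; the per-question counts are copied from the table):

```python
# "Within MOE" counts per question (YES, NO, AS, CB, TM) from the table.
first_seven = [2, 1, 0, 1, 5]   # polls fielded 8/8 to 9/8
last_seven  = [5, 2, 1, 4, 5]   # polls fielded 9/24 to 10/5

for label, counts in [("First 7", first_seven), ("Last 7", last_seven)]:
    hits = sum(counts)
    print(f"{label}: {hits}/35 within MOE ({hits / 35:.0%})")
# First 7: 9/35 within MOE (26%)
# Last 7: 17/35 within MOE (49%)
```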

Q: Are political polls biased?
A: If any given poll is biased, the hard question is whether the bias is due to ideology or to methodology. Every expert on polling says that variables such as the way a question is worded, who the respondents are, the order of questions, and even what time of day or week a poll is taken can create a bias. (Many polling organizations do not make their methodology public.) As the following demonstrates, an argument can be made either way for these California recall polls:

So are political polls biased? Whatever the answer, the staggeringly inaccurate polling for Schwarzenegger — and moderately inaccurate results for Bustamante — as compared to the surprisingly accurate, even slightly inflated results for McClintock certainly should raise a lot of eyebrows. Of the five poll questions in this study, results for Arnold Schwarzenegger were by far the most inaccurate, while those for Tom McClintock were the most accurate. Was there really a mid-to-late September surge for McClintock? Or, as many suspected, were the polling organizations trying to inflate impressions of his strength as compared to that for Schwarzenegger? And did the polls underestimate Bustamante's vote strength in order to boost Gray Davis? One would be tempted to say "yes" to these questions were it not for the fact that 90% of the polls also underestimated the No on recall vote.

Q: Which polling organization was the most accurate?
A: The following table speaks for itself, although the reader is encouraged to take particular note of the poor performance of two big national polls, Time/CNN and Gallup.

Polling Organization      # of Polls   Total # of Results    # of Correct   Percent
                          Taken        (# Polls x 5 Ques.)   Results        Correct
Knight Ridder                 1                  5                 3           60%
Los Angeles Times             3                 15                 9           60%
Survey USA                    5                 25                13           52%
CA Chbr of Commerce           2                 10                 5           50%
Field Poll                    3                 15                 7           47%
Time/CNN                      1                  5                 2           40%
Public Policy Institute       2                 10                 3           30%
Gallup                        2                 10                 1           10%
Stanford U.                   1                  5                 0            0%

Q: What conclusions can be drawn from this study?
A: Even the most accurate polls in this study were wrong 40% of the time overall (based on the accuracy table above). The accuracy of each of their internals was worse. So, when the national media tout polls from Gallup, Time/CNN, Newsweek, Zogby, and such about what "the American people feel" regarding something insubstantial like "presidential approval"; or whether or not they want to re-elect the president; or which issues are most important to them; or how a person whose name is all but unknown nationally suddenly becomes "the frontrunner" for a party's nomination, it's wise to keep three things in mind:

  1. There is no objective way to verify the accuracy of most polls.

  2. It is part of human nature to want to predict (thus control) the future. However, this study demonstrates unequivocally that, whether it is due to political bias or to flawed methodology, polls often drastically misinform the public.

  3. Only 1 in 20 polls in this study got all five questions right. In other words, 95% of polls were wrong on one or more of their questions. So when a pollster adds one individual internal question result to another in order to claim something about public opinion, all the pollster may really be doing is compounding errors. For example, when Zogby adds answers for, say, "fair" and "poor" together, if the result for "fair," or the one for "poor," or both are wrong, he is simply compounding errors and giving false information to the media and public.
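The error-compounding in point 3 is simple arithmetic. A sketch with made-up numbers (hypothetical internals, not from any real survey):

```python
# Hypothetical internals: reported vs. true shares for "fair" and "poor",
# where each reported figure happens to run 3 points high.
reported = {"fair": 30, "poor": 25}
true_pct = {"fair": 27, "poor": 22}

combined_reported = reported["fair"] + reported["poor"]   # 55
combined_true = true_pct["fair"] + true_pct["poor"]       # 49
print(combined_reported - combined_true)  # 6: two same-direction 3-point errors add
```

(When the individual errors point in opposite directions they can also partially cancel; the point is that the combined figure inherits the errors of its parts.)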

Notes for Tables of Results:

  1. The 20 polls and their MOEs were obtained through RealClearPolitics.com, and through Google searches for those whose RealClearPolitics.com links no longer worked.
  2. Results are as of Oct. 20 with 100% of precincts reporting.
  3. Over/Under = number of points over (+n) or under (-n) the actual election result.

YES/NO Table Of Results (final results: YES 55%, NO 45%):

Poll                      Date             YES   Over/Under   Within MOE   NO    Over/Under   Within MOE   MOE
Survey USA                Oct. 3–5         57%      +2            Y        43%      -2            Y        ±5.6%
Knight Ridder             Oct. 1–4         54%      -1            Y        41%      -4            N        ±3%
Field Poll                Sep. 29–Oct. 1   57%      +2            Y        39%      -6            N        ±4.8%
Survey USA                Sep. 28–30       61%      +6            N        39%      -6            N        ±3.7%
Los Angeles Times         Sep. 25–29       56%      +1            Y        42%      -3            Y        ±3%
Gallup                    Sep. 25–27       63%      +8            N        35%      -10           N        ±3%
CA Chbr of Commerce       Sep. 24–25       53%      -2            Y        41%      -4            N        ±3.5%
Survey USA                Sep. 19–22       57%      +2            Y        42%      -3            Y        ±3.5%
Public Policy Institute   Sep. 8–17        53%      -2            Y        42%      -3            Y        ±3%
Los Angeles Times         Sep. 6–10        50%      -5            N        47%      +2            Y        ±3%
Survey USA                Sep. 6–8         62%      +7            N        37%      -8            N        ±3.7%
Field Poll                Sep. 4–7         55%      exact         Y        40%      -5            N        ±4.5%
CA Chbr of Commerce       Sep. 1–4         52%      -3            Y        41%      -4            N        ±3.1%
Stanford U.               Aug. 29–Sep. 8   62%      +7            N        38%      -7            N        ±3.4%
Survey USA                Aug. 23–25       64%      +9            N        35%      -10           N        ±3.7%
Los Angeles Times         Aug. 16–21       50%      -5            N        45%      exact         Y        ±3%
Field Poll                Aug. 10–13       58%      +3            Y        37%      -8            N        ±5%
Public Policy Institute   Aug. 8–17        58%      +3            N        36%      -9            N        ±2%
Time/CNN                  Aug. 8           54%      -1            Y        35%      -10           N        ±4.3%
Gallup                    Aug. 7–10        69%      +14           N        26%      -19           N        ±4%
Average of 20 polls                        57%      +2            Y        39%      -6            N        ±3.6%

                          YES   NO
# Results within MOE       11    6
# Results outside MOE       9   14
# Same as actual            1    1
# Over actual              12    1
# Under actual              7   18

 

 

CANDIDATE Table Of Results (final results: Schwarzenegger 49%, Bustamante 32%, McClintock 13%):

Poll                      Date             AS    Over/Under   MOE?   CB    Over/Under   MOE?   TM    Over/Under   MOE?   MOE
Survey USA                Oct. 3–5         46%      -3         Y     34%      +2         Y     13%     exact       Y     ±5.6%
Knight Ridder             Oct. 1–4         37%      -12        N     29%      -3         Y     15%      +2         Y     ±3%
Field Poll                Sep. 29–Oct. 1   36%      -13        N     26%      -6         N     16%      +3         Y     ±4.8%
Survey USA                Sep. 28–30       45%      -4         N     28%      -4         N     16%      +3         Y     ±3.7%
Los Angeles Times         Sep. 25–29       40%      -9         N     32%     exact       Y     15%      +2         Y     ±3%
Gallup                    Sep. 25–27       40%      -9         N     25%      -7         N     18%      +5         N     ±3%
CA Chbr of Commerce       Sep. 24–25       35%      -14        N     31%      -1         Y     17%      +4         N     ±3.5%
Survey USA                Sep. 19–22       39%      -10        N     32%     exact       Y     18%      +5         N     ±3.5%
Public Policy Institute   Sep. 8–17        26%      -23        N     28%      -4         N     14%      +1         Y     ±3%
Los Angeles Times         Sep. 6–10        25%      -24        N     30%      -2         Y     18%      +5         N     ±3%
Survey USA                Sep. 6–8         39%      -10        N     29%      -3         Y     16%      +3         Y     ±3.7%
Field Poll                Sep. 4–7         25%      -24        N     30%      -2         Y     13%     exact       Y     ±4.5%
CA Chbr of Commerce       Sep. 1–4         28%      -21        N     33%      +1         Y     12%      -1         Y     ±3.1%
Stanford U.               Aug. 29–Sep. 8   40%      -9         N     28%      -4         N      8%      -5         N     ±3.4%
Survey USA                Aug. 23–25       45%      -4         N     29%      -3         Y     11%      -2         Y     ±3.7%
Los Angeles Times         Aug. 16–21       22%      -27        N     35%      +3         Y     12%      -1         Y     ±3%
Field Poll                Aug. 10–13       22%      -27        N     25%      -7         N      9%      -4         Y     ±5%
Public Policy Institute   Aug. 8–17        23%      -26        N     18%      -14        N      5%      -8         N     ±2%
Time/CNN                  Aug. 8           25%      -24        N     15%      -17        N      9%      -4         Y     ±4.3%
Gallup                    Aug. 7–10        42%      -7         N     22%      -10        N     13%     exact       Y     ±4%
Average of 20 polls                        34%      -15        N     28%      -4         N     13%     exact       Y     ±3.6%

                          AS    CB    TM
# Results within MOE       1    11    14
# Results outside MOE     19     9     6
# Same as actual           0     2     3
# Over actual              0     3    10
# Under actual            20    15     7

 

 



TOPICS: Front Page News; News/Current Events; Politics/Elections; US: California
KEYWORDS: california; catrans; poll; polls; recall; recallanalysis; study
To: Uncle Hal
Thanks for your reply. Don't want or need to get paid for it. My "payment" is a lively discussion on FR with the hope that people no longer just accept polls at face value. I think those who are arguing against my case study's accuracy because of how simply I took the MOEs miss the point — I'm reflecting the average person's viewpoint, not the scientist or pseudo-scientist.
61 posted on 10/21/2003 12:17:34 PM PDT by Wolfstar (NO SECURITY = NO ECONOMY)
[ Post Reply | Private Reply | To 56 | View Replies]

To: Wolfstar
you said:
"Fourth, my premise, which is completely true, is that there is no objective way for the general public to verify the accuracy of most polls."

On this we agree. But, you have tried to take your results and claim that they are valid for all polls generally. In order to do this, you need more than a single data point (i.e. more than a result of single election). And even then, it would apply only to election polls and not other types of polls.

Why more than one election results? Because there are hundreds of reasons why a poll could be wrong and some of these reasons could cause all the polls for a single election to be inaccurate. But some other election may not suffer from this problem.

Basically, you did the equivalent of using the LA Lakers to study the average height of all humans. Yes, your results can be valid for the LA Lakers (the CA recall), but not for the general public (all polls in general), and possibly not even valid for other NBA teams (other election polls), such as the team that used to have Buggsie Malone.

Therefore, you must have polls from more than one election result to determine how often these reasons occur and how much they affect the accuracy of the results.

you said
"Sure it would be nice to have the time and resources you describe, with hundreds of elections and polls to study and computer programs to sift through them, "

The point I was trying to make is that to support your conclusions, it would require a study of such magnitude it would qualify for a Master's thesis, perhaps even a PhD in statistics.

The ironic part of all this is that the type of study you are attempting is a close cousin to a poll, because they are both attempts to apply statistical analysis to a data set and thus are very dependent on a good data set.
62 posted on 10/21/2003 12:19:54 PM PDT by RatSlayer
[ Post Reply | Private Reply | To 33 | View Replies]

To: Doctor Stochastic
Thanks for your posts, Doc. You may very well be right in that the way MOE's are explained or used by the media and pollsters gives an incorrect impression of their true meaning. But I, absorbing the information as "everyman," take the media and pollsters at face value. If they say the percent a poll came up with for a question is accurate plus or minus several percentage points, that forms the basis I, as "everyman," will use to compare against an actual election result.

Don't you understand? My point is that there is no way whatsoever for "everyman" — the ordinary person running around taking care of a busy life — to verify the accuracy of polls. When I am told by Gallup, or CNN/Time, or USA Today, or Zogby, or any other organization what the totally artificial "presidential approval rating" is today, how the heck am I supposed to know if it's true? By what objective criteria? Because somebody throws some mumbo-jumbo about an MOE at me? You're talking to "everyman," who frequently can barely balance a checkbook and make ends meet.

Don't you get it? If "everyman" can take a bunch of polls, do some simple arithmetic at the "balance-the- checkbook-level" and find what he understands to be major inaccuracies, then something is very wrong with the whole house of cards that is polling today. It may be, as you point out, not on the statistical side, but in the way polls are analyzed and reported. I as much as said the same thing in my own analysis above: whether what the arithmetic showed was due to methodology or ideology could not be determined.

I don't know of any other "science" that is unverifiable. Do you?

63 posted on 10/21/2003 12:39:58 PM PDT by Wolfstar (NO SECURITY = NO ECONOMY)
[ Post Reply | Private Reply | To 60 | View Replies]

To: RatSlayer
But, you have tried to take your results and claim that they are valid for all polls generally.

Again, this was a CASE study. The "case" is the California recall and how 20 polls did in predicting its outcome. I am not claiming anything other than that the results of this study demonstrate that polls can't be trusted. How is the general public to know which poll is accurate and which is not? What benchmark can they use? Why should they take your word, or my word, or anyone's word that some polls are accurate? Which ones?

I do agree that this case study is similar to polling in this way: Polls take a small sample of respondents and use them as a case study from which to extrapolate opinions held by the public at large. If small sample size is a flawed approach for a case study, certainly the same must hold true for polling.

64 posted on 10/21/2003 12:50:46 PM PDT by Wolfstar (NO SECURITY = NO ECONOMY)
[ Post Reply | Private Reply | To 62 | View Replies]

To: PhiKapMom
Midday ping. Interesting discussion. If this thread just gets folks to thinking about the central point — that there is no objective way to verify the accuracy of most polls — then all this effort will have been worth it.
65 posted on 10/21/2003 12:52:46 PM PDT by Wolfstar (NO SECURITY = NO ECONOMY)
[ Post Reply | Private Reply | To 64 | View Replies]

To: Wolfstar; MeeknMing; onyx; JohnHuang2; Dog Gone; Dog; isthisnickcool; OKSooner; VOA; mhking; ...
Thanks for posting this information! Much appreciated!
66 posted on 10/21/2003 12:56:29 PM PDT by PhiKapMom (AOII Mom -- Don't forget to Visit/donate at http://www.georgewbush.com)
[ Post Reply | Private Reply | To 49 | View Replies]

To: Wolfstar
Let's use an example of 45% +/- 3%. The pollster can claim to be right if the percent of the vote comes out at 42, 43, 44, 45, 46, 47, or 48 percent.

I'm sorry, but this simply is not true. For example, please read this on MOE.

67 posted on 10/21/2003 1:47:54 PM PDT by KayEyeDoubleDee (const tag& constTagPassedByReference)
[ Post Reply | Private Reply | To 31 | View Replies]

To: Wolfstar
Polling data is a product just like the tangible ones such as cars, appliances, or clothes. Just what happens if you are in the unfortunate position of being a polling salesperson, your customer is willing to pay you handsomely for today's business and has done so in the past, and you know the data you are going to supply is devastating or at best very bad news?

You put lipstick on the pig, cash the check, and start writing spin scenarios for when the phone calls come. Wall Street analysts operated this way for an entire decade. It's a pollster's turn now. The only problemo is that the Rat nest has a tendency to shoot the messenger. So look for more of the same cooked-up poll results to provide adequate levels of customer satisfaction.

68 posted on 10/21/2003 1:48:00 PM PDT by blackdog ("This is everybody's fault but mine")
[ Post Reply | Private Reply | To 1 | View Replies]

To: KayEyeDoubleDee
Thank you very much for going to the trouble of finding that information and providing a link. I read it carefully and find that it just reinforces my point. I know my take on it will surprise you, but bear with me.

The linked info used a CNN/USA Today/Gallup poll re Clinton-Dole to highlight how MOE's are incorrectly reported:

"The CNN/USA Today/Gallup poll taken June 27-30 of 818 registered voters showed Clinton would beat his Republican challenger if the election were held now, 54 to 39 percent, with seven percent undecided. The poll had a margin of error of plus or minus four percentage points."

The "everyman" I've been trying to give a voice to hears or reads that and internalizes it as I previously described, that is, plus or minus 4 percentage points for Clinton's 54, and plus or minus 4 for Dole. The report describes both the poll results and the MOE in percentage terms. What other conclusion is anyone other than a mathematician or a statistician going to draw?

Niles goes on to write:

"In this case, the CNN et al. poll had a four percent margin of error. That means that if you asked a question from this poll 100 times, 95 of those times the percentage of people giving a particular answer would be within 4 points of the percentage who gave that same answer in this poll."

Unless I am completely missing his point, in a different way he's saying the same thing I did — with the exception that he adds information about asking the same question 100 times and expecting that 95 times the resulting percent would be within four points of the first result. He does not clarify the use of plus or minus, but I don't see any other way to interpret it except that, using his example, it's +4 and -4 percentage points over or under the original percent. If there is another interpretation, please let me know.

In the recall case study, as already mentioned, I've done exactly what many, many people — including polling firms themselves — do after elections: compare a pollster's percentages to actual election results. If this is a flawed way to gauge the accuracy of the relative handful of polls that can be checked against actual elections, then a great many people labor under exactly the same misconception.

69 posted on 10/21/2003 3:17:19 PM PDT by Wolfstar (NO SECURITY = NO ECONOMY)
[ Post Reply | Private Reply | To 67 | View Replies]

To: blackdog
Polling data is a product just like the tangible ones such as cars, appliances, or clothes.

Most assuredly yes. However, unlike cars, appliances, etc., and even the service professions such as medicine, law, accounting and the like, the consumer has a way to evaluate the quality of the product. Polling is an altogether different kettle of fish, because both the consumer and the reporter of polls haven't got a clue whether or not any given poll is accurate. Yet polls intrude into how our laws are made, how we are governed, who we vote for and how our elections are conducted, and so on. In media hands, they help shape and drive real public opinion. So challenging them is a healthy thing.

70 posted on 10/21/2003 3:24:38 PM PDT by Wolfstar (NO SECURITY = NO ECONOMY)
[ Post Reply | Private Reply | To 68 | View Replies]

To: Wolfstar
Thank you! However, we had eight years of a president(s) running this country by POLLS, and the media treated them as if they were pound sterling - when often just the opposite was happening and unreported.

One of the best things about FR is that you can always find the truth and facts from our posters - truly a pound sterling bunch with some pretty enviable knowledge.

71 posted on 10/21/2003 3:37:20 PM PDT by yoe (Term Limits - and 2 terms are the limit for all elected to a Federal office!!)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Wolfstar
Nice!
72 posted on 10/21/2003 4:47:35 PM PDT by Southack (Media bias means that Castro won't be punished for Cuban war crimes against Black Angolans in Africa)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Southack; yoe
Thank you both. Polls are sort of like chocolate chip cookies — great fun to consume, but not good for you in the long run.
73 posted on 10/21/2003 4:59:48 PM PDT by Wolfstar (NO SECURITY = NO ECONOMY)
[ Post Reply | Private Reply | To 72 | View Replies]

To: Wolfstar
you said:

"I do agree that this case study is similar to polling in this way: Polls take a small sample of respondents and use them as a case study from which to extrapolate opinions held by the public at large. If small sample size is a flawed approach for a case study, certainly the same must hold true for polling."

Yeah, now you're starting to get it. I'm not disputing your conclusions. In fact, I think you're right that most polls can't be trusted. I'm just saying that your methodology falls short of what is needed for proof. And that in some ways, it falls short in the same way that polls fall short.

BTW, if you'd like to see some examples of polling that are usually highly accurate, take a look at exit polls. They are generally much more accurate than pre-election polls (or almost any other type of poll for that matter).
74 posted on 10/21/2003 7:41:30 PM PDT by RatSlayer
[ Post Reply | Private Reply | To 64 | View Replies]

To: Wolfstar
. . . excellent work -- you've given us all much to ponder!!
75 posted on 10/21/2003 7:56:27 PM PDT by DrDeb
[ Post Reply | Private Reply | To 1 | View Replies]

To: RatSlayer; Wolfstar
BTW, if you'd like to see some examples of polling that are usually highly accurate, take a look at exit polls.

Roger Ailes on Exit Polls - Congress...funny.

BTW Wolfstar, I think your take on this is right on. Polls can, and likely do, show generally whatever the pollster thinks they should show. Not unlike our friends in the media, eh?

FGS

76 posted on 10/21/2003 7:57:20 PM PDT by ForGod'sSake (ABCNNBCBS: An enemy at the gates is less formidable, for he is known and carries his banner openly.)
[ Post Reply | Private Reply | To 74 | View Replies]

To: Wolfstar
BTW Wolfstar, I think your take on this is right on.

What I meant to add to my earlier comment was that your findings(?) seem intuitively correct to the great unwashed; that's us. I gathered that's all you were really hoping for.

FGS

77 posted on 10/21/2003 8:14:14 PM PDT by ForGod'sSake (ABCNNBCBS: An enemy at the gates is less formidable, for he is known and carries his banner openly.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: RatSlayer
Always appreciate thoughtful responses, like yours, that advance the dialogue. Exit polls are more accurate for a good reason — exit pollsters catch real voters just after they've participated in a real election, and ask them how they voted.
78 posted on 10/22/2003 11:12:52 AM PDT by Wolfstar (NO SECURITY = NO ECONOMY)
[ Post Reply | Private Reply | To 74 | View Replies]

To: DrDeb
Thanks, DrDeb. That's all I can ask — that others think through the issues raised.
79 posted on 10/22/2003 11:13:38 AM PDT by Wolfstar (NO SECURITY = NO ECONOMY)
[ Post Reply | Private Reply | To 75 | View Replies]

To: ForGod'sSake
What I meant to add to my earlier comment was that your findings(?) seem intuitively correct to the great unwashed; that's us. I gathered that's all you were really hoping for.

Yep. Trying to give a voice to "everyman" whose suspicions about polling go totally ignored by the elites who keep trying to tell him how he "feels."

80 posted on 10/22/2003 11:17:33 AM PDT by Wolfstar (NO SECURITY = NO ECONOMY)
[ Post Reply | Private Reply | To 77 | View Replies]



Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.


FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson