Are political polls accurate? CA recall study of 20 polls says...NEVER TRUST POLLS AGAIN!
RealClearPolitics.com & original material ^ | 10/21/03 | Wolfstar

Posted on 10/21/2003 12:23:07 AM PDT by Wolfstar

Ah, the omnipresent poll. The media punditocracy is addicted to using polls to tell us what "the American people feel" (never think) about everything from a president's so-called approval rating, to how a candidate's chances stack up against others in a race, to our "feelings" about various policy and social issues. Although the public has absolutely no way to evaluate the vast majority of polls for accuracy, most of us simply accept them as incontrovertible indicators of truth. Why? The answer probably is because we're told that polls are "scientific" since they use statistical-type analysis, and most of us tend to equate science with a search for objective truth.

Despite the scientific patina of a Margin of Error, how does one incontrovertibly prove a poll to be accurate? For nearly all polls, the answer is that it can't be done. In the political realm, only polls taken closest to an election can be compared for accuracy to the actual results of real voters expressing their real attitudes at the ballot box.

Nevertheless, the public is conditioned not to look too closely at the accuracy of polls. We're told polls are "just a snapshot in time." So we learn to shrug when there are big swings in, say, a president's "approval ratings" from one month to the next even though nothing particularly new happened during that time. Despite the fact that only a handful of polls close to elections can be verified against objective results of real votes, polls affect our society in several ways. The worst is that many politicians, hanging on their every nuance, too often make crucial policy decisions based on polls wielded like clubs by the media and special-interest groups.

So what's the truth about polls? Few in the public have the time or resources to do an in-depth study in an attempt to answer that question, while those who butter their bread on the back of polls have no incentive to do so. Ah, but fate stepped in this year in the form of the California Recall and provided an excellent opportunity to do a case study of multiple polls taken in a highly compressed period of time. Since all poll results in this study were obtained relatively close to Election Day, a comparison to actual election results is not only easier, but also more instructive than, say, an apples-to-oranges comparison of a year's worth of "approval ratings" to an election outcome.

This study encompasses 20 polls taken by nine polling organizations between Aug. 7 and Oct. 5, 2003. It looks at the five most-watched poll questions — those which purported to measure the percent of vote for:

  1. Yes on the recall
  2. No on the recall
  3. Arnold Schwarzenegger
  4. Cruz Bustamante
  5. Tom McClintock

Because there are 20 polls and five questions, this study encompasses 100 individual results. When analyzed by several criteria, such as poll date and accuracy for each of the five questions, an eye-popping picture of polling precision — or lack thereof — comes into sharp focus. Note that "accurate" in this study means a poll result that falls within that poll's MOE of the actual election results; "inaccurate" or "wrong" means a poll result outside that poll's MOE as compared to the actuals.
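
As a minimal sketch of this accuracy test, here is how the check can be expressed in Python. The percentages come from the tables below; the function and variable names are illustrative, not part of the study itself.

    # A poll result counts as "accurate" if it falls within the poll's stated
    # margin of error (MOE) of the actual election result.
    ACTUAL = {"Yes": 55, "No": 45, "Schwarzenegger": 49,
              "Bustamante": 32, "McClintock": 13}

    def within_moe(poll_pct, actual_pct, moe):
        """Return True if the poll result is inside +/- MOE of the actual result."""
        return abs(poll_pct - actual_pct) <= moe

    # The last Survey USA poll (Oct. 3-5), which had an MOE of +/-5.6 points.
    survey_usa = {"Yes": 57, "No": 43, "Schwarzenegger": 46,
                  "Bustamante": 34, "McClintock": 13}

    for question, pct in survey_usa.items():
        verdict = "within" if within_moe(pct, ACTUAL[question], 5.6) else "outside"
        print(f"{question}: poll {pct}% vs actual {ACTUAL[question]}% -> {verdict} MOE")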

Q: Are political polls accurate?
A: Based on this study, the answer is: While some results for some questions in a multi-query poll may be accurate, most polls, when taken in their entirety, are not. Here's why: Of the 100 individual question results, more than half (57) were wrong (outside their MOE's), as compared to the actual election results.

However, the stunning fact is that only 1 poll in 20 gave accurate results across the board for all five questions. This was the last poll taken by Survey USA from Oct. 3–5. Only 4 of 20 got both the Yes and No on recall questions right, while only that last Survey USA poll gave accurate candidate results. In other words, 19 of the 20 polls (95%) were wrong on at least one of the five questions.

Looking at accuracy another way, of the 100 individual question results, fewer than half (43) were right within their polls' MOE's. Most (67) underestimated the actual election results, and only seven poll questions called the election result precisely.

Q: Does averaging several polls over a period of time give a more accurate picture?
A: Some pollsters, reporters and others who rely on them believe that either a tracking poll, or an average of several polls taken over a period of time, is the best method of obtaining an accurate picture of public attitudes. Due to the compressed two-month recall campaign, all 20 polls, taken together, constitute a form of tracking poll. Yet, as already noted, their often wildly inaccurate results only contributed to a false week-to-week perception of the race. So would averaging the results of all 20 polls give a more accurate picture? The answer based on this study is a qualified yes. Here's why. First, all 20 MOE's were averaged to establish a baseline, which works out to ±3.6%. Second, the 20 poll results for each question were averaged and compared to the actuals against that baseline. As the summary rows of the tables below show, the averaged Yes vote (57%) falls within the baseline and the averaged McClintock vote (13%) is exactly right, but the averaged No, Schwarzenegger and Bustamante figures all miss it, which is why the yes is only a qualified one.
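
A rough sketch of that averaging step, using the per-poll MOE's and the 20-poll averages shown in the summary rows of the tables below (the layout of the calculation is mine, not the study's):

    # Average the 20 stated MOE's to form the baseline, then test the averaged
    # result for each question against the actual election result.
    moes = [5.6, 3.0, 4.8, 3.7, 3.0, 3.0, 3.5, 3.5, 3.0, 3.0,
            3.7, 4.5, 3.1, 3.4, 3.7, 3.0, 5.0, 2.0, 4.3, 4.0]
    baseline = sum(moes) / len(moes)        # works out to roughly +/-3.6 points

    ACTUAL   = {"Yes": 55, "No": 45, "Schwarzenegger": 49,
                "Bustamante": 32, "McClintock": 13}
    AVERAGED = {"Yes": 57, "No": 39, "Schwarzenegger": 34,
                "Bustamante": 28, "McClintock": 13}   # mean of the 20 polls

    print(f"Baseline MOE: +/-{baseline:.1f} points")
    for q in ACTUAL:
        hit = abs(AVERAGED[q] - ACTUAL[q]) <= baseline
        print(f"{q}: average {AVERAGED[q]}% vs actual {ACTUAL[q]}% -> "
              f"{'within' if hit else 'outside'} the baseline")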

Q: Is the Margin of Error (MOE) really useful in assessing a poll's accuracy?
A: Based on this study, the answer is a resounding NO! The smallest MOE given was ±2%; the largest ±5.6%. Interestingly, ±2% was for one of the earliest, most inaccurate polls, while ±5.6% was for the last and most accurate.

Q: Do polls become more accurate closer to an election?
A: The broad answer is a qualified yes — qualified because, in this study, the polls were inconsistent on this question. Results for some questions in early polls were quite accurate, while some late poll results were very inaccurate. However, the trend was to become more accurate closer to the election.

As already noted, the last poll was the only one that got all five questions right within its MOE. The following table shows how many poll questions the first seven and the last seven polls got right within each poll's MOE. An accuracy rate of 100% in this instance would be 35 questions right (7 polls x 5 questions). Note that just under twice as many question results were right in the late polls as in the early ones. Nevertheless, even the late polls (last two weeks) got less than half (49%) of the questions right.

                  First 7 polls   Last 7 polls
Question          (8/8 to 9/8)    (9/24 to 10/5)
Yes on recall     2               5
No on recall      1               2
Schwarzenegger    0               1
Bustamante        1               4
McClintock        5               5
TOTAL             9 (26%)         17 (49%)

Q: Are political polls biased?
A: If any given poll is biased, the hard question to answer is whether it is due to ideology or to methodology. Every expert on polling says that variables such as the way a question is worded; who the respondents are; the order of questions; even what time of day/week a poll is taken can create a bias. (Many polling organizations do not make their methodology public.) As the following paragraph demonstrates, an argument can be made either way for these California recall polls.

So are political polls biased? Whatever the answer, the staggeringly inaccurate polling for Schwarzenegger — and moderately inaccurate results for Bustamante — as compared to the surprisingly accurate, even slightly inflated results for McClintock certainly should raise a lot of eyebrows. Of the five poll questions in this study, results for Arnold Schwarzenegger were by far the most inaccurate, while those for Tom McClintock were the most accurate. Was there really a mid-to-late September surge for McClintock? Or, as many suspected, were the polling organizations trying to inflate impressions of his strength as compared to that for Schwarzenegger? And did the polls underestimate Bustamante's vote strength in order to boost Gray Davis? One would be tempted to say "yes" to these questions were it not for the fact that 90% of the polls also underestimated the No on recall vote.

Q: Which polling organization was the most accurate?
A: The following table speaks for itself, although the reader is encouraged to take particular note of the poor performance of two big national polls, Time/CNN and Gallup.

Polling Organization     # of Polls   Total # of Results    # of Correct   Percent
                         Taken        (# Polls x 5 Ques.)   Results        Correct
Knight Ridder            1            5                     3              60%
Los Angeles Times        3            15                    9              60%
Survey USA               5            25                    13             52%
CA Chbr of Commerce      2            10                    5              50%
Field Poll               3            15                    7              47%
Time/CNN                 1            5                     2              40%
Public Policy Institute  2            10                    3              30%
Gallup                   2            10                    1              10%
Stanford U.              1            5                     0              0%

Q: What conclusions can be drawn from this study?
A: Even the most accurate polls in this study were wrong 40% of the time overall (based on the accuracy table above), and the accuracy of their individual internals was worse. So, when the national media tout polls from Gallup, Time/CNN, Newsweek, Zogby, and the like about what "the American people feel" regarding something insubstantial like "presidential approval," or whether or not they want to re-elect the president, or which issues are most important to them, or how a person whose name is all but unknown nationally suddenly becomes "the frontrunner" for a party's nomination, it's wise to keep three things in mind:

  1. There is no objective way to verify the accuracy of most polls.

  2. It is part of human nature to want to predict (and thus control) the future. However, this study demonstrates unequivocally that, whether due to political bias or to flawed methodology, polls often drastically misinform the public.

  3. Only 1 in 20 polls in this study got all five questions right. In other words, 95% of the polls were wrong on one or more of their questions. So when a pollster sums one internal question result with another in order to claim something about public opinion, all the pollster may really be doing is compounding errors. For example, when Zogby adds the answers for, say, "fair" and "poor" together, and the result for "fair," the result for "poor," or both are wrong, the combined figure simply compounds those errors and gives false information to the media and the public.

Notes for Tables of Results:

  1. The 20 polls and their MOE's were obtained through RealClearPolitics.com, and through Google searches for those polls whose RealClearPolitics.com links no longer worked.
  2. Results are as of Oct. 20 with 100% of precincts reporting.
  3. Over/Under = number of points over (+n) or under (-n) the actual election result.

YES/NO Table Of Results:

Final results: YES 55%, NO 45%

Poll                     Date             YES    Over/   Within   NO     Over/   Within   MOE
                                                 Under   MOE             Under   MOE
Survey USA               Oct. 3–5         57%    +2      Y        43%    -2      Y        ±5.6%
Knight Ridder            Oct. 1–4         54%    -1      Y        41%    -4      N        ±3%
Field Poll               Sep. 29–Oct. 1   57%    +2      Y        39%    -6      N        ±4.8%
Survey USA               Sep. 28–30       61%    +6      N        39%    -6      N        ±3.7%
Los Angeles Times        Sep. 25–29       56%    +1      Y        42%    -3      Y        ±3%
Gallup                   Sep. 25–27       63%    +8      N        35%    -10     N        ±3%
CA Chbr of Commerce      Sep. 24–25       53%    -2      Y        41%    -4      N        ±3.5%
Survey USA               Sep. 19–22       57%    +2      Y        42%    -3      Y        ±3.5%
Public Policy Institute  Sep. 8–17        53%    -2      Y        42%    -3      Y        ±3%
Los Angeles Times        Sep. 6–10        50%    -5      N        47%    +2      Y        ±3%
Survey USA               Sep. 6–8         62%    +7      N        37%    -8      N        ±3.7%
Field Poll               Sep. 4–7         55%    exact   Y        40%    -5      N        ±4.5%
CA Chbr of Commerce      Sep. 1–4         52%    -3      Y        41%    -4      N        ±3.1%
Stanford U.              Aug. 29–Sep. 8   62%    +7      N        38%    -7      N        ±3.4%
Survey USA               Aug. 23–25       64%    +9      N        35%    -10     N        ±3.7%
Los Angeles Times        Aug. 16–21       50%    -5      N        45%    exact   Y        ±3%
Field Poll               Aug. 10–13       58%    +3      Y        37%    -8      N        ±5%
Public Policy Institute  Aug. 8–17        58%    +3      N        36%    -9      N        ±2%
Time/CNN                 Aug. 8           54%    -1      Y        35%    -10     N        ±4.3%
Gallup                   Aug. 7–10        69%    +14     N        26%    -19     N        ±4%

Average of 20 polls                       57%    +2      Y        39%    -6      N        ±3.6%

                                          YES                     NO
# Results within MOE                      11                      6
# Results outside MOE                     9                       14
# Same as actual                          1                       1
# Over actual                             12                      1
# Under actual                            7                       18

CANDIDATE Table Of Results:

Final results: Schwarzenegger (AS) 49%, Bustamante (CB) 32%, McClintock (TM) 13%

Poll                     Date             AS     Over/   Within   CB     Over/   Within   TM     Over/   Within   MOE
                                                 Under   MOE             Under   MOE             Under   MOE
Survey USA               Oct. 3–5         46%    -3      Y        34%    +2      Y        13%    exact   Y        ±5.6%
Knight Ridder            Oct. 1–4         37%    -12     N        29%    -3      Y        15%    +2      Y        ±3%
Field Poll               Sep. 29–Oct. 1   36%    -13     N        26%    -6      N        16%    +3      Y        ±4.8%
Survey USA               Sep. 28–30       45%    -4      N        28%    -4      N        16%    +3      Y        ±3.7%
Los Angeles Times        Sep. 25–29       40%    -9      N        32%    exact   Y        15%    +2      Y        ±3%
Gallup                   Sep. 25–27       40%    -9      N        25%    -7      N        18%    +5      N        ±3%
CA Chbr of Commerce      Sep. 24–25       35%    -14     N        31%    -1      Y        17%    +4      N        ±3.5%
Survey USA               Sep. 19–22       39%    -10     N        32%    exact   Y        18%    +5      N        ±3.5%
Public Policy Institute  Sep. 8–17        26%    -23     N        28%    -4      N        14%    +1      Y        ±3%
Los Angeles Times        Sep. 6–10        25%    -24     N        30%    -2      Y        18%    +5      N        ±3%
Survey USA               Sep. 6–8         39%    -10     N        29%    -3      Y        16%    +3      Y        ±3.7%
Field Poll               Sep. 4–7         25%    -24     N        30%    -2      Y        13%    exact   Y        ±4.5%
CA Chbr of Commerce      Sep. 1–4         28%    -21     N        33%    +1      Y        12%    -1      Y        ±3.1%
Stanford U.              Aug. 29–Sep. 8   40%    -9      N        28%    -4      N        8%     -5      N        ±3.4%
Survey USA               Aug. 23–25       45%    -4      N        29%    -3      Y        11%    -2      Y        ±3.7%
Los Angeles Times        Aug. 16–21       22%    -27     N        35%    +3      Y        12%    -1      Y        ±3%
Field Poll               Aug. 10–13       22%    -27     N        25%    -7      N        9%     -4      Y        ±5%
Public Policy Institute  Aug. 8–17        23%    -26     N        18%    -14     N        5%     -8      N        ±2%
Time/CNN                 Aug. 8           25%    -24     N        15%    -17     N        9%     -4      Y        ±4.3%
Gallup                   Aug. 7–10        42%    -7      N        22%    -10     N        13%    exact   Y        ±4%

Average of 20 polls                       34%    -15     N        28%    -4      N        13%    exact   Y        ±3.6%

                                          AS                      CB                      TM
# Results within MOE                      1                       11                      14
# Results outside MOE                     19                      9                       6
# Same as actual                          0                       2                       3
# Over actual                             0                       3                       10
# Under actual                            20                      15                      7


TOPICS: Front Page News; News/Current Events; Politics/Elections; US: California
KEYWORDS: california; catrans; poll; polls; recall; recallanalysis; study
To: Wolfstar
you said
"The study focuses on the accuracy of polls, not on their validity. My underlying premise is that the accuracy of most polls can't be proven."

While I applaud your effort, your study is invalid. To do a valid study of your premise, you would have to study the results of polling in a few hundred elections, or at least a couple of dozen elections, not one.

Your study is an extreme case of cherry picking the data. I'm not alleging it was intentional, just that you happened to pick an election in which the unusual voter turnout almost guaranteed inaccurate polls.

Actually, to avoid any charges of cherry picking the data, you should really identify a couple of thousand elections for which you can get some polling data and the election results. Then let a computer program do a random drawing of the couple of hundred elections that will be your study sample. Only at this point do you actually go out and collect the data. This way no one can accuse you of knowing which elections to include in the random drawing.

Since polls are entirely dependent on the demographic choices made by the pollsters, obviously, in an election such as the recall, where a large percentage of first-time voters cast ballots, the odds of the polls being wrong are greatly increased.

An interesting study would be to determine why the demographics of that one poll yielded good results. But, that would take inside information.
21 posted on 10/21/2003 3:01:42 AM PDT by RatSlayer
[ Post Reply | Private Reply | To 11 | View Replies]

To: Wolfstar
Nice! Thanks for all you do.

22 posted on 10/21/2003 3:20:45 AM PDT by Marie Antoinette (Caaaarefully poke the toothpick through the plastic...)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Wolfstar
I think your analysis is mathematically flawed. Trouble with independent events and all that. For instance, "YES" and "NO" are not independent. The stated margin of error (MOE) in a poll is purely the 95-percentile sampling error. It is only a statement of the uncertainty attributable to a finite sample size.

The bigger problem with polls is systematic error. People who take the time to talk to pollsters (I'm not one of them) are not representative of the population. An honest poll works to suppress systematic error, while so-called "push polls" are designed to produce a certain result, with every attempt made to build in systematic error.

To illustrate MOE: If there are one million red balls and one million black balls and you draw 1000 balls "at random," the 95-percentile sampling error is about 31. This means that 95% of the time you draw between 468 and 532 red (or black) balls, even though the "expected" number is 500. If you report drawing 47% red balls (with an MOE of 3.1%), you would also report 53% black balls with the same MOE. The two events are not independent.
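
A rough sketch of that calculation, assuming the usual normal approximation to the binomial (it simply reproduces the roughly 31-ball figure above):

    # 95% sampling error for drawing n balls from a 50/50 population of red and
    # black balls, using the normal approximation: MOE = 1.96 * sqrt(p*(1-p)/n).
    import math

    p, n = 0.5, 1000                     # half red, half black; draw 1000 balls
    se  = math.sqrt(p * (1 - p) / n)     # standard error of the sample proportion
    moe = 1.96 * se                      # about 0.031, i.e. roughly 31 balls

    low, high = (p - moe) * n, (p + moe) * n
    print(f"95% sampling error: about {moe * n:.0f} balls; "
          f"expect roughly {low:.0f}-{high:.0f} red balls in 95% of draws")
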
23 posted on 10/21/2003 3:33:11 AM PDT by Lonesome in Massachussets (Uday and Qusay and Idi-ay are ead-day)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Wolfstar
Wow! Bump and THANKS for all your hard work! I understand that you are evaluating the polls only on the terms of their own stated accuracy (MOE), and not their validity. I'm still not sure about the dynamics of a push poll (LATimes) on the actual result, for example. Some of the number shifts were due to external situations (Indian gaming money, the Arnold smear, and candidates dropping out), while others were probably due to forcing a vote in the survey, and not allowing an "undecided" response. And I would LOVE to see the internal polls from each party or candidate to see how they tracked with the public polls.
24 posted on 10/21/2003 5:03:08 AM PDT by alwaysconservative (95% of the California pre-election polls were wrong. You gotta love it!)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Wolfstar
But, but … how will we know how to vote if the talking heads don’t tell us who is the favorite?
25 posted on 10/21/2003 5:12:29 AM PDT by R. Scott
[ Post Reply | Private Reply | To 1 | View Replies]

To: Wolfstar
Polling today is part of the political corruption in this country. Years ago, polls were used to find out what people think. Today, with the introduction of demographics, polls are used to influence the voting public. You can find a group of people (a demographic) who will respond in a certain way to a certain wording of the questions. If the result is not exactly what you want, you alter the wording of the question and run the poll again. This is repeated until the percentages come out the way you want, and that is the result that gets published. When you see a poll, it's important to look at who commissioned it. You usually know what the results are going to be when ABC, CBS, Time, CNN, MSNBC, Newsweak, or any other liberal organization is paying the bills.
26 posted on 10/21/2003 5:27:49 AM PDT by OrioleFan
[ Post Reply | Private Reply | To 1 | View Replies]

To: Wolfstar
Thanks for your excellent work in this thread.
27 posted on 10/21/2003 6:58:05 AM PDT by Grampa Dave (Get a free FR coffee mug! Donate $10 monthly to Free Republic or 34 cents/day!)
[ Post Reply | Private Reply | To 1 | View Replies]

To: BibChr
bookmark
28 posted on 10/21/2003 7:05:09 AM PDT by BibChr ("...behold, they have rejected the word of the LORD, so what wisdom is in them?" [Jer. 8:9])
[ Post Reply | Private Reply | To 1 | View Replies]

To: onyx
Thanks, Onyx. The more informed we are, the better.
29 posted on 10/21/2003 7:34:43 AM PDT by Wolfstar (NO SECURITY = NO ECONOMY)
[ Post Reply | Private Reply | To 14 | View Replies]

To: Lonesome in Massachussets
An honest poll works to suppress systematic error, while so-called "push polls" are designed to produce a certain result, with every attempt made to build in systematic error.

But what is a push poll anyway?

Fundamentally, what people label a push poll isn't a poll at all. A push poll is political telemarketing masquerading as a poll. No one is really collecting information. No one will analyze the data. You can tell a push poll because it is very short, even too short. (It has to be very short to reach tens of thousands of potential voters, one by one). It will not include any demographic questions. The "interviewer" will sometimes ask to speak to a specific voter by name. And, of course, a push poll will contain negative information - sometimes truthful, sometimes not - about the opponent.

http://www.cbsnews.com/stories/2000/02/14/politics/main160398.shtml

30 posted on 10/21/2003 7:37:13 AM PDT by JohnnyZ (Red Sox in 2004)
[ Post Reply | Private Reply | To 23 | View Replies]

To: KayEyeDoubleDee
A margin of error gives a pollster a plus or minus swing within which he can claim to be right. A typical MOE is +/- 3% of the percentage the pollster arrives at for a candidate or issue. Let's use an example of 45% +/- 3%. The pollster can claim to be right if the percent of the vote comes out at 42, 43, 44, 45, 46, 47, or 48 percent.
31 posted on 10/21/2003 7:40:29 AM PDT by Wolfstar (NO SECURITY = NO ECONOMY)
[ Post Reply | Private Reply | To 16 | View Replies]

To: Wolfstar
Thanks for your work. I've wondered (mostly kiddingly) whether polling should be outlawed.

Dan
(c8
32 posted on 10/21/2003 7:55:22 AM PDT by BibChr ("...behold, they have rejected the word of the LORD, so what wisdom is in them?" [Jer. 8:9])
[ Post Reply | Private Reply | To 1 | View Replies]

To: RatSlayer
Thanks for your reply. I appreciate it. First, I didn't pick the polls; they are the ones taken by the organizations that were polling on the recall. RealClearPolitics.com followed and listed them during the entire recall campaign. I simply took their list and tested the accuracy of the data the pollsters came up with against the actual election results.

Second, it is entirely random from several points of view, not the least of which is the unforeseen nature of the event, which set up a situation that was less influenced by media and other manipulation than may be typical in most elections.

Third, if polling is scientific, it should not matter if voter turnout is "unusual." Either pollsters can go out and accurately determine answers to their questions, or they can't.

You wrote: "Since polls are entirely dependant on the demographic choices made by the pollsters, Obviously, in an election such as the recall, where a large percentage of first time voters cast ballots, the odds of the polls being wrong are greatly increased." The whole point of my study is to challenge that very notion.

I challenge the notion that polls are at all scientific. Again, either pollsters can go out and accurately determine answers to their questions, or they can't. Why do they need ideal conditions? It's always the exception that proves the rule. So it seems to me that the unusual conditions of this recall were the perfect opportunity for pollsters to show their stuff. Yet they failed pretty convincingly.

Fourth, my premise, which is completely true, is that there is no objective way for the general public to verify the accuracy of most polls. This study is fully accurate within its own data. I made it a point to emphasize that what conclusions can be drawn are based on this study. Sure it would be nice to have the time and resources you describe, with hundreds of elections and polls to study and computer programs to sift through them, etc. But who in the general public has access to such resources?

The study shows what it shows, without artifice, and with a simple methodology that anyone can follow.

33 posted on 10/21/2003 8:10:48 AM PDT by Wolfstar (NO SECURITY = NO ECONOMY)
[ Post Reply | Private Reply | To 21 | View Replies]

To: Marie Antoinette; Jim Robinson
You are most welcome. Thanks to Jim Robinson for giving us all the opportunity to have such dialogues.
34 posted on 10/21/2003 8:12:02 AM PDT by Wolfstar (NO SECURITY = NO ECONOMY)
[ Post Reply | Private Reply | To 22 | View Replies]

To: Lonesome in Massachussets
MOE's are given as PLUS or MINUS a percentage. So that I don't have to repeat myself, please be kind enough to see my posts #31 and #33 on this thread. Now, I don't claim to be a whizbang mathematician. I try to reduce things to a level where I can understand them, and assume that if I understand them, most others will, too.

If someone tells me they offer, say, a 45% prediction of something occurring, plus or minus 3%, that tells me they give themselves a range of seven percentage points within which they can claim to be right. Given an objective actual against which to compare the prediction, I can then see whether or not it was accurate. Very simple. Very clear. Very straightforward. The numbers speak for themselves.

35 posted on 10/21/2003 8:20:43 AM PDT by Wolfstar (NO SECURITY = NO ECONOMY)
[ Post Reply | Private Reply | To 23 | View Replies]

To: alwaysconservative
Hi, "AC." Thanks for adding your thoughts to this thread. Either pollsters can turn out an accurate product, or they can't. An unusual and unexpected event like the recall should have been an ideal opportunity for them to show their stuff, but they didn't. So how on earth are we in the general public supposed to know when they are accurate and when they are not? Your post buys into all the analysis of the situation the media fed us. That's fine. But what I'm doing is challenging the entire pollster/media edifice.
36 posted on 10/21/2003 8:28:25 AM PDT by Wolfstar (NO SECURITY = NO ECONOMY)
[ Post Reply | Private Reply | To 24 | View Replies]

To: R. Scott
Exactly!
37 posted on 10/21/2003 8:29:06 AM PDT by Wolfstar (NO SECURITY = NO ECONOMY)
[ Post Reply | Private Reply | To 25 | View Replies]

To: OrioleFan
Polling today is part of the political corruption in this country.

Truer words were never spoken — er, written. :-)

People are free to accept or reject the results of this case study. But I would hope most would allow their assumptions about polling to be challenged a bit.

38 posted on 10/21/2003 8:31:21 AM PDT by Wolfstar (NO SECURITY = NO ECONOMY)
[ Post Reply | Private Reply | To 26 | View Replies]

To: Grampa Dave
Good morning, Grampa Dave. I appreciate your kindness. Just hoping to shake up a few cherished notions about polling.
39 posted on 10/21/2003 8:32:43 AM PDT by Wolfstar (NO SECURITY = NO ECONOMY)
[ Post Reply | Private Reply | To 27 | View Replies]

To: BibChr
You are most welcome. Sometimes I wonder the same thing myself, LOL! Actually, what I've wondered about for a long time is who invented "presidential approval ratings" and such, why, and how on earth I can trust something I have no way to test for accuracy. The recall gave me such an opportunity. Most enlightening.
40 posted on 10/21/2003 8:37:00 AM PDT by Wolfstar (NO SECURITY = NO ECONOMY)
[ Post Reply | Private Reply | To 32 | View Replies]

