Free Republic

Response rates and poll accuracy (A leftist pollster's spin on polling)
The Hill ^ | 11/30/05

Posted on 11/30/2005 8:08:18 PM PST by Jean S

After weeks of substance in this space, let’s switch to a nerdy discussion of technique.

The rash of inaccurate public polls before last month’s election provides reason to focus on an oft-cited culprit: declining response rates.

Fewer people respond to polls, the argument goes, and therefore polls grow less accurate. It’s one of those widely accepted pieces of received wisdom that has almost no basis in fact.

There is little doubt that response rates are declining. Even the very careful studies done by agencies like the Centers for Disease Control and Prevention saw response rates decline by over 20 points between 1993 and 2000. People are busier, more suspicious, tired of telemarketers, protected by caller ID, answering machines and screening capabilities. Though there is dispute about the precise numbers, between 1.5 and 4 percent of the public (not voters) only have cell phones and can’t be polled. A relatively small problem now, this one is likely to grow as 15 percent claim they will abandon their landlines over the next five years.

In analyzing the impact on political polls of what we call in the trade “non-response,” the key question is not how many people don’t take surveys but how different politically those voters we can contact are from those we cannot. To date, the evidence suggests not very. Surprising as it may seem, there is no scientific study that indicates inaccurate election polls result from non-response.

In fact, the evidence suggests no relationship at all. Say what you will about the exit poll (and much of what has been said is foolish), but exhaustive analysis indicates no meaningful relationship between the magnitude of error in precincts and voters’ willingness to participate.

Others have examined what happens when heroic efforts are made to increase response rates. A few years ago, analysts compared two otherwise identical surveys, one with a 36 percent response rate, the other with a 61 percent response rate. Most of the significant differences between the two were on demographics. Responses to attitudinal questions were nearly identical. Across all 91 items, including demographics, the average difference was about two points, well within the margin of error.
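
As a rough illustration of that "within the margin of error" claim, here is a minimal Python sketch; the sample sizes (1,000 per survey) are assumed for illustration, since the column does not report them.

  import math

  # Assumed sample sizes; the column does not report them.
  def moe_of_difference(n1, n2, p=0.5, z=1.96):
      # 95% margin of error, in percentage points, for the difference
      # between two independent sample proportions near p.
      se = math.sqrt(p * (1 - p) / n1 + p * (1 - p) / n2)
      return 100 * z * se

  print(round(moe_of_difference(1000, 1000), 1))  # ~4.4 points, so a 2-point gap falls well inside it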

While complaining about low response rates, political commentators often ridicule the steps most likely to increase them. How many times have you read derogatory references to surveys in the field for 10 or 20 days? In the context of fast-moving political campaigns, a poll done over the course of weeks seems ridiculous. But long field times are the single most effective way to increase response rates. Increasing the response rate in the survey above from 36 percent to 61 percent required keeping it in the field for eight weeks instead of five days.

Another approach to increasing response rates involves sending advance letters to respondents alerting them to expect a survey call from a reputable researcher. This method can increase response rates by three to 13 percentage points. But a recent study by Yale’s Christopher Mann, working with The Washington Post and Quinnipiac polls in 2002, found that while the letters increased response rates somewhat, there was no evidence that the higher response rates produced more accurate forecasts of the election outcomes.

Indeed, extraordinary efforts to increase cooperation make surveys less accurate by bringing into the sample normally uninterested people who are much less likely to vote.

None of this should be read to suggest that methodology is unimportant or that we should forget about increasing cooperation. Even if noncooperation isn’t degrading results now, it may as the problem worsens. But the data do suggest that the conventional wisdom about non-response is wrong, for now. We need to look at a host of other factors, from faulty sampling to poor analysis, to find the villain.

In the end, all pollsters share the same respondents, but many of us regularly get it right.

Mellman is president of The Mellman Group and has worked for Democratic candidates and causes since 1982, including Sen. John Kerry (D-Mass.) last year.


TOPICS: Editorial; Politics/Elections
KEYWORDS: polls

1 posted on 11/30/2005 8:08:19 PM PST by Jean S

To: JeanS
"Surprising as it may seem, there is no scientific study that indicates inaccurate election polls result from non-response."

How about those resulting from pollster bias, Larry (Bud)?
2 posted on 11/30/2005 8:13:25 PM PST by decal (Mother Nature and Real Life are conservatives; the Progs have never figured this out.)

To: JeanS

The only polls that count are held on Election Day. The rest are just spun guesses.


3 posted on 11/30/2005 8:13:36 PM PST by clee1 (We use 43 muscles to frown, 17 to smile, and 2 to pull a trigger. I'm lazy and I'm tired of smiling.)

To: JeanS

I don't trust the polls & when we are called, we never give correct information.


4 posted on 11/30/2005 8:17:02 PM PST by cabinman (Remember to do good and shun evil.)

To: clee1

"Surprising as it may seem, there is no scientific study that indicates inaccurate election polls result from non-response."

Nonsense. Besides, you don't need a scientific study. If the nonrespondents as a group mirror the demographics of the population, no problem. However, if the nonrespondent group is "skewed", for example, all Republicans, then the poll results will be extremely biased. It's called "non response bias" in the survey statistics business, and numerous methods exist to account for it.


5 posted on 11/30/2005 8:19:20 PM PST by Buck W. (Yesterday's Intelligentsia are today's Irrelevantsia.)

To: Buck W.

In other words, if a significant number of conservatives refuse to participate in your screwy polls because they think you are biased against them and will twist their answers, you'll wind up with polls that show Kerry leading 2% the day before he concedes.


6 posted on 11/30/2005 8:38:11 PM PST by Fenris6 (3 Purple Hearts in 4 months w/o missing a day of work? He's either John Rambo or a Fraud)

To: Fenris6

Precisely!


7 posted on 11/30/2005 8:40:07 PM PST by Buck W. (Yesterday's Intelligentsia are today's Irrelevantsia.)

To: JeanS

1) "After weeks of substance in this space, let’s switch to a nerdy discussion of technique."

-So, where's the discussion of technique? Non-response is not a technique. Jeesh.


2) "The rash of inaccurate public polls before last month’s election provides reason to focus on an oft-cited culprit: declining response rates."

And he spends the rest of the article telling us that declining response rates DO NOT really affect the accuracy of the poll; and of course if you cannot get your sample in 5 days, you stretch out the time until you do. Ninny.

3) We are still left with this guy's embarrassing admission that the polls are notoriously "inaccurate" - but we know why, don't we, FReepers!


8 posted on 11/30/2005 8:43:38 PM PST by Anselma

To: JeanS

"None of this should be read to suggest that methodology is unimportant"

Methodology is all-important. It's the difference between lying and at least trying to be accurate. Which is why he didn't belabor the point.


9 posted on 11/30/2005 8:46:47 PM PST by Anselma

To: decal
How about those resulting from pollster bias

Zogby calls that his "special sauce."

10 posted on 11/30/2005 8:49:29 PM PST by SmithL (There are a lot of people that hate Bush more than they hate terrorists)

To: JeanS
Let's see. I am 31 years old. I've had my name in the phone book for 11 years. I've never missed an election. My father is 57. He has had his name in the phone book since 1971, when my folks married and they got their own place. He missed one election in the 70s due to unexpectedly staying out of town longer than planned on a work trip. My mother has had her own entry in the phone book since their 1995 divorce: she has never missed an election. My folks have both been registered as Dems and Independents, although both of them (and I) are registered Republicans now.

The same can be said for my in-laws, except they are still married to each other, with respect to their phone book listings and voting record.

Not a one of us has EVER been called for a legitimate political poll.

Who do these pollsters call?

11 posted on 11/30/2005 9:30:38 PM PST by GoBucks2002

To: JeanS

This must be in response to those annoying polls that show widespread support for our continued efforts in Iraq and in the GWOT.

Typical lefty Bravo Sierra.


12 posted on 11/30/2005 9:57:54 PM PST by Solemar (a/k/a Larry the Lounge Lizard)

To: GoBucks2002
***Who do these pollsters call?****

  1. Democratic National Committee
    430 S. Capitol St. SE
    Washington, DC 20003
    202-863-8000

  2. The Communist Party USA National Office
    235 W. 23rd Street
    New York, NY 10011
    212 989-4994

The above is just a guess mind you :-)

13 posted on 12/01/2005 5:03:53 AM PST by Condor51 (Leftists are moral and intellectual parasites - Standing Wolf)

To: JeanS
Surprising as it may seem, there is no scientific study that indicates inaccurate election polls result from non-response.

Probably because you can't get the non-responsive to respond to the study, by definition.

14 posted on 12/01/2005 5:07:33 AM PST by kevkrom (403-3)

To: GoBucks2002
Not a one of us has EVER been called for a legitimate political poll. Who do these pollsters call?

I get called from time to time. I either refuse to participate or flat-out lie.

15 posted on 12/01/2005 5:10:06 AM PST by kevkrom (403-3)

To: JeanS

I've been polled. It never seems to be just a few simple questions; the typical poll consists of strings of interlocking questions that end up taking 10 to 15 minutes to churn through. They never seem to be "yes or no" questions; they love those "strongly agree, somewhat agree, somewhat disagree, strongly disagree" types. They never tell you for whom the poll is being taken, but you can usually tell after a few questions.

No wonder people hang up on these idiots.


16 posted on 12/01/2005 5:18:46 AM PST by Fresh Wind (Democrats are guilty of whatever they scream the loudest about.)

To: JeanS

Polls published by the MSM are nothing more than DNC talking points. If you wish to talk about pink elephants, you simply create one by reporting to the world the existence of pink elephants. The MSM also believes that if they report the existence of pink elephants often enough, they do exist.

I do not trust their fabricated and/or twisted, packaged, and repackaged, ahem, news reports, which express the Liberal view as opposed to reality, so why would I believe polling from the same sources?

Votes count: actual votes, not fabricated poll/spin data. We could have polled the Democrats in the House of Representatives on pulling out of Iraq immediately, and the poll would probably have shown that 100 percent of them supported the idea, but when actually required to vote on it, well, we know the rest of the story....


17 posted on 12/01/2005 5:32:32 AM PST by TheForceOfOne

To: JeanS

From Polipundit for your information:

Polling Methodology




Many people have questions about polls and polling methodology. Here is a brief description of the process and how we handle the polls.

How are Polls Conducted?
Several organizations conduct state political polls, usually when commissioned to do so by their customers, which include local and national newspapers and television stations, as well as political consultants, and candidates. Different polling organizations use different methodologies and there is considerable controversy about which method is most accurate. To conduct a poll, the company first develops a questionnaire (together with the client). The phrasing of the questions is known to influence the results. Consider the following options:

If the election were held today, would you vote for George Bush or John Kerry?
If the election were held today, would you vote for John Kerry or George Bush?
If the election were held today, would you vote for Bush, Kerry, or Nader?
If the election were held today, who would you vote for?
The questions are then read to 500 to 1000 randomly chosen people called by telephone. Usually additional questions are added about age, gender, ethnicity, political affiliation, education, income, and other factors to allow breakdowns by these categories. Often there are questions designed to determine if the person is likely to vote. These may include:

Are you currently registered to vote?
Did you vote in 2000?
Did you vote in 1996?
Do you believe that it is every citizen's duty to vote?
Do you think your vote matters?
Some polling companies give the results based on all adults they survey. Others include only registered voters. Yet others include only likely voters, using their proprietary formula for determining who is likely to vote based on questions like those above. Depending on exactly how the voting questions are phrased and which category of respondents is included in the poll, some systematic bias may be introduced. Some pollsters publish the results for both likely voters and all registered voters. Until Sept. 18, this site used likely voters when there was a choice (not often). Starting Sept. 18, registered voters were used when there was a choice, on the grounds that there is increasing evidence the old formulas for screening likely voters will not work in 2004.
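
A minimal sketch of how such a likely-voter screen might be applied; the scoring rule and cutoff below are invented for illustration and are not any pollster's actual proprietary formula.

  # Invented scoring rule for illustration only; real "likely voter" models are proprietary.
  def is_likely_voter(resp):
      if not resp["registered"]:
          return False
      score = sum([resp["voted_2000"], resp["voted_1996"], resp["thinks_vote_matters"]])
      return score >= 2

  respondents = [
      {"registered": True, "voted_2000": True, "voted_1996": False, "thinks_vote_matters": True},
      {"registered": True, "voted_2000": False, "voted_1996": False, "thinks_vote_matters": True},
      {"registered": False, "voted_2000": False, "voted_1996": False, "thinks_vote_matters": False},
  ]
  likely = [r for r in respondents if is_likely_voter(r)]
  print(len(likely), "of", len(respondents), "respondents pass the screen")  # 1 of 3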

A recent development is the use of automated polls. With this technology, the company’s computer dials telephone numbers at random and then plays a message asking whoever answers the demographic and political questions, to which they respond by pressing buttons on the telephone. The percentage of people who hang up quickly when this technique is used is much higher than when a human being conducts the poll. Nevertheless, Survey USA and Rasmussen rely heavily on this technique because it is fast and cheap, allowing them to charge less than their competitors in the polling business. Traditional polling companies criticize the methodology on the grounds that it does not adequately filter out teenagers too young to vote but definitely old enough to play games with the system. Chuck Todd, editor of the Hotline, a daily political tipsheet, was once called by Survey USA and was effortlessly able to pass himself off as a 19-year-old Republican Latina, something he could never have done with a human pollster. In response, the companies using automated polling have numerous studies comparing their polls to traditional ones showing that they get the same results as their nonautomated competitors. But the issue of automated polling remains controversial.

Yet another factor is the day of the week the calls are made. Calls made Monday through Friday have a larger probability of getting a woman than a man, because there are more housewives than househusbands. Since women are generally more favorable to the Democrats than men are, this effect can introduce bias. Also, calls made Friday evening may miss younger voters, who may be out partying, and thus underweight them in the results. To counteract this effect, some polling companies call for an entire week, instead of the usual three days, but this approach results in polls that do not respond as quickly to events in the news. The most extreme example of this approach is Rasmussen, which polls people in the key battleground states every day and summarizes the results for the previous month at the start of each new month. More information about the polling processes is provided by this tutorial on polling written by the Gallup Poll.

Who Conducts Polls?
The websites of some of the major polling organizations are listed below. Note that most of them do not give much useful data for free. To get the numbers, you have to buy a subscription, in which case a wealth of data is provided. Also note that a few of the polling companies keep track of the state-by-state electoral vote, but all of them use only their own data. Since no polling company polls every state every week, using only one company's data means that their maps are often based on obsolete data. For this site, we base the maps on the results of four paid subscriptions, some of which, like www.pollingreport.com, themselves subscribe to multiple polling companies. Other sources, such as polls published by major media outlets are also used.

American Research Group
The Gallup Poll
Mason-Dixon
Quinnipiac University Polling Institute
Rasmussen Reports
Research 2000
Strategic Vision (GOP)
Survey USA
Zogby
Which Polls Do You Use?
The methodology of polling is not simple. There are numerous subtle issues involved. But given a collection of polls, which ones should we use? To avoid arbitrary judgments, there has to be an unambiguous rule. This site originally had a simple policy: use the most recent major poll. There are two key words here: 'recent' and 'major.' Most polls take 2-3 days to conduct. For these polls, the poll ending most recently wins.

However, Rasmussen conducts monthly tracking polls in which they poll people every day for an entire month. These polls are treated differently because they cover such a long time span. If some other poll has a middle date on or after the 15th of the month, it is considered more recent than the Rasmussen poll, otherwise not. For example, a poll taken July 14-16 is more recent than the July Rasmussen poll, but a poll taken July 13-15 is not.
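
The middle-date rule described above can be expressed as a short sketch; the helper name is ours, and the year is assumed since the example gives only month and day.

  from datetime import date

  # Sketch of the rule above: a regular poll counts as more recent than the
  # monthly tracking poll only if its middle field date falls on or after the 15th.
  def beats_monthly_tracking(start, end):
      middle = start + (end - start) // 2
      return middle.day >= 15

  print(beats_monthly_tracking(date(2004, 7, 14), date(2004, 7, 16)))  # True  (middle date is July 15)
  print(beats_monthly_tracking(date(2004, 7, 13), date(2004, 7, 15)))  # False (middle date is July 14)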

It has been suggested to average the last three polls instead of using only the most recent one. Obviously this is possible, and some election websites do this. However, suppose the most recent one ended yesterday, the next most recent ended 59 days ago and the one before that ended 89 days ago. Should these really be averaged?

The next improvement is to let polls expire: only count 'recent' ones. But what is recent? Less than 30 days? 60 days? 90 days? It begins to get arbitrary here. Even worse, this formula leads to strange effects. Consider the case cited above with polls yesterday, 59 days ago, and 89 days ago. Somebody is ahead. Next day, even in the absence of any polls, the oldest poll expires and a new average is computed. If one of the candidates has made great progress in the past three months in some state and all of a sudden the old poll weighing him down magically vanishes, he unexpectedly leaps ahead. It could thus happen that the score changes dramatically even with no new poll because some poll taken three months ago has just gone beyond the 90-day look-back period. Consequently different websites, newspapers, pollsters, etc. may come to different conclusions about the horse race, even using exactly the same polling reports. And you thought you could understand the election campaign without a Ph.D. in statistics?

Starting Oct 4 the methodology was changed. There were so many polls and they were so far apart that the most recent 3 polls per state were averaged.
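
A sketch of that "average the three most recent polls per state" rule; the poll records below are invented for illustration.

  # Invented poll records; only the three most recent for a state are averaged.
  polls = [
      {"state": "OH", "end": "2004-10-01", "bush": 49, "kerry": 47},
      {"state": "OH", "end": "2004-09-28", "bush": 48, "kerry": 48},
      {"state": "OH", "end": "2004-09-25", "bush": 47, "kerry": 49},
      {"state": "OH", "end": "2004-09-10", "bush": 51, "kerry": 45},  # ignored once three newer polls exist
  ]
  recent = sorted(polls, key=lambda p: p["end"], reverse=True)[:3]
  average = {c: sum(p[c] for p in recent) / len(recent) for c in ("bush", "kerry")}
  print(average)  # {'bush': 48.0, 'kerry': 48.0}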

The second key word is 'major.' What is a major poll? Is requesting your grandmother in Michigan to ask all her friends who they are going to vote for a major poll? Probably not. Not even if she is a wonderful lady and has many, many friends. Well-designed scientific polls are not easy to conduct. The primary issue is selecting participants at random from the correct pool, which itself is arguable (all adults? all registered voters? all likely voters? what about voters living overseas who are registered in the state?). For purposes of this site, a major poll is one conducted by a professional polling organization and published by a mainstream media outlet. These organizations include half a dozen or so national polling companies, and some regional ones. However, an increasing number of universities are also getting into the act because they have the key ingredients in abundance, namely, professors who understand political science and statistics and lots of cheap labor, sometimes referred to as students.

What Does Margin of Error Mean Exactly?
There is no concept as confusing as 'Margin of Error.' It is used a lot but few people understand it. If a polling company calls 1,000 randomly selected people in a state that is truly divided 50-50, it may, simply by accident, happen to reach 520 Democrats and 480 Republicans and announce that Kerry is ahead 52% to 48%. But another company on the same day may happen to get 510 Republicans and 490 Democrats and announce that Bush is ahead 51% to 49%. The variation caused by having such a small sample is called the margin of error and is usually between 2% and 4% for the sample sizes used in state polling. This means that with a margin of error of, say, 3%, a reported 51% really means that there is a 95% chance that the correct number is between 48% and 54% (and a 5% chance that it is outside this range).

In the first example above, with a 3% MoE, the 95% confidence interval for Kerry is 49% to 55% and for Bush 45% to 51%. Since these overlap, we cannot be 95% certain that Kerry is really ahead, so this is called a statistical tie. Nevertheless, the probability that Kerry is ahead is greater than the probability that Bush is ahead; we just cannot be very sure of the conclusion. When the ranges of the candidates do not overlap (i.e., the difference between them is at least twice the margin of error), then we can be 95% certain the leader is really ahead.
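
The arithmetic behind those intervals, as a small sketch; a sample of 1,000 is assumed here, which gives roughly the 3-point margin used in the example.

  import math

  def confidence_interval(share_pct, n, z=1.96):
      # 95% confidence interval, in percentage points, for a reported share.
      p = share_pct / 100
      moe = 100 * z * math.sqrt(p * (1 - p) / n)
      return share_pct - moe, share_pct + moe

  kerry_low, kerry_high = confidence_interval(52, 1000)  # roughly 49 to 55
  bush_low, bush_high = confidence_interval(48, 1000)    # roughly 45 to 51
  overlap = kerry_low <= bush_high and bush_low <= kerry_high
  print("statistical tie" if overlap else "leader is outside the margin of error")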

For this reason, the white states in our maps should be regarded as tossups no matter who is currently slightly ahead; the results could easily flip in the next poll without a single voter changing his or her mind. Of course, the margin of error can be reduced by using a bigger sample, but that takes longer and costs more money, so most clients opt for 500 to 1000 respondents.

Should I believe poll results?
Sometimes. But you should try to get answers to these questions about any polls you see. Unfortunately, some pollsters are completely unethical, as described in this article.


18 posted on 12/01/2005 6:18:53 AM PST by conservativecorner

To: JeanS

Additional info from same site:

Poll Methodology - A 2004 Guide


There has been intense interest in the polls this year, and the recent disagreement among them about where the race stands has only heightened the discussion. Some people like to support a poll with results they like, without any examination of why that poll is different from others. And some reject polls on a charge of outright bias or prejudice, which I can understand, given the partisan comments from supposedly objective people like John Zogby and Larry Sabato, but I must caution readers to be careful to consider the evidence before accepting or rejecting a poll.

Let’s start with the obvious: more information is better, especially if it is relevant to how the numbers were derived. By relevant, I mean two things: the information should show valid evidence to support the poll’s main conclusion, and the information should be consistent with past polls, so that trends and historical benchmarks may be seen. To that end, I discovered that in terms of methodology, we can separate the polls into three broad types – the polls which provide demographic internal data, the polls whose questions show the mood on the main issues, and those polls which refuse to provide internal data.

The best way to find out how the polls developed their methodologies is to look for that information. Some publish their methodologies at the bottom of their poll releases; others are so proud of their methodologies that they wrote up special articles to explain their process. Others did not have their methodologies handy, but responded when I asked them how they did their polling. And others, well, they were neither forthcoming nor cooperative, and that speaks for itself. This article allows you to get to know the polls all over again, this time starting from the inside. I figure this guide will help you figure out for yourself whose word is worth listening to, and who is nothing but hooey. I am listing the polls in alphabetical order. All telephone polls referenced employ Random-Digit-Dialing (RDD); RDD is used to pre-select area codes and exchanges, then use a randomizer to select the last 3 or 4 digits, depending on the poll. When I say ‘pure’ RDD, I mean that the respondent pool is new; some polls appear to use an initial pool of respondents for future polling, and I will note this where it shows up. All references to “Margin of Error” reflect a standard 95% confidence level by the polls. When I reference ‘NCPP’, I mean the National Council on Public Polls, which published guidelines for demographic weighting and internal responsibility that it expects its members to follow. Another national group for pollsters is the American Association of Public Opinion Research (AAPOR), but they appear to be much smaller, and have looser standards than the NCPP. It’s worth noting, though, that neither the NCPP nor AAPOR appears to have any deterrent in their policies; there is no specified penalty for not meeting their standards, nor any formal auditing authority. That, of course, is one reason I’m doing this review.

ABC News/Washington Post: This agency uses a call center for its polling. The subcontractor at present is TNS of Horsham, Pa. The poll is performed by telephone, calling roughly 1,200 “randomly selected adults nationwide”, from which self-identified registered voters are polled for the report’s information. The respondent poll is pure RDD for each poll. ABC/WP says their Margin of Error is +/- 3 points. The ABC/WP poll cites results by gender, race, age, education, and sometimes also by income groups. Regarding the weighting of their poll data, ABC says, “Final data are weighted using demographic information from the Census to adjust for sampling and non-sampling deviations from population values. Respondents customarily are classified into one of 48 cells based on age, race, gender and education. Weights are assigned so the proportion in each of these 48 cells matches the actual population proportion according to the Census Bureau’s most recent Current Population Survey.” The weighting appears to be in line with NCPP guidelines.
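
A much-reduced sketch of that kind of cell weighting; two invented cells stand in for ABC's 48, and the population shares are hypothetical, not Census figures.

  from collections import Counter

  # Two invented cells and hypothetical target shares, for illustration only.
  population_share = {"women_under_45": 0.5, "men_45_plus": 0.5}
  respondents = [
      {"cell": "women_under_45", "choice": "Kerry"},
      {"cell": "women_under_45", "choice": "Bush"},
      {"cell": "men_45_plus", "choice": "Bush"},
  ]

  counts = Counter(r["cell"] for r in respondents)
  # Weight = target share of the cell divided by its share of the sample.
  weights = {cell: population_share[cell] / (counts[cell] / len(respondents)) for cell in population_share}

  tally = Counter()
  for r in respondents:
      tally[r["choice"]] += weights[r["cell"]]
  total = sum(tally.values())
  print({name: round(100 * v / total) for name, v in tally.items()})  # {'Kerry': 25, 'Bush': 75}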

American Research Group: ARG’s methodology for national polling was not published, but from their primary polling in the spring, ARG stated “The results for this tracking survey are based on completed telephone interviews among a statewide random sample of likely primary voters in [the state]. Telephone households are selected by adding random digits to telephone exchanges selected in a random probability-proportionate-to-size sample of the state. Respondents in the telephone households are randomly selected and then screened for voter registration (including intent to register on or before primary day) and likelihood of voting in the primary.” On September 22, ARG released a nationwide compilation of state polls, which revealed they use a 53-47 weighting of women to men, and a party weighting of 41.4% Democrats, 35.5% Republicans, and 23.1% Independents. These do not conform to 2000 or 2002 exit polling, nor the 2000 Census, and are not in line with NCPP guidelines. ARG’s respondent poll may sometimes be pure RDD, but at other times appears to be a reserved poll from previous polls, in order to track possible opinion shifts in the same pool. ARG does not explain whether this is the case, and did not respond to a request for clarification.

Associated Press/Ipsos: The Associated Press Poll is conducted by Ipsos-Public Affairs. The poll is a telephone poll of randomly selected numbers, with a sample of roughly 1,500 adults nationwide, producing between 1,200 and 1,300 registered voters, whose responses produce the poll’s information, along with a smaller number of self-described ‘likely’ voters, who are defined as voters who voted in 2000 and are 8-10 on a likelihood of voting (1-10 scale), or who did not vote in 2000 but are ‘10’ this year. The respondent poll is pure RDD. Ipsos weights its poll, but does not detail the breakdown in its Press Release or Questionnaire, though some demographic information was released in their latest poll. Instead, Ipsos concentrates on the trend of questions measuring the degree of support on key issues, such as Overall Leadership, Foreign Policy, the Economy, Domestic Issues, and Terrorism. Ipsos’ reported Margin of Error is +/- 2.5 points for adults, +/- 2.7 points for registered voters.

Ayres McHenry: “Ayres, McHenry, & Associates belongs to the American Association of Public Opinion Research, and the American Association of Political Consultants, where Ayres serves as a member of the Board of Directors.”
That’s all they have. Nothing about weighting or breakdown of samples, which is contrary to the AAPOR’s written Code of Professional Ethics and Practice. Ayres McHenry did not respond to a request for more information. As this agency is a Republican-sponsored firm that does not provide any supporting evidence for its statements, it should not generally be considered a reliable indicator of voters’ true opinion.

Battleground Poll: The Battleground Poll uses two firms for its interviews and analysis: the Tarrance Group and Lake, Snell, & Perry. The Battleground Polls started in 1991, so they have some history to track. The Battleground Poll did not publish its methodology, but the Tarrance Group was kind enough to answer a request for more information (hat tip to Brian Nienaber); Lake, Snell, & Perry did not respond to a request for information. Overall, the Battleground Poll uses a “stratified” sample design, and pure RDD for respondent pooling. Battleground explains their weighting thusly: “quotas are set by state and by gender based on voter registration statistics or voting age population, where applicable. Prior election turnout statistics are also used to set these quotas. For the 2004 Battlegrounds, we have been applying a weight to the demographics of race and party identification. Race is weighted to White=80%, African Americans=10%, Hispanics=6%, and Other races=4%. Party identification is weighted to even with Republican=42.3%, Democrat=42.3%, and Independent=15.4%.” Note that the demographics are consistent with the 2000 Census, and the party weighting presumes parity. Battleground releases the demographic breakdown of their respondents, but does not publish polling results by demographic groups. Like the Associated Press, most questions reflect a trend of national mood on the major issues. The sample is 1,000 registered voters who self-describe as “likely”. Tarrance estimates their Margin of Error to be +/- 3.1 points.

CBS News, and CBS News/NY Times: Telephone interviews with adults in the continental United States. Phone calls are randomly generated within system-selected area codes and exchanges. CBS goes to some length to brag about their methodology, and you know what? They should. While CBS and the NY Times tend to over-weight the poll in favor of Democrats, their demographics not only follow NCPP guidelines by matching the 2000 Census, they also publish their demographics regularly, and have for the last five years. If you don’t like their numbers, at least you can take them apart to see where they came from, and this with no subscription fee or doubletalk to hide the trends. Obviously, the “60 Minutes” guys and Dan Rather have nothing to do with the polling at CBS. The polls are consistent and complete, and frankly, very impressive in their detail and history. CBS/NYT generally calls about 1,000 adults in each survey, with around 78-80% as registered voters. The respondent poll is pure RDD. Their cited Margin of Error is +/- 3 points.

CNN/USA Today/Gallup: This poll uses random telephone interviews, with around 1,000 adults on average, around 76-80% registered voters responding. Announced Margin of Error is +/- 4 points. Demographic details are available, but generally only to Gallup subscribers. The weighting matches NCPP guidelines. The respondent poll is pure RDD.

Democracy Corps: This Democrat-sponsored polling agency (James Carville is one of the owners, which should tell you a lot) uses Greenberg Quinlan Rosner (GQR) for its interview sampling. They do not respond to queries, and do not explain their methodology. Note that NPR uses the same sub-contractor for their polling. This agency should be recognized as partisan and biased by design.

Fox News/Opinion Dynamics: Opinion Dynamics Corporation conducts a national telephone poll of 1,000 self-described ‘likely voters’ from random contacts. Fox includes internal details by gender and party affiliation, but not race. Their website says “Generally, Fox News poll results are not weighted. The national probability sample, if conducted properly, should accurately reflect national attitudes. However, particularly because the survey is often conducted over only two nights (limiting the opportunity for callbacks), some demographic deviation is possible. Opinion Dynamics Corporation has a constantly updated database of demographic information about the national samples and, if one should deviate significantly from past averages, weighting is used to bring the sample into conformity with other samples”. In English, that suggests that Fox will weight some polls, but not others, which is a strike against consistency. There is no information to determine whether or not the respondent poll is pure RDD or pre-selected. The same website admits that Fox weights their polls by gender, 47% Men and 53% Women, even though this is not in line with NCPP guidelines, Census data, or consistent with Exit Polls from past elections. Neither Fox News nor Opinion Dynamics responded to a request for clarification.

Gallup: The gold standard of opinion polling. Gallup presents demographic and trend data for every poll they have anything to do with. Whether on their own or in combination with other groups (the CNN/USA Today/Gallup poll, for example), Gallup insists on consistent procedures to ensure consistency. Their respondent poll is pure RDD for the Presidential Trial Heats. Gallup weights their polls in line with NCPP guidelines, and releases internal data on race, gender, party affiliation, age, region, education, economic strata, union/non-union, veteran/non-veteran, religious preference, and sexual orientation. Gallup polls are random telephone interviews, with around 1,000 adults on average, around 76-80% registered voters responding. Announced Margin of Error is +/- 4 points. The downside to the demographic details is that they are generally only available to Gallup subscribers. With a 69-year track record, Gallup is able to show an impressive record for their predictions and tracking.

Harris: The Harris Poll is one of the oldest polls in the nation, after Gallup. For some reason, though, Harris is not nearly as successful as their older sibling, and I think I know why. They like to ask questions, but they don’t answer them. The Harris Poll is a random telephone poll, as most of the polls are, interviewing roughly 1,000 adults nationwide in each poll, and producing around 80% registered voters from that pool. The respondent poll for their telephone interviews is pure RDD. Harris also has an Interactive Poll, but there is no established benchmark for the accuracy of the Interactive poll, nor do they explain their methodology for the Interactive poll; I suspect it is similar to their telephone poll, since they produce similar results, but cannot confirm this possibility. Harris weights their responses by the NCPP guidelines, for age, gender, race, education, number of adults, number of voice/telephone lines in the household, region and size of place, in order to “align them with their actual proportions in the population”. Harris cites a +/- 3 point Margin of Error. Unfortunately, when it comes to releasing their information, well, they don’t. I’ve been part of the Interactive polling as a respondent, and even then, they are parsimonious with hard data. From the lack of response I’ve had from them, I get the strong impression they are all about chasing the corporate patrons, and only put out the occasional public poll to keep their name in the press. OK, that’s their right, but other polls can chase sponsors without looking like the Information Age version of Ebenezer Scrooge (pre-Ghost Visit). My advice? Ignore these guys, unless they start putting some hard data behind the headlines in their releases.

Investor’s Business Daily/Christian Science Monitor: The Christian Science Monitor is a long-established, well-respected name, but they have no experience in polling. Investor’s Business Daily is a publication I’d never heard of, until they showed up with their releases. They began polling in February 2001, using something they called “Indices” for various factors they considered important. The Indices are developed using random nationwide telephone interviews with approximately 900 adults each month. The respondent pool appears to be pure RDD, but there is no confirmation. They seem very impressed with themselves. I’m not impressed. Since they don’t release much hard data at all, and pretty much diddly to support their claims, and their ‘indices’ don’t seem to follow any established method for determining public opinion, my opinion of IBD is rather like the old Monty Python skit about an especially bad brand of wine: “This is not a poll for enjoying, it’s a poll for laying down, and avoiding”.

Investor’s Business Daily/TIPP : See Investor’s Business Daily/Christian Science Monitor, above.

LA Times: The Los Angeles Times wants to be a big-time newspaper. I write it that way, because while they want the glory, they don’t seem to feel as though they should have to earn it. The LA Times uses telephone interviews nationwide, of at least 1,500 adults, using pure RDD sampling. They produce a subset of registered voters, at around 77-80% of the adult number. The LA Times says the samples are “weighted slightly to conform with their respective census figures for gender, race, age, and education”, which may or may not be in alignment with NCPP guidelines. The Times’ announced Margin of Error is +/- 3 points. The Times releases details by party alignment and gender, and appears to over-weight Democrats.

Marist College Institute for Public Opinion: Marist is a college up in New York that produces polls on the Presidency. Marist does not release a lot of details, however, including their methodology. Their website notes that “MIPO adheres to the current Code of the National Council on Public Polls and the AAPOR Code of Professional Ethics and Practices”, which at least suggests they use the 2000 Census for their weights, although this does not speak to party alignment or sampling methodology. They haven’t put anything out for a long time, so it may not matter, but if they pop up again, the fact that they don’t back up their statements with supporting data should be a warning sign.

NBC News: NBC News uses a sub-contractor for its polling. Princeton Survey Research Associates (PSRA), which NBC contracts for some of its polls, was kind enough to provide specific details by email at my request (hat tip to Evans Witt). For the July poll where Kerry picked Edwards as his running mate, NBC used Braun Research, Inc. for the interviews, using a sample designed by Survey Sampling International, LLC. The sample was relatively small (504 registered voters), but used pure RDD. NBC says that “statistical results are weighted to correct known demographic discrepancies”, and that the weighting “parameters came from a special analysis of the Census Bureau’s 2003 Annual Social and Economic Supplement (ASEC) that included all households in the continental United States that had a telephone”. NBC estimates their Margin of Error at +/- 5 points, due to the smaller sample size compared to normal poll pools. NBC does not release demographic breakdowns of votes in their polls.

NBC also examined their response rate, which is an often overlooked factor in poll analysis. NBC states “the response rate estimates the fraction of all eligible respondents in the sample that were ultimately interviewed. At PSRAI it is calculated by taking the product of three component rates:
Contact rate – the proportion of working numbers where a request for interview was made – of 47 percent
Cooperation rate – the proportion of contacted numbers where a consent for interview was at least initially obtained, versus those refused – of 32 percent
Completion rate – the proportion of initially cooperating and eligible interviews that were completed – of 99 percent
Thus the response rate for this survey was 15 percent.”
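
The arithmetic in that quote is simply the product of the three component rates; a minimal check, using the figures given above:

  # Overall response rate as the product of the three component rates quoted above.
  contact, cooperation, completion = 0.47, 0.32, 0.99
  print(f"{contact * cooperation * completion:.0%}")  # about 15%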

Newsweek : Like NBC, Newsweek has also used Princeton Survey Research Associates International (PSRA) to do their polls, and again, they are pure RDD telephone interviews, of roughly 1,000 registered voters nationally. They seem to weight by NCPP guidelines. Newsweek did not publish the response rates from respondents, but they are very good about including the demographic response in their releases, including party support, gender, non-whites, geography, and by age groups. Newsweek reports their Margin of Error to be +/- 4 points.

NPR-POS/GQR: National Public Radio (NPR) uses two sub-contractors for its polls, Greenberg Quinlan Rosner Research (GQR) and Public Opinion Strategies (POS). GQR also does work for the Democracy Corps firm, indicating a loose possible relation between the two polls. NPR presents its results for “likely voters”, defined as “registered voters, voted in the 2000 presidential election or the 2002 congressional elections (or were not eligible) and indicated they were almost certain or certain to vote in 2004”. Pure RDD was used for the pool selection. GQR interviews around 800 voters, and reports a Margin of Error at +/- 3.5 points. NPR does not release demographic responses, and did not respond to a request for further information.

Pew Research Center: Pew conducts its research using the same sub-contractor as NBC News and Newsweek, Princeton Survey Research Associates International. PSRA performs a pure RDD pool of respondents, interviewing a national sample of adults by telephone, for roughly 1,000 or 2,000 respondents, of which 78-80% are registered voters. As with other PSRA work, it appears NCPP guidelines are followed for weighting. Pew publishes an extensive report, reflecting not only national mood on key issues, but demographic response by party response, gender, age, and regional groups. Like the CBS News poll, I find the political weighting a little bit off, but I can’t complain about their work ethic or standards. Pew is very consistent, and is particularly useful for measuring shifts in demographic trends. Pew estimates their overall response Margin of Error is +/- 2.5 points, and +/- 3.5 points for registered voters.

Quinnipiac University: This school in Connecticut performs polls on “politics and public policy in New York, New Jersey, and Connecticut”, as well as the occasional national poll, such as the Presidential Election. They use pure RDD with their on-campus Institute to contact roughly 1,000 registered voters or more nationally by telephone, over a five or six day period. They release results by overall weighted response, party affiliation, by gender, and by black/white racial group responses. Quinnipiac does not detail their methodology for weighting demographics, and did not respond to a request for more information. Quinnipiac estimates their Margin of Error at +/- 3.1 points, less than that if the sample size is larger.

Rasmussen: Rasmussen Research performs more national polls than anyone else right now, with a poll taken every day. Unfortunately, there is no methodology released to the general public: not the size of the respondent poll, nor whether the sample is randomly developed; there is no weighting method cited, and there is no breakdown of respondents’ responses that might allow analysts to compare Rasmussen’s results with anyone else’s. Scott was kind enough to respond to a request for more information, but only to say that he is very busy right now and will answer later. By the time this article went to publishing, no information on his methodology had been provided, so I must regard this poll as unsupported in its claims. There is no evidence to confirm whether or not the response is weighted, and if so how, or whether any standardized methods are employed in this poll.

Survey USA: Survey USA is a unique polling agency. On the one hand, they do not perform national polls on the Presidential race, yet they poll in almost every state on the Presidential race. Survey USA has been around since 1992, and they love to punch out state polls; Survey USA and Zogby are in a horse race for who will put out the most state polls this year. I also included Survey USA in this list because they have strong opinions about polling methodology, and they printed an extensive article, far too long for me to copy here, so read it here.

Survey USA uses pure RDD for their telephone polls, usually between 500 and 1,000 self-identified “likely voters”. While Survey USA does not define “Likely Voter” for their methodology, they do take pains to emphasize that their polls “are conducted in the voice of a professional announcer. SurveyUSA is the first research company to appreciate that opinion research can be made more affordable, more consistent and in some ways more accurate by eliminating the single largest cost of conducting research, and a possible source of bias: the human interviewer.” This suggests they use an automated voice, which is certainly original. Survey USA opines that human error in pronunciation, diction and unintended inflection leads to flaws in the voter response. Survey USA uses weighting in line with NCPP guidelines. The reports are specific to states, but lack demographic breakdowns or votes by demographic group. Survey USA estimates their Margin of Error to be +/- 4.5 points.

It’s interesting to note three additional comments made by Survey USA. First, Survey USA makes a point of the need to verify results, disparaging “call-in” polls as unscientific, and strongly suggests Internet polling is about as useless as the “call-in” polls. Survey USA notes that “only a few large research companies employ their own telephone interviewers. Almost all small pollsters, and even some of the nation’s most prominent firms, outsource all of their interviewing to a third party.” This appears to imply that contractors are not as valid as independent firms, but from my review of the polls, a number of the contractors are equal or superior to established polls which are better known, Princeton Survey Research Associates International in particular.

Finally, Survey USA mentioned a practice I had heard about, but which is impossible to prove: “curbstoning”. This is, as Survey USA explains, where a pollster “may not interview a respondent at all, but just make up the answers to questions”. It’s rare, says Survey USA, but the problem is, unless you check your pollsters carefully, you really don’t know if they are putting down the real response or not, or whether they are putting down what they think their boss wants to hear. I agree that I think this practice is not very common, as I believe poll clients really do want honest numbers, so they can see where they stand, but it is important to recognize that this problem exists.

TIME: TIME magazine hires a contractor, Schulman, Ronca, & Bucuvalas (SRBI), to perform their polling, including the interviews. They average roughly 1,000 registered voters, and 850-900 self-described “likely voters”. Pure RDD is used for the contact. SRBI follows NCPP guidelines for demographics and weights party affiliation as follows: likely voters, 34% Republican, 35% Democrat, 22% Independent; registered voters, 31% Republican, 32% Democrat, 26% Independent, which shows rough parity. TIME does not publish results by demographic group response; instead it measures the mood on key questions and reflects trends by asking the same questions consistently. SRBI estimates their Margin of Error at +/- 3 points for registered voters, +/- 4 points for likely voters.

Wall Street Journal: The Wall Street Journal does not do its own polling, but co-sponsors polls with other groups. Earlier this year, the WSJ was partnered with NBC News, but is now partnered with Zogby. No additional information was available from the Journal.

Zogby: Back in 1996, pollster Zogby hit the bullseye in predicting the results of the Presidential election. In 2000, they were close again, though their aggregate error tied them with 5 other national polls. In 2002, Zogby appeared to show a lean in favor of the Democrats, and he was way off in his mid-term election predictions. This year, at the end of the spring, John Zogby actually came out and predicted John Kerry would win the election, which appeared to indicate his bias had reached the point of full-blown partisanship against the President, reflected in a growing number of opinions made out of personal preference, rather than on the evidence. Zogby’s refusal to show his work only magnifies the apparent distortion of his results.

Zogby runs two polls; a telephone poll and an Interactive Internet poll. Unlike almost every other poll, Zogby’s telephone poll is not RDD. Zogby describes his list as follows: “The majority of telephone lists for polls and surveys are produced in the IT department at Zogby International. Vendor-supplied lists are used for regions with complicated specifications, e.g., some Congressional Districts. Customer-supplied lists are used for special projects like customer satisfaction surveys and organization membership surveys.
Telephone lists generated in our IT department are called from the 2002 version of a nationally published set of phone CDs of listed households, ordered by telephone number. Residential (or business) addresses are selected and then coded by region, where applicable. An appropriate replicate is generated from this parent list, applying the replicate algorithm repeatedly with a very large parent list, e.g., all of the US.
Acquired lists are tested for duplicates, coded for region, tested for regional coverage, and ordered by telephone, as needed.” Zogby notes that “regional quotas are employed to ensure adequate coverage nationwide.” That is, Zogby takes pains to insure that his respondent poll is not random.

As for his weighting, Zogby states “Reported frequencies and crosstabs are weighted using the appropriate demographic profile to provide a sample that best represents the targeted population from which the sample is drawn from. The proportions comprising the demographic profile are compiled from historical exit poll data, census data, and from Zogby International survey data.”

In other words, Zogby uses his own polls to drive some of his demographic parameters, a practice not approved, much less recommended, by either the NCPP or the AAPOR.

All in all, Zogby’s habit of confusing his personal opinion with data-driven conclusions, his admitted practice of manipulating the respondent pool and his demographic weights, by standards not accepted anywhere else, along with mixing Internet polls with telephone interview results, forces me to reject his polls as unacceptable; they simply cannot be verified, and I strongly warn the reader that there is no established benchmark for the Zogby reports, even using previous Zogby polls, because he has changed his practices from his own history.

Except for some specific polls whose practices earned remarks for their excellence or a distinct lack of it, I have tried not to rank or grade the polls. I would also recommend the reader read through the polls himself, to determine which is most thorough in its work and results. But hopefully, this guide will help sort through who is chasing the money, and who is serious about their work.

-- DJ Drummond


19 posted on 12/01/2005 6:19:59 AM PST by conservativecorner
