Poll Methodology - A Comprehensive 2004 Guide
Posted on 09/24/2004 12:00:35 PM PDT by USA_Soccer
Finally, an excellent article on the various poll methodologies.
Halloween mask sales are all I've followed for the last 4 elections. It's worked every time (unfortunately, Slick's masks were also the top sellers the two times he won). Bush masks are up 15% over Kerry masks.
It seems to me that with random nationwide calls there is a far greater chance that you will reach someone who lives in a large city. I wonder how often people from Idaho are called?
There has been intense interest in the polls this year, and the recent disagreement about their range of positions has only heightened the discussion. Some people like to support a poll with results they like, without any sort of examination of why that poll is different from others. And some reject polls on a charge of outright bias or prejudice, which I can understand, given the partisan comments from supposedly objective people like John Zogby and Larry Sabato, but I must caution readers to be careful to consider the evidence before accepting or rejecting a poll.
Let's start with the obvious: more information is better, especially if it is relevant to how the numbers were derived. By relevant, I mean two things: the information should show valid evidence to support the poll's main conclusion, and the information should be consistent with past polls, so that trends and historical benchmarks may be seen. To that end, I discovered that in terms of methodology, we can separate the polls into three broad types: the polls which provide demographic internal data, the polls whose questions show the mood on the main issues, and those polls which refuse to provide internal data.
The best way to find out how the polls developed their methodologies is to look for that information. Some publish their methodologies at the bottom of their poll releases; others are so proud of their methodologies that they wrote up special articles to explain their process. Others did not have their methodologies handy, but responded when I asked them how they did their polling. And others, well, they were neither forthcoming nor cooperative, and that speaks for itself. This article allows you to get to know the polls all over again, this time starting from the inside. I figure this guide will help you figure out for yourself whose word is worth listening to, and who is nothing but hooey. I am listing the polls in alphabetical order.

All telephone polls referenced employ Random-Digit-Dialing (RDD); RDD is used to pre-select area codes and exchanges, then a randomizer selects the last 3 or 4 digits, depending on the poll. When I say pure RDD, I mean that the respondent pool is new; some polls appear to reuse an initial pool of respondents for future polling, and I will note this where it shows up. All references to Margin of Error reflect a standard 95% confidence level cited by the polls. When I reference the NCPP, I mean the National Council on Public Polls, which published guidelines for demographic weighting and internal responsibility that it expects its members to follow. Another national group for pollsters is the American Association for Public Opinion Research (AAPOR), but they appear to be much smaller, and have looser standards than the NCPP. It's worth noting, though, that neither the NCPP nor the AAPOR appears to have any deterrent in their policies; there is no specified penalty for not meeting their standards, nor any formal auditing authority. That, of course, is one reason I'm doing this review.
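Since every Margin of Error cited below reflects a 95% confidence level, it may help to see where those numbers come from. A minimal sketch in Python, using the standard formula for a proportion and the worst-case assumption p = 0.5 that pollsters conventionally make:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    # Half-width of the confidence interval for a proportion.
    # z = 1.96 corresponds to the 95% confidence level the polls cite;
    # p = 0.5 is the worst case (maximum variance), the convention
    # when a poll quotes a single overall Margin of Error.
    return z * math.sqrt(p * (1 - p) / n)

# A sample of ~1,000 gives roughly +/- 3.1 points, and ~1,500
# gives roughly +/- 2.5, matching the figures most polls report.
print(round(100 * margin_of_error(1000), 1))  # 3.1
print(round(100 * margin_of_error(1500), 1))  # 2.5
```

Note how slowly the error shrinks: quadrupling the sample only halves the Margin of Error, which is why most polls settle for roughly 1,000 respondents.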
ABC News/Washington Post: This agency uses a call center for its polling. The subcontractor at present is TNS of Horsham, Pa. The poll is performed by telephone, calling roughly 1,200 randomly selected adults nationwide, from which self-identified registered voters are polled for the report's information. The respondent pool is pure RDD for each poll. ABC/WP says their Margin of Error is +/- 3 points. The ABC/WP poll cites results by gender, race, age, education, and sometimes also by income groups. Regarding the weighting of their poll data, ABC says, "Final data are weighted using demographic information from the Census to adjust for sampling and non-sampling deviations from population values. Respondents customarily are classified into one of 48 cells based on age, race, gender and education. Weights are assigned so the proportion in each of these 48 cells matches the actual population proportion according to the Census Bureau's most recent Current Population Survey." The weighting appears to be in line with NCPP guidelines.
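The 48-cell scheme ABC describes is standard post-stratification: each respondent's weight is the Census share of their cell divided by that cell's share of the sample. A sketch with two hypothetical cells (illustrative numbers, not ABC's actual figures):

```python
def cell_weights(sample_counts, census_props):
    # Post-stratification: weight = (population share of the cell)
    # divided by (sample share of the cell), so the weighted sample
    # matches the Census proportions cell by cell.
    total = sum(sample_counts.values())
    return {cell: census_props[cell] / (sample_counts[cell] / total)
            for cell in sample_counts}

# Hypothetical: the sample came back 60% cell A / 40% cell B,
# but the Census says the population splits 50/50.
w = cell_weights({"A": 600, "B": 400}, {"A": 0.5, "B": 0.5})
print(round(w["A"], 3), w["B"])  # 0.833 1.25
```

Under-represented cells get weights above 1 and over-represented cells get weights below 1; ABC simply does this across 48 age/race/gender/education cells instead of two.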
American Research Group: ARG's methodology for national polling was not published, but from their primary polling in the spring, ARG stated: "The results for this tracking survey are based on completed telephone interviews among a statewide random sample of likely primary voters in [the state]. Telephone households are selected by adding random digits to telephone exchanges selected in a random probability-proportionate-to-size sample of the state. Respondents in the telephone households are randomly selected and then screened for voter registration (including intent to register on or before primary day) and likelihood of voting in the primary." On September 22, ARG released a nationwide compilation of state polls, which revealed they use a 53-47 weighting of women to men, and a party weighting of 41.4% Democrats, 35.5% Republicans, and 23.1% Independents. These do not conform to 2000 or 2002 exit polling, nor the 2000 Census, and are not in line with NCPP guidelines. ARG's respondent pool may sometimes be pure RDD, but at other times appears to be a pool reserved from previous polls, in order to track possible opinion shifts in the same pool. ARG does not explain whether this is the case, and did not respond to a request for clarification.
Associated Press/Ipsos: The Associated Press Poll is conducted by Ipsos-Public Affairs. The poll is a telephone poll of randomly selected numbers, with a sample of roughly 1,500 adults nationwide, producing between 1,200 and 1,300 registered voters, whose responses produce the poll's information, along with a smaller number of self-described likely voters, defined as voters who voted in 2000 and rate themselves 8-10 on a 1-10 likelihood-of-voting scale, or who did not vote in 2000 but rate themselves a 10 this year. The respondent pool is pure RDD. Ipsos weights its poll, but does not detail the breakdown in its press release or questionnaire, though some demographic information was released in their latest poll. Instead, Ipsos concentrates on the trend of questions measuring the degree of support on key issues, such as Overall Leadership, Foreign Policy, the Economy, Domestic Issues, and Terrorism. Ipsos' reported Margin of Error is +/- 2.5 points for adults, +/- 2.7 points for registered voters.
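The likely-voter screen Ipsos describes is a simple rule, and can be sketched as a filter (this is my reading of their description, not Ipsos code):

```python
def is_likely_voter(voted_2000, likelihood):
    # Ipsos' screen as described above: a respondent counts as a
    # likely voter if they voted in 2000 and rate themselves 8-10
    # on the 1-10 likelihood-of-voting scale, or did not vote in
    # 2000 but rate themselves a 10 this year.
    if voted_2000:
        return likelihood >= 8
    return likelihood == 10

print(is_likely_voter(True, 8), is_likely_voter(False, 9))  # True False
```

Note how the screen is stricter on non-voters from 2000: a past voter passes at 8, but a non-voter needs a 10.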
Ayres McHenry: Ayres, McHenry, & Associates belongs to the American Association for Public Opinion Research, and the American Association of Political Consultants, where Ayres serves as a member of the Board of Directors.
That's all they have. Nothing about weighting or breakdown of samples, which is contrary to the AAPOR's written Code of Professional Ethics and Practices. Ayres McHenry did not respond to a request for more information. As a Republican-sponsored firm that does not provide any supporting evidence for its statements, this agency should not generally be considered a reliable indicator of voters' true opinion.
Battleground Poll: The Battleground Poll uses two firms for its interviews and analysis: the Tarrance Group and Lake, Snell, & Perry. The Battleground Polls started in 1991, so they have some history to track. The Battleground Poll did not publish its methodology, but the Tarrance Group was kind enough to answer a request for more information (hat tip to Brian Nienaber); Lake, Snell, & Perry did not respond to a request for information. Overall, the Battleground Poll uses a stratified sample design, and pure RDD for respondent pooling. Battleground explains their weighting thus: "quotas are set by state and by gender based on voter registration statistics or voting age population, where applicable. Prior election turnout statistics are also used to set these quotas. For the 2004 Battlegrounds, we have been applying a weight to the demographics of race and party identification. Race is weighted to White=80%, African Americans=10%, Hispanics=6%, and Other races=4%. Party identification is weighted to even with Republican=42.3%, Democrat=42.3%, and Independent=15.4%." Note that the demographics are consistent with the 2000 Census, and the party weighting presumes parity. Battleground releases the demographic breakdown of their respondents, but does not publish polling results by demographic group. Like the Associated Press, most questions reflect a trend of national mood on the major issues. The sample comprises 1,000 registered voters who self-describe as likely. Tarrance estimates their Margin of Error to be +/- 3.1 points.
CBS News, and CBS News/NY Times: Telephone interviews with adults in the continental United States. Phone calls are randomly generated within system-selected area codes and exchanges. CBS goes to some length to brag about their methodology, and you know what? They should. While CBS and the NY Times tend to over-weight the poll in favor of Democrats, their demographics not only follow NCPP guidelines by matching the 2000 Census, they also publish their demographics regularly, and have for the last five years. If you don't like their numbers, at least you can take them apart to see where they came from, and this with no subscription fee or doubletalk to hide the trends. Obviously, the 60 Minutes guys and Dan Rather have nothing to do with the polling at CBS. The polls are consistent and complete, and frankly, very impressive in their detail and history. CBS/NYT generally calls about 1,000 adults in each survey, with around 78-80% registered voters. The respondent pool is pure RDD. Their cited Margin of Error is +/- 3 points.
CNN/USA Today/Gallup: This poll uses random telephone interviews, with around 1,000 adults on average, around 76-80% of them registered voters. Announced Margin of Error is +/- 4 points. Demographic details are available, but generally only to Gallup subscribers. The weighting matches NCPP guidelines. The respondent pool is pure RDD.
Democracy Corps: This Democrat-sponsored polling agency (James Carville is one of the owners, which should tell you a lot) uses Greenberg, Quinlan, Rosner (GQR) for its interview sampling. They do not respond to queries, and do not explain their methodology. Note that NPR uses the same sub-contractor for its polling. This agency should be recognized as partisan and biased by design.
Fox News/Opinion Dynamics: Opinion Dynamics Corporation conducts a national telephone poll of 1,000 self-described likely voters from random contacts. Fox includes internal details by gender and party affiliation, but not race. Their website says: "Generally, Fox News poll results are not weighted. The national probability sample, if conducted properly, should accurately reflect national attitudes. However, particularly because the survey is often conducted over only two nights (limiting the opportunity for callbacks), some demographic deviation is possible. Opinion Dynamics Corporation has a constantly updated database of demographic information about the national samples and, if one should deviate significantly from past averages, weighting is used to bring the sample into conformity with other samples." In English, that suggests that Fox will weight some polls but not others, which is a strike against consistency. There is no information to determine whether the respondent pool is pure RDD or pre-selected. The same website admits that Fox weights their polls by gender, 47% men and 53% women, even though this is not in line with NCPP guidelines or Census data, nor consistent with exit polls from past elections. Neither Fox News nor Opinion Dynamics responded to a request for clarification.
Gallup: The gold standard of opinion polling. Gallup presents demographic and trend data for every poll they have anything to do with. Whether on their own or in combination with other groups (the CNN/USA Today/Gallup poll, for example), Gallup insists on consistent procedures to ensure consistency. Their respondent pool is pure RDD for the Presidential trial heats. Gallup weights their polls in line with NCPP guidelines, and releases internal data on race, gender, party affiliation, age, region, education, economic strata, union/non-union, veteran/non-veteran, religious preference, and sexual orientation. Gallup polls are random telephone interviews, with around 1,000 adults on average, around 76-80% of them registered voters. Announced Margin of Error is +/- 4 points. The downside to the demographic details is that they are generally only available to Gallup subscribers. With a 69-year track record, Gallup is able to show an impressive record for their predictions and tracking.
Harris: The Harris Poll is one of the oldest polls in the nation, after Gallup. For some reason, though, Harris is not nearly as successful as their older sibling, and I think I know why. They like to ask questions, but they don't answer them. The Harris Poll is a random telephone poll, as most of the polls are, interviewing roughly 1,000 adults nationwide in each poll, and producing around 80% registered voters from that pool. The respondent pool for their telephone interviews is pure RDD. Harris also has an Interactive Poll, but there is no established benchmark for the accuracy of the Interactive Poll, nor do they explain their methodology for it; I suspect it is similar to their telephone poll, since they produce similar results, but I cannot confirm this possibility. Harris weights their responses by the NCPP guidelines, for age, gender, race, education, number of adults, number of voice/telephone lines in the household, region, and size of place, in order to align them with their actual proportions in the population. Harris cites a +/- 3 point Margin of Error. Unfortunately, when it comes to releasing their information, well, they don't. I've been part of the Interactive polling as a respondent, and even then, they are parsimonious with hard data. From the lack of response I've had from them, I get the strong impression they are all about chasing the corporate patrons, and only put out the occasional public poll to keep their name in the press. OK, that's their right, but other polls can chase sponsors without looking like the Information Age version of Ebenezer Scrooge (pre-Ghost visit). My advice? Ignore these guys, unless they start putting some hard data behind the headlines in their releases.
Investors Business Daily/Christian Science Monitor: The Christian Science Monitor is a long-established, well-respected name, but they have no experience in polling. Investors Business Daily is a publication I'd never heard of until they showed up with their releases. They began polling in February 2001, using something they call Indices for various factors they consider important. The Indices are developed using random nationwide telephone interviews with approximately 900 adults each month. The respondent pool appears to be pure RDD, but there is no confirmation. They seem very impressed with themselves. I'm not impressed. Since they don't release much hard data at all, and pretty much diddly to support their claims, and their indices don't seem to follow any established method for determining public opinion, my opinion of IBD is rather like the old Monty Python skit about an especially bad brand of wine: this is not a poll for enjoying, it's a poll for laying down and avoiding.
Investors Business Daily/TIPP : See Investors Business Daily/Christian Science Monitor, above.
LA Times: The Los Angeles Times wants to be a big-time newspaper. I write it that way because, while they want the glory, they don't seem to feel as though they should have to earn it. The LA Times uses telephone interviews nationwide, of at least 1,500 adults, using pure RDD sampling. They produce a subset of registered voters, at around 77-80% of the adult number. The LA Times says the samples are "weighted slightly to conform with their respective census figures for gender, race, age, and education," which may or may not be in alignment with NCPP guidelines. The Times' announced Margin of Error is +/- 3 points. The Times releases details by party alignment and gender, and appears to over-weight Democrats.
Marist College Institute for Public Opinion: Marist is a college up in New York that produces polls on the Presidency. Marist does not release a lot of details, however, including its methodology. Their website notes that MIPO adheres to the current Code of the National Council on Public Polls and the AAPOR Code of Professional Ethics and Practices, which at least suggests they use the 2000 Census for their weights, although this does not speak to party alignment or sampling methodology. They haven't put anything out for a long time, so it may not matter, but if they pop up again, the fact that they don't back up their statements with supporting data should be a warning sign.
NBC News: NBC News uses sub-contractors for its polling. Princeton Survey Research Associates (PSRA), whom NBC contracts for some of their polls, was kind enough to provide specific details by email at my request (hat tip to Evans Witt). For the July poll taken when Kerry picked Edwards as his running mate, NBC used Braun Research, Inc. for the interviews, using a sample designed by Survey Sampling International, LLC. The sample was relatively small (504 registered voters), but used pure RDD. NBC says that statistical results are weighted to correct known demographic discrepancies, with parameters that came from a special analysis of the Census Bureau's 2003 Annual Social and Economic Supplement (ASEC), which included all households in the continental United States that had a telephone. NBC estimates their Margin of Error at +/- 5 points, due to the smaller sample size compared to normal poll pools. NBC does not release demographic breakdowns of votes in their polls.
NBC also examined their response rate, an often overlooked factor in poll analysis. NBC states that the response rate estimates the fraction of all eligible respondents in the sample who were ultimately interviewed. At PSRAI it is calculated by taking the product of three component rates:
Contact rate (the proportion of working numbers where a request for an interview was made): 47 percent
Cooperation rate (the proportion of contacted numbers where consent for an interview was at least initially obtained, versus those that refused): 32 percent
Completion rate (the proportion of initially cooperating and eligible interviews that were completed): 99 percent
Thus the response rate for this survey was 15 percent.
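The arithmetic behind that 15 percent is just the product of the three component rates NBC lists:

```python
# NBC/PSRA's response rate is the product of the three component
# rates reported above.
contact_rate = 0.47      # working numbers where an interview was requested
cooperation_rate = 0.32  # contacts that at least initially consented
completion_rate = 0.99   # cooperating, eligible interviews completed

response_rate = contact_rate * cooperation_rate * completion_rate
print(f"{response_rate:.0%}")  # 15%
```

A response rate this low is not unusual for telephone polls, but it is exactly why the weighting schemes discussed throughout this guide matter: the 15 percent who answer may not look like the 85 percent who do not.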
Newsweek: Like NBC, Newsweek has also used Princeton Survey Research Associates International (PSRA) to do their polls, and again, these are pure RDD telephone interviews of roughly 1,000 registered voters nationally. They seem to weight by NCPP guidelines. Newsweek did not publish the response rates from respondents, but they are very good about including the demographic response in their releases, including party support, gender, non-whites, geography, and age groups. Newsweek reports their Margin of Error to be +/- 4 points.
NPR-POS/GQR: NPR uses two sub-contractors for its polls, Greenberg Quinlan Rosner Research (GQR) and Public Opinion Strategies (POS). GQR also does work for the Democracy Corps firm, indicating a loose possible relation between the two polls. NPR presents its results for likely voters, defined as registered voters who voted in the 2000 presidential election or the 2002 congressional elections (or were not eligible) and indicated they were almost certain or certain to vote in 2004. Pure RDD was used for the pool selection. GQR interviews around 800 voters, and reports a Margin of Error of +/- 3.5 points. NPR does not release demographic responses, and did not respond to a request for further information.
Pew Research Center: Pew conducts its research using the same sub-contractor as NBC News and Newsweek, Princeton Survey Research Associates International. PSRA draws a pure RDD pool of respondents, interviewing a national sample of adults by telephone, roughly 1,000 to 2,000 respondents, of whom 78-80% are registered voters. As with other PSRA work, it appears NCPP guidelines are followed for weighting. Pew publishes an extensive report, reflecting not only national mood on key issues, but demographic response by party, gender, age, and regional groups. Like the CBS News poll, I find the political weighting a little bit off, but I can't complain about their work ethic or standards. Pew is very consistent, and is particularly useful for measuring shifts in demographic trends. Pew estimates their overall Margin of Error at +/- 2.5 points, and +/- 3.5 points for registered voters.
Quinnipiac University: This school in Connecticut performs polls on politics and public policy in New York, New Jersey, and Connecticut, as well as the occasional national poll, such as on the Presidential election. They use pure RDD with their on-campus Institute to contact roughly 1,000 registered voters or more nationally by telephone, over a five- or six-day period. They release results by overall weighted response, party affiliation, gender, and black/white racial group responses. Quinnipiac does not detail their methodology for weighting demographics, and did not respond to a request for more information. Quinnipiac estimates their Margin of Error at +/- 3.1 points, less if the sample size is larger.
Rasmussen: Rasmussen Research performs more national polls than anyone else right now, with a poll taken every day. Unfortunately, no methodology is released to the general public: not the size of the respondent pool, nor whether the sample is randomly developed; no weighting method is cited, and there is no breakdown of respondents' answers that might allow analysts to compare Rasmussen's results with anyone else's. Scott was kind enough to respond to a request for more information, but only to say that he is very busy right now and will answer later. By the time this article went to publishing, no information on his methodology had been provided, so I must regard this poll as unsupported in its claims. There is no evidence to confirm whether the response is weighted, and if so how, or whether any standardized methods are employed in this poll.
Survey USA: Survey USA is a unique polling agency. On the one hand, they do not perform national polls on the Presidential race, yet they poll in almost every state on the Presidential race. Survey USA has been around since 1992, and they love to punch out state polls; Survey USA and Zogby are in a horse race for who will put out the most state polls this year. I also included Survey USA in this list because they have strong opinions about polling methodology, and they printed an extensive article, far too long for me to copy here, so read it here.
Survey USA uses pure RDD for their telephone polls, usually between 500 and 1,000 self-identified likely voters. While Survey USA does not define "likely voter" in their methodology, they do take pains to emphasize that their polls are conducted in the voice of a professional announcer: "SurveyUSA is the first research company to appreciate that opinion research can be made more affordable, more consistent and in some ways more accurate by eliminating the single largest cost of conducting research, and a possible source of bias: the human interviewer." This suggests they use an automated voice, which is certainly original. Survey USA opines that human error in pronunciation, diction, and unintended inflection leads to flaws in the voter response. Survey USA uses weighting in line with NCPP guidelines. The reports are specific to states, but lack demographic breakdowns or votes by demographic group. Survey USA estimates their Margin of Error to be +/- 4.5 points.
It's interesting to note three additional comments made by Survey USA. First, Survey USA makes a point of the need to verify results, disparaging call-in polls as unscientific, and strongly suggests Internet polling is about as useless as the call-in polls. Second, Survey USA notes that only a few large research companies employ their own telephone interviewers: "Almost all small pollsters, and even some of the nation's most prominent firms, outsource all of their interviewing to a third party." This appears to imply that contractors are not as valid as independent firms, but from my review of the polls, a number of the contractors are equal or superior to established polls which are better known, Princeton Survey Research Associates International in particular.
Finally, Survey USA mentioned a practice I had heard about, but which is impossible to prove: curbstoning. This is, as Survey USA explains, where a pollster may not interview a respondent at all, but just makes up the answers to the questions. It's rare, says Survey USA, but the problem is, unless you check your pollsters carefully, you really don't know whether they are putting down the real response, or putting down what they think their boss wants to hear. I agree that this practice is probably not very common, as I believe poll clients really do want honest numbers, so they can see where they stand, but it is important to recognize that this problem exists.
TIME: TIME magazine hires a contractor, Schulman, Ronca, & Bucuvalas (SRBI), to perform their polling, including the interviews. They average roughly 1,000 registered voters, and 850-900 self-described likely voters. Pure RDD is used for the contact. SRBI follows NCPP guidelines for demographics and weights party affiliation as follows: likely voters, 34% Republican, 35% Democrat, 22% Independent; registered voters, 31% Republican, 32% Democrat, 26% Independent, which shows a rough parity. TIME does not publish results by demographic group response, but instead measures the mood on key questions, and reflects trends by asking the same consistent questions. SRBI estimates their Margin of Error at +/- 3 points for registered voters, +/- 4 points for likely voters.
Wall Street Journal: The Wall Street Journal does not do its own polling, but co-sponsors polls with other groups. Earlier this year, the WSJ was partnered with NBC News, but is now partnered with Zogby. No additional information was available from the Journal.
Zogby: Back in 1996, pollster Zogby hit the bullseye in predicting the results of the Presidential election. In 2000, they were close again, though their aggregate error tied them with 5 other national polls. In 2002, Zogby appeared to show a lean in favor of the Democrats, and he was way off in his mid-term election predictions. This year, at the end of the spring, John Zogby actually came out and predicted John Kerry would win the election, which appeared to indicate his bias had reached the point of full-blown partisanship against the President, reflected in a growing number of opinions made out of personal preference rather than on the evidence. Zogby's refusal to show his work only magnifies the apparent distortion of his results.
Zogby runs two polls: a telephone poll and an Interactive Internet poll. Unlike almost every other poll, Zogby's telephone poll is not RDD. Zogby describes his list as follows: "The majority of telephone lists for polls and surveys are produced in the IT department at Zogby International. Vendor-supplied lists are used for regions with complicated specifications, e.g., some Congressional Districts. Customer-supplied lists are used for special projects like customer satisfaction surveys and organization membership surveys.
Telephone lists generated in our IT department are culled from the 2002 version of a nationally published set of phone CDs of listed households, ordered by telephone number. Residential (or business) addresses are selected and then coded by region, where applicable. An appropriate replicate is generated from this parent list, applying the replicate algorithm repeatedly with a very large parent list, e.g., all of the US.
Acquired lists are tested for duplicates, coded for region, tested for regional coverage, and ordered by telephone number, as needed." Zogby notes that regional quotas are employed to ensure adequate coverage nationwide. That is, Zogby takes pains to ensure that his respondent pool is not random.
As for his weighting, Zogby states: "Reported frequencies and crosstabs are weighted using the appropriate demographic profile to provide a sample that best represents the targeted population from which the sample is drawn. The proportions comprising the demographic profile are compiled from historical exit poll data, census data, and from Zogby International survey data."
In other words, Zogby uses his own polls to drive some of his demographic parameters, a practice not approved, much less recommended, by either the NCPP or the AAPOR.
All in all, Zogby's habit of confusing his personal opinion with data-driven conclusions, his admitted practice of manipulating the respondent pool and his demographic weights by standards not accepted anywhere else, along with mixing Internet polls with telephone interview results, forces me to reject his polls as unacceptable; they simply cannot be verified, and I strongly warn the reader that there is no established benchmark for the Zogby reports, even using previous Zogby polls, because he has changed his practices from his own history.
Except for some specific polls whose practices earned remarks for their excellence, or for a distinct lack of it, I have tried not to rank or grade the polls. I would also recommend that readers read through the polls themselves, to determine which is most thorough in its work and results. But hopefully, this guide will help sort out who is chasing the money, and who is serious about their work.
I had wondered for some time what the difference was between the polls.
Fox News/Opinion Dynamics: I think this may explain why Fox News/Opinion Dynamics polls always seem to be a bit out of sync with other polls.
Thanks for the FULL posting, friend! I hadn't seen some of the information before....GOOD STUFF! :-)
Thanks for the entire article.
One cannot be an intelligent consumer of polling without understanding from whence it came.