
Gallup Methodology Explained
The Gallup Organization ^ | 11/11/2002 | Frank Newport

Posted on 12/04/2002 7:54:51 AM PST by Doctor Stochastic

Edited on 12/04/2002 8:14:45 AM PST by Admin Moderator.

PRINCETON, NJ -- There's little question that some Americans are skeptical of polls and the process by which we use small samples to represent the views of millions of people. We pick up that skepticism when we poll people about polls (something we do from time to time!), and I certainly hear it when I am on a radio talk show or make a speech and get bombarded with questions about the believability of our polls, which are based on what seems to the questioners to be ridiculously small numbers of people.

This skepticism about polls is nothing new. Dr. George Gallup, who founded The Gallup Organization, ran into disbelief about the process of polling almost from the first day he began to release public polls in the 1930s. He spent a good deal of time attempting to explain his processes in articles, books, and speeches -- as have most other survey researchers and pollsters in the years since.

One problem in these attempts has been the lack of an objective way of proving just how accurate polls are. One could attempt to prove statistically that a poll result showing that Franklin Roosevelt had a 65% job approval rating was an accurate representation of the entire population's opinions, but that kind of argument has never carried much weight with poll skeptics.

But there was one objective way to measure polls' accuracy that Dr. Gallup soon discovered would work well as proof that small samples can estimate what millions of people will do: elections. Pre-election polls in which a small sample of people indicated how they were going to vote could be compared to actual Election Day results based on the behavior of millions of voters. Elections, it turns out, provided a perfect opportunity to test just how well a small sample could estimate the behavior of the population from which it is drawn.

There are other reasons to conduct pre-election polls, including the simple fact that they are interesting in and of themselves. But most pollsters began to recognize that one function of doing pre-election polls is to validate to a sometimes-skeptical public that the entire process of random sampling can work.

With that in mind, let's look at the latest example of just how accurate well-done pre-election polling can be: the Nov. 5, 2002 midterm congressional elections. There is no nationwide presidential voting in midterm elections, but every American who went to the polls voted for Congress in his or her district. We can thus ask Americans for whom they are going to vote in their district, and then compare the results to the overall aggregated vote across all 435 districts the next day. Comparing the results of polls conducted just before the election to the actual voting results provides a pretty good test of the ability of surveys to use small samples to predict the behavior of millions.

That's what we did in early November. Here's how the process worked.

On the nights of Oct. 31 through Nov. 3, The Gallup Poll talked with exactly 1,221 Americans on the telephone and asked them to indicate for whom they planned to vote in their local congressional race in the Nov. 5 election. But these weren't just any 1,221 Americans. They were selected using a random sample of all adults living in the country, using a process called Random Digit Dialing, or RDD.

The way this works is fascinating and complex. The computer that selects the sample is essentially loaded with every residential phone exchange in the nation. (A phone exchange is a six-digit number: the three-digit area code plus the next three digits of the phone number. Each exchange can have 10,000 numbers associated with it, running from 0000 to 9999.) The computer has estimates of how many of the 10,000 possible phone numbers in each working exchange in the country are residential. It randomly samples exchanges from the universe of all phone exchanges, giving each exchange a probability of being selected proportional to the estimated number of working residential phone numbers among its 10,000 possible numbers.

The computer program then takes each phone exchange selected and "creates" an actual 10-digit phone number by adding random digits to the exchanges. There is no such thing as an unlisted phone exchange, so the randomly created phone numbers include unlisted as well as listed numbers. The result of all this is a sample of residential phone numbers drawn in such a way that every possible residential phone number in the country, listed or unlisted, has an equal and known probability of being selected into the sample.
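For readers curious about the mechanics, here is a minimal sketch in Python of how an RDD draw of this kind might work. The exchange frame, the residential counts, and the function name are invented for illustration; Gallup's actual frame and software are its own.

    import random

    # Hypothetical frame: (area code + exchange, estimated count of working
    # residential numbers among its 10,000 possible four-digit suffixes).
    # These entries are invented for illustration.
    EXCHANGE_FRAME = [
        ("609452", 4200),
        ("212555", 1800),
        ("415861", 6100),
    ]

    def draw_rdd_sample(frame, n, rng=random):
        """Draw n ten-digit numbers: choose each exchange with probability
        proportional to its estimated residential count, then append four
        random digits so listed and unlisted numbers are reached alike."""
        exchanges = [exchange for exchange, _ in frame]
        weights = [count for _, count in frame]
        numbers = []
        for _ in range(n):
            exchange = rng.choices(exchanges, weights=weights, k=1)[0]
            suffix = f"{rng.randrange(10000):04d}"  # 0000 through 9999
            numbers.append(exchange + suffix)
        return numbers

    print(draw_rdd_sample(EXCHANGE_FRAME, 5))

The key design point, as described above, is that the last four digits are generated at random rather than read from a directory, which is why unlisted numbers have the same chance of selection as listed ones.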

The 1,221 Americans interviewed just before the Nov. 5 election were thus reached because their phone numbers had been randomly selected using these RDD procedures, and they in turn represented a true random sample of all adults in the nation. (To make sure of this, the computer cross-referenced the major demographic traits of this group against census estimates for the nation. The sample's share from each major geographic region, along with its age, gender, and race distributions, was adjusted slightly to bring it into line with known population parameters.)
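Here, again, is a minimal sketch of that adjustment step, assuming a single illustrative variable with invented census targets; Gallup's actual procedure adjusts on region, age, gender, and race.

    from collections import Counter

    # Assumed census target shares for one illustrative variable.
    CENSUS_TARGETS = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}

    def post_stratify(respondents, key="age_group", targets=CENSUS_TARGETS):
        """Return {respondent id: weight} so that the weighted group shares
        in the sample match the census targets."""
        counts = Counter(r[key] for r in respondents)
        n = len(respondents)
        return {r["id"]: targets[r[key]] / (counts[r[key]] / n)
                for r in respondents}

    sample = [{"id": 1, "age_group": "18-34"},
              {"id": 2, "age_group": "35-54"},
              {"id": 3, "age_group": "35-54"},
              {"id": 4, "age_group": "55+"}]
    print(post_stratify(sample))  # {1: 1.2, 2: 0.8, 3: 0.8, 4: 1.2}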

The process continued. All of the 1,221 Americans were told they were part of a Gallup interview, were reminded about the coming election, and were asked a series of questions about their voting status and the coming election. Exactly 1,061 people in the sample professed to be registered voters and said they were indeed going to vote in the Nov. 5 election. (The remaining 160 people were asked a series of demographic questions to help us weight the overall sample in one of the final stages of the process.)

Gallup, however, knows that not all of the 1,061 people who said they were registered voters and planned on voting would actually turn out on Election Day. The respondents didn't realize it, but a set of questions they received was designed to indirectly estimate their real chances of voting, no matter what they said they were going to do. Gallup analysts have developed and fine-tuned these questions over a 65-year period. Respondents were asked whether they happened to know where their neighborhood polling place was, about their voting habits in the past, about their interest in the election, and so forth. Nothing was done with this information during the interview; each of the 1,061 Americans who said they were going to vote was carried through to the end. It was only later that -- unbeknownst to them -- a group of these self-proclaimed voters would be discarded as likely nonvoters, their opinions given no weight in the final sample.

Interviewing ended at about 4:30 p.m. Eastern time on Sunday, Nov. 3. A computer program at Gallup's World Headquarters then began its work. The responses of the 1,061 people who claimed they were going to vote were carefully analyzed. Based on long experience, Gallup's analysts and computer programmers gave each of these 1,061 respondents a score. Those with the highest scores had the highest probability of voting: they had answered almost every question in a way that has historically been strongly correlated with actually turning out. They had voted in the past, knew where to vote, and had a high degree of interest in the election. Those with low scores -- despite their professed voting intentions -- didn't know where they voted, hadn't voted in the past, and had a lower level of interest in the election. The computer judged them to have a low probability of actually voting.
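As a rough illustration, a scoring step of this kind might look like the sketch below. The seven items echo published descriptions of Gallup's likely-voter approach, but the exact questions and scoring used in 2002 are not given in this article, so treat every name here as an assumption.

    # Assumed items echoing Gallup's likely-voter questions; the actual
    # 2002 item list and scoring are not published in this article.
    LIKELY_VOTER_ITEMS = [
        "knows_polling_place",
        "voted_in_last_election",
        "always_or_nearly_always_votes",
        "high_interest_in_election",
        "has_thought_about_election",
        "plans_to_vote",
        "voted_in_precinct_before",
    ]

    def likely_voter_score(respondent):
        """Count of 'voter-like' answers, from 0 (unlikely) to 7 (likely)."""
        return sum(1 for item in LIKELY_VOTER_ITEMS if respondent.get(item))

    print(likely_voter_score({"plans_to_vote": True,
                              "knows_polling_place": True}))  # 2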

Estimates of turnout and experience dictated that a smaller group of the voters interviewed would do the best job of representing the voters across the country who actually would turn out. The computer program was thus looking for people at the high end of the "probability of voting" scale. This group would constitute a sample of those who had very good chances of actually voting on Election Day. The computer settled on a final group of 715 Americans who were in the highest "probability of voting" group -- the so-called "likely voters." The answers of these people -- those with the very highest probability of voting -- were given full weight in the final sample. The answers of a second group were diluted somewhat because their probability of voting, although high, was slightly lower than that of the top group. And, as noted, the responses of a third group of people -- those who had the lowest probability of voting -- weren't considered at all.
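And a sketch of that cutoff-and-dilution step, under stated assumptions: the article says only that a top group (715 of the 1,061, about 67%) got full weight, a second group was diluted, and a third was dropped, so the size of the middle band and the dilution factor below are invented.

    def likely_voter_weights(scored, full_share=0.67, diluted_share=0.10,
                             dilution=0.5):
        """scored: list of (respondent_id, score) pairs. Rank by score;
        the top full_share get weight 1.0, the next diluted_share get a
        reduced weight, and the rest get 0.0."""
        ranked = sorted(scored, key=lambda pair: pair[1], reverse=True)
        n = len(ranked)
        n_full = round(n * full_share)        # e.g. 715 of 1,061 is ~67%
        n_diluted = round(n * diluted_share)  # assumed size of middle band
        weights = {}
        for rank, (rid, _) in enumerate(ranked):
            if rank < n_full:
                weights[rid] = 1.0
            elif rank < n_full + n_diluted:
                weights[rid] = dilution
            else:
                weights[rid] = 0.0
        return weights

    demo = list(enumerate([7, 7, 6, 5, 3, 1]))  # (id, score) pairs
    print(likely_voter_weights(demo))
    # {0: 1.0, 1: 1.0, 2: 1.0, 3: 1.0, 4: 0.5, 5: 0.0}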

It is important to keep in mind that this final sample of likely voters maintained the basic assumption of randomness that had guided all of the sampling process. The likely voters were in essence a randomly selected subset of a very large group of residents throughout the country: those most likely to vote last Tuesday. They thus became a very valuable group of people. Based on the mathematical properties of randomness and probability, it was almost certain that their responses would be representative of the responses of all voters across the country -- if it had been possible to reach all voters.

The final step was to look closely at what the people in this carefully selected random sample of 715 voters said they were going to do on Election Day. Included in the survey was the "money" question: "If the elections for Congress were being held today, which party's candidate would you vote for in your congressional district -- 1) the Democratic Party's candidate or 2) the Republican Party's candidate?" If voters were undecided, we asked them: "As of today, do you lean more toward -- 1) the Democratic Party's candidate or 2) the Republican Party's candidate?"

Here's what we found: 51% of the 715 in the "likely voter" sample were going to vote for the Republican candidate, 45% for the Democratic candidate, and 4% were undecided.

Of course, any or all of these voters could change their minds between Sunday night and the Tuesday vote. Experience indicated, however, that this usually didn't happen. By this point -- close to an election -- what people tell a pollster is very likely to be what they do when they enter the voting booth a day or two later. Thus, the 51% to 45% figures were highly likely to be representative of the vote of the large group of likely voters who would turn out on Election Day.

All of these computations took place within about an hour on Sunday afternoon. By 6:00 that night, the data had been transmitted to CNN headquarters in Atlanta, and to USA Today in Virginia, and by midnight they had been placed on Gallup's Web site.

By Monday morning, the estimate was the front-page headline in USA Today. "Late Shift Appears to Favor GOP," it proclaimed.

Then came Tuesday. By the time the polls closed, more than 75 million Americans had cast their votes.

By the next morning, the votes in the 435 congressional districts across the country had been tallied. The results: about 51.7% of the vote went to Republican candidates; 45% to Democratic candidates.

A remarkable scientific feat had occurred. The extraordinary power of the theory of random probability sampling had allowed a random sample of 715 voters to reflect almost precisely the actual behavior of the more than 75 million voters who voted on Election Day:

The Congressional Voter: Nov. 5, 2002

                                        Republican    Democratic
    Final Gallup likely-voter poll         51%           45%
    Actual nationwide House vote           51.7%         45%

(The remaining 4% in the final poll were undecided.)
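A back-of-the-envelope check, not from the original article, shows why agreement this close is plausible rather than miraculous. For a simple random sample of 715 (ignoring any design effects from weighting), the standard 95% margin of error is about plus or minus 3.7 percentage points, and the actual 51.7% Republican share falls well within 51% +/- 3.7:

    import math

    n, p = 715, 0.51
    moe = 1.96 * math.sqrt(p * (1 - p) / n)  # 95% interval half-width
    print(f"95% margin of error: +/-{100 * moe:.1f} points")  # +/-3.7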

TOPICS: Politics/Elections; Technical
KEYWORDS: pollingmethodology
The Gallup Organization gives a short explanation of how the 2002 election polls were conducted.
1 posted on 12/04/2002 7:54:51 AM PST by Doctor Stochastic

To: Doctor Stochastic
Very interesting.

They got it right this time, didn't they?

Zogby blew it, though.
2 posted on 12/04/2002 9:54:51 AM PST by No dems 2002

To: KQQL
ping
3 posted on 12/04/2002 10:26:00 AM PST by Fish out of Water

To: No dems 2002
Polling's a tough business.
4 posted on 12/04/2002 11:56:19 AM PST by Doctor Stochastic

To: Doctor Stochastic
Thanks for posting this.

Will show this to my Math class Monday!
5 posted on 12/05/2002 11:34:56 PM PST by Ernest_at_the_Beach

To: Ernest_at_the_Beach
You might look at the "Chance News" site too.
6 posted on 12/06/2002 8:33:57 PM PST by Doctor Stochastic
