Posted on 08/08/2016 10:03:19 AM PDT by xzins
Donald J. Trump: 43.4%
Hillary Clinton: 42.8%
Trump +0.6%
The LongRoom Polling Analysis uses the latest voting data from each state's Secretary of State or Election Division. The voting data is kept current by incorporating the latest updates from each state as they become available. This means that the LongRoom Polling Analysis accurately reflects the actual voting demographics, precinct by precinct, county by county, and state by state.
Because the LongRoom Polling Analysis is exclusively data-based, it is possible to demonstrate from the crosstabs of an individual poll whether that poll leans left or right.
The analysis of the polls of each polling organization and the associated bias is illustrated in a line chart. The most recent poll results are displayed separately and a graphic representation of the amount the poll leans either left or right is shown.
The graphs below cover the last three presidential elections and show the LongRoom Polling Analysis of polls for those elections. In all cases, the LongRoom Analysis was accurate to within +/- 0.3%.
References for the voting data from each state are included below in the list of sources.
We know the polls are biased because the statisticians who produce the polls say they are biased, both explicitly and implicitly. This is also widely reported in the media. Let's look at two recent examples. The Reuters/Ipsos poll last week, July 29th, decided to use "forcing" to assign those who were surveyed to a candidate, even if the person surveyed had no preference. Reuters/Ipsos applied this "technique" not only to their most recent poll, but went back through all their previous polls and redid them, assigning those with no preference to a candidate of the pollster's choice. This innovative approach to polling was not universally popular with other pollsters, as Pat Caddell, a pollster with decades of experience, expounded in this article: "Pat Caddell on Cooked Reuters Poll: Never in My Life Have I Seen a News Organization Do Something So Dishonest". Another example is the CNN poll from July 30th, where the crosstabs for Question P1 show that 97% of Democrats have committed to a candidate three months before the election. In the history of elections, it is difficult to find an example where 97% of a demographic had made up their minds on who to vote for even on election day, much less in the middle of summer before a November election.
For a rather extensive list of biases that a statistician may introduce into a poll, there is an excellent article here by Nate Silver where he discusses the biases he uses in creating his analysis, and why he thinks his biases are good.
Statisticians also use "weighting" to produce the poll results that are published in the media. The weighting is simply how many of each demographic the statistician believes will vote based on the detailed questions that are asked when the poll is taken. An example of how this affects polls is demonstrated in the polls out this last week, ending July 31st. Some polls have changes of 10% and more in presidential preference while other polls have a change of only a few percent. Clearly, both of these results cannot be correct.
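The effect of weighting can be sketched in a few lines of Python. The crosstab percentages and turnout weights below are purely hypothetical, chosen only to show how identical raw responses produce different published toplines depending on the pollster's turnout assumption:

```python
# Hypothetical crosstabs: share of each party-ID group backing each candidate.
support = {
    "Dem": {"Clinton": 0.90, "Trump": 0.05},
    "Rep": {"Clinton": 0.06, "Trump": 0.88},
    "Ind": {"Clinton": 0.40, "Trump": 0.42},
}

def topline(weights, candidate):
    """Weighted national share for a candidate, given turnout weights."""
    return sum(weights[group] * support[group][candidate] for group in weights)

# Pollster A assumes a D+6 electorate; pollster B assumes an even party split.
w_a = {"Dem": 0.39, "Rep": 0.33, "Ind": 0.28}
w_b = {"Dem": 0.36, "Rep": 0.36, "Ind": 0.28}

for name, w in [("D+6", w_a), ("Even", w_b)]:
    c, t = topline(w, "Clinton"), topline(w, "Trump")
    print(f"{name}: Clinton {c:.1%}, Trump {t:.1%}, margin {c - t:+.1%}")
```

With these assumed numbers, the same crosstabs yield roughly Clinton +5.5 under the D+6 model but only about +0.5 under the even split, which is how two polls taken from similar raw data can publish very different toplines.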
So, like opinions, every statistician has their own biases, but none of them wants to see the other guy's. Here at LongRoom we leave out the biases and let the data speak for itself.
As we discussed above, each poll reflects the biases of the statisticians who prepare the poll. Since each statistician has their own specific biases that they introduce into their poll, it is extremely difficult to compare one poll to another. At LongRoom we use the actual state voter registration data from the Secretary of State or Election Division of each state. We add no "expert" adjustments to the data. This means that all the polls are rationalized one to another based on actual data.
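As an illustration of weights taken straight from registration data rather than from an "expert" turnout model, here is a minimal Python sketch; the registration counts are hypothetical stand-ins for actual Secretary of State figures:

```python
# Hypothetical state registration counts (stand-ins for Secretary of
# State / Election Division figures).
registered = {"Dem": 1_250_000, "Rep": 1_100_000, "Ind": 900_000}

# The weights are simply each group's share of total registration --
# no adjustment, no turnout guess.
total = sum(registered.values())
weights = {party: count / total for party, count in registered.items()}

for party, share in weights.items():
    print(f"{party}: {share:.1%}")
```

Because every poll's crosstabs would be reweighted against the same registration shares, the polls become comparable one to another, which is the rationalization the paragraph above describes.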
As the election approaches, the statisticians who produce the various polls will begin to back out their biases. In the final few weeks before the election, you will start to notice a convergence of all of the polls. This occurs because the statisticians will use essentially the same data that LongRoom is using now, with their own biases removed. So, you might be thinking at this point: are you really saying that all of the polls will eventually match LongRoom? Yes, we are. It is a mathematical certainty that, as the election approaches, all of the polls will begin to match the polls here on LongRoom. This may be difficult for some to believe; however, there is an excellent archive at RCP that shows the poll results for the 2012 presidential election and this typical convergence of polls as the statisticians' biases are backed out.
The day after the election. This may sound humorous, but it is actually the truth: there is no reliable predictor for who will win a democratic vote. An example of this is the March 14th, 2004 Spanish General Election, which we covered and analyzed. On March 10th, 2004, the Conservative Party was leading in the polls and, as the incumbents, were likely to succeed in the election. However, on March 11th, the Madrid train bombings occurred. The Conservative government quickly blamed ETA, the Spanish separatist group. As more information was uncovered, it became obvious that the bombing was the work of the Islamist group al-Qaeda. The Conservative government continued to claim it was ETA in spite of the mounting evidence. The electorate rapidly came to believe that the Conservative government was trying to cover up the Islamic involvement and gave the liberal opposition party a 5-point margin of victory. So, in a matter of only three days, there was an 8-point swing in voter preference.
For more information about the 2004 Spanish General Election and the impact the bombing had on it, Wikipedia has a write up here.
To make this example more relevant to our current presidential election, imagine that 3 days before the election there is a terrorist incident here in America, and Mr. Obama and Mrs. Clinton place the blame on right-wing Christian extremists, while Mr. Trump blames radical Islamic terrorists. As the hours tick by, it becomes obvious that the incident is the work of radical Islamic terrorists; however, Mr. Obama and Mrs. Clinton continue to deny the Islamic involvement. Just as in Spain, it is game, set, match, and Mr. Trump is the next president of the United States.
So, if anyone pretends they can predict the election, just keep in mind: Life Happens.
We have developed our analytical model using the programming language that we and other actuaries have used for the last 30 years, APL.
APL is a macro-language used to fine-tune insurance models; used properly, it is very powerful. Insurance companies have used it for years, with high accuracy, to compute the profitability of their products based on social, economic, and other assumptions.
The point is that they are telling us how they do what they do. Better than the others. How do we know the others aren’t pulling their responses from NY, CA, NJ, MA, etc.?
https://www.longroom.com/polls/
These people have a lot of experience, history and transparency.
They expose the disinformation and bias in the other polls.
TWB
We heard this same thing about Romney. The polls were biased against him by oversampling Dems. And we know how that turned out.
Trump, and his supporters, need to realistically assess his position and adapt accordingly, and not pretend that all is well, as it is not.
I don’t want to give the impression that the election is not based on electoral college votes.
However, there is a historic correlation between a national sample and the state samples.
What this poll suggests is that Georgia, Texas, South Carolina, etc. are simply not going for Clinton.
It suggests that Trump has an excellent chance in all the battleground states.
Are you surprised? Of course he is.
I did not know that. Thanks!
These people accurately forecast the Obama/Romney race. They accounted for bias from either side.
That said, polling firms want to look more accurate the closer to an election that they get UNLESS they are getting big bucks to be manipulative. They sell their polling expertise to most firms that use survey data.
However, many of them will use manipulative polls during a campaign because of their own bias or that of their clients.
THIS is the time when manipulative polls are really significant political tools to shape opinion and energy.
Apply the biases to state polls. In the end, if you take Bush 2004, swap NH for NM and PA for VA, you would have a win, even losing CO.
The difference: the liberal media pollsters were not afraid of Romney or McCain. They certainly are very afraid of Trump, and no doubt they will use incorrect models and polling to reflect their favoritism for Shillary. You can see their fright manifesting everywhere.
Thanks for the reply. By “these people”, do you mean this poll accurately forecast the last election? How does their methodology differ from “major polls”?
The predicted turnout was D+2. It was D+6. That is where things went wrong. It will be between Even and D+2 this time. Our side will win. Gut feeling: Anti-Mormon conservatives stayed home. Presbyterians don't have that kind of bias against them.
So why does Yougov on this site have Trump ahead by 2.4% on August 1st, but on RCP, Yougov has Hillary ahead by 3% on August 2nd?
And why does Ipsos/Reuters on this site have Trump up by 3.8% on August 4th, but on RCP it has Hillary up by 4% on the same day?
Then CNN on this site has Hillary up by 1.9% on July 31st, but on RCP has her up by 9% on August 1st.
I stopped after this. It’s completely off. I’ll stick with RCP.
"We have developed our analytical model using the programming language that we and other actuaries have used for the last 30 years"

You've never heard of actuaries?
Never heard of them before this post.
This is just more “unskewing” polls. It is the same nonsense people fell for in 2012.
You can’t “unskew” a poll like this site and others in the past attempted to do.
A poll is a poll, accept it or reject it. Trying to apply your own special sauce to it does not work.
The best bet is to simply go by the RCP average. It isn’t perfect, but it averages all the polls that are regarded as scientific. Last time people complained that the D+ sample was too large, but the polling turned out to be fairly accurate - especially on the state level.
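A simple polling average can be sketched in a couple of lines; the margins below are the RCP-side numbers quoted earlier in this thread (Clinton +3 from YouGov, +4 from Ipsos/Reuters, +9 from CNN). Note this is an unweighted mean, a simplification of whatever recency and inclusion rules RCP actually applies:

```python
# Most recent margin from each organization, in points (positive = Clinton lead).
# These are the RCP figures cited earlier in the thread.
polls = {
    "YouGov": 3.0,
    "Ipsos/Reuters": 4.0,
    "CNN": 9.0,
}

# An unweighted mean of the margins -- the simplest possible poll average.
average_margin = sum(polls.values()) / len(polls)
print(f"Average: Clinton {average_margin:+.1f}")
```

The point of averaging is that each pollster's individual house effect partially cancels out, which is why the average tends to be steadier than any single poll.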
This is relevant because it's producing somewhat different results from state-run media polling. And also because the national popular vote is usually the same as the electoral result. It's an interesting indicator, three months out.
Using CNN, your last example, if you look at the lower data on this site, you’ll see that they have CNN bias recorded at 7.1%.
That would be claiming that CNN had decreased their Hillary bias and then increased it again.
It would seem to be, in my opinion, looking at the chart, an attempt to show a Trump convention bounce and then to take it away again.
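The arithmetic implied here is simple. Assuming the site subtracts its recorded organizational bias from the published margin, the numbers quoted in this thread reconcile:

```python
# Numbers from the thread: CNN at Clinton +9 on RCP, and a recorded
# CNN bias of 7.1 points on the LongRoom site.
rcp_margin = 9.0
recorded_bias = 7.1

# Assumed adjustment: published margin minus recorded bias.
adjusted_margin = rcp_margin - recorded_bias
print(f"Adjusted CNN margin: Clinton {adjusted_margin:+.1f}")
```

That lands on Clinton +1.9, which matches the CNN figure reported from the LongRoom site earlier in the thread.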
I’ve never heard of “Long Room.” Their internet presence just popped up.
Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.