Posted on 08/08/2016 10:03:19 AM PDT by xzins
Donald J. Trump: 43.4%
Hillary Clinton: 42.8%
Trump +0.6%
The LongRoom Polling Analysis uses the latest voting data from each state's Secretary of State or Election Division. The voting data is kept current by incorporating the latest updates from each state as they become available. This means that the LongRoom Polling Analysis accurately reflects the actual voting demographics, precinct by precinct, county by county, and state by state.
Because the LongRoom Polling Analysis is exclusively data based, it is possible to demonstrate from the crosstabs of an individual poll whether that poll leans left or right.
The analysis of each polling organization's polls and the associated bias is illustrated in a line chart. The most recent poll results are displayed separately, with a graphic representation of how far the poll leans left or right.
The graphs below cover the last three presidential elections and show the LongRoom Polling Analysis of polls for those elections. In all cases, the LongRoom Analysis was accurate to within +/- 0.3%.
References for the voting data from each state are included below in the list of sources.
We know the polls are biased because the statisticians who produce the polls say they are biased, both explicitly and implicitly. This is also widely reported in the media. Let's look at two recent examples. The Reuters/Ipsos poll last week, July 29th, decided to use "forcing" to assign each respondent to a candidate, even when the respondent had expressed no preference. Reuters/Ipsos applied this "technique" not only to their most recent poll, but went back through all of their previous polls and redid them, assigning those with no preference to a candidate of the pollster's choosing. This innovative approach to polling was not universally popular with other pollsters, as Pat Caddell, a pollster with decades of experience, explained in this article: "Pat Caddell on Cooked Reuters Poll: Never in My Life Have I Seen a News Organization Do Something So Dishonest". Another example is the CNN poll from July 30th, where the crosstabs for Question P1 show that 97% of Democrats had committed to a candidate three months before the election. In the history of elections, it is difficult to find an example where 97% of any demographic had made up their minds even on election day itself, let alone in the middle of the summer before an election in November.
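To make the "forcing" idea concrete, here is a minimal Python sketch of one plausible allocation rule: undecided respondents are redistributed in proportion to the decided respondents, which mechanically removes the "no preference" category from the published topline. The sample counts and the proportional rule are illustrative assumptions only, not Reuters/Ipsos' actual procedure.

```python
# Illustrative only: one plausible "forcing" rule that reallocates undecided
# respondents in proportion to decided respondents. The sample counts below
# are hypothetical; this is not Reuters/Ipsos' actual method.

def force_undecideds(counts):
    """Reallocate the 'undecided' count across candidates proportionally."""
    undecided = counts.pop("undecided", 0)
    decided_total = sum(counts.values())
    return {
        cand: n + undecided * n / decided_total
        for cand, n in counts.items()
    }

raw = {"Clinton": 430, "Trump": 400, "undecided": 170}   # hypothetical sample
forced = force_undecideds(dict(raw))
total = sum(forced.values())
for cand, n in forced.items():
    print(f"{cand}: {100 * n / total:.1f}%")
```

With these made-up numbers, 17% of the sample that expressed no preference simply disappears from the headline figure, which is exactly the practice the quoted critics object to.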
For a rather extensive list of biases that a statistician may introduce into a poll, there is an excellent article here by Nate Silver where he discusses the biases he uses in creating his analysis, and why he thinks his biases are good.
Statisticians also use "weighting" to produce the poll results that are published in the media. The weighting is simply the statistician's estimate of how many voters from each demographic will turn out, based on the detailed questions asked when the poll is taken. An example of how this affects polls can be seen in the polls released this past week, ending July 31st. Some polls show swings of 10% or more in presidential preference, while other polls show a change of only a few percent. Clearly, both sets of results cannot be correct.
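As a rough illustration of how weighting moves a topline, here is a minimal Python sketch. The within-party support rates and the two turnout mixes are hypothetical numbers chosen only to show the mechanism: the same raw responses produce noticeably different headline figures depending on which turnout mix the statistician assumes.

```python
# Minimal sketch of demographic weighting. The support rates within each party
# group and the two turnout assumptions are hypothetical; they only show how
# the assumed mix moves the published topline.

support_for_trump = {"R": 0.85, "D": 0.08, "I": 0.45}   # assumed crosstabs

def weighted_topline(support, weights):
    """Combine within-group support using the assumed share of each group."""
    return sum(support[g] * w for g, w in weights.items())

pollster_a = {"R": 0.24, "D": 0.33, "I": 0.43}   # a D-heavy turnout model
pollster_b = {"R": 0.33, "D": 0.33, "I": 0.34}   # an even turnout model

print(f"Pollster A: Trump {100 * weighted_topline(support_for_trump, pollster_a):.1f}%")
print(f"Pollster B: Trump {100 * weighted_topline(support_for_trump, pollster_b):.1f}%")
```

Under these assumptions the identical crosstabs yield roughly 42% in one poll and 46% in the other, a gap driven entirely by the assumed party mix rather than by any change in voter opinion.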
So like opinions, every statistician has their own biases, but none of them wants to see the other guy's. Here at LongRoom we leave out the biases and let the data speak for itself.
As we discussed above, each poll reflects the biases of the statisticians who prepare the poll. Since each statistician has their own specific biases that they introduce into their poll, it is extremely difficult to compare one poll to another. At LongRoom we use the actual state voter registration data from the Secretary of State or Election Division of each state. We add no "expert" adjustments to the data. This means that all the polls are rationalized one to another based on actual data.
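A minimal sketch of what "rationalizing" a poll to registration data might look like, assuming the poll publishes its party-ID crosstabs: the within-party support rates are kept, but the pollster's assumed party mix is replaced by the state's actual registration shares. All figures and function names here are hypothetical; LongRoom's actual model is written in APL and is not published, so this Python sketch only illustrates the general idea.

```python
# Hypothetical sketch: re-weight a poll's published crosstabs to a state's
# actual party-registration shares instead of the pollster's assumed mix.
# All numbers are illustrative, not LongRoom's data or code.

poll_crosstabs = {                      # within-party support from the poll
    "R": {"Trump": 0.84, "Clinton": 0.07},
    "D": {"Trump": 0.08, "Clinton": 0.86},
    "I": {"Trump": 0.42, "Clinton": 0.38},
}
poll_mix = {"R": 0.26, "D": 0.36, "I": 0.38}          # pollster's sample mix
registration_mix = {"R": 0.31, "D": 0.33, "I": 0.36}  # state registration data

def topline(crosstabs, mix):
    """Combine per-party support rates using a given party mix."""
    out = {}
    for party, share in mix.items():
        for cand, rate in crosstabs[party].items():
            out[cand] = out.get(cand, 0.0) + share * rate
    return out

def pct(result):
    return {cand: f"{100 * v:.1f}%" for cand, v in result.items()}

print("As published:", pct(topline(poll_crosstabs, poll_mix)))
print("Re-weighted: ", pct(topline(poll_crosstabs, registration_mix)))
```

Applying the same adjustment to every poll puts them all on a common party-mix baseline, which is what makes one poll comparable to another.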
As the election approaches, the statisticians who produce the various polls will begin to back out their biases. In the final few weeks before the election, you will start to notice a convergence of all of the polls. This occurs because the statisticians will use essentially the same data that LongRoom is using now to produce their polls, with their own biases removed. So, you might be thinking at this point: are you really saying that all of the polls will eventually match LongRoom? Yes, we are. It is a mathematical certainty that, as the election approaches, all of the polls will begin to match the polls here on LongRoom. This may be difficult for some to believe; however, there is an excellent archive at RCP that shows the poll results for the 2012 presidential election and this typical convergence of polls as the statisticians' biases are backed out.
The day after the election. This may sound humorous, but it is actually the truth: there is no reliable predictor of who will win a democratic vote. An example of this is the March 14th, 2004 Spanish General Election, which we covered and analyzed. On March 10th, 2004, the Conservative Party was leading in the polls and, as the incumbents, were likely to succeed in the election. However, on March 11th, there was a Madrid train bombing. The Conservative government quickly blamed ETA, the Basque separatist group. As more information was uncovered, it became obvious that the bombing was the work of the Islamist group al-Qaeda. The Conservative government continued to claim it was ETA in spite of the mounting evidence. The electorate rapidly came to believe that the Conservative government was trying to cover up the Islamist involvement and gave the liberal opposition party a 5-point margin of victory. So, in a matter of only three days, there was an 8-point swing in voter preference.
For more information about the 2004 Spanish General Election and the impact the bombing had on it, Wikipedia has a write up here.
To make this example more relevant to our current presidential election, imagine that three days before the election there is a terrorist incident here in America, and Mr. Obama and Mrs. Clinton place the blame on right-wing Christian extremists, while Mr. Trump blames radical Islamic terrorists. As the hours tick by, it becomes obvious that the incident is the work of radical Islamic terrorists; however, Mr. Obama and Mrs. Clinton continue to deny the Islamic involvement. Just as in Spain, it is game, set, match, and Mr. Trump is the next president of the United States.
So, if anyone pretends they can predict the election, just keep in mind: Life Happens.
We have developed our analytical model using the programming language that we and other actuaries have used for the last 30 years, APL.
Seems like another variation of unskewing polls here.
They sure are part of the problem. These media outlets cannot fool any of us here.
It is. It goes to state voter registration data for actual demographics, if I correctly understand.
Where is the real-time track record? How long has this outfit been in existence?
Monmouth explained how party affiliation works here:
http://monmouthpoll.blogspot.com/2016/08/a-poll-samples-party-composition.html
They purportedly culled the voter rolls and called a basically even sample 34D/34R/32I. People self-identified 36D/26R/39I.
Most pollsters don't adjust Party ID. They leave it as is. Those that do, do so at their own peril.
They have it 'Even' between Rs and Dims.
R 28%
D 28%
I 42%
And again 'Even' between lean R v. lean D at 43%.
1) Note the timing of those self-identifieds. It’s before the RNC and DNC, while Hillary was getting hammered over her e-mails.
2) I do agree with you that these ridiculous samples we are seeing right now at D+9 will not hold come November. With that said, it's been a pretty bad two weeks for Trump, with the DNC and then the DNC media-aided drive-by character assassination attempts last week.
3) The bias I've seen with these pollsters is that they are asking for the "youngest voter" in some of these surveys. That alone trends Democrat.
They appear to analyze polls rather than create them. I'm assuming their LongRoom number is an RCP-type average of polls they have analyzed according to their model.
It appears they are about a year old and do poll comparison and analysis like RCP, coming up with an adjusted average.
Even this poll does not take into account the "cheat factor," and Donald will have to do a whole lot better than 1% to beat that.
Just look at social media - Trump has 10 million followers on Twitter, The Demon has 8 million. On Facebook Trump has 10 million likes. The Demon has FIVE million! Just wait till the debates when Trump exposes The Demon like an atom bomb; The Demon is finished. Trump's biggest problem right now is the enemies of this country trying every which way they can to destroy him in the media so The Demon can take power. If he can somehow neutralize that, he will crush The Demon like a bug. What I am looking for in the debates is when The Demon starts accusing Trump of being racist, anti-Semitic, biased, etc. If The Demon is stupid enough to do that, the floodgates will open and we will see The Demon go into one of its seizures.
Trump, and his supporters, need to realistically assess his position and adapt accordingly, and not pretend that all is well, as it is not.
Correct, that is very astute. But too many people here are not being realistic and are not adapting accordingly.
If people think the polls are off, the good news is that Trump has been beaten down so much that a bet on him to win will pay almost 5-to-one.
My concern: How do we know social media followers are pro-Trump? My view is skewed because I live among so many liberals, and it seems like they all follow Trump on Twitter.
You're tilting at a very big windmill trying to explain how polls work. Polls are good if they say what people want them to say and bad if they don't. There is no critical thinking about how polls (or statistics) actually work.
Tell that to Al Gore.
Not sure what your point is. Winning the popular vote while losing in the Electoral College has happened four times in the history of the U.S. That means more than 90 percent of our elections have turned out the other way. That would qualify as "usually."
Now, some of us think Gore was so sore when he lost to GWB because he was sure he'd stolen it fair and square. And the certainty of Democratic election fraud grows every four years. But that would be a case of a "popular vote" that exceeds the number of popular voters.
The latest post-convention poll I saw gave Hillary +10. Did we pick the wrong candidate? Or is the fact that only 66% of GOP voters plan to vote Trump an opportunity?
Finally, a genuine poll.
Back in the days of typeballs, APL had its very own.