Posted on 08/23/2016 11:39:21 AM PDT by usafa92
Huffington Post Pollster does not include the USC/LA Times poll in its general election polling trend, but RealClearPolitics does. And today, August 22, 2016, the USC/LA Times poll has Republican Donald Trump up 44.6 to 43.5. I do not believe the level of the poll (i.e., the head-to-head value), but I believe there is a lot of information in its movement. The sharp movement toward Trump over the last few days does reflect movement toward Trump among the respondents in their panel, but the actual value, Trump up by 0.9 percentage points, could be way off.
The poll uses a panel of 3,200 people who answered their demographics at the start of July and then answer their voting intention once per week through Election Day. Each respondent in a seven-day period is weighted so that the sample resembles the 2012 voting population. Then they further weight each respondent by their stated likelihood to vote, and report the resulting fraction voting for each candidate.
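As a rough sketch of that weekly procedure (invented respondents and weights, not the poll's actual data): each person carries a demographic weight, gets further scaled by their stated probability of voting, and the topline is the weighted share.

```python
# Hypothetical illustration of the USC/LA Times weekly estimate.
# Every name and number below is invented for the example.

respondents = [
    # (candidate, demographic_weight, stated_prob_of_voting)
    ("Trump",   1.2, 1.00),
    ("Clinton", 0.8, 0.90),
    ("Trump",   0.5, 0.50),
    ("Clinton", 1.5, 1.00),
]

def weighted_share(rows, candidate):
    """Share of the weighted, turnout-adjusted sample backing `candidate`."""
    total = sum(w * p for _, w, p in rows)
    support = sum(w * p for c, w, p in rows if c == candidate)
    return support / total

print(round(weighted_share(respondents, "Trump"), 3))  # prints 0.395
```

Note that the stated probability of voting acts as a second multiplicative weight, which is exactly why shifts in that answer can move the topline as much as actual vote switching.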
I love this type of experimental polling, but there are a few serious concerns about their methods. This list is not in order of importance, but in order of engagement:
1) Probability of Voter Intention: Natalie Jackson and Ariel Edwards-Levy specifically cite the wording of the voter-intention question as a reason not to include the poll. This is not actually a big concern for me, for two reasons. First, I assume (I have not seen the data yet) that a large portion of people answer 100% or 0%. And I further assume that deviations from this are somewhat symmetrical for Trump and Clinton supporters. Second, I do not find it that much different from the standard bevy of questions that include leaning or not leaning toward the candidates.
2) Probability of Voting: I am more concerned with the companion probability question on whether or not the respondent will vote. While some good work has been done in the past on asking for a probability of voting, it is not clear how well it will hold up in an election like this. It is possible that the standard method of inferring likely voting (from past voting records and other implicit questions) would actually be a more stable and realistic measure of likelihood of voting. Asking the respondents probably exaggerates shifts. Further, why derive this anew each week when they have a full panel of data on the respondents? Surely they can model the likelihood to vote more efficiently with all of the response data they have for each respondent?
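One minimal version of that suggestion: instead of taking each week's stated probability at face value, pool the respondent's whole panel history. The exponentially weighted average below is my own illustrative choice, not anything the poll does, and the weekly values are invented.

```python
# Hypothetical smoothing of one panelist's weekly stated probability
# of voting. Pooling history damps the week-to-week swings that a
# take-this-week's-answer approach would pass straight to the topline.

def smoothed_turnout(history, alpha=0.3):
    """Exponentially weighted average of weekly stated vote probabilities.

    `alpha` controls how much the newest week moves the estimate.
    """
    est = history[0]
    for p in history[1:]:
        est = alpha * p + (1 - alpha) * est
    return est

weekly = [1.0, 0.6, 1.0, 0.5]   # one respondent's invented weekly answers
print(round(smoothed_turnout(weekly), 3))  # prints 0.791
```

The raw answers swing between 0.5 and 1.0, while the pooled estimate barely moves from week to week, which is the stability argument in a nutshell.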
3) Party Identification: Each respondent answered a battery of demographic questions before the daily polling began. Along with the standard questions, they were asked about their 2012 presidential vote. The poll is weighting people by their 2012 vote as a proxy for latent party identification. This is a bad proxy, because a person's four-year-old vote is actually more susceptible to change than their current stated party identification. You read that right. People have a serious problem remembering if and for whom they voted in past elections. Generally, people overstate their vote for the winning candidate. That means the Romney voters in their panel are probably a more hardcore subsection of Romney voters than actual Romney voters were.
4) Modeling: The poll is raking their weights rather than modeling the data. Depending on how representative their sample is to begin with and how random the dropouts are over time, once the weights get larger they become quite an issue. Modeling the data with some form of hierarchical regression provides additional power. I am particularly concerned with African-American support for Clinton dropping from 90 to 80 percentage points (and Trump's support rising from near 0 to 14.3 percentage points). Could smaller demographic groups, like African-Americans, have weights that are too large due to under-representation in the poll?
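To see why under-represented groups end up with outsized weights, here is a one-margin toy version of the adjustment (real raking iterates this kind of proportional scaling over several margins at once; the counts and target shares below are invented, not the poll's):

```python
# One-margin illustration of weighting a sample to a population target.
# If a group is under-represented, each of its members gets a weight
# well above 1, so one person switching candidates moves the topline
# by several times their raw share of the sample.

sample = {"black": 40, "other": 960}         # invented panel counts
population = {"black": 0.13, "other": 0.87}  # invented target shares

n = sum(sample.values())
weights = {g: population[g] * n / sample[g] for g in sample}

print(round(weights["black"], 2))  # prints 3.25
```

With a weight of 3.25, a single respondent in the small group counts for more than three people in the topline, which is the mechanism behind the concern about African-American crosstab swings.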
Most likely, the party ID issue is making the poll a few points too favorable for Trump. That is, by far, my biggest concern about the poll.
But that does not discount that the movement may still hold valuable information about the race tightening a little. They have a relatively steady group of people and show a 2.7-percentage-point drop in support for Clinton from her peak and a 2.6-percentage-point increase for Trump from his bottom. Someone is moving toward Trump and someone is moving away from Clinton, but it is not clear from where or by how much.
This poll is innovative and interesting. I am worried that this important science may be diminished in some way by its questionable real-time topline values. It should not be. I think the panel will ultimately yield some valuable insight into the 2016 election and into polling methodology. It just should not be included in polling averages.
Lastly, Kellyanne Conway had a tweet Sunday that referenced the USC poll and she mentioned that this was the state of the race today. My takeaway from that is that Trump's internal polling is pretty much in line with the USC poll. Maybe better if they have more insights into the partisan if split.
FYI
That should read even split between Democrats and Republicans.
Laughable VA poll from Roanoke shows Hillary up by 16 points in VA.
Waste bin filler poll considering the race is tied!
Ping
It most definitely is. His lead is much larger.
The RCP AVERAGE (forget this idiotic poll) in the last two statewide VA elections (Gov and Senator) was off by between 3.5 (Gov) and 9 (Senator) points. That's an average of being off by five points before you even get into the D/R and F/M splits. And, again, that was the average. A single pollster like Roanoke can be off by 15.
Then there was the Cantor polling that had him up huge.
I’d say just participating in the tracking poll exponentially increases the chances these people will actually vote their preference in November.
Not sure you can say the same thing about folks wrestling with their choices at home.
https://alpdata.rand.org/?page=election2012
In 2012, after the Republican convention (lots of smiling Republicans, balloons) Romney pulled ahead (slightly). After the Democrat convention (lots of smiling Democrats, balloons), Obama pulled ahead, and stayed ahead.
The difference between now and 2012 is that the poll numbers did not converge again following the Democrat convention. Obama took the lead with the Democrat convention, and kept it.
I think we were all misled by Gallup, which consistently showed a Romney lead in 2012. They have given up on polling presidential elections now. With the large number of voters with wireless-only service, the polling techniques that were successful from 1936 until 2008 won't work today.
Florida and Ohio, as well as Pennsylvania, Michigan, Virginia, and Wisconsin, are all that matter.
Then there is this drivel claiming Hillary is up 14 points in Florida.
http://polls.saintleo.edu/survey-florida-looks-like-it-will-support-clinton-for-president/
Yes. 44.6 to 43.5 is a 1.1 point lead, not 0.9
As to point #3, he says the Romney vote is over-represented because people have a hard time remembering whom they voted for, so respondents tend to overstate support for the winner. Wouldn't this mean that the Obama voter is over-represented? I also don't believe that, with respect to Obama's votes, people forgot. There may be a tendency for some to say they voted for the winner, but again, this should mean that the Obama vote is the one that is overstated.
Of course the left-leaning Huffington Post is trying this tack. The USC/LA Times Poll is based on the RAND Continuous 2012 Presidential Election Poll, which predicted the final 2012 margin between Romney and Obama to within 0.7%.
That was one of the closest spreads of the cycle.
The left doesn't like a result that doesn't match the carefully weighted results like Monmouth and Marist. Those polls assume a certain race/gender/age turnout, which may or may not actually be where the electorate is come November.
Monmouth had such a terrible Ohio sample on Monday that they had to adjust race/gender/age to such an extent that a R+4 turnout from raw data turned into a D+4 sample.