Posted on 11/02/2024 10:27:24 PM PDT by NoLibZone
It’s obviously a really close race. But for some pollsters, it’s a little too close.
Take, for example, this afternoon’s polling release from the British firm Redfield & Wilton. They polled all seven of the core battleground states. And in all seven, Kamala Harris and Donald Trump each received between 47 and 48 percent of the vote:
Isn’t this a little convenient? Whatever happens, Redfield & Wilton — not a firm with a well-established reputation in the US — will be able to throw up their hands and say “well, we projected a tie, so don’t blame us!”. And since all of these states are also close in the polling averages, they’ll also ensure that they won’t rank at the bottom of the table of the most and least accurate pollsters — although unless the race really is that close, and it probably won’t be, they also won’t rank toward the top.
Now granted, our forecast is close too. But it’s based on polling averages: dozens of polls have been released in each of these states over the past month. That greatly increases the sample size. Collectively, they’ve surveyed about 230,000 voters.
By contrast, the median sample size in individual polls in these states is 800 voters. In a 49-49 race in a poll of 800 people — assuming 2 percent goes to third parties — the theoretical margin of error for the difference between Trump and Harris is ±6 points. If that sounds higher than you’re expecting, that’s because the margin of error that’s usually reported in polls is only for one candidate’s vote share. For instance, in a poll of 800 people, Trump’s margin of error is about ±3 points, as is Harris’s. However, basically every vote that isn’t a vote for Trump is a vote for Harris. If Trump gets 52 percent of the vote instead of 49, that implies Harris will receive 46 percent. So the margin of error on the difference separating Trump and Harris is ±6.
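The arithmetic in that paragraph can be checked with a short script. This is my own sketch, not from the article: the standard binomial formula gives roughly ±3.5 points on a single candidate's share and ±6.9 on the margin, which the text rounds to ±3 and ±6.

```python
import math

def moe_share(p, n, z=1.96):
    """95% margin of error for a single candidate's vote share."""
    return z * math.sqrt(p * (1 - p) / n)

def moe_margin(p, n, z=1.96):
    """MOE for the Trump-minus-Harris margin: roughly twice the
    single-share MOE, because a vote gained by one candidate is
    (almost always) a vote lost by the other."""
    return 2 * moe_share(p, n, z)

n = 800   # median battleground-state sample size, per the article
p = 0.49  # each candidate near 49% in a 49-49 race
print(round(100 * moe_share(p, n), 1))   # ~3.5 points on one share
print(round(100 * moe_margin(p, n), 1))  # ~6.9 points on the margin
```

The doubling rests on the two shares being almost perfectly negatively correlated; with a larger third-party vote the true factor would be slightly under 2.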
What this means is that if pollsters are doing honest work, we should see a lot more “outliers” than we do — even if people love to complain about them on Twitter.
In our database as of this afternoon’s model run, there were 249 polls in the seven battleground states that met Silver Bulletin standards and did at least some of their fieldwork in October. How many of them showed the race in either direction within 2.5 percentage points, close enough that you could basically call it a tie?
Well, 193 of them did, or 78 percent. That’s way more than you should get in theory — even if the candidates are actually exactly tied in all seven states, which they almost certainly aren’t.
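As a rough sanity check on that claim (my own back-of-the-envelope calculation, not from the article): assuming an exactly tied 49-49 race with 2 percent third party and polls of 800 voters, a normal approximation says only about half of honest polls should land within 2.5 points, versus the 78 percent observed.

```python
from statistics import NormalDist
import math

n = 800
# In a tied 49-49 race, each respondent contributes +1 (Trump, p=.49),
# -1 (Harris, p=.49), or 0 (third party, p=.02) to the margin, so the
# per-respondent variance of the margin is 0.98.
var_per_resp = 0.49 * 1 + 0.49 * 1 + 0.02 * 0
se_margin = 100 * math.sqrt(var_per_resp / n)  # standard error, in points

# Probability an honest poll of a truly tied race lands within 2.5 points.
p_within = 2 * NormalDist().cdf(2.5 / se_margin) - 1
print(round(se_margin, 1), round(100 * p_within))  # SE ~3.5 pts, ~52%
```

So even under the most favorable assumption (an exactly tied race everywhere), 78 percent of polls within 2.5 points is far more clustering than chance alone would produce.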
I have been puzzled about how so many polls show them tied or with a 1-point difference. It’s virtually impossible.
I have a theory. Pollsters may be artificially overcompensating for Trump, afraid of another miss. So it is possible he will do worse on Tuesday than his polls show.
I can see it happening. All the women voting for Harris. Some men decided to stay home on Tuesday. Enough to preserve the blue wall in MI, PA and WI. Harris gets 270.
Pollsters often buy the same raw data from survey companies, with pollsters then applying their particular demographic model for purposes of analysis. Late in the election cycle, as public opinion becomes settled, the demographic models make less difference.
Polls never can show vote fraud.
This is not how it works. Turnout models do not try to adjust for response bias in the phone calls. Those calls are only about 5% successful, and it would be insane to try to amplify one side of that 5% versus the other because you think that Trump voters don’t like to talk to pollsters.
Nobody is going to do this. When your response rate is only 5%, it’s pretty silly to imagine one side versus the other side doesn’t want to answer. Nobody wants to answer, or rather 95% don’t want to answer.
If you wanted to compensate for Trump overperforming the polls, you would not do it with some magical coefficient applied to the final result. You would instead adjust the turnout model, and when you do that, because those tabs are made visible, everyone would know. If you decide the way to do this is to adjust the turnout model for different income levels, it will be clear to all of the other pollsters that you have violated your own rules for defining a representative sample.
That would probably get you dropped from the RCP average.
The bottom line is that because the response rate to attempted sampling is only 5%, there’s nothing you can do to try to capture some theoretical predisposition among the 95%.
Final clarification: when 95% do not answer, it’s not reasonable to presume you can capture overperformance by pretending that Democrats get only 94% rejection. You can do it, but it would be pure imagination. It would be like finding that the temperature in some town is statistically related to the positions of the moons of Jupiter: it might work once, but there is no rationale behind it, and it won’t work the next time.
Well the guy at Rasmussen said he felt their polls might lean slightly left recently. Their last poll has Trump up in PA and WI.
this is one of the most depressing boards around close to election time :)
“we’re losing” or “we’ll lose anyway because of fraud”
:)
My astute opinion is: Who the hell knows?
Pollsters are expecting the Democrat steal. They know the Democrats can't manufacture a blowout win, and that all they really need is a few thousand votes in each state -- just enough to overcome recounts.
If pollsters are expecting Democrats to "eke out" 51-49 wins, then the polls are being constructed to reflect 51-49 voter sentiment. That's why the polls are not conforming to the spreads that repeated polling at the stated MOEs would suggest.
The fact that most polls claim 3%-5% margins of error, yet with repeated polling never vary by more than 1%-2%, indicates that the results are belying the expected statistical spread.
One interesting quote from the article:
This is a clear-as-day example of what we call herding: the tendency of some polling firms to move with the flock by file-drawering (not publishing) results that don’t match the consensus or torturing their turnout models until they do. Some pollsters, like the New York Times/Siena College, don’t do this, and are proud to own their work even when it differs from the polling averages.

What makes this interesting to me is that the New York Times/Siena has held back from publishing state polls since early October:
Why did the New York Times/Siena College remove themselves from the state-polling business in this most consequential of elections? If it's because they're as ethical as Nate Silver is alleging they are, then what does that say about the state of polling, and indirectly, the true state of the race?
Their absence speaks volumes. They are the proverbial dog that isn't barking.
-PJ
I said "I sure hope President Trump wins on Tuesday". If my friend had been alone she would have agreed with me. But because she was with her daughter she said something like "who knows who will win". I took the hint and changed the subject.
My friend clearly didn't want the gals in her life thinking she was voting for President Trump.
That would be too big a conspiracy to keep.
Trump has always outperformed the mainstream polls by at least 2 points. The most accurate pollster, AtlasIntel, actually has Trump winning all 7 swing states and the popular vote; Big Data Poll is similar; Rasmussen, also in the top 5, likewise has Trump winning. Rich Baris of Big Data Poll says many pollsters keep it close because they don’t want to put out what they see, which is Trump running away with it. IMHO it’s just a matter of whether we have it too big to rig.
My point is that Silver's point is that the published polls are not conforming to the statistics that they claim. Why is that?
When a poll that is Trump 48 Harris 48 says it has a 3% margin of error, that means that either person could be as high as 51 or as low as 45. So why aren't repeated polls showing 51-47, or 46-49, or some other spread in the MOE range? Why are they all 48-48 or 49-48 or 47-47, but not much else?
Is it because the collective consciousness of the pollsters is guiding them all to publish only tight-race polls, to prevent a runaway-election narrative?
-PJ
Suspicious voter registration forms in Pa. linked to Arizona Latino city councilman’s company
Field+Media Corps is run by Latino Mesa councilman Francisco Heredia, a longtime AZ voting activist
VoteBeat ^ | 10/31/24 | Jen Fifield, Carter Walker
FR Posted on 11/1/2024, 2:55:52 AM by Roadrunner383
Two Pennsylvania counties have identified an Arizona-based company as the source of thousands of last-minute voter registration applications that they are investigating. The company, Field+Media Corps, which conducts voter registration and outreach programs, is run by Francisco Heredia, a Mesa councilman and a longtime voting activist in Arizona.
In Monroe County, around 30 forms the company was “responsible for submitting,” which also included mail ballot applications, were “irregular” and included what the District Attorney’s Office described in a Facebook post as several that were “fraudulent as they were not authorized by the persons named as applicants.” “In at least one example, the named applicant is in fact deceased,” District Attorney Mike Mancuso wrote in the post...
York County Chief Clerk Greg Monskie confirmed to Votebeat Wednesday that Field+Media Corps submitted the forms that the county is investigating.
Latino activist Francisco Heredia, a registered Democrat, runs the company, which conducts voter outreach efforts and registers voters. Heredia did not immediately respond to The Arizona Republic’s request for comment. The Republic could not reach Eduardo Sainz, listed as a member of Field+Media Corps on the Arizona Corporation Commission documents.
Field+Media Corps operates voter registration drives for clients in Arizona, too. Last year, both Navajo and Mohave counties flagged voter registration forms from the company and sent them to the Arizona Attorney General’s Office for investigation, office spokesperson Richie Taylor confirmed to Votebeat Thursday.
(Excerpt) Read more at votebeat.org ...
BS...
All the women I’ve talked to at church are voting for Trump.
Trust me. Not *all women* are voting for harris.
Nor are all men voting for Trump.
Which is exactly what Fox did for Arizona in 2020. They had inside info about the voter registration company working there, and understood it would skew the result. They overestimated how much.
Pollsters will screen out some respondents based on known party affiliations and expected turnout. If they want a poll of 800 people, they will probably sample 840, because they don’t know in advance who will respond by affiliation; then, after they get the results, they drop some respondents, whittling the sample down so it is “balanced” by party affiliation. One way they could skew results is in how, or whom, they decide to drop.
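A hypothetical sketch of that oversample-then-trim idea (all party-ID targets and respondent counts here are invented for illustration; real pollsters typically use statistical weighting as well as, or instead of, hard quotas):

```python
import random

random.seed(1)

# Invented party-ID targets for a final sample of 800.
target = {"R": 0.38, "D": 0.38, "I": 0.24}
want = 800

# Pretend we fielded 840 completed interviews with a skewed party mix:
# 300 Republicans, 340 Democrats, 200 independents.
raw = ["R"] * 300 + ["D"] * 340 + ["I"] * 200
random.shuffle(raw)  # interviews arrive in no particular order

sample = []
counts = {"R": 0, "D": 0, "I": 0}
for resp in raw:
    # Keep a respondent only while their party is under its quota;
    # everyone past the quota gets dropped ("censored").
    if counts[resp] < round(target[resp] * want):
        sample.append(resp)
        counts[resp] += 1

# With these invented numbers, 36 surplus Democrats and 8 surplus
# independents are dropped, leaving 796 respondents.
print(len(sample), counts)
```

The skew the commenter describes lives in the quota choices and in which surplus respondents get dropped, not in the interviews themselves.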