If I performed statistical analysis in my own work the way the pollsters do it, I’d be fired on grounds of fraud and/or incompetence. The non-sciences have non-standards when it comes to data analysis and data acquisition...it is a disgrace. That so much anguish and joy springs forth from polls (especially polls fluctuating within, or close to, grossly-underestimated margins of error) is a testament to the power of mass delusion.
The acquisition/sampling methodology alone is rife with opportunities to inject bias - and we have to take the fidelity of its implementation on blind faith; fabrication of records is no more difficult than fabrication of results.
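To put a number on how sampling alone can skew things, here's a minimal sketch (all figures invented for illustration): if one group simply answers the phone less often, a "random" sample over-represents everyone else before any analysis even starts.

```python
# Hypothetical illustration: differential response rates alone skew
# a "random" sample. Invented numbers: rural voters answer the phone
# half as often as urban voters.
population = {"urban": 0.40, "rural": 0.60}    # true population shares
response_rate = {"urban": 0.10, "rural": 0.05}  # who actually picks up

# Share of each group among people who actually end up in the sample
raw = {g: population[g] * response_rate[g] for g in population}
total = sum(raw.values())
sample_share = {g: raw[g] / total for g in raw}

for g, s in sample_share.items():
    print(f"{g}: {s:.1%} of sample (vs {population[g]:.0%} of population)")
```

With these made-up rates, a group that is 40% of the population becomes about 57% of the sample - and that's before anyone touches a weight.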
The analysis itself adds a whole new layer of potential fraud and incompetence. Sample weighting can easily be rigged (with "good justification") by a few percentage points to bend the published results. Response grouping is another subtle way to get the answer you want. Data-blocking over several days is not done in any universal way - the polling agencies have ample latitude in how they block different datastreams together to present trendlines and such. Meta-analysis, as applied in most contexts, is a synonym for fraud.
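The weighting point is easy to demonstrate. Here's a sketch with invented numbers: two weighting schemes, each defensible under some turnout model, moving the topline across the 50% line.

```python
# Hypothetical illustration: small, defensible changes to demographic
# weights can shift a poll's topline by points. All numbers invented.

# Unweighted support for Candidate A within each group:
support = {"urban": 0.62, "suburban": 0.48, "rural": 0.35}

# Two weighting schemes, each plausibly "justified" by a different
# turnout model (weights sum to 1 in both cases):
weights_a = {"urban": 0.30, "suburban": 0.45, "rural": 0.25}
weights_b = {"urban": 0.36, "suburban": 0.44, "rural": 0.20}

def topline(support, weights):
    # Weighted average of group-level support = published headline number
    return sum(support[g] * weights[g] for g in support)

print(f"Turnout model A: {topline(support, weights_a):.1%}")
print(f"Turnout model B: {topline(support, weights_b):.1%}")
```

Same raw responses, two "reasonable" weightings: one headline has the candidate under 50%, the other over. Neither poll is detectably wrong from the outside.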
Then we have the way the data is presented. They casually play with the confidence interval (and then don't mention it) to tune the published margin of error. If I took, in my own field, any one of the liberties routinely taken when publishing polling data, no journal would ever publish my research again.
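The confidence-interval game is pure arithmetic. The textbook margin of error for a proportion is z * sqrt(p(1-p)/n), and the z depends on the confidence level - which is almost never stated. A quick sketch (assumed n = 1000 respondents, worst-case p = 0.5):

```python
# Hypothetical illustration: the reported "margin of error" depends
# entirely on the confidence level chosen, which is rarely disclosed.
# MOE = z * sqrt(p*(1-p)/n) for a simple random sample.
import math

n, p = 1000, 0.5                       # assumed sample size, worst-case split
se = math.sqrt(p * (1 - p) / n)        # standard error of the proportion

# Standard two-sided z-scores for common confidence levels
for level, z in [("95%", 1.96), ("90%", 1.645), ("80%", 1.282)]:
    print(f"{level} confidence: MOE = +/-{z * se:.1%}")
```

Same poll, same 1000 people: quote it at 95% and the MOE is about ±3.1 points; quietly drop to 80% and it shrinks to about ±2.0. The headline number never changes - only how "precise" it looks.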
Sorry for the rant, but with all of these posts on FR about polls swinging one way and another, and the media storytellers’ resulting “analysis”...happy thoughts are not the result. It is a giant, furious vortex of stupidity. Focus on message, focus on turnout, and screw the polls!!
Here’s a chuckle for you on “Margin of Error”
http://www.freerepublic.com/focus/f-news/2117123/posts