Weighting samples to better fit the population is a perfectly reasonable practice if you know the demographic composition of the population, and know that the survey or other measurement is drastically out of line with it.
Here, of course, the real population (those who vote on Nov. 6) doesn't exist yet, and one can debate whether the party affiliation rates from the August 2012 Rasmussen party affiliation survey are a good weighting target. But they're not just jiggering the numbers to get a desired result: they are reweighting the samples from recent polls that publish enough internals to fit the Rasmussen party affiliation data.
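The reweighting described above is ordinary post-stratification: each respondent gets a weight of (target share) / (sample share) for their group, so the weighted sample matches the external party-ID distribution. A minimal sketch, with all numbers invented purely for illustration (these are not from any actual poll or the Rasmussen survey):

```python
# Post-stratification reweighting sketch. All shares below are hypothetical.

# Party-ID shares in the raw poll sample
sample_shares = {"R": 0.32, "D": 0.40, "I": 0.28}

# Target shares from an external party-affiliation survey (hypothetical)
target_shares = {"R": 0.36, "D": 0.34, "I": 0.30}

# Each respondent in group g gets weight target/sample for that group
weights = {g: target_shares[g] / sample_shares[g] for g in sample_shares}

# Suppose candidate A's support within each party-ID group is:
support = {"R": 0.10, "D": 0.90, "I": 0.45}

# Unweighted topline vs. the topline after reweighting
raw = sum(sample_shares[g] * support[g] for g in support)
reweighted = sum(sample_shares[g] * weights[g] * support[g] for g in support)

print(f"raw topline: {raw:.3f}, reweighted: {reweighted:.3f}")
```

Note that reweighting only shifts the topline to the extent the groups differ in candidate support; it can't fix a sample that is bad within each group.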
Did you ever stop to think that these pollsters have a really serious public confidence problem? Here we have FReepers imagining that these guys run stratified samples, when all they really have is a boiler-room operation that calls random numbers around the country until they get a few hundred respondents: a minimum of more than 100 who'll say they are Republicans, another minimum who'll say they are Democrats, and maybe 60 who'll swear they are independents.
That gives you a statistically valid floor of sorts, but just barely enough to cut down on accidents in your selection of numbers to call.
Unless someone pays them an awful lot of money, they don't do much more than that on any of these polls. And, as you know, the more characteristics you test for, the more sampling it takes to get statistically valid results for every cell in your matrix.
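That cell-count point is worth making concrete: crossing characteristics multiplies the number of cells, and the required sample grows with it. A quick sketch, where the characteristic list and the 30-respondent-per-cell floor are illustrative assumptions, not polling-industry standards:

```python
# How crossed characteristics blow up the sample you need.
# Both the characteristics and the per-cell minimum are hypothetical.
min_per_cell = 30  # assumed floor for a usable per-cell estimate

characteristics = {
    "party": 3,     # R / D / I
    "sex": 2,
    "age_band": 4,
    "region": 4,
}

cells = 1
for name, levels in characteristics.items():
    cells *= levels
    print(f"after crossing {name}: {cells} cells, "
          f"needs >= {cells * min_per_cell} respondents")
```

With just these four characteristics you are already at 96 cells and nearly 3,000 respondents, far beyond a typical few-hundred-respondent phone poll.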
I'd like to know ALL the characteristics, not just the R/D/I business. How about the non-responses? If you don't have those in your database, you don't even know whether either candidate has a popular following!