*************************************EXCERPT*******************************************
I'm flabbergasted that temperature data collection can be prone to so many errors in so many dimensions, from siting issues to human data-entry errors and misunderstanding of the limits of statistics, through to fabrications in the analysis (intentional or otherwise). How can any of it be taken seriously?
Has anyone made a list or flow diagram showing all the steps and all the places that errors have been found to creep in?
*****************************************EXCERPTS***************************************
crosspatch
It should not be hard to automate the discovery of the most blatant errors. A computer is pretty good at making comparisons in data. If a reading rises by some extreme amount relative to both the previous and subsequent readings, particularly at a time of day when temperature should not be rising, it should be flagged for scrutiny.
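The kind of automated comparison described above could be sketched in a few lines. This is only an illustration of the idea, not any agency's actual quality-control procedure: the 10 °C threshold, the `flag_spikes` function, and the sample readings are all assumptions chosen for the example.

```python
# Hypothetical sketch of the automated check described above: flag any
# reading that jumps implausibly relative to BOTH its neighbours.
# The threshold and the sample data are illustrative assumptions,
# not drawn from any real station record or official QC standard.

SPIKE_THRESHOLD_C = 10.0  # assumed maximum plausible hour-to-hour change


def flag_spikes(readings):
    """Return indices of readings that differ from both the previous
    and the subsequent reading by more than the threshold."""
    flagged = []
    for i in range(1, len(readings) - 1):
        prev_jump = readings[i] - readings[i - 1]
        next_jump = readings[i] - readings[i + 1]
        if abs(prev_jump) > SPIKE_THRESHOLD_C and abs(next_jump) > SPIKE_THRESHOLD_C:
            flagged.append(i)
    return flagged


# Illustrative hourly temperatures (degrees C) with one obvious
# data-entry spike at index 3:
hourly = [12.1, 12.4, 12.6, 31.2, 12.9, 13.0]
print(flag_spikes(hourly))  # the 31.2 reading at index 3 is flagged
```

A real check would also account for time of day, season, and nearby stations, but even this crude neighbour comparison would catch the most blatant transcription errors.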
I believe this shows the complete lack of scrutiny these data get. We are asked to accept lifestyle-modifying changes to our laws (income-modifying, too, I might add) based on these readings, yet they take no serious responsibility for making sure those data are correct. It is yet another indication of the apparent contempt in which the tax-paying citizen is held by these various government agencies and research organizations.
How can they expect us to believe a word they say when such obvious errors slip through, possibly on a daily basis?