***************************************EXCERPT********************************************
E.M.Smith says:
D. J. Hawkins says: I took a quick peek at the GISS website to try and understand how they crank out their numbers, and even a cursory glance was daunting.
Yup. Put me off for about 2 years. I kept saying SOMEBODY needs to look at this! Then one day I realized: I am somebody.
I've downloaded it to a Linux box and got it to run. It has issues, but I figured out how to get past them. Details on request, and posted on my web site. It required a long, slow pass through the code.
Has there been a clear presentation of the methodology somewhere?
No.
There has been a lot of presentation of the methodology. Much of it is in technical papers that present a method used in the code, but the code does more than (or sometimes less than, or sometimes simply something different from) what the papers describe.
It's a convoluted, complicated beast. Starter guide here:
http://chiefio.wordpress.com/gistemp/
I would think that once you nail down the method, no matter how many times you run the analysis the results should be the same.
You would think that. I thought that. It's not that.
The code is designed in such a way that EVERY time you run it (with any change of starting data AT ALL) you will get different results. And both the GHCN and USHCN input data sets change constantly. Not even just monthly; even mid-month, things just show up changed in the data sets.
So to talk about what GIStemp does at any point in time requires specification of the Vintage of data used TO THE DAY and potentially to the hour and minute.
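If you want your own runs to be comparable at all, you have to pin that vintage yourself. Here is a minimal sketch of one way to do it in Python (the input file names are hypothetical examples, and GIStemp itself records nothing like this):

    # Sketch: record the exact vintage of each input file before a run.
    # File names used below are hypothetical examples, not the GISS layout.
    import hashlib
    import json
    import time
    from pathlib import Path

    def record_vintage(input_files, manifest="vintage.json"):
        """Write a manifest of SHA-256 hashes and timestamps for the inputs."""
        entries = {}
        for name in input_files:
            p = Path(name)
            entries[name] = {
                "sha256": hashlib.sha256(p.read_bytes()).hexdigest(),
                "file_mtime": time.ctime(p.stat().st_mtime),
                "recorded_at": time.ctime(),
            }
        Path(manifest).write_text(json.dumps(entries, indent=2))
        return entries

    # record_vintage(["v2.mean", "ushcn.tavg"])  # hypothetical file names

Any later run can then be tied back to exactly the bytes it saw.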
Why?
Infill of missing data; homogenizing via the Reference Station Method; UHI adjustment via the Reference Station Method; a grid/box anomaly calculation that uses different sets of thermometers in the grid/box at the start and end times (so ANY change of data in the recent box can change the anomalies); and some more too. Toy sketches of the infill step and the grid/box problem follow below.
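To see why any upstream change propagates, here is a toy version of just the infill idea, in Python. This is NOT the GISS code: the real Reference Station Method distance-weights neighbors and offsets them over an overlap period, while this sketch just uses the mean neighbor anomaly. The point survives the simplification: a revision to ANY neighbor silently changes the filled value.

    # Toy infill in the spirit of (but much simpler than) the
    # Reference Station Method: fill a station's missing month from
    # the mean anomaly of its neighbors, re-centered on the station.

    def infill(target, neighbors):
        """Fill None entries in `target` using neighbor anomalies."""
        n_means = [sum(n) / len(n) for n in neighbors]
        known = [v for v in target if v is not None]
        t_mean = sum(known) / len(known)
        filled = list(target)
        for i, v in enumerate(target):
            if v is None:
                anom = sum(n[i] - m for n, m in zip(neighbors, n_means)) / len(neighbors)
                filled[i] = t_mean + anom
        return filled

    station = [10.0, None, 12.0]                  # one missing month
    print(infill(station, [[9.0, 10.0, 11.0]]))   # -> [10.0, 11.0, 12.0]
    print(infill(station, [[9.0, 10.5, 11.0]]))   # neighbor revised: fill shifts to ~11.33

So a mid-month revision to a neighbor station you never looked at changes YOUR station's record.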
If the assumptions regarding initial conditions are so fungible as to allow a reversal of the relative values of the anomalies at will, you don't have a scientific analytical tool, you have a propaganda tool.
Can I quote you on that? It's rather well said.
IMHO, the use of Grid / Box anomalies (calculated AFTER a lot of data manipulation, adjustment, averaging, homogenizing, etc., done on monthly temperature averages), mixed with changing what thermometers are in a grid/box in the present vs. the past, lets you tune the data such that GIStemp will find whatever warming you like. It's cleverly done (or, being generous, the bug is subtle enough that they missed it), and if a good programmer devotes about 2 years to it they can get to this point of understanding. Everyone else is just baffled by it. Draw your own conclusions.
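A toy illustration of the grid/box composition problem, with made-up numbers (not GISS output): let the baseline period average stations A and B, but have only A still reporting in the recent period. If B sits on cooler ground than A, the box "warms" even though neither station changed at all.

    # Toy grid/box anomaly: the thermometer set differs between the
    # baseline and the present, so the composition change alone
    # manufactures a trend. All numbers are made up for illustration.

    baseline = {"A": 12.0, "B": 8.0}   # both stations report in the base period
    recent   = {"A": 12.0}             # station B dropped out; nothing warmed

    base_mean   = sum(baseline.values()) / len(baseline)   # 10.0 C
    recent_mean = sum(recent.values()) / len(recent)       # 12.0 C

    print(f"box anomaly = {recent_mean - base_mean:+.1f} C")   # +2.0 C of "warming"

Neither station's temperature moved; only the set of thermometers did.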
I've tried explaining it to bright folks. A few get it. Most just get glazed. Some become hostile. I'll explain it to anyone who wants to put in the time, but it will take a couple of weeks (months if not dedicated), and few folks are willing to go there.
Judging from the look of the code, it was written about 30 years ago and has never been revisited (just had more glued on, often at the end of the chain). From that I deduce that either Hansen is unwilling to change his baby, or very few folks are willing to shove their brains through that particular sieve.
The bottom line is that the GIStemp code is DESIGNED to never be repeatable and to constantly mutate the results as ANY input data changes, and that makes ALL the output shift. It's part of the methodology in the design. Don't know if that's malice or stupidity.
It is my assertion that this data sensitivity is what GIStemp finds when it finds warming. The simple fact that as new data is added the past shifts to a warmer TREND indicates to me that the METHOD induces the warming, not the actual trend in the data. I've gone through the data a LOT and find 1934 warmer than 1998, with a method that IS repeatable. See:
http://chiefio.wordpress.com/category/dtdt/
http://chiefio.wordpress.com/category/ncdc-ghcn-issues/
Basically, 1934 and 1998 ought to stay constant relative to EACH OTHER even as new data is added, even IF you were adjusting the past cooler to make up for something or other (nominally UHI or TOBS; yes, I know, it sounds crazy to make the past cooler for a UHI correction, but it's what they do in many cases).
As it is, they jockey for relative position with a consistent, though stochastic, downward drift of the older relative to the newer. That tells me it's the method, not the data, that's warming.
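For contrast, here is the kind of repeatable check I mean, sketched in Python (in the spirit of the dT/dt work linked above, not its actual code, and with made-up temperatures): if each year is compared only against a FIXED baseline from the station's own history, appending new data cannot move 1934 relative to 1998.

    # Self-referenced anomalies against a fixed baseline: adding later
    # years cannot change the 1934 vs. 1998 ordering. Temperatures are
    # made-up illustrations, not real station data.

    def anomalies(record, base_years):
        base = sum(record[y] for y in base_years) / len(base_years)
        return {y: t - base for y, t in record.items()}

    record = {1934: 14.1, 1998: 14.0}
    before = anomalies(record, base_years=[1934])

    record[2010] = 14.3                         # new data arrives
    after = anomalies(record, base_years=[1934])

    assert before[1934] == after[1934] and before[1998] == after[1998]
    print("1934 minus 1998, before and after:",
          before[1934] - before[1998], after[1934] - after[1998])

A method with that property can still adjust the past, but it cannot silently reorder it every time a new month of data shows up.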
“Paging Dr. Lorenz....”