Free Republic
News/Activism

To: All
This comment is lengthy; it’s from someone who did some major work:

***************************************EXCERPT********************************************

E.M.Smith says:

December 25, 2010 at 9:42 pm

D. J. Hawkins says: I took a quick peek at the GISS website to try and understand how they crank out their numbers, and even a cursory glance was daunting.

Yup. Put me off for about 2 years. I kept saying “SOMEBODY needs to look at this!”… then one day I realized “I am somebody.”

I’ve downloaded it to a Linux box and got it to run. It ‘has issues’ but I figured out how to get past them. Details on request and posted on my web site. It required a long slow pass through the code…

Has there been a clear presentation of the methodology somewhere?

No.

There has been a lot of presentation of the methodology. Much of it is in technical papers that present a method that is used in the code, but the code does more than (or sometimes less than, or sometimes just something different from) what the papers describe.

It’s a convoluted, complicated beast. Starter guide here:

http://chiefio.wordpress.com/gistemp/

I would think that once you nail down the method, no matter how many times you run the analysis the results should be the same.

You would think that. I thought that. It’s not that…

The code is designed in such a way that EVERY time you run it (with any change of starting data AT ALL) you will get different results. And both the GHCN and USHCN input data sets change constantly. Not even just monthly, but even mid-month things just ‘show up’ changed in the data sets.

So to talk about “what GIStemp does” at any point in time requires specification of the “Vintage” of data used TO THE DAY and potentially to the hour and minute.
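
(A side note on pinning a vintage down: one way, hypothetically, is to fingerprint the input files before every run and keep that record alongside the results. A rough Python sketch of the idea is below; the file names are placeholders, not the actual GIStemp input paths.)

    # Hypothetical sketch: record a fingerprint of the GHCN/USHCN input files
    # alongside each run, so any result can be tied to an exact data vintage.
    # File names below are placeholders, not real GIStemp paths.
    import datetime
    import hashlib
    import json

    def vintage(paths):
        record = {"retrieved_utc": datetime.datetime.utcnow().isoformat()}
        for path in paths:
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    digest.update(chunk)
            record[path] = digest.hexdigest()
        return record

    print(json.dumps(vintage(["ghcn_v2.mean", "ushcn_monthly.dat"]), indent=2))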

Why?

Infill of missing data, homogenizing via the “Reference station method”, UHI via the “Reference Station Method”. Grid / Box anomaly calculation that uses different sets of thermometers in the grid/box at the start and end times (so ANY change of data in the recent “box” can change the anomalies…) and some more too…
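
A toy illustration of that last point (my numbers are invented, and this is not GIStemp’s actual algorithm, just the failure mode in miniature): if the box value comes from whichever thermometers happen to be in the box, and the mix in the baseline era differs from the mix today, the “anomaly” can move with no station warming at all.

    # Toy example with made-up numbers; not GIStemp's actual code.
    # Long-term mean temperature of each station in one grid/box (deg C);
    # station C sits at a lower, warmer site.
    station_mean = {"A": 8.0, "B": 10.0, "C": 14.0}

    def box_mean(stations):
        """Average whichever stations happen to report in a given era."""
        return sum(station_mean[s] for s in stations) / len(stations)

    baseline_box = box_mean(["A", "B"])   # thermometers present in the baseline era
    current_box = box_mean(["B", "C"])    # thermometers present now: A dropped, C added

    # +3.0 deg C of apparent "warming" purely from the change in station mix.
    print(current_box - baseline_box)

The real code tries to knit overlapping station records together first (the Reference Station Method), but any change in which records are present, or in their values, still ripples through the result.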

If the assumptions regarding initial conditions are so fungible as to allow a reversal of the relative values of the anomalies at will, you don’t have a scientific analytical tool; you have a propaganda tool.

Can I quote you on that? It’s rather well said…

IMHO, the use of “Grid / Box anomalies” (calculated AFTER a lot of data manipulation, adjustment, averaging, homogenizing, etc., done on monthly temperature averages…) mixed with changing what thermometers are in a “grid / box” in the present vs. the past lets you “tune” the data such that GIStemp will find whatever warming you like. It’s cleverly done (or subtle enough that they missed the “bug”… being generous), and if a good programmer devotes about 2 years to it, they can get to this point of understanding. Everyone else is just baffled by it. Draw your own conclusions…

I’ve tried explaining it to bright folks. A few ‘get it’. Most just get glazed. Some become hostile. I’ll explain it to anyone who wants to put in the time, but it will take a couple of weeks (months if not dedicated) and few folks are willing to ‘go there’.

Judging from the look of the code, it was written about 30 years ago and has never been revisited since (just more glued on, often ‘at the end’ of the chain). From that I deduce that either Hansen is unwilling to change “his baby” or very few folks are willing to shove their brains through that particular sieve…

The bottom line is that “the GIStemp code” is DESIGNED to never be repeatable and to constantly mutate the results as ANY input data changes, and that makes ALL the output shift. It’s part of the methodology in the design. Don’t know if that’s “malice” or “stupidity”…

It is my assertion that this data sensitivity is what GIStemp finds when it finds warming. The simple fact that, as new data is added, the past shifts toward a warmer TREND indicates to me that the METHOD induces the warming, not the actual trend in the data. I’ve gone through the data a LOT and find 1934 warmer than 1998, using a method that IS repeatable. See:

http://chiefio.wordpress.com/category/dtdt/

http://chiefio.wordpress.com/category/ncdc-ghcn-issues/

Basically, 1934 and 1998 ought to stay constant relative to EACH OTHER even as new data is added, even IF you were ‘adjusting the past cooler’ to make up for something or other (nominally UHI or TOBS – yes, I know, it sounds crazy to make the past cooler for UHI correction, but it’s “what they do” in many cases).

As it is, they jockey for relative position with a consistent, though stochastic, downward drift of the older relative to the newer. That tells me it’s the method, not the data, that’s warming.
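
The check itself is simple to state: take two published vintages of the annual anomalies, compute 1934 minus 1998 in each, and see whether the gap (which data added after 1998 should not touch) has drifted. A sketch, with invented placeholder values rather than actual GISS numbers:

    # Hypothetical check; anomaly values below are invented placeholders,
    # not actual GISS output.
    vintage_early = {1934: 1.25, 1998: 1.23}   # deg C anomalies, older vintage
    vintage_late = {1934: 1.06, 1998: 1.32}    # same two years, later vintage

    gap_early = vintage_early[1934] - vintage_early[1998]
    gap_late = vintage_late[1934] - vintage_late[1998]

    print(f"1934 minus 1998, earlier vintage: {gap_early:+.2f} C")
    print(f"1934 minus 1998, later vintage:   {gap_late:+.2f} C")
    if (gap_early > 0) != (gap_late > 0):
        print("Relative ranking of 1934 and 1998 flipped between vintages.")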


22 posted on 01/07/2011 10:43:26 AM PST by Ernest_at_the_Beach ( Support Geert Wilders)


To: Ernest_at_the_Beach

“Paging Dr. Lorenz....”


29 posted on 01/07/2011 11:30:23 AM PST by Gondring (Paul Revere would have been flamed as a naysayer troll and told to go back to Boston.)
