Go see it all, and the commentary following.
But we all know it’s true, right?
You put manure in, you get manure out.
No matter what type of data was put in, Al Gore’s hockey stick came out.
Sigh. There’s an entire subfield of computer science called “numerical analysis” that deals with the fact that floating point computations in computers are imprecise, due to the finite precision of the number representation, and that they continuously introduce tiny errors.
There are scientists who are formally trained in numerical analysis to understand and minimize the impact of these tiny errors in models in which trillions of calculations are performed iteratively for millions of cycles. If not properly accounted for, these tiny errors can quickly overwhelm any legitimate model results and render the model useless.
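A toy sketch of those tiny errors, in Python with ordinary IEEE-754 doubles: 0.1 has no exact binary representation, so every addition carries a small rounding error, and repeating the addition millions of times lets the drift accumulate into something measurable.

```python
# 0.1 cannot be represented exactly in binary floating point, so each
# addition introduces a tiny rounding error. Repeat the addition ten
# million times and the accumulated drift becomes visible.
total = 0.0
for _ in range(10_000_000):
    total += 0.1

print(total)                 # close to, but not exactly, 1000000.0
print(total - 1_000_000.0)   # the accumulated error
</imports>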
Given the completely sloppy, slipshod, and downright fraudulent nature of the way climate “scientists” go about their business, it wouldn’t surprise me in the least if they don’t bother to consult numerical analysts regarding the validity of their computer models with respect to errors introduced by floating point units.
CPUs have slight differences in their floating point computation units. They are supposed to follow an IEEE standard, but there are edge cases where some don’t. A worst case example is the bug in the Pentium processor a decade or so ago.
Even on exactly the same processor, different compilers, different versions of the same compiler, or different compiler options on the same version of the same compiler will generate slightly different machine operation codes.
Over millions and even billions of repeated floating point operations, these slight differences accumulate, and the final result can change significantly. A reputable modeler freezes everything once a model is validated, and only accepts any changes after validation.
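One way to see why merely reordering operations (which compilers routinely do) changes results: floating point addition is not associative. A minimal Python illustration with three ordinary decimal fractions:

```python
# Floating point addition is not associative: grouping the same three
# numbers differently produces two different doubles.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)

print(a)        # 0.6000000000000001
print(b)        # 0.6
print(a == b)   # False
```

Either grouping is a “correct” compilation of `0.1 + 0.2 + 0.3`, yet the results differ in the last bit; iterate billions of times and those last bits matter.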
The problem: climate modelers can’t validate their models in the first place. They don’t have reliable data to do so.
There are two posts that point out the basic problem.
I will illustrate the problem. Let’s say you have four-digit precision, with scientific notation (sixteen-digit precision is routine, and thirty-two digit precision or higher may be necessary for special applications).
The scientific notation part keeps track of the order of magnitude of the number, e.g., ones, billions, billionths, etc.
OK. Now add 1 to X and repeat. With four-digit precision you go up to 9999, and then to 10,000, or 1.000E+4. Add 1 to 1.000E+4 and you get 1.000E+4 again, because the 1 falls below the fourth significant digit. Do this 5 million more times, and you still get 1.000E+4.
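The same absorption happens at real hardware precision. A double carries roughly sixteen significant decimal digits, so adding 1 to 10^16 is the double-precision analogue of adding 1 to 1.000E+4 with four-digit precision (a minimal Python sketch):

```python
# At a magnitude of 1e16, the spacing between adjacent doubles is 2,
# so adding 1.0 changes nothing -- the 1 is simply absorbed.
x = 1e16
print(x + 1.0 == x)     # True

# Five million more additions still change nothing.
for _ in range(5_000_000):
    x += 1.0
print(x == 1e16)        # True
```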
This reveals one of the problems with “Big Data”: lots and lots of data, each datum of which is small relative to the order of magnitude of the total. There are strategies to deal with this, but there is a certain culture among climatologists of believing that because they have lots of data (e.g., hourly readings at thousands of thermometers over thirty years), they are gods. We have a similar problem with exchange rates, stock prices and so forth in financial markets. But just because you have high-frequency data doesn’t mean you have that much more information.
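One of those strategies is compensated (Kahan) summation, which carries the rounding error of each addition along in a second variable instead of discarding it. A sketch of the standard algorithm in plain Python, fed a sum where naive addition loses every small term:

```python
import math

def kahan_sum(values):
    """Kahan compensated summation: recover the low-order bits
    that each naive addition throws away."""
    total = 0.0
    c = 0.0                  # running compensation for lost digits
    for v in values:
        y = v - c            # restore what was lost on the last step
        t = total + y        # big + small: low-order bits of y are lost
        c = (t - total) - y  # algebraically recover what was lost
        total = t
    return total

xs = [1e16] + [1.0] * 10_000
print(sum(xs))               # naive: every 1.0 is absorbed
print(kahan_sum(xs))         # compensated: recovers the exact sum
print(math.fsum(xs))         # Python's exact reference summation
```

Here naive summation returns 1e16 exactly, as if the ten thousand 1.0s never existed, while the compensated sum agrees with `math.fsum`.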
The next problem in climatology is the kind of calculations they perform. They use power functions. These turn big numbers into very, very big numbers, which are challenging to handle inside a computer with floating point numbers. We economists have our own sin, which is to invert large matrices.
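The power-function half of that is easy to illustrate in Python. A double can represent every integer only up to 2^53; past that, powers silently lose their low-order digits, and past roughly 1.8 × 10^308 they overflow to infinity outright:

```python
# 10**23 exceeds 2**53, so no double can hold it exactly; the nearest
# double differs from the true integer in the low-order digits.
print(float(10**23) == 10**23)   # False
print(int(float(10**23)))        # the low-order digits are wrong

# And past the maximum double (~1.8e308), results overflow to infinity.
print(1e308 * 10.0)              # inf
```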
Yes, because of computational uncertainty, it is sometimes important to perform calculations using different platforms, operating systems and application software. And, if you get significantly different results, to think about what you’re doing that the computers find offensive.