Posted on 09/15/2013 9:20:46 AM PDT by Sub-Driver
You are correct. The precision of the outcome cannot exceed the precision of the measurements.
If you cannot measure to decimal precision, the calculated value cannot be more precise than a whole number.
Doctor Smirnoff......... Surveying 201
Yeah, I thought of that, and it's probably what happened. Sub-Driver happened to catch it before the "correction". I would hate to think that a Freeper would distort such things, we don't need more propaganda to balance their propaganda.
They have since corrected it online.
If you use "corrected" in a very loose sense, yes.
As you noted, the other story I linked to shows it as 0.2 degC, and your search of the report itself confirms that number.
It would be funny (if it weren't so tragic) that the papers want to project such precision into the future when they can't even agree on the past.
Har! Yes, strand all these phoney scientist leeches on one of their floating, supposedly melting bergs with the polar bears they are sooo concerned about!
Thanks for the ping.
The only melting these scientists are concerned about is the melting away of future research grants: "paging Dr. Stadler..." lol.
YES YOU CAN.
This is what phased array radio telescopes do.
You have seen the pictures: hundreds of small radio telescopes in the Arizona desert pointed at the same object.
Each telescope captures the signal, but the background noise is maybe 10 times the signal strength, so what you see from one telescope is noise hiding the signal.
Now digitally add the signals from, say, 100 telescopes together. The signal adds coherently and is boosted 100 times, while the random noise adds incoherently and grows only about 10 times (the square root of 100), so the signal-to-noise ratio improves tenfold and now you can see what you are looking at.
Same thing happens with averages. Keep repeating the experiment under the same conditions and the random noise cancels out, while the “signal” (the average) does not.
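Here is a minimal sketch of that claim in Python. The signal and noise levels are made-up numbers for illustration (noise ten times the signal, as above), not real instrument parameters; the point is just that averaging N independent readings of the same target shrinks zero-mean noise by roughly the square root of N.

```python
import random
import statistics

random.seed(42)

TRUE_SIGNAL = 1.0    # the one quantity every "telescope" is pointed at (made up)
NOISE_SIGMA = 10.0   # random noise ~10x the signal strength (made up)

def one_reading():
    """One noisy measurement: the signal buried under zero-mean random noise."""
    return TRUE_SIGNAL + random.gauss(0.0, NOISE_SIGMA)

for n in (1, 100, 10_000):
    readings = [one_reading() for _ in range(n)]
    avg = statistics.mean(readings)
    print(f"N = {n:>6}: average = {avg:+.3f}   (true signal = {TRUE_SIGNAL})")
```

With one reading the signal is invisible; with 100 the average lands within about a unit of it; with 10,000 it is close. Note the precondition doing all the work: every reading is of the same target, and the noise is random with zero mean.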
Those examples are Apples & PCs. What is being averaged for the temperature “averages” are data points from multiple instruments in separate locations at a single point in time, WITHOUT A COMMON TARGET. The instruments are not identical, not set up the same, not read the same, and not even read at the same time (“correction factors” are used to “adjust” for differences in reading times); stations are added or removed at will; stations are moved spatially over time; instruments are not only replaced, but replaced by different types of instrumentation; and other variables are not controlled for. Missing data points are often “calculated” and added to the data sets.
That is very different than all instruments, all standardized and precisely arrayed, simultaneously collecting a signal from a single point.
It is also different from using the same set of equipment, though perhaps in different configurations, to run an experiment several times while controlling for known variables and accounting for known equipment limitations and error factors. Even then, experimental results have limits that no amount of computation can remove.
The radio telescope example is a matter of separating a signal from noise against a relatively well-known background; the temperature record is a matter of generating data and noise and calling it all signal. To be analogous, each of the phased-array dishes would have to be OUT of phase, and each aimed at a different object.
These people don’t even have a precise and accurate zero point. They rather arbitrarily add or delete equipment locations; they use interpolation to “measure” huge, geographically dissimilar areas that are not instrumented, and even use that interpolated “data point” to interpolate the data point for another, contiguous, non-instrumented area; and they add all of those into their “average”.
It’s a matter of the differences between “precise”, “accurate”, and “significant”. Garbage to 3 decimal places is still garbage, and stinks no matter how it’s sliced and diced.
And significance cannot be increased by adding more insignificant “data”.
When combining measurements with different degrees of accuracy and precision, the accuracy of the final answer can be no greater than the least accurate measurement. This principle can be translated into a simple rule for addition and subtraction: When measurements are added or subtracted, the answer can contain no more decimal places than the least accurate measurement.
Multiplication and Division With Significant Figures
The same principle governs the use of significant figures in multiplication and division: the final result can be no more accurate than the least accurate measurement. In this case, however, we count the significant figures in each measurement, not the number of decimal places: When measurements are multiplied or divided, the answer can contain no more significant figures than the least accurate measurement.
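A quick Python illustration of those two rules (round_to_sig_figs is a helper written for this post, not a standard library function, and the sample numbers are arbitrary):

```python
import math

def round_to_sig_figs(x: float, sig: int) -> float:
    """Round x to `sig` significant figures (helper for this example only)."""
    if x == 0:
        return 0.0
    return round(x, sig - int(math.floor(math.log10(abs(x)))) - 1)

# Addition/subtraction: answer keeps no more DECIMAL PLACES than the
# least precise term. 349.0 has only one decimal place, so:
total = 12.52 + 349.0 + 8.24          # raw result: 369.76
print(round(total, 1))                # report 369.8

# Multiplication/division: answer keeps no more SIGNIFICANT FIGURES than
# the least precise factor. 2.4 has only two significant figures, so:
product = 2.4 * 3.156                 # raw result: 7.5744
print(round_to_sig_figs(product, 2))  # report 7.6
```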
To present a temperature average to the hundredth of a degree centigrade from instruments that have an accuracy of, AT MOST, a tenth of a degree is disingenuous at best.
Even if your individual measurements are poor, if you repeat them enough times the random errors tend to average out, and the average is still a decent data point.
Larger errors cause a wider dispersion of the data, so the standard deviation will be larger.
However, as you well know, with more data points the standard error of the mean narrows.
A narrowing standard error means higher confidence levels and greater precision in the average.
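A small sketch of that distinction in Python, with made-up numbers (TRUE_VALUE and SIGMA are hypothetical, not from any real station): the spread of individual readings stays near the instrument noise, but the standard error of the mean shrinks like one over the square root of N, provided the errors really are random, independent, and zero-mean.

```python
import math
import random
import statistics

random.seed(7)

TRUE_VALUE = 20.0   # hypothetical "true" temperature, deg C (made up)
SIGMA = 0.5         # hypothetical random instrument error, deg C (made up)

for n in (10, 100, 1000):
    sample = [random.gauss(TRUE_VALUE, SIGMA) for _ in range(n)]
    sd = statistics.stdev(sample)   # dispersion of readings: stays near SIGMA
    sem = sd / math.sqrt(n)         # uncertainty of the AVERAGE: narrows with n
    print(f"N = {n:>4}: std dev = {sd:.3f}   std error of mean = {sem:.4f}")
```

None of that gain survives if the errors are systematic rather than random.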
Okay; we’re both right: resolution vs. accuracy. However, their “average” is still bogus and insignificant, and statistically no different from baseline.
Resolution is NOT the same thing as accuracy.
The ACCURACY is indeed highly questionable, since it looks like they cannot even quote a report correctly.
Key word there is “random”.