Free Republic
Browse · Search
General/Chat
Topics · Post Article

To: Oldeconomybuyer
“Generally”, well almost.

Models are always an abstraction. If they were not an abstraction then they would be the real thing. Nature has an infinite number of variables. That makes it impossible to account for every combination of outcomes using a computer model.

Just take Hurricane Ian, for instance. Meteorologists were complaining that no model was able to predict landfall. Journalists, who are trained to write at a sixth grade level and don’t know excrement about science, mathematics, or modeling, probably didn’t quite understand or report that the models only converged on Ian’s landfall as Ian made landfall. The timeframe shortened and the data became more accurate, or should we say, relevant.

Knowing this, a question needs to be asked about the climate models for the Pacific Northwest. At what point in time did the models become accurate? One year? One month? One day? One minute? When was there enough real data to make an accurate prediction?

For instance, how accurate was the prediction for temperature, since that is easier for people to understand than pressure? (Although the article doesn’t discuss temperature.) Within 1 C? 3 C? That’s a huge margin of error when these quacks talk about a time horizon of decades and the effects of a 1.3 C increase in temperature. Understand that temperature is just one result that comes out of a model; there are many other results. How many of the results were accurate at the same time, and what was the degree of accuracy?

If we have five different types of results that are each slightly inaccurate, then the errors compound. What about ten, or fifteen? And that’s just trying to interpret the model’s predictions.
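The compounding described above can be sketched with the standard error-propagation rule: if several outputs each carry an independent relative error and feed into a derived quantity, the combined error grows roughly with the square root of the number of outputs (and faster still if the errors are correlated). A minimal sketch in Python, with an invented 3% per-output error purely for illustration:

```python
import math

def combined_relative_error(per_output_error, n_outputs):
    """Root-sum-square of n independent, equal relative errors."""
    return math.sqrt(n_outputs) * per_output_error

# Hypothetical figures: each output individually off by 3%.
for n in (5, 10, 15):
    print(n, "outputs ->", round(combined_relative_error(0.03, n), 4))
```

Five outputs at 3% each already combine to roughly 6.7%; fifteen push past 11%, even under the generous assumption that the errors are independent.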

There is another sticky problem: what is the degree of accuracy of the model’s inputs? How many measurements are there? How many locations do the measurements come from? At what altitude in the atmosphere are they taken? There are an infinite number of combinations.

Try this experiment. Run around your house with an infrared thermometer and take a bunch of temperatures. Point it at the floor, ceiling, walls, and different surfaces. You’re dealing with a closed system, yet you will probably get quite a few different readings. Which is to say: in the open system of nature, it is highly probable that the range of input measurements for a model will vary significantly. That changes the output of the models.
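The thermometer experiment can be mimicked numerically. All the numbers below are invented for illustration: a “true” room temperature of 21 C, surface readings scattered by up to ±2 C, and a toy nonlinear response standing in for a real model. The only point is that spread in the inputs propagates, and can be amplified, in the outputs:

```python
import random

random.seed(0)  # reproducible sketch

# Invented numbers: true room temperature 21 C, each surface
# reading off by up to +/- 2 C (emissivity, angle, calibration).
readings = [21.0 + random.uniform(-2.0, 2.0) for _ in range(20)]

def toy_model(temp_c):
    """A stand-in nonlinear response; NOT a real climate model."""
    return 0.5 * temp_c ** 2 - 3.0 * temp_c

outputs = [toy_model(t) for t in readings]
print("input spread (C):", round(max(readings) - min(readings), 2))
print("output spread:   ", round(max(outputs) - min(outputs), 2))
```

Because the toy response is nonlinear, a few degrees of input spread becomes a much larger spread in the output, which is the poster’s point about measurement uncertainty feeding the model.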

Just remember, “progress” must be shown to get more funding and grants. This article does not specify any measurable degree of accuracy. The ignoramuses in government who are blinded by communist utopia have no understanding of quantitative measurement, statistics, and error. They are impressed by qualitative narratives that align with their “studies” degree indoctrination.

15 posted on 10/12/2022 6:02:28 AM PDT by ConservativeInPA ( Scratch a leftist and you'll find a fascist )


To: ConservativeInPA

It is not just the number of variables; it is also the underlying equations.

The atmospheric models rely on non-linear equations. Even non-linear equations with only three variables can be unpredictable. With these equations, small errors are amplified, such that two simulations starting with small differences will have diverging results after a period of time. And one can never measure to the accuracy needed to make truly long-term predictions.

What amazes me is that the climatologists should know this, unless they really don’t understand the mathematical nature of the equations they are trying to solve.


19 posted on 10/12/2022 6:46:35 AM PDT by kosciusko51



FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson