Living in Florida, I have a personal interest in hurricane forecasting models.
Everybody has seen the multiple ‘spaghetti’ models the Weather Channel and others use to predict where a storm will go, both while it is brewing and after it finally forms.
It seems to me that they could take data from years past showing where an actual hurricane or hurricanes went, from start to finish, then use that data to fine-tune a model to replicate the actual path.
Then test it against other past hurricanes, see what the deviations are, and tweak it some more....
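Something like this toy sketch is what I have in mind -- tune a crude model on one past storm, then check the deviation on another. The 6-hourly positions below are invented, and the polynomial-in-time "model" just stands in for a real track model; an actual attempt would use archived best-track data such as HURDAT2.

# Toy sketch: "fine-tune" a crude track model on one past hurricane, then
# measure how far it deviates on a different past hurricane. Positions are
# invented; real work would use archived best-track data (e.g. HURDAT2).
import numpy as np

# Hypothetical 6-hourly positions (lat, lon) for two past storms.
storm_a = np.array([[23.0, -80.0], [23.8, -81.2], [24.9, -82.1],
                    [26.2, -82.7], [27.8, -83.0], [29.5, -82.9]])
storm_b = np.array([[22.5, -79.5], [23.2, -80.8], [24.1, -81.9],
                    [25.3, -82.8], [26.8, -83.3], [28.4, -83.4]])

def fit_track(track, degree=2):
    # Crude stand-in for a real model: polynomial in time for lat and lon.
    t = np.arange(len(track))
    return np.polyfit(t, track[:, 0], degree), np.polyfit(t, track[:, 1], degree)

def mean_deviation(model, track):
    # Mean absolute error, in degrees of lat/lon, between model and actual track.
    t = np.arange(len(track))
    pred = np.column_stack([np.polyval(model[0], t), np.polyval(model[1], t)])
    return np.abs(pred - track).mean()

model = fit_track(storm_a)                                      # tune on one past storm
print("fit error on storm A:", mean_deviation(model, storm_a))
print("deviation on storm B:", mean_deviation(model, storm_b))  # test on another

If the deviation on storm B stays small across many held-out storms, the tuning is doing something; if not, back to tweaking.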
Red Badger wrote: “It seems to me that they could take data from years past showing where an actual hurricane or hurricanes went, from start to finish, then use that data to fine-tune a model to replicate the actual path. Then test it against other past hurricanes, see what the deviations are, and tweak it some more....”
That’s pretty much what they do.
Even very SMALL systems present so much chaos that modeling requires generalizations that reduce fidelity, and hurricanes are MAMMOTH, HIGHLY chaotic systems. The difficulty of modeling them at all accurately is evident in how MUCH human effort has been applied to the task, and -- even with modern technology and the massive number-crunching power available -- we still fall well short of a good result; we still rely on real-time measurements and frequent updates to hone our predictions, and the degree of our remaining inaccuracy is evident in the "fan" shape of those moment-by-moment forecasts.
Too, there are factors that may influence the system more heavily than we know, and they would be given too little weight in the model, so -- again -- the fidelity of the model suffers. And there's what Briggs highlights: those factors "thought by the modeler to modify the probability of the observable Y..."
But DO they REALLY? Just because the modeler thinks so? Is it a valid assumption, or only that specific modeler's notion?
"It seems to me that they could take data from years past where an actual hurricane or hurricanes went, from start to finish, then use that data to fine tune a model to replicate the actual data path."
I respect the thought, but no two systems are EVER alike; the myriad variables are never the same; ocean surface temperature, humidity, prevailing winds, air temperatures at-altitude, winds aloft, position of the jet stream, other passing storm fronts, solar gain, time of day... it just goes on and on and on. Every last factor would have to be IDENTICAL at every place along the storm's route of travel for us to have any chance that data from a past hurricane would be reusable in a predictive context, and that's God level improbable.
Joe Bastardi is the only meteorologist I’ve seen do anything close to that.
“It seems to me that they could take data from years past showing where an actual hurricane or hurricanes went, from start to finish, then use that data to fine-tune a model to replicate the actual path.
Then test it against other past hurricanes, see what the deviations are, and tweak it some more....”
Modelers regularly do this. It’s called “backfitting”, and it is probably already baked into the projections you are seeing on TV. The problem is that there are a thousand different ways to backfit your model to past data, and no way to know which (if any) will make your model more accurate. Even if you test all the different ways to backfit against historical data, you run into the problem that models which successfully reproduce historical data aren’t necessarily any more accurate at predicting future data.
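To make that last point concrete, here is a toy sketch (invented data; polynomial degree standing in for the “thousand different ways” to tweak): as the backfit matches the historical portion of a track more and more closely, its error on the portion it has not seen can get worse, not better.

# Toy sketch of the backfitting trap: more flexible backfits (higher degree)
# match the "historical" part of an invented track ever more closely, yet can
# predict the later part of the track worse. All numbers here are made up.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(12.0)
lat = 22.0 + 0.8 * t + 0.02 * t**2 + rng.normal(0, 0.3, t.size)

t_past, lat_past = t[:8], lat[:8]       # portion we pretend we already observed
t_future, lat_future = t[8:], lat[8:]   # portion we actually want to predict

for degree in (1, 2, 5, 7):
    coeffs = np.polyfit(t_past, lat_past, degree)   # backfit to the "history"
    hist_err = np.abs(np.polyval(coeffs, t_past) - lat_past).mean()
    pred_err = np.abs(np.polyval(coeffs, t_future) - lat_future).mean()
    print(f"degree {degree}: error on history {hist_err:.3f}, "
          f"error on future {pred_err:.3f}")

The higher-degree fits drive the historical error toward zero while the forward error grows; that is the sense in which reproducing the past is no guarantee of predicting the future.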