Posted on 10/25/2022 11:51:06 AM PDT by HKMk23
Seventy-some research groups were given identical data and asked to investigate an identical question. The groups did not communicate. Details are in the paper “Observing Many Researchers Using the Same Data and Hypothesis Reveals a Hidden Universe of Uncertainty”, by some enormous number of authors.
As is the wont of sociologists, each group created several models, about 15 on average. There were 1,253 different models from the seventy groups. Each was examined after the fact, and it was discovered that no two models were the same.
(Excerpt) Read more at wmbriggs.com ...
“Models” are just video games for scientists...................
Some models display sexy clothes don’t they?
I dunno. Ask Tom Brady...................😉
Yes, this is a blog.
No, I don’t know the guy; I linked to this article from Steve Kirsch’s Newsletter on Substack (which is VERY well worth your time if you’ve got the least trepidation about The Vax).
https://stevekirsch.substack.com/
FAR more so, it turns out, than anybody has ever been willing to admit; even the “good” scientists truly believe they’re being as honest as possible in constructing their models. The manifest wreckage of over 1200 models purporting to accurately simulate the same fragment of Reality, yet being — every one of them — entirely unique...
Succinctly: Models DON’T.
Living in Florida, my personal interest is hurricane forecasting models.
Everybody has seen the multiple ‘spaghetti’ models the Weather Channel and others use when a storm is brewing and after it finally forms, to predict where it will go.
It seems to me that they could take data from years past where an actual hurricane or hurricanes went, from start to finish, then use that data to fine tune a model to replicate the actual data path.
Then test it with other past hurricanes and see what the deviations are and tweak it some more..............
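The fit-then-test idea above is essentially holdout validation: tune on some past storms, check against others. A minimal sketch in Python, assuming made-up storm positions and a toy polynomial track model (storm_a, storm_b, and the fitted curves are all invented for illustration, not real hurricane data or any forecast center's actual method):

```python
# A minimal sketch of "fine-tune on one past storm, test on another".
# The tracks are made-up illustrative numbers, not real hurricane records,
# and the "model" is just a polynomial fit of position against time.
import numpy as np

# Hypothetical past tracks: hours since formation -> (lat, lon)
hours = np.arange(0, 72, 6)
storm_a = np.column_stack([20 + 0.15 * hours, -70 + 0.10 * hours])   # training storm
storm_b = np.column_stack([20 + 0.12 * hours, -70 + 0.14 * hours])   # held-out storm

# "Fine-tune" the model on storm A: fit lat(t) and lon(t) with low-order polynomials.
lat_fit = np.polyfit(hours, storm_a[:, 0], 2)
lon_fit = np.polyfit(hours, storm_a[:, 1], 2)

# "Test it with other past hurricanes": predict storm B's track with the tuned model
# and measure the deviation (in degrees) at each forecast time.
pred_b = np.column_stack([np.polyval(lat_fit, hours), np.polyval(lon_fit, hours)])
deviation = np.linalg.norm(pred_b - storm_b, axis=1)

print("mean track error (deg):", deviation.mean())
print("error at end of track (deg):", deviation[-1])
```

Even in this toy version, the deviation grows with lead time, which is exactly the widening "fan" you see in the real forecast cones.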
ping
https://www.the-scientist.com/news-opinion/q-a-why-elife-is-doing-away-with-rejections-70667?
DP: We wanted to take advantage of the change that’s happening anyway in the research ecosystem around people posting preprints and sharing their work early. Once that world becomes the norm—and we do feel that that is the way things are headed—then the obvious next step is, what is the best way of reviewing? I don’t think anyone would claim that the current system of going through multiple journals and rounds of review at different venues makes any sense in this current world where you’re sharing preprints early. And so the eLife model is a way of saying, well, in this new world where research is shared early, what is the best way in which peer review can be applied? And this [new] version, we think, makes the most sense in that world.
The obsession with journal title and the venue in which you publish in biomedical literature really has to end, and we feel that this is an opportunity to do that. . . . Because when work is shared early, then the way it’s reviewed can be radically different. . . . In this model, what we’re saying is that having a single journal title that assigns quality, or assigns that research on the hierarchy, is incredibly inefficient. And so instead, a model where you actually assess and post reviews publicly—it’s a much fairer and faster and more equitable way of publishing...
This is beginning to make sense now.
https://www.the-scientist.com/news-opinion/renee-wegryzn-tapped-to-head-arpa-h-70478?
...The new agency will support a broad range of programs aimed at preventing, detecting, and treating a variety of diseases, including cancer, according to the White House’s statement. However, rather than focusing on early research, STAT reports that ARPA-H will investigate higher-level solutions such as commercializing technology. Modeled on DARPA, the new agency is intended to speed the development of treatment and cures for diseases.
Do you think maybe it's all about doing away with the emergency authorization step to make sure everybody's immune system gets marginalized unequally? Based on their perceived lack of success with the covid thing, it sounds like a likely strategy to put some muscle (no pun meant) into the program, and those patents do put money in their pockets.
Red Badger wrote: “It seems to me that they could take data from years past where an actual hurricane or hurricanes went, from start to finish, then use that data to fine tune a model to replicate the actual data path. Then test it with other past hurricanes and see what the deviations are and tweak it some more..............”
That’s pretty much what they do.
Even very SMALL systems present so much chaos that modeling requires generalizations that reduce fidelity, and hurricanes are MAMMOTH, HIGHLY chaotic systems. The difficulty of modeling them at all accurately is evident in the fact that MUCH human effort has been applied to the task, and -- even with modern technology, and the massive number-crunching power available -- we still have fallen well short of a good result; we still rely on real-time measurements and frequent updates in our efforts to hone our predictions, and the degree of our remaining inaccuracy is evident in the "fan" shape of those moment-by-moment forecasts.
Too, there are factors that may influence the system more heavily than we know, and they would be given too little influence in the model, so -- again -- the fidelity of the model suffers. And there's what Briggs highlights: those factors "thought by the modeler to modify the probability of the observable Y..."
But DO they REALLY? Just because the modeler thinks so? Is it a valid assumption, or only that specific modeler's notion?
"It seems to me that they could take data from years past where an actual hurricane or hurricanes went, from start to finish, then use that data to fine tune a model to replicate the actual data path."
I respect the thought, but no two systems are EVER alike; the myriad variables are never the same: ocean surface temperature, humidity, prevailing winds, air temperature at altitude, winds aloft, position of the jet stream, other passing storm fronts, solar gain, time of day... it just goes on and on and on. Every last factor would have to be IDENTICAL at every place along the storm's route of travel for us to have any chance that data from a past hurricane would be reusable in a predictive context, and that's God-level improbable.
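For what it's worth, you can see that sensitivity in even the simplest chaotic toy. A minimal sketch (the logistic map here is only a stand-in for the atmosphere, nothing like a real hurricane model): two runs that start one part in a million apart end up nowhere near each other.

```python
# A small sketch of why "every last factor would have to be IDENTICAL":
# in a chaotic system, two runs that start almost exactly the same drift apart.
# The logistic map is purely illustrative; it is not a weather model.

r = 3.9            # parameter in the chaotic regime
x = 0.500000       # first run's starting condition
y = 0.500001       # second run, differing by one part in a million

for step in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: difference = {abs(x - y):.6f}")
```

Six decimal places of agreement at the start buys only a few dozen steps of agreement at the end, which is the forecasting problem in miniature.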
Joe Bastardi is the only meteorologist I’ve seen do anything close to that.
“sociologists”
That might be part of the problem...
The problem is inherent in any complex dynamic system. Beyond a very low level of complexity, such systems simply cannot be predicted mathematically with any certainty.
The sad thing is that we knew that long before we developed computers capable of attempting to model these systems. Yet we’ve taken up this lost cause anyway.
Some models work just fine for me, especially after a clear-eyed examination. 😋
Excellent.
The article points to some basic rules to view models through:
1. All models only say what they are told to say.
2. Science models are nothing but a list of premises, tacit and explicit, describing the uncertainty of some observable.
I would add a third: Models don’t know what they don’t know.
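Rule 1 is easy to see in a toy example. A minimal sketch, with the data and both "models" invented purely for illustration: the same numbers, pushed through two different but equally defensible premises about which factor matters, give two different answers.

```python
# A minimal sketch of rule 1 ("models only say what they are told to say"):
# the same synthetic data, run through two different model premises, yields
# two different conclusions about the effect of x. Everything here is invented.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
z = x + rng.normal(scale=0.5, size=200)   # a second factor, correlated with x
y = 0.8 * z + rng.normal(size=200)        # outcome actually driven by z, not x

# Modeler 1's premise: y depends on x alone.
slope1 = np.polyfit(x, y, 1)[0]

# Modeler 2's premise: account for z first, then look at what x adds.
resid = y - np.polyfit(z, y, 1)[0] * z
slope2 = np.polyfit(x, resid, 1)[0]

print("effect of x under premise 1:", round(slope1, 3))
print("effect of x under premise 2:", round(slope2, 3))
```

Neither modeler is lying; each model faithfully says what its premises told it to say, which is how seventy honest teams can produce 1,253 honest, mutually incompatible models.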
What bothers me is how much weight models (best guess predictions, aka science fiction) carry over actual observations. The significant deviation of climate models from actual observed data comes to mind.
This is why Freeman Dyson would not accept the validity of such models.
Do you have her data points?
“It seems to me that they could take data from years past where an actual hurricane or hurricanes went, from start to finish, then use that data to fine tune a model to replicate the actual data path.
Then test it with other past hurricanes and see what the deviations are and tweak it some more.....”
Modelers regularly do this. It’s called “backfitting”, and it is probably already baked into the projections you are seeing on TV. The problem is, there are a thousand different ways to backfit your model to past data, and no way to know which (if any) will make your model more accurate. Even if you test all the different ways to backfit against historical data, you run into the problem that models which successfully reproduce historical data aren’t necessarily any more accurate at predicting future data.
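A minimal sketch of that last point, with purely synthetic numbers standing in for "historical data" (nothing here resembles an actual hurricane model): the heavily backfit model matches the past better and the future worse.

```python
# A rough sketch of the backfitting pitfall: a model tuned tightly to past data
# can match history better and still predict the future worse than a simpler one.
# The numbers are synthetic and purely illustrative.
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 40)
truth = np.sin(t)
past = truth[:30] + rng.normal(scale=0.3, size=30)     # "historical" observations
future = truth[30:] + rng.normal(scale=0.3, size=10)   # what actually happened next

modest = Polynomial.fit(t[:30], past, deg=3)     # a restrained backfit
tweaked = Polynomial.fit(t[:30], past, deg=12)   # tweaked until history looks nearly perfect

for name, model in [("modest", modest), ("tweaked", tweaked)]:
    hindcast_err = np.mean((model(t[:30]) - past) ** 2)
    forecast_err = np.mean((model(t[30:]) - future) ** 2)
    print(f"{name:8s}  error on past: {hindcast_err:.3f}   error on future: {forecast_err:.3f}")
```

In a run like this, the tweaked fit hugs the historical points more closely, then goes badly wrong the moment it has to extrapolate; the modest fit looks worse in hindsight and holds up better looking forward.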
George E. P. Box:
“All models are wrong, but some are useful.”
Yep, non-linear, chaotic systems are hard to predict over long time periods.