A supercomputer cannot analyze anything.
People created a model based on observations and assumptions, and then wrote a program to test their model.
They then ran the program on a “supercomputer” (which will be a standard “desktop” computer maybe 20 years hence). The program produced a set of results based on the data provided.
The set of results was then interpreted.
Exactly so.
No computer can beat a chess master.
But a team of chess players who program a computer to play chess can beat a chess master.
The team beat the master...the computer is no smarter than a stone.
AI is a myth; simulated AI is real.
You can create a robot that can perform surgery but it will ALWAYS be the team of surgeons and programmers that are actually doing the surgery.
There is a huge difference between intelligence and a programmed simulation of intelligence and there always will be.
It is interesting in the sense that, as many others have pointed out, this is more of a vascular disease than one simply impacting the lungs.
What boggles my mind is how efficient this thing is. Folks can be asymptomatic. It attacks the blood vessels. It impacts the lungs but in a fashion outside the medical norm. It does a lot of damage in a number of ways.....all things we have seen before....except this has a host of them. I am not a scientist but it seems odd to me, this evolved to thread the needle in so many specific ways....unlike anything previously. Unless there is some intelligent design behind it.
I think you're off a bit.
It's certainly possible to build a classification model across a variety of known treatments, correlated against analyzed characteristics of an illness. From the snippet, it's likely unsupervised learning. Math-intensive, hence throwing lots of compute power at it.
Writing a program to test hypotheses is what one might do with each of these potential treatments (although it's really applying regression against control vs. treatment, not writing any custom code).
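The snippet gives no detail about the actual method, but the "regression against control vs. treatment" idea can be sketched minimally. All numbers here are synthetic and hypothetical (group sizes, means, and noise are invented for illustration): regressing an outcome on a 0/1 treatment indicator recovers the difference between group means, which is the basic estimate of a treatment effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical outcome scores for a control and a treatment group
# (synthetic data, not real trial results).
control = rng.normal(loc=50.0, scale=5.0, size=100)
treatment = rng.normal(loc=55.0, scale=5.0, size=100)

# Regress outcome on an intercept plus a 0/1 treatment indicator.
# The fitted slope on the indicator equals the difference in group means.
y = np.concatenate([control, treatment])
x = np.concatenate([np.zeros(100), np.ones(100)])
X = np.column_stack([np.ones_like(x), x])  # intercept + indicator column

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, effect = coef

print(f"control mean ~ {intercept:.2f}, estimated treatment effect ~ {effect:.2f}")
```

No custom per-treatment code is needed: the same regression applies to each candidate treatment in turn, which is the point the comment is making.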
“A supercomputer cannot analyze anything.
People created a model based on observations and assumptions, and then wrote a program to test their model.
They then ran the program on a supercomputer (which will be a standard desktop computer maybe 20 years hence). The program produced a set of results based on the data provided.
The set of results was then interpreted.”
Bingo. My thoughts exactly. And the results were interpreted by humans.
You might do well to educate yourself on current data analytic models. Supercomputers CAN analyze things across vectors that humans cannot comprehend. Imagine you are trying to understand a relationship between items. If you have 2 variables, you can create a 2D graph and understand relationships between the numbers (which results are above or below average). You can even do this with a third variable and create a 3D graph. But when you have n variables, the human brain cannot conceive of an n-dimensional graph where n = 100 or 1000 or any number you can think of. But a computer can.

What this comes down to is this: humans can imagine a potential solution to a problem, and we often call that imagined solution our “intuition” or an intuitive guess, but the computer can actually run the math and prove or invalidate the intuition. In this regard, humans are necessary to form and pose the question and to identify the known variables to consider, but it is up to the computers to show whether the theory actually works, because that “knowledge” is beyond human ability to determine.
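The n-variable point can be made concrete with a small sketch. The data below are entirely synthetic (the 100 variables, the three "true" signal variables, and the noise level are all invented for illustration): no one can draw a 100-dimensional graph, but least-squares machinery solves for the whole 100-coefficient relationship directly and picks out which variables actually matter.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dataset: 500 observations of 100 variables, far beyond
# what any 2D or 3D plot can show.
n_samples, n_vars = 500, 100
X = rng.normal(size=(n_samples, n_vars))

# Synthetic ground truth: only 3 of the 100 variables influence the outcome.
true_coef = np.zeros(n_vars)
true_coef[[3, 41, 77]] = [2.0, -1.5, 0.75]
y = X @ true_coef + rng.normal(scale=0.1, size=n_samples)

# The computer "graphs" the 100-dimensional relationship by solving
# for the full coefficient vector at once.
est, *_ = np.linalg.lstsq(X, y, rcond=None)
top = np.argsort(-np.abs(est))[:3]
print("strongest variables:", sorted(top.tolist()))
```

A human might intuit that a few variables dominate; the computer confirms or refutes that intuition by running the math across all of them at once, which is exactly the division of labor the comment describes.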