The old contract between explanation and trust is breaking down. We used to believe that if we couldn’t explain it, we shouldn’t believe it. Now we’re using tools every day that outperform us without offering any narrative of how they do it.
Richard Feynman spoke about this in physics. He said he could do the math, run the numbers, and get the right answer to a physics problem -- but WHY was it the right answer? WHY did physics work the way it worked? He said he had no idea. It just does what it does. He used equations to predict outcomes that, at a fundamental level, he couldn't truly understand.
Physics is a true "hard science". It's all numbers. With AI, we may be reducing human psychology to a similar point. The machines have no narrative, no explanation to offer -- but they run the numbers and predict outcomes they don't even try to "understand", because they aren't even "thinking". Yet they outperform us and basically "know" us better than we know ourselves.
This takes us into a whole new world.
“The machines have no narrative”
We'd better hope they do not follow the path humans do, making up wrong and stupid narratives to fill the logic gaps.
Modern physics is a mess.
Part of that stems from the government funding model, which discourages deep analysis of foundational questions and rewards building on existing foundations.
If anyone before you made a blunder, you will not find it; you will just build on it and make it worse.
The most common type of blunder: a scientist, many years ago, figured out that whenever there was A there was B, and concluded that A caused B.
Unfortunately for them, it may turn out that some as-yet-undiscovered C can cause B as well.
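To make that failure mode concrete, here is a minimal sketch (a hypothetical toy simulation with made-up variables A, B, and C, not anything from the discussion above): in the observed data A and B always co-occur, so "A causes B" looks airtight, but a hidden C is what actually produces B, and forcing A to happen does nothing to B.

```python
import random

random.seed(0)

def observe(n=10_000):
    """Observational data: a hidden C drives B, and A merely accompanies C."""
    rows = []
    for _ in range(n):
        c = random.random() < 0.5   # hidden factor, never recorded
        a = c                       # A shows up whenever C does
        b = c                       # B is actually caused by C
        rows.append((a, b))
    return rows

def intervene_on_a(n=10_000):
    """Force A on regardless of C; B does not respond, exposing the blunder."""
    rows = []
    for _ in range(n):
        c = random.random() < 0.5
        a = True                    # intervention: set A directly
        b = c                       # B still follows C, not A
        rows.append((a, b))
    return rows

obs = observe()
print("P(B | A) in observation:",
      sum(b for a, b in obs if a) / max(1, sum(a for a, _ in obs)))   # ~1.0
exp = intervene_on_a()
print("P(B) when we force A:   ",
      sum(b for _, b in exp) / len(exp))                              # ~0.5
```

The observational numbers make A look like a perfect cause of B; only the intervention (or discovering C) reveals the mistake, which is exactly why a blunder baked into the foundations is so hard to find later.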