AI can handle the “probability” of errors, but currently AI itself is error-prone.
Currently the companies invested in AI do not seem concerned about the errors produced by AI.
In the future, will anyone be concerned about the level of errors? Or will we accept a percentage of errors?
Will humans be hired to detect and fix the errors? Or will there be levels of SUPER-AI checking the errors of inferior AI?
“Will humans be hired to detect and fix the errors?”
It might be that AI errors will be so numerous that it’s cheaper to use humans.
Scholarly documents cite the sources of the information they use. If an AI-generated document doesn’t give its sources, then it’s no good as a basis for important decisions.