“Will humans be hired to detect and fix the errors?”
It might be that AI errors will be so numerous that it's cheaper to use humans.
Scholarly documents give references to the sources of information used. If an AI document doesn't give its sources, then it's not a sound basis for important decisions.
“If an AI document doesn’t give its sources”
1) Objective facts: AI (and everyone else) has access to the US Census Bureau, USPS, and other authoritative lists of cities, counties, states, etc. Yet AI habitually makes errors in geography, math, and other objective topics.
2) Subjective opinion: AI quotes the SPLC (Southern Poverty Law Center) as objective fact when it is in fact subjective opinion. AI repeatedly quotes, and bases its logic on, subjective opinion with no grounding in objective facts.
3) Hallucinations: Even when legitimate sources exist, AI does not cite them; instead it makes up sources and the content of those sources.
4) Partial data: Even when full data on a topic is available, AI often cherry-picks only some of it, and often the data it picks is not the important data. Suppose there are 10 major facts and 20 minor facts about a company, a politician, or a celebrity. AI will list 5 of the 10 major facts and 13 of the 20 minor facts, completely missing half of the major facts. Example: a celebrity has won major awards. AI will mention only a few of those major awards, pad the list with insignificant, obscure awards, and miss half of the celebrity's major awards.
It appears that the sources of AI data are data that is not copyrighted, or is free, or where it is thought AI can get away with using copyrighted data without consequence. It appears AI starts with the limited human subjectivity of its creators and then builds only on that human subjectivity.