Limited, narrowly focused applications, such as automated navigation in the air or on the ground, can work well because those are primarily mechanistic tasks. But computers are not good at those tasks because they’ve somehow obtained the ability to think. They’re good at them, sometimes better than humans, because they can draw on a multitude of sensors for superior situational awareness, and because they can calculate at extremely high speeds. These are tasks perfectly suited to what is, in the end, just a very advanced calculator.
But the pipe dream of “general AI” will never work, for many reasons, among them that these software algorithms make constant errors (because they can’t actually think!), and that the larger they grow, the more errors they make and the more difficult (and eventually impossible) it becomes for humans to detect and correct those errors. A sufficiently comprehensive “AI” would be impossible to quality-check, because no human, nor even a team of humans, could know how and why it made every one of its trillions of decisions per second.
Unless we’re just going to create an idiocracy that unquestioningly trusts anything some “AI” algorithm spits out (a very real risk given the current foolish feeding frenzy), I see no long-term place for “general AI.” The very concept is fatally flawed, no matter how advanced the calculation engines that it runs on become.
“Unless we’re just going to create an idiocracy that unquestioningly trusts anything some “AI” algorithm spits out (a very real risk given the current foolish feeding frenzy), I see no long-term place for “general AI.” The very concept is fatally flawed, no matter how advanced the calculation engines that it runs on become.”
And they are already creating an idiocracy. They will let it dictate government, finance, education, healthcare, legal systems... and every little detail of our lives.