AI is not intelligence. It basically takes known information and averages it out to try to guess what might be the answer.
Garbage in, fast food out....
It brings to mind a scene from I, Robot, https://www.youtube.com/watch?v=sOKEIE2puso
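To make the “averaging” idea above concrete, here is a toy sketch (purely illustrative, and not how any particular model is actually implemented): an attention-style step that returns a weighted average of stored items, with the weights set by similarity to a query. All names, sizes, and values are made up for the example.

```python
# Toy sketch of "averaging known information" (illustrative only; assumed
# shapes and names, not any real model's internals).
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, keys, values):
    """Weight each stored value by how similar its key is to the query."""
    weights = softmax(keys @ query)   # similarity scores -> probabilities
    return weights @ values           # weighted average of the stored values

rng = np.random.default_rng(1)
keys = rng.normal(size=(5, 3))        # 5 "known" items, 3 features each
values = rng.normal(size=(5, 3))
query = rng.normal(size=3)

print(attend(query, keys, values))    # a blend of what was already stored
```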
You’re right, but it’s actually even worse than that. What is the definition of “known information”? Turning a large language model loose, with the unfathomable tangle of facts, errors, truth, lies, and opinions of every type on the internet as its knowledge base, and then expecting it to somehow (magic?) come up with consistently rational and accurate answers is idiotic. This drive for “general AI” is nothing less than the atheistic materialists’ pursuit of a “god” that they believe they can control, and which will require nothing of them (which is the primary drive behind atheism in general: they hate the notion that there might be a real God who outranks them).
But this materialistic mindset causes them to make false assumptions about the nature of thought, intelligence, and consciousness. They refuse to acknowledge the existence of the soul, so they must ascribe every characteristic that makes someone “human” to nothing more than biochemical and electrical processes in the brain. THAT is how we got to where we are now, with these morons believing that all it will take is a sufficiently advanced computer to equal and then surpass human thought.
That said, AI will most likely exceed human abilities at solving narrowly defined problems, like playing chess or even driving a car (though even that task contains enough uncertainty and unpredictability that it will take longer than expected to make it reliable and safe enough). Where AI will shine is in its ability to pull information from a wide variety of sensors, giving it more situational awareness than a human can achieve with his five senses, and in its incredible speed at processing the incoming data. That is why it has certain advantages at driving a car (even though right now it is still far too susceptible to making a “rookie” mistake that could get the occupants killed). The AI “driver” can see in all directions, has access to sensors like night vision and radar, and can process everything those sensors pick up at lightning speed. These are big advantages for a task like driving. But that is a far cry from turning AI into an all-knowing oracle that can answer every human query.
In the scene, Will Smith’s character describes how a robot used “logic” to save him instead of the little girl.
It’s a lot more than “averages,” though. I know you must be familiar with neural networks, but there are layers and layers of NN interactions that achieve something far more special. Passing medical and Bar exams at above-average levels is quite impressive, although I’d agree that, for those, it does become largely training plus word sequences, via vector databases. But the ability to compare and contrast complex topics, analyze pictures, deconstruct UML diagrams into requirements and use cases, solve equations, etc., is like having an expert consultant sitting next to me. A tool I use every day.
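As a rough illustration of what “layers and layers” buys you over a single averaging step, here is a minimal sketch of a stacked nonlinear network (a made-up toy architecture with random weights, not any specific production model): each layer recombines the previous layer’s output through a nonlinear transform, so the stack can represent things a plain weighted average cannot.

```python
# Minimal sketch of stacked nonlinear layers (toy sizes and random weights,
# chosen only for illustration).
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """Push an input through each layer in turn."""
    h = x
    for W, b in zip(weights, biases):
        h = relu(W @ h + b)   # each layer recombines features nonlinearly
    return h

# Three hypothetical layers acting on a 4-dimensional input.
dims = [4, 8, 8, 2]
weights = [rng.normal(size=(dims[i + 1], dims[i])) for i in range(len(dims) - 1)]
biases = [np.zeros(dims[i + 1]) for i in range(len(dims) - 1)]

print(forward(rng.normal(size=4), weights, biases))
```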
At some point the distinction between “thinking” at a biological level vs. “simulated in technology” is going to be hard to define. Some would argue it’s already there.
This example is nothing more than a lack of effective QA testing. It’s embarrassing to the profession, but hey, it’s not FuSa-related, so there’s no liability. This was just inexperience and cheapness with respect to developers and testers.
It’s also inevitable. Any time there’s new technology, the initial hiccups happen. Anyone thinking this is some sort of passing fad is going to be disappointed.