I guess I’d have to ask - why wasn’t the experiment carried out by experts?
It was.
The moral of the story is that the allowable failure rate for software running some of these systems is on the order of ten to the minus ninth power (one in a billion) - extreme improbability.
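For a rough sense of scale, here's a back-of-the-envelope sketch of what a 1e-9 failure budget looks like. The fleet size and utilization numbers are purely hypothetical, picked only to show the order of magnitude, not taken from any certification document:

    # Back-of-the-envelope: what a 1e-9 per-hour failure budget implies.
    # Fleet size and utilization below are hypothetical illustration values.

    failure_rate_per_hour = 1e-9       # the one-in-a-billion budget mentioned above
    fleet_size = 5_000                 # hypothetical number of units in service
    hours_per_unit_per_year = 3_000    # hypothetical annual utilization per unit

    fleet_hours_per_year = fleet_size * hours_per_unit_per_year
    expected_failures_per_year = failure_rate_per_hour * fleet_hours_per_year

    print(f"Fleet hours per year: {fleet_hours_per_year:,.0f}")
    print(f"Expected failures per year: {expected_failures_per_year:.4f}")
    # ~0.015 expected failures per year, i.e. roughly one event every ~65 years
    # across the entire hypothetical fleet - the "extreme improbability" in question.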
A computer that teaches itself can't be shown to meet that level of certainty.
Even the smallest glitch would result in industry-killing liability lawsuits the likes of which we’ve never seen.
The risk is not worth the reward.