The problem with AI is that it will have to make decisions that only people should be making.
There is an old “difference between men and women” thought experiment involving a home by railroad tracks. You are standing near a switch that can change the course of approaching trains. Normally the track runs past your house, but if you throw the switch, trains are diverted onto a track that ends, sending them over a cliff.
The scenario: you are near the switch and see your baby playing in the middle of the track by your house, while a passenger train barrels toward the baby and its certain demise. If you throw the switch, the baby is saved, but every passenger on the train will die.
The theory holds that a mother’s maternal instinct will drive her to throw the switch, while the father will not. Neither would AI.
And that is a good thing.
Let’s say the AI makes a bad decision (and it will). Who is the liable party?
The AI is based on the opinions of 1.3 million people. Presumably, at least half of them are women. So perhaps the result will be to send the train full of passengers over the cliff.