The deep neural networks that are often used in these applications to classify objects are only as good as they've been "trained" to be. You train them by feeding in thousands of labeled examples; every time the network gets one wrong, it has to "adjust itself" to be closer to the right answer. Do that thousands of times and it gets quite accurate, but pushing past about 98% becomes a real challenge.
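That "adjust itself" loop can be sketched in miniature. This is a toy sketch, not a real DNN: a single logistic neuron learning to separate two made-up clusters of 2-D points, with the data and learning rate invented purely for illustration. Real networks stack many layers of units like this one, but the adjust-on-error idea is the same.

```python
import math
import random

random.seed(0)

# Fabricated training set: label 1 clusters near (2, 2), label 0 near (-2, -2).
data = [((random.gauss(2, 0.5), random.gauss(2, 0.5)), 1) for _ in range(50)] + \
       [((random.gauss(-2, 0.5), random.gauss(-2, 0.5)), 0) for _ in range(50)]

w = [0.0, 0.0]   # the "knobs" the network adjusts
b = 0.0
lr = 0.1         # how hard each wrong answer nudges the knobs

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid: squash to a 0..1 "confidence"

for epoch in range(20):                 # many passes over the examples
    for x, y in data:
        p = predict(x)
        err = p - y                     # how wrong was this answer?
        # "adjust itself": nudge each weight against the error
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b    -= lr * err

accuracy = sum((predict(x) > 0.5) == (y == 1) for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

On data this cleanly separated the accuracy climbs quickly; the hard part in practice is the last couple of percent on messy, real-world images, which is exactly where the misclassifications in the article live.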
There's a well-known case where Google's image-recognition DNN, which could identify animals in a photo, incorrectly labeled Black people as gorillas, and Google scrambled to add new "training" to the model.
So it might just require additional training. Classifying objects matters for self-driving cars because that's how they anticipate behavior: is that object at the side of the road a fire hydrant, a small child, or a dog?
In general this specific finding is being overblown; it's probably a statistical anomaly that can be trained away. That said, of course they're going to jump up and down and scream about it, as though it were some deliberate outcome of white nationalists!
Just wait until they become self-aware at 2:14 a.m.....
Must be a different year, though.