They make this sh*t up as they go along, don’t they????
TELL me this isn’t serious...
It is serious.
Visual object detection methods work best with light-colored objects against a darker background. They also work fairly well with dark-colored objects on a lighter background. Dark-on-dark detection performs worse and takes longer, and milliseconds count here.
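To make the point concrete, here's a minimal sketch of why dark-on-dark is hard. It uses Weber contrast (difference over background luminance) on made-up pixel brightness values; the numbers and the function name are illustrative, not from any real detection system.

```python
# Hypothetical sketch: contrast between an object and its background.
# Brightness values are on a 0-255 grayscale; all numbers are made up.

def luminance_contrast(obj_brightness, bg_brightness):
    # Weber contrast: |object - background| / background.
    # Low values mean the object barely stands out from the scene.
    return abs(obj_brightness - bg_brightness) / max(bg_brightness, 1)

# Light jacket (200) on dark asphalt (50): strong signal.
light_on_dark = luminance_contrast(200, 50)

# Dark hoodie (60) on dark asphalt (50): weak signal.
dark_on_dark = luminance_contrast(60, 50)
```

With these toy numbers the light-on-dark case has 15x the contrast of the dark-on-dark case, which is the gap a detector has to work across.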
A Black pedestrian wearing a dark-colored hoodie and walking across a stretch of black asphalt may not be "seen" by the camera system of an automated car coming down the road.
Maybe he should wear a gaudy outfit to improve the odds of being seen.
Maybe the Engineers will keep working on the problem and solve it.
Reminds me of some controversy over Kodak (?) film accused of being “racist” because it didn’t have the dynamic range to process darker skin tones.
The deep neural networks often used in these applications to classify objects are only as good as they've been "trained" to be. You train them by providing thousands of examples with the right answer; each time the network gets the wrong answer, it has to "adjust itself" to get closer. Done thousands of times, it gets quite accurate, but pushing beyond about 98% becomes a challenge.
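The "adjust itself" loop described above can be sketched with the smallest possible trainable model: a single logistic neuron nudged toward the right answer on each mistake. The feature, labels, and learning rate here are all toy values chosen for illustration; a real DNN does the same thing with millions of weights.

```python
import math
import random

random.seed(0)

# One weight and one bias: the tiniest possible "network".
w, b = 0.0, 0.0
lr = 0.1  # learning rate: how big each adjustment is

# Toy labeled examples: (contrast feature, label: 1 = pedestrian present).
# High-contrast scenes are positives, low-contrast scenes are negatives.
data = [(0.9, 1), (0.8, 1), (0.7, 1), (0.2, 0), (0.1, 0), (0.05, 0)]

for epoch in range(1000):
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))  # predicted probability
        # The error (p - y) is zero when the prediction is right;
        # otherwise it nudges the weights toward the correct answer.
        w -= lr * (p - y) * x
        b -= lr * (p - y)

# After thousands of adjustments, check how many examples it now gets right.
correct = sum(
    ((1 / (1 + math.exp(-(w * x + b)))) > 0.5) == bool(y) for x, y in data
)
accuracy = correct / len(data)
```

On this cleanly separable toy data the loop reaches 100% accuracy; the "beyond 98%" struggle shows up on real data, where examples overlap and the long tail of rare cases (like dark-on-dark pedestrians) is underrepresented in the training set.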
There's a case where a Google DNN that could recognize animals in a picture incorrectly classified a Black man as a gorilla (something like that), and they scrambled to add new "training" to the DNN.
So it might just require additional training. It is important for self-driving cars to classify objects in order to anticipate behavior: is that object at the side of the road a fire hydrant, a small child, or a dog?
In general this specific finding is being overblown; it's probably a statistical artifact the engineers can improve on. That said, of course they're going to jump up and down and scream about it, as though it were some deliberate outcome of white nationalists!