Posted on 06/12/2019 4:06:23 PM PDT by BenLurkin
Named Speech2Face, the neural network (a computer system that "thinks" in a manner loosely similar to the human brain) was trained by scientists on millions of educational videos from the internet showing over 100,000 different people talking.
From this dataset, Speech2Face learned associations between vocal cues and certain physical features in a human face, researchers wrote in a new study. The AI then used an audio clip to model a photorealistic face matching the voice.
Thankfully, AI doesn't (yet) know exactly what a specific individual looks like based on their voice alone. The neural network recognized certain markers in speech that pointed to gender, age and ethnicity, features that are shared by many people, the study authors reported.
"As such, the model will only produce average-looking faces," the scientists wrote. "It will not produce images of specific individuals."
(Excerpt) Read more at livescience.com ...
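For the curious, the core idea described in the excerpt can be illustrated with a toy sketch: learn a mapping from voice features to face embeddings across many speakers, then note that the predictions only capture traits shared across people. Everything below is hypothetical stand-in data (the real system uses learned deep-network embeddings, not random vectors); this is just a minimal least-squares analogy, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: each speaker gets a "voice feature" vector and a
# "face embedding" vector (stand-ins for the real learned embeddings).
n_speakers, voice_dim, face_dim = 200, 16, 8
voice = rng.normal(size=(n_speakers, voice_dim))

# Face embeddings depend only partly on voice; the rest is per-person
# variation the voice cannot predict (individual appearance).
true_map = rng.normal(size=(voice_dim, face_dim))
face = voice @ true_map + 3.0 * rng.normal(size=(n_speakers, face_dim))

# Least-squares fit of the voice -> face-embedding mapping.
learned_map, *_ = np.linalg.lstsq(voice, face, rcond=None)
pred = voice @ learned_map

# Predictions recover the shared structure (the age/gender/ethnicity-like
# component) but have lower spread than real faces: "average-looking" output.
print(f"predicted spread: {pred.std():.2f}, actual spread: {face.std():.2f}")
```

The point of the toy example mirrors the quote in the article: because only part of facial appearance correlates with voice, any such model regresses toward average faces rather than reproducing specific individuals.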
LoL - that’s what I hear/picture too!
Winner!
Can't say I'm surprised, but I am impressed and creeped out at the same time.
“Went back and re-read the article. No mention of Ebonics in there.”
I think it means the AI recognized a person as being black based on his speech (Ebonics).
Clarence Thomas never struck me as an Ebonics speaker.
#IveBeenFoundOut!
I was thinking the same thing, creepy.
I have a face for text. ;-)
ROFL!
Ah, I’ve noticed that sometimes you bark out your comments when you’re angry. :)