Posted on 06/06/2019 7:05:24 PM PDT by ETL
Hackers may soon be able to discern information from the sounds of typing.
A 2019 study of the budding phenomenon, posted to the Cornell University-hosted arXiv preprint server and not yet published in a peer-reviewed journal, found that Android phones and tablets can pick up the sound waves of typing through their microphones.
In theory, these sound waves can reveal where on a screen a person was tapping, meaning hackers could gain access to PINs, logins, text messages and other sensitive and private content, the Wall Street Journal reports.
So, if hackers were able to access a device's microphone, they could be privy to messages sent through the phone.
The research was conducted at the University of Cambridge and Linköping University in Sweden, where scientists were able to recover 27 of 45 passwords on a phone and 19 of 27 passwords on a tablet, using only the vibrations from typing.
"If right now it's really hard to imagine anybody deploying these attacks, in the near future they're definitely going to be there," study author Ilia Shumailov tells the WSJ.
The report also refers to a 2012 paper, also not peer-reviewed, from the University of Pennsylvania, which found that researchers could discern an Android phone's numerical password 43% of the time and its swipe-pattern passcode 73% of the time.
While researchers said that this style of hacking is not currently a concern, as technology progresses, these breaches could become reality.
(Excerpt) Read more at foxnews.com ...
Your voice is unique because of the shape of your vocal cavities and the way you move your mouth when you speak.
To enroll in a voiceprint system, you either say the exact words or phrases that it requires, or you give an extended sample of your speech so that the computer can identify you no matter which words you say.
When people think of voiceprints, they often think of the wave pattern they would see on an oscilloscope.
But the data used in a voiceprint is a sound spectrogram, not a wave form.
Speaker recognition systems use spectrograms to represent human voices.
A spectrogram is basically a graph that shows a sound's frequency on the vertical axis and time on the horizontal axis.
Different speech sounds create different shapes within the graph.
Spectrograms also use colors or shades of grey to represent the acoustical qualities of sound.
This tutorial has a lot more information on spectrograms and how to read them.
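For the curious, here is a minimal Python sketch of what such a graph looks like in practice, using SciPy's spectrogram routine on a synthetic two-tone signal standing in for speech; the signal and plot settings are purely illustrative.

```python
# Minimal spectrogram example: frequency on the vertical axis, time on the
# horizontal axis, color showing intensity. The two-tone signal is a
# stand-in for real speech.
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

fs = 16000                                   # sample rate in Hz
t = np.arange(0, 1.0, 1 / fs)
audio = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)

f, times, Sxx = signal.spectrogram(audio, fs=fs, nperseg=512)

plt.pcolormesh(times, f, 10 * np.log10(Sxx + 1e-12), shading='gouraud')
plt.xlabel('Time [s]')                       # horizontal axis: time
plt.ylabel('Frequency [Hz]')                 # vertical axis: frequency
plt.title('Spectrogram (color = intensity in dB)')
plt.show()
```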
Some companies use voiceprint recognition so that people can gain access to information or give authorization without being physically present.
Instead of stepping up to an iris scanner or hand geometry reader, someone can give authorization by making a phone call.
Unfortunately, people can bypass some systems, particularly those that work by phone, with a simple recording of an authorized person's password.
That's why some systems use several randomly-chosen voice passwords or use general voiceprints instead of prints for specific words.
Others use technology that detects the artifacts created in recording and playback.
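Below is a hedged sketch of how a phone-based check might combine a randomly chosen challenge phrase with a voiceprint comparison. The embed() function is a placeholder, not a real speaker-recognition API; a production system would use a trained speaker-embedding model and a carefully tuned threshold.

```python
# Sketch of challenge-phrase voiceprint verification. All names and the
# feature extraction are illustrative assumptions.
import random
import numpy as np

CHALLENGES = ["blue horizon", "seven lanterns", "quiet river", "copper gate"]

def embed(audio):
    """Placeholder voiceprint: a normalized, fixed-length slice of the
    magnitude spectrum. A real system would use a trained speaker model."""
    spectrum = np.abs(np.fft.rfft(np.asarray(audio, dtype=float)))
    vec = np.zeros(128)
    n = min(128, spectrum.size)
    vec[:n] = spectrum[:n]
    return vec / (np.linalg.norm(vec) + 1e-12)

def pick_challenge():
    """A randomly chosen phrase defeats a simple replayed recording."""
    return random.choice(CHALLENGES)

def verify(enrolled_print, response_audio, threshold=0.85):
    """Accept the caller if their voiceprint is close to the enrolled one."""
    similarity = float(np.dot(enrolled_print, embed(response_audio)))
    return similarity >= threshold

# Usage outline:
# enrolled = embed(enrollment_audio)     # recorded during enrollment
# challenge = pick_challenge()           # ask the caller to say this phrase
# ok = verify(enrolled, response_audio)  # compare the response to the print
```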
It would be different in the sense that you would first need to associate a particular sound with a particular key on a keyboard. Then you'd have to piece it all together to make sense of the number and/or letter combinations. It would be like breaking a code.
i.e., A = sound pattern x, B = sound pattern y, and so on.
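Here is a rough illustration of that codebook idea in Python. The spectral "fingerprint" and the nearest-match lookup are simplifications for the sake of the example, not the actual attack from the paper.

```python
# Learn one acoustic fingerprint per key from labeled recordings, then
# decode an unknown sequence of keystroke clips into text.
import numpy as np

def fingerprint(clip, size=32):
    """Coarse spectral signature of one keystroke recording (1-D array)."""
    spectrum = np.abs(np.fft.rfft(np.asarray(clip, dtype=float)))
    bands = np.array_split(spectrum, size)      # pool into `size` bands
    return np.array([band.mean() for band in bands])

def build_codebook(samples):
    """samples: dict mapping a key label ('A', 'B', ...) to a list of clips
    recorded for that key. Returns one average fingerprint per key."""
    return {key: np.mean([fingerprint(c) for c in clips], axis=0)
            for key, clips in samples.items()}

def decode(keystroke_clips, codebook):
    """Match each unknown keystroke to the closest fingerprint."""
    return ''.join(
        min(codebook, key=lambda k: np.linalg.norm(fingerprint(clip) - codebook[k]))
        for clip in keystroke_clips)
```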
Get 100 people to type the same document on 100 different keyboards and map the audio to the words.
The bigger the sample size, the better the recognition; then adjust for typing speed.
You are mapping the whole word, not the sound of each individual key.
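A toy version of that word-level approach might look like the following: resample each recording's amplitude envelope to a fixed length (to cancel out typing speed), average many typists' recordings into one template per word, and match unknown audio against the templates. Everything here is illustrative, not a proven attack.

```python
# Word-level matching with speed normalization via fixed-length resampling.
import numpy as np

def envelope(audio, n_points=100):
    """Amplitude envelope resampled to a fixed length, so fast and slow
    typists produce comparable curves."""
    amp = np.abs(np.asarray(audio, dtype=float))
    x_old = np.linspace(0.0, 1.0, amp.size)
    x_new = np.linspace(0.0, 1.0, n_points)
    return np.interp(x_new, x_old, amp)

def build_templates(recordings):
    """recordings: dict of word -> list of audio clips (many keyboards,
    many typists). Larger samples give smoother templates."""
    return {word: np.mean([envelope(clip) for clip in clips], axis=0)
            for word, clips in recordings.items()}

def match_word(audio, templates):
    """Return the word whose template is closest to this recording."""
    env = envelope(audio)
    return min(templates, key=lambda w: np.linalg.norm(env - templates[w]))
```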
Even if they could, I can absolutely assure you that no one would WANT to go through that much trouble to read my texts.