Posted on 10/26/2015 8:08:40 AM PDT by Sam's Army
Researchers have created an artificial neural network that identifies the activities in a user's average day through a catalogue of images.
New research in neural networks may let computers identify our daily actions more accurately than the apps on the market that track things like GPS location and heart rate. A new computer model has achieved about 83 percent accuracy in identifying the activities it sees in real-life images, and with just a bit of training it could do this for any user it encounters.
Led by Georgia Tech graduate students Daniel Castro and Steven Hickson, researchers have created an artificial neural network designed to identify scenes in so-called egocentric photographs taken from the user's point of view. These usually come from wearable cameras like the Narrative Clip, MeCam, Google Glass, and GoPro, but regular cell-phone photos often work as well. The team gave the network its skill by training it with a set of about 40,000 images taken by a single individual over a six-month period. This dedicated volunteer manually associated each image with an activity, naturally settling on a set of 19 basic activity labels. These include driving, watching TV, family time, and hygiene.
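The article doesn't spell out the architecture or training procedure; as a rough illustration of this kind of supervised training, here is a minimal sketch that fine-tunes a stock image classifier on 19 activity labels. It assumes PyTorch, a ResNet-18 backbone, and a hypothetical "egocentric_images" folder with one subfolder per label; none of these specifics come from the researchers.

    # Hypothetical sketch: fine-tune a stock CNN to classify egocentric
    # photos into 19 activity labels. Paths, model choice, and
    # hyperparameters are illustrative, not from the paper.
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    # Expects one subfolder per activity label, e.g. egocentric_images/driving/
    data = datasets.ImageFolder("egocentric_images", transform=transform)
    loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

    model = models.resnet18(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, 19)  # one output per activity

    optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(5):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()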
A separate learning algorithm combines the neural network's guesses with metadata about the day and time at which each image was captured. This allows the network to learn common associations between activities and even make predictions about the user's upcoming schedule.
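One plausible reading of this second stage is late fusion: take the network's per-class probabilities, append day-of-week and hour-of-day features, and train a second classifier on the combined vector. The sketch below uses scikit-learn's random forest purely for illustration; the variable names (X_probs_train and so on) are placeholders for the CNN outputs and the volunteer's labels, not details from the paper.

    # Hypothetical late-fusion step: combine the CNN's class probabilities
    # with time metadata and let a second classifier make the final call.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def fuse_features(cnn_probs, timestamps):
        """cnn_probs: (n, 19) softmax outputs; timestamps: datetime objects."""
        time_feats = np.array([[t.weekday(), t.hour] for t in timestamps])
        return np.hstack([cnn_probs, time_feats])

    # X_probs_*, times_*, and y_train stand in for the trained CNN's
    # outputs and the volunteer's manual annotations.
    ensemble = RandomForestClassifier(n_estimators=100)
    ensemble.fit(fuse_features(X_probs_train, times_train), y_train)
    predictions = ensemble.predict(fuse_features(X_probs_test, times_test))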
"It's this ensemble-like method, where we trained on top of a deep learning method," says Hickson. "So it can leverage the deep learning, and the basic contextual information on daily activities." (See "10 Breakthrough Technologies 2013: Deep Learning.")
Wearable-technology developers could offer much more insightful services with this technology. The researchers imagine an app that notices a user's eating or exercise habits and suggests possible adjustments. And since it can learn your schedule, it could make intelligent suggestions on the fly, like leaving for work early because of a traffic report. Castro says it might even let an app reorganize your activities throughout the day so you can get through them more efficiently.
Microsoft researcher Gordon Bell has worked on so-called e-memory, which aims to assist human recall with computers. He says that the key is giving machines the ability to recognize the content of photos. "Every one of these steps forward [for machine learning] is incredibly valuable," says Bell. "I'd look at [this indexing ability] as something that will enhance your long-term memory by being able to find things in earlier situations." He says that in the future, e-memory algorithms could search a wide variety of photos from more than just the egocentric viewpoint, so "it's got a wide range of applicability."
Happily, not every user has to compile a 40,000-image database to take advantage of this technology. When the team tested its machine-learning ensemble on two new volunteers, it struggled with the changes in lifestyle. Hickson says they did a quick study on the effect of fine-tuning the model, training it with just a single day's worth of egocentric photos from the two new volunteers. "The accuracy of the results increased dramatically," he says.
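The article doesn't say how the fine-tuning was done; a common way to personalize a trained network with only a day's worth of new labels is to freeze the backbone and retrain just the final layer. Continuing the earlier hypothetical sketch, with one_day_loader standing in for the new user's small labeled set:

    # Hypothetical personalization step: freeze the shared layers and
    # update only the classification head on one day of new-user photos.
    for param in model.parameters():
        param.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, 19)  # fresh, trainable head

    optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.001)
    model.train()
    for images, labels in one_day_loader:  # the new user's single day of data
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()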
As always with wearable cameras, however, there are complex issues of privacy and user trust. Point-of-view photography (egography) allows insights that can be extremely useful when put to work for users, but it can also create a very desirable target for criminal hackers and nosy advertising companies. The practice is even becoming political through the spreading use of police body cameras to automatically record interactions with suspects (see Controlling When the Cameras Record).
Some of these issues could evaporate if hardware capable of running intensive machine-learning algorithms becomes available in consumer-grade mobile devices. If data no longer has to travel over the Internet for processing, the researchers say, security becomes a lot more manageable. Castro says the challenge is whether "we can figure out what these privacy issues are now so we don't run into problems later, say five years down the line, when these devices are available."
The researchers do examine the possibility of an image-analysis algorithm that could complement theirs by identifying and removing private information from images automatically, a casual request of the machine-learning community that would have seemed far too aspirational just a few short years ago.
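The article doesn't describe what such a redaction algorithm would look like; one simple instance of the idea, sketched here with OpenCV's stock face detector, would be to blur faces before a photo ever leaves the device. This is an illustration of the concept, not anything the researchers propose.

    # Hypothetical redaction sketch: detect faces with a stock Haar
    # cascade and blur them in place before the image is shared.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def redact_faces(path_in, path_out):
        img = cv2.imread(path_in)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
            roi = img[y:y+h, x:x+w]
            img[y:y+h, x:x+w] = cv2.GaussianBlur(roi, (51, 51), 0)
        cv2.imwrite(path_out, img)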
Can we imagine an app that doesn't seek to know everything about us instead?
Sure it will...
I'd like to see a "privacy app" that scrambles all data collection about us gathered wirelessly in our homes through smart meters and digital media devices, and on the roads through our E-ZPasses and police data-gathering instruments. One's location could only be known to a designated party, such as a friend living at a distance, whom one's spouse would know how to contact. But one could also disable anyone from knowing.
And tyrants.
When a person goes to Walmart to print photos from an iPhone, the photo kiosk uploads the user's complete photo album from the device, even if it holds hundreds of photos, then asks the user to choose the photo he wants to print. Lastly, before printing, it asks for his or her name, address, and phone number. This is completely creepy. Why can't the user just upload the single photo to be printed?
Uh, let me ponder the ramifications of this app.
NO!