Posted on 07/02/2012 5:05:54 PM PDT by null and void
Breakthrough opens door to several new technology possibilities
By amplifying variations in successive video frames, a team of researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) has discovered an amazing new way of making subtle movements captured on video, typically invisible to the human eye, distinctly visible.
The group that worked on this project includes graduate student Michael Rubinstein, recent alumni Hao-Yu Wu '12, MNG '12, and Eugene Shih SM '01, PhD '10, and professors William Freeman, Frédo Durand and John Guttag. In the video below, they demonstrate how the technology works; specifically, how it is able to display fast, subtle movements like a human pulse, breathing, and a person's skin becoming red and growing pale with the flow of blood to the region.

In these frames of video, a new algorithm developed by a team from MIT amplifies the subtle change in skin color caused by the pumping of the blood.
The team likens the technology to the way an equalizer in a sound system works; that is, its job is to boost some frequencies and cut others. The difference with the team's approach is that the frequency they're focusing on is that of color change in a sequence of video frames, not an audio signal.
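To make the equalizer analogy concrete, here is a minimal sketch of the idea in NumPy. It is my own simplified illustration, not the team's published method or code: it takes each pixel's intensity over time, keeps only the temporal frequencies inside a chosen band, scales that band by a gain `alpha`, and adds it back. The function name `amplify_band` and the synthetic "video" are assumptions for the example.

```python
import numpy as np

def amplify_band(frames, fps, f_lo, f_hi, alpha):
    """Boost temporal variations between f_lo and f_hi Hz.

    frames: array of shape (T, H, W), pixel intensities over time.
    Like an audio equalizer, but the "signal" is each pixel's
    value across the frame sequence rather than a waveform.
    """
    spectrum = np.fft.rfft(frames, axis=0)            # per-pixel temporal FFT
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
    mask = (freqs >= f_lo) & (freqs <= f_hi)          # select the band of interest
    band = np.zeros_like(spectrum)
    band[mask] = spectrum[mask]                       # keep only that band
    variation = np.fft.irfft(band, n=frames.shape[0], axis=0)
    return frames + alpha * variation                 # add the boosted band back

# Synthetic "video": a faint 1 Hz flicker (pulse-like) on a constant scene
fps, T = 30, 150
t = np.arange(T) / fps
flicker = 0.1 * np.sin(2 * np.pi * 1.0 * t)
frames = 100.0 + flicker[:, None, None] * np.ones((T, 4, 4))
out = amplify_band(frames, fps, 0.8, 1.2, alpha=50)
print(np.ptp(frames), np.ptp(out))  # the flicker's swing grows ~51x
```

A frequency-domain mask is just one way to bandpass the signal; a streaming implementation would more likely use a temporal IIR filter so it can run in real time, as the prototype does.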
Taking a closer look
The prototype presented allows the user to specify the frequency range as well as the degree of amplification. It works in real time, and displays both the original video and the altered one, with the changes between the two magnified to make the differences easier to see.
The team suggests that if the range of frequencies is wide enough, their software can be used to amplify changes that occur only once (as opposed to those that recur at regular intervals, e.g. a heartbeat, breathing lungs, or a plucked guitar string). This would allow users to compare different images of the same scene and easily pick out changes that would otherwise go unnoticed.
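The one-off case degenerates to something very simple: treat the difference between two images of the same scene as the "variation" and boost it. The sketch below is an illustrative assumption of mine, not the team's implementation; the function name `exaggerate_change` is hypothetical.

```python
import numpy as np

def exaggerate_change(before, after, alpha=10.0):
    """Make a subtle one-off difference between two images obvious."""
    before = before.astype(float)
    after = after.astype(float)
    return before + alpha * (after - before)  # boost the difference

a = np.full((4, 4), 100.0)
b = a.copy()
b[1, 2] += 0.5                        # a change far too small to notice
out = exaggerate_change(a, b)
print(out[1, 2] - out[0, 0])          # the boosted change: 5.0
```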
Stumbling upon this discovery
The team started out looking to create a technology that would amplify color changes. In their experiments, however, they found that their program was also remarkably effective at amplifying motion.
"We started from amplifying color, and we noticed that we'd get this nice effect, that motion is also amplified," Rubinstein explains. "So we went back, figured out exactly why that happens, studied it well, and saw how we can incorporate that to do better motion amplification."
A future in healthcare
Rubinstein foresees this technology being particularly useful in healthcare. For instance, it could be used for the contactless monitoring of a patient's vital signs: Boosting one set of frequencies allows the measurement of pulse rates via subtle changes in skin coloration; boosting another set of frequencies allows monitoring of breathing.
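The band-selection idea behind contactless pulse measurement can be sketched briefly. This is my own illustrative example, not the MIT system (which magnifies the signal visually rather than reporting a number): average the skin region's intensity per frame, then find the dominant frequency inside a plausible heart-rate band. The function name `pulse_rate_bpm` and the band limits are assumptions.

```python
import numpy as np

def pulse_rate_bpm(skin_means, fps, lo_bpm=40, hi_bpm=180):
    """Estimate pulse from mean skin-region intensity per frame.

    Looks for the strongest temporal frequency within a plausible
    heart-rate band, expressed in beats per minute.
    """
    signal = skin_means - skin_means.mean()                 # drop the DC level
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps) * 60  # Hz -> bpm
    band = (freqs >= lo_bpm) & (freqs <= hi_bpm)
    return freqs[band][np.argmax(spectrum[band])]

# Synthetic skin signal: a faint 72-bpm pulse buried in sensor noise
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
noise = np.random.default_rng(0).normal(0, 0.1, t.size)
signal = 128 + 0.3 * np.sin(2 * np.pi * (72 / 60) * t) + noise
print(pulse_rate_bpm(signal, fps))  # recovers roughly 72 bpm
```

Restricting the search to a physiological band is what makes this robust: lighting drift sits near 0 bpm and sensor noise is spread thinly across all frequencies, so the pulse peak dominates within the band.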
This sort of thinking could be particularly useful with infants who are born prematurely or otherwise require early medical attention. "Their bodies are so fragile, you want to attach as few sensors as possible," Rubinstein notes.
He adds that the technology could be used for baby monitors, too, wherein concerned parents could check on the vital signs of their sleeping toddler simply by adjusting the frequency of the video feed.
Popular concept
Since the discovery was shared, outside researchers have begun suggesting to Rubinstein and the rest of the MIT team other ways to use the technology, including laparoscopic imaging of internal organs, long-range surveillance systems, contactless lie detection, and more.
"It's a fantastic result," says Maneesh Agrawala, an associate professor in the electrical engineering and computer science department at the University of California at Berkeley, and director of the department's Visualization Lab. Agrawala points out that Freeman and Durand were part of a team of MIT researchers who made a splash at Siggraph 2005 with a paper on motion magnification in video.
"This approach is both simpler and allows you to see some things that you couldn't see with that old approach," Agrawala says. "The simplicity of the approach makes it something that has the possibility for application in a number of places. I think we'll see a lot of people implementing it because it's fairly straightforward."

Test it on Kenyan see if he lies.
....or when he doesn’t.
When doesn’t he?
For that matter, does he even know what truth is?
...he wouldn’t know the truth if it bit him on one of his elephant ears