Free Republic

Pictures Posing Questions - The next steps in photography could blur reality
Science News Online ^ | April 7, 2007 | Patrick L. Barry

Posted on 04/06/2007 10:42:09 PM PDT by neverdem

When a celebrity appears in a fan-magazine photo, there's no telling whether the person ever wore the clothes depicted or visited that locale. The picture may have been "photoshopped," we say, using a word coined from the name of the popular image-editing software, Adobe Photoshop.


In one new aspect of computational photography, a dome contains hundreds of precisely positioned flash units. A high-speed camera captures a frame as each flash fires in sequence. Computers can then relight the scene as they reconstruct it.
Debevec/University of Southern California

But today's image processing is just a prelude. Imagine photographs in which the lighting in the room, the position of the camera, the point of focus, and even the expressions on people's faces were all chosen after the picture was taken. The moment that the picture beautifully captures never actually happened. Welcome to the world of computational photography, arguably the biggest step in photography since the move away from film.

Digital photography replaced the film in traditional cameras with a tiny wafer of silicon. While that switch swapped the darkroom for far more-powerful image-enhancement software, the camera itself changed little. Its aperture, shutter, flash, and other components remained essentially the same.

Computational photography, however, transforms the act of capturing the image. Some researchers use curved mirrors to distort their camera's field of view. Others replace the camera lens with an array of thousands of microlenses or with a virtual lens that exists only in software. Some use what they call smart flashes to illuminate a scene with complex patterns of light, or set up domes containing hundreds of flashes to light a subject from many angles. The list goes on: three-dimensional apertures, multiple exposures, cameras stacked in arrays, and more.

In the hands of professional photographers and filmmakers, the creative potential of these technologies is tremendous. "I expect it to lead to new art forms," says Marc Levoy, a professor of computer science at Stanford University.

Medicine and science could also benefit from imaging techniques that transcend the limitations of conventional microscopes and telescopes. The military is interested as well. The Defense Advanced Research Projects Agency, for example, has funded research on camera arrays that can see through dense foliage.

For consumers, some of these new technologies could improve family snapshots. Imagine fixing the focus of a blurry shot after the fact, or creating group shots of your friends and family in which no one is blinking or making a silly face. Or posing your children in front of a sunset and seeing details of their faces instead of just silhouettes.

Since the late 1990s, inexpensive computing power and improvements in digital camera technology have fueled research in all these areas of computational photography. Levoy says that scientists "look around and see more and more everyday people using digital cameras, and they begin to think, 'Well, this is getting interesting.'"

Robots to superheroes

Computational photography has roots in robotics, astronomy, and animation technology. "It's almost a convergence of computer vision and computer graphics," says Shree Nayar, professor of computer science at Columbia University.


SUN AND SHADOWS. A conventional camera poorly captures scenes with both extreme brightness and dark shadows (top). Using computational photography techniques, it's possible to create an image that preserves more detail (bottom).
Computer Vision Lab., Columbia Univ.

Attaching a video camera to a robot is easy, but it's difficult to get the robot to distinguish objects, faces, and walls and to compute its position in a room. "The recovery of 3-D information from [2-D] images is kind of the backbone of computer vision itself," Nayar says.

Other important optics and digital-imaging advances have come from astronomy. In that field, researchers have been pushing boundaries to view ever-fainter and more-distant objects in the sky. In one technique, for example, the telescope's primary mirror continuously adjusts its shape to compensate for the twinkling effect created by Earth's atmosphere (SN: 3/4/00, p. 156: Available to subscribers at http://www.sciencenews.org/articles/20000304/bob10.asp).

Rapid progress in computer animation during the 1980s and 1990s provided another cornerstone of the new photography. The stunning visual realism of modern animated movies such as Shrek and The Incredibles comes from accurately computing how light bounces around a 3-D scene and ultimately reaches a viewer's eye (SN: 1/26/02, p. 56: http://www.sciencenews.org/articles/20020126/bob10.asp). Those calculations can be run in reverse—starting from the light that entered the lens of a camera and tracing it back—to deduce something about the real scene.

Such calculations make it possible to decode the often-distorted images taken by these unconventional cameras. "What the computational camera does is it captures an optically coded image that's not ready for human consumption," Nayar explains. By unscrambling the raw images, scientists can extract extra information about a scene, such as the shapes of the photographed objects or the unique way in which those objects reflect and absorb light.

Photo fusion

One powerful way to do computational photography is to take multiple shots of a scene and mathematically combine those images. For example, even the best digital cameras have difficulty capturing extreme brightness and darkness at the same time. Just look at an amateur snapshot of a person standing in front of a sunlit window.

Compared with a single photo, a sequence of shots taken with different exposures can capture a scene with a wide range of brightness, called the dynamic range. Both a bright outdoor scene and the person in front of it can have good color and detail when the set of images is merged. Nayar and others described the method at the 1999 IEEE Conference on Computer Vision and Pattern Recognition.

In a similar way, a series of frames in which the focus varies can produce a single, sharp image of the entire scene. Both these types of mergers can be arduously performed with standard image-editing software, but computational photography automates the process.
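The exposure-merging step can be sketched in a few lines. The following is a hypothetical simplification, not the published Mitsunaga–Nayar algorithm: each output pixel is a weighted average of the differently exposed frames, with weights that favor well-exposed mid-gray values over clipped shadows and highlights.

```python
import numpy as np

def fuse_exposures(frames):
    """Merge differently exposed frames of the same scene into one
    image with extended dynamic range (simplified exposure fusion).

    frames: list of float arrays with values in [0, 1], all the same shape.
    """
    stack = np.stack(frames)  # (n, h, w) or (n, h, w, 3)
    # Weight each pixel by how well-exposed it is: values near 0.5
    # (mid-gray) get high weight; clipped shadows and highlights get low weight.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12
    return (weights * stack).sum(axis=0)

# Toy example: a dark and a bright exposure of the same 2x2 scene.
dark = np.array([[0.05, 0.10], [0.40, 0.45]])
bright = np.array([[0.40, 0.60], [0.95, 1.00]])
fused = fuse_exposures([dark, bright])
print(fused.shape)  # (2, 2)
```

A real implementation would also align the frames and calibrate the camera's response curve; this sketch assumes perfectly registered, linear images.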

A related technique fuses a series of family portraits into a single image that's free of blinking eyes and unflattering expressions. After using a conventional camera to take a set of pictures of a group of people, the photographer might feed the pictures into a program described during a 2004 conference on computer graphics by Michael Cohen and his colleagues at Microsoft Research in Redmond, Wash.

The user indicates the photos in which each face looks best, and the software then splices them into a seamless image that makes everyone attractive at the same time—even though the depicted moment never happened. This software is now being offered with a high-end version of Microsoft's Windows Vista.
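The core splicing idea can be illustrated with a toy composite. This sketch is hypothetical and omits what makes Cohen's system practical (graph-cut seam optimization and gradient-domain blending); it simply copies each pixel from whichever source photo a user-supplied label map selects.

```python
import numpy as np

def composite(photos, labels):
    """Build one image by taking each pixel from the photo the user
    selected for that region (toy version of digital photomontage;
    the real system also smooths the seams between regions).

    photos: list of arrays, all the same shape.
    labels: integer array of the same spatial shape; labels[y, x] is
            the index of the photo whose pixel is used at (y, x).
    """
    stack = np.stack(photos)  # (n, h, w)
    return np.take_along_axis(stack, labels[None, ...], axis=0)[0]

# Two 2x3 "photos": use photo 0 for the left region, photo 1 for the right.
a = np.zeros((2, 3))
b = np.ones((2, 3))
labels = np.array([[0, 0, 1], [0, 0, 1]])
out = composite([a, b], labels)
print(out)
```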


3-D FROM A DOUGHNUT. Photographing a person's face with a cone-shaped mirror in front of the lens creates a distorted, doughnut-shaped image (left). The cone provides two extra perspectives of the face on opposite sides of the center point, providing enough information to construct a 3-D model (right).
Computer Vision Lab., Columbia Univ.

Want that family photo in 3-D? Nayar's group takes three-dimensional pictures with a normal camera by placing a cone-shaped mirror, like a cheerleader's megaphone, in front of the lens. Because some of the light from an object comes directly into the lens and the rest of the light first bounces off spots inside the cone, the camera captures images from multiple vantage points. From those data, computer software constructs a full 3-D model, as Nayar's group explained at the SIGGRAPH meeting last year in Boston.

A mirrored cone on a video camera might be especially useful to capture an actor's performance in 3-D, Nayar says.

Another alteration of a camera's field of view makes it possible to shoot a picture first and focus it later. Todor Georgiev, a physicist working on novel camera designs at Adobe, the San Jose, Calif.–based company that produces Photoshop, has developed a lens that splits the scene that a camera captures into many separate images.

Georgiev's group etched a grid of square minilenses into a lens, making it look like an insect's compound eye. Each minilens creates a separate image of the scene, effectively shooting the scene from 20 slightly different vantage points. Software merges the mini-images into a single image that the photographer can focus and refocus at will. The photographer can even slightly change the apparent vantage point of the camera. The team described this work last year in Cyprus at the Eurographics Symposium on Rendering.

In essence, the technique replaces the camera's focusing lens with a virtual lens.

Light motifs

The refocusing trick made possible by Georgiev's insect-eye lens can also be achieved by placing a tiny array of thousands of microlenses inside the camera body, directly in front of the sensor that captures images.

Conceptually, the microlens array is a digital sensor in which each pixel has been replaced by a tiny camera. This enables the camera to record information about the incoming light that traditional cameras throw away. Each pixel in a normal digital camera receives light focused into a cone shape from the entire lens. Within that cone, the light varies in important ways, but normal cameras average the cone of light into a single color value for the pixel.

By replacing each pixel with a tiny lens, Levoy's research team developed a camera that can preserve this extra information. Mathematically, say the researchers, the change expands the normal 2-D image into a "light field" that has four dimensions. This light field contains all the information necessary to calculate a refocused image after the fact. Ren Ng, now at Refocus Imaging in Mountain View, Calif., explained the process at a 2005 conference.
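The after-the-fact refocusing amounts to a synthetic-aperture computation over the light field: shift each sub-aperture view in proportion to its position on the lens, then average, so that objects at the chosen depth line up while everything else blurs. The sketch below is a hypothetical shift-and-add version (Ng's paper derives a faster Fourier-domain equivalent).

```python
import numpy as np

def refocus(views, offsets, depth_shift):
    """Synthetic refocus of a light field by shift-and-add.

    views:       list of 2-D arrays, one per sub-aperture viewpoint.
    offsets:     list of (dy, dx) viewpoint offsets on the lens plane.
    depth_shift: scalar selecting the depth brought into focus; each
                 view is translated by depth_shift times its offset.
    """
    acc = np.zeros_like(views[0], dtype=float)
    for view, (dy, dx) in zip(views, offsets):
        shift_y = int(round(depth_shift * dy))
        shift_x = int(round(depth_shift * dx))
        acc += np.roll(view, (shift_y, shift_x), axis=(0, 1))
    return acc / len(views)

# Toy light field: the same bright point seen from two viewpoints,
# displaced by parallax. Choosing depth_shift = -1 realigns the views.
v0 = np.zeros((5, 5)); v0[2, 2] = 1.0
v1 = np.zeros((5, 5)); v1[2, 1] = 1.0   # shifted left by one pixel
img = refocus([v0, v1], offsets=[(0, 0), (0, -1)], depth_shift=-1)
print(img[2, 2])  # 1.0 -- the point is back in focus at full brightness
```

Varying `depth_shift` sweeps the focal plane through the scene without retaking the picture.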

Capturing more information about incoming light waves can also create powerful new kinds of scientific and medical images. For example, Stephen Boppart and his colleagues at the University of Illinois at Urbana-Champaign create 3-D microscopic photos by processing the out-of-focus parts of an image.

The team devised software to examine how a tissue sample, for instance, bends and scatters light. In the February 2007 Nature Physics, the researchers describe how the device uses that information to discern the structure of the tissue. "What we've done is take this blurred information, descramble it, and reconstruct it into an in-focus image," Boppart says.


ARTIFICIAL LIGHTING. By filming a person inside a dome containing hundreds of flashes (left), a filmmaker can re-light the scene afterward using a computer to calculate how the person would look under any combination of colored lights (right).
Debevec/University of Southern California

In computational photography, the flash becomes more than a simple pulse of light. For example, a room-size dome built by Paul Debevec of the University of Southern California in Los Angeles and his colleagues makes it possible to redo the lighting of a scene after it's been shot. Hundreds of flash units mounted on the dome fire one at a time in a precise sequence that repeats dozens of times per second. A high-speed camera captures a frame for every flash.

The result is complete information about how the subject reflects light from virtually every angle. Software can then compute exactly how the scene would look in almost any lighting environment, the researchers reported at the 2006 Eurographics Symposium on Rendering. This method is particularly promising for making films.
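The relighting computation rests on the fact that light transport is linear: a photograph of the subject under any mixture of the dome's flashes is a weighted sum of the frames captured with each flash firing alone. A minimal sketch under that assumption (the real light stage also handles colored lights and moving subjects):

```python
import numpy as np

def relight(basis_frames, weights):
    """Relight a subject from its one-light-at-a-time basis images.

    Because light adds linearly, a photo under any combination of the
    dome's flashes equals a weighted sum of the photos taken with each
    flash firing by itself.

    basis_frames: array of shape (n_lights, h, w) or (n_lights, h, w, 3).
    weights:      length-n_lights array; weights[i] is the intensity of
                  light i in the new, virtual lighting environment.
    """
    basis = np.asarray(basis_frames, dtype=float)
    w = np.asarray(weights, dtype=float)
    return np.tensordot(w, basis, axes=1)

# Toy example: two flash units, a 1x2 image. Dimming light 0 to half
# strength while keeping light 1 at full gives their weighted sum.
frames = np.array([[[0.8, 0.2]],    # scene lit by flash 0 only
                   [[0.1, 0.6]]])   # scene lit by flash 1 only
img = relight(frames, [0.5, 1.0])
print(img)  # [[0.5 0.7]]
```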

What is reality?

With all this manipulative power come questions of authenticity. The more that photographs can be computed or synthesized instead of simply snapped, the less confident a viewer is that a picture can be trusted.

"Certainly, all of us have a certain emotional attachment to things that are real, and we don't want to lose that," Nayar says. For example, to get a perfect family portrait, one might prefer that nobody had blinked. But is a bad shot better than a synthesized moment?

Whether film or digital, photographic images have always departed from reality to some degree. "And every generation, I believe, will redefine how much you can depart," Nayar says. "What was completely unacceptable 20 years ago has become more acceptable today."

Perhaps 20 years from now, when a photographer changes a picture's vantage point, people will still consider the scene to be real. But using a computer to change the clothes that a person in the image is wearing might be going too far, Nayar proposes.

Often, the goal of computational photography isn't to depart from reality but to create a closer facsimile of it. For example, someone looking at people standing in front of a sunset can see the faces clearly and can focus on any part of the scene. A normal photograph, with its dark silhouettes and fixed focus, offers a viewer less than reality.

So, a manipulated image can be "closer, by some subjective argument, to what the real world is for a person looking at it," Levoy says.

It's difficult to say which of the many technologies under the umbrella of computational photography will ever reach the consumer market. The room-size dome containing hundreds of flash units will almost certainly remain in the realm of specialized photographers and movie studios. Other techniques may be suitable for everyday use, but whether and when they reach the market will depend on the vagaries of business and marketing.

In whatever form computational photography becomes commonplace, the people who adopt it over conventional image making will take pictures that capture more of what they actually see, and sometimes what never was at all.


If you have a comment on this article that you would like considered for publication in Science News, send it to editors@sciencenews.org. Please include your name and location.


To subscribe to Science News (print), go to
https://www.kable.com/pub/scnw/subServices.asp.

To sign up for the free weekly e-LETTER from Science News, go to
http://www.sciencenews.org/pages/subscribe_form.asp.

References:

Agarwala, A., et al. 2004. Interactive digital photomontage. ACM Transactions on Graphics 23(August):292-300. Abstract available at http://portal.acm.org/citation.cfm?id=1015718. See also http://grail.cs.washington.edu/projects/photomontage/.

Baker, S., and S.K. Nayar. 2001. Single viewpoint catadioptric cameras. In Panoramic Vision: Sensors, theory, and applications. R. Benosman, and S.B. Kang, eds. New York: Springer-Verlag.

Einarsson, P., et al. 2006. Relighting human locomotion with flowed reflectance fields. Eurographics Symposium on Rendering. June 26-28. Cyprus. Available at http://gl.ict.usc.edu/research/RHL/.

Georgiev, T., et al. 2006. Spatio-angular resolution trade-offs in integral photography. Eurographics Symposium on Rendering. June 26-28. Available at http://www.tgeorgiev.net/Spatioangular.pdf.

Kuthirummal, S., and S.K. Nayar. 2006. Multiview radial catadioptric imaging for scene capture. ACM Transactions on Graphics 25(July):916-923. Abstract available at http://doi.acm.org/10.1145/1179352.1141975. Reprint available at http://www1.cs.columbia.edu/CAVE/publications/pdfs/Kuthirummal_TOG06.pdf.

Mitsunaga, T., and S.K. Nayar. 1999. Radiometric self calibration. IEEE Conference on Computer Vision and Pattern Recognition. June 23-25. Abstract available at http://dx.doi.org/10.1109/CVPR.1999.786966.

Ng, R. 2005. Fourier slice photography. ACM Transactions on Graphics 24(July):735-744. Abstract available at http://doi.acm.org/10.1145/1073204.1073256. Reprint available at http://graphics.stanford.edu/papers/fourierphoto/.

Ralston, T.S., . . . , and S.A. Boppart. 2007. Interferometric synthetic aperture microscopy. Nature Physics 3(February):129-134. Abstract available at http://dx.doi.org/10.1038/nphys514.

Further Readings:

Cowen, R. 2000. Getting a clear view. Science News 157(March 4):156-158. Available to subscribers at http://www.sciencenews.org/articles/20000304/bob10.asp.

Weiss, P. 2002. Calculating cartoons. Science News 161(Jan. 26):56-58. Available at http://www.sciencenews.org/articles/20020126/bob10.asp.

Sources:

Stephen A. Boppart
University of Illinois, Urbana-Champaign
405 N. Mathews Avenue
Urbana, IL 61801

Michael F. Cohen
Microsoft Research
One Microsoft Way
Redmond, WA 98052

Paul Debevec
Institute for Creative Technologies
University of Southern California
Los Angeles, CA 90089

Todor Georgiev
Adobe Systems
345 Park Avenue
San Jose, CA 95110-2704

Marc Levoy
Stanford University
Gates Bldg 3B-366
Stanford, CA 94305-9035

Shree Nayar
Columbia University
2960 Broadway
New York, NY 10027-6902

David Salesin
Adobe Systems
801 N. 34th Street
Seattle, WA 98103



http://www.sciencenews.org/articles/20070407/bob8.asp

From Science News, Vol. 171, No. 14, April 7, 2007, p. 216.

Copyright (c) 2007 Science Service. All rights reserved.



TOPICS: Culture/Society; Government; News/Current Events; Technical
KEYWORDS: medicine; photography; science

1 posted on 04/06/2007 10:42:13 PM PDT by neverdem

To: neverdem

Neat stuff.


2 posted on 04/06/2007 10:43:14 PM PDT by PetroniusMaximus

To: PetroniusMaximus
Ug....it's going to create a whole sub-culture of fabricated, highly realistic celebrity porn.
3 posted on 04/06/2007 10:49:47 PM PDT by Psycho_Bunny

To: El Gato; Ernest_at_the_Beach; Robert A. Cook, PE; lepton; LadyDoc; jb6; tiamat; PGalt; Dianna; ...
Disease underlies Hatfield-McCoy feud

Weak drug combos find new use - Antibiotics that don't work could beat back resistant bacteria.

FReepmail me if you want on or off my health and science ping list.

4 posted on 04/06/2007 10:50:53 PM PDT by neverdem (May you be in heaven a half hour before the devil knows that you're dead.)

To: neverdem
Say CHEESE !
5 posted on 04/06/2007 10:58:10 PM PDT by hole_n_one

To: Psycho_Bunny
Ug....it's going to create a whole sub-culture of fabricated, highly realistic celebrity porn.

Any technology can be used for good or evil. I'm more worried about the fidelity of the imaging, as in propaganda from Al Qaeda.

6 posted on 04/06/2007 10:58:27 PM PDT by neverdem (May you be in heaven a half hour before the devil knows that you're dead.)

To: neverdem

Hilarious. Reuters and other news orgs have been photoshopping reality for years, and there’s plenty of proof.


7 posted on 04/06/2007 11:02:12 PM PDT by JennysCool ("The urge to save humanity is almost always a false front for the urge to rule." -Mencken)

To: hole_n_one

Say “Cheese 2.0”


8 posted on 04/06/2007 11:16:37 PM PDT by ffusco (Maecilius Fuscus,Governor of Longovicium , Manchester, England. 238-244 AD)

To: neverdem

9 posted on 04/06/2007 11:22:28 PM PDT by Vince Ferrer

To: neverdem

Damn. Left the lens cap on again.


10 posted on 04/06/2007 11:56:37 PM PDT by martin_fierro (< |:)~)

To: Vince Ferrer; All
Check out How to make a link open in a new browser window. It's really useful for oversize pics like the one you posted: neither your comment's column nor the recipient's will be skewed to the left, and you won't have to scroll to the right to see the sidebars. In < a target="_blank" href="">< /a>, delete the space between the less-than sign, "<", and the first "a", and between the "<" and the terminal "/a>". You'll have an alternate command that works just as well for viewing a pic. Just plug in the URL and title.
11 posted on 04/07/2007 12:51:48 AM PDT by neverdem (May you be in heaven a half hour before the devil knows that you're dead.)

To: neverdem

Two of the capabilities mentioned in the article are available today, from software that’s available for free.

The ability to dramatically expand the dynamic range of a digital photo is provided by a demo program called DaVinci available from http://www.chromasoftware.com/

The ability to stack images in order to eliminate transient elements in individual shots is delivered by the demo version of the Astrostack program, which can be downloaded at http://www.astrostack.com/

The amount of valuable intellectual property freely available on the Web today poses awesome questions for a service-oriented economy like our own.


12 posted on 04/07/2007 1:06:30 AM PDT by earglasses (...whereas I was blind, now I hear...)

To: martin_fierro

Damn. Left the lens cap on again.

That's OK. Our new software takes the image formed by cosmis rays going through the lens cap and intensifies it to reconstruct your original image.

13 posted on 04/07/2007 2:53:23 AM PDT by Right Wing Assault ("..this administration is planning a 'Right Wing Assault' on values and ideals.." - John Kerry)

To: Right Wing Assault

cosmic rays


14 posted on 04/07/2007 2:54:06 AM PDT by Right Wing Assault ("..this administration is planning a 'Right Wing Assault' on values and ideals.." - John Kerry)

To: neverdem

Reuters will be quick to adapt.


15 posted on 04/07/2007 3:12:27 AM PDT by R. Scott (Humanity i love you because when you're hard up you pawn your Intelligence to buy a drink)

To: neverdem

cool!


16 posted on 04/07/2007 3:16:35 AM PDT by Recovering_Democrat (I am SO glad to no longer be associated with the party of Dependence on Government!)

To: earglasses

Thanks for the links.

“The amount of valuable intellectual property freely available on the Web today poses awesome questions for a service-oriented economy like our own.”

I am writing a MySQL management system in Java that will be finished just about in time for there to be no profit margin.


17 posted on 04/07/2007 6:28:57 AM PDT by FastCoyote

To: earglasses

I’m waiting for Hollywood to replace all the liberal actors with digital actors.


18 posted on 04/07/2007 9:36:46 AM PDT by aimhigh

To: neverdem

“3-D FROM A DOUGHNUT” That is really neat.


19 posted on 04/07/2007 11:35:42 AM PDT by SunkenCiv (I last updated my profile on Monday, April 2, 2007. https://secure.freerepublic.com/donate/)

To: glock rocks; Pete-R-Bilt

Pinging Mr Rocks... Mr Ansel Rocks...


20 posted on 04/07/2007 6:34:06 PM PDT by tubebender (Whom keeps stealing my Tag Line???)



Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.


FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson