Snap Image Composition
Yael Pritch, Yair Poleg, Shmuel Peleg
MIRAGE 2011: Computer Vision/Computer Graphics Collaboration Techniques, October 2011, pp. 181--191.
Abstract: Snap Composition broadens the applicability of interactive image composition. Current tools, like Adobe's Photomerge Group Shot, do an excellent job when the background can be aligned and objects have limited motion. Snap Composition works well even when the input images include different objects and the backgrounds cannot be aligned. The power of Snap Composition comes from the ability to assign to every output pixel a source pixel from any input image, and from any location in that image. An energy value is computed for each such assignment, representing both the user constraints and the quality of composition. Minimization of this energy gives the desired composition. Composition is performed once a user marks objects in the different images and optionally drags them to a new location on the target canvas. The background around the dragged objects, as well as the final locations of the objects themselves, will be computed automatically for seamless composition. If the user does not drag the selected objects to a desired place, they will automatically snap into a suitable location. A video describing the results can be seen at www.vision.huji.ac.il/shiftmap/SnapVideo.mp4.
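The abstract describes composition as labeling each output pixel with a source pixel and minimizing an energy that combines user constraints (a data term) with composition quality (a smoothness term between neighbors). The following toy sketch illustrates that formulation on a 1-D strip of pixels; the cost values, the brute-force solver, and all function names are illustrative assumptions, not the paper's actual graph-cut optimization.

```python
import itertools

def composition_energy(labels, unary, pairwise):
    """Total energy of a 1-D label assignment: a per-pixel data cost
    (user constraints) plus a seam cost between neighboring pixels."""
    e = sum(unary[p][l] for p, l in enumerate(labels))
    e += sum(pairwise(labels[p], labels[p + 1]) for p in range(len(labels) - 1))
    return e

def minimize_brute_force(unary, pairwise, num_labels):
    """Exhaustively search all assignments (feasible only for toy sizes;
    the paper uses a far more efficient optimizer)."""
    n = len(unary)
    best = min(itertools.product(range(num_labels), repeat=n),
               key=lambda ls: composition_energy(ls, unary, pairwise))
    return list(best)

# Toy example: 4 output pixels, 2 source images (labels 0 and 1).
# The user "marked" pixel 0 to come from image 0 and pixel 3 from image 1.
INF = 1e9
unary = [[0, INF],   # pixel 0 constrained to image 0
         [1, 1],     # interior pixels: no preference
         [1, 1],
         [INF, 0]]   # pixel 3 constrained to image 1
pairwise = lambda a, b: 0 if a == b else 2  # cost of a visible seam

labels = minimize_brute_force(unary, pairwise, 2)
print(labels, composition_energy(labels, unary, pairwise))
```

The minimizer honors both user constraints and places a single seam between the two sources, which is the cheapest assignment under these costs.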
Article URL: http://dx.doi.org/10.1007/978-3-642-24136-9_16
BibTeX format:
@incollection{Pritch:2011:SIC,
  author = {Yael Pritch and Yair Poleg and Shmuel Peleg},
  title = {Snap Image Composition},
  booktitle = {MIRAGE 2011: Computer Vision/Computer Graphics Collaboration Techniques},
  pages = {181--191},
  month = oct,
  year = {2011},
}