Mirror mirror: crowdsourcing better portraits
Jun-Yan Zhu, Aseem Agarwala, Alexei A. Efros, Eli Shechtman, Jue Wang
In ACM Transactions on Graphics, 33(6), November 2014.
Abstract: We describe a method for providing feedback on portrait expressions, and for selecting the most attractive expressions from large video/photo collections. We capture a video of a subject's face while they are engaged in a task designed to elicit a range of positive emotions. We then use crowdsourcing to score the captured expressions for their attractiveness. We use these scores to train a model that can automatically predict attractiveness of different expressions of a given person. We also train a cross-subject model that evaluates portrait attractiveness of novel subjects and show how it can be used to automatically mine attractive photos from personal photo collections. Furthermore, we show how, with a little bit ($5-worth) of extra crowdsourcing, we can substantially improve the cross-subject model by "fine-tuning" it to a new individual using active learning. Finally, we demonstrate a training app that helps people learn how to mimic their best expressions.
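Illustrative sketch (not from the paper): the abstract's "fine-tuning via active learning" step can be pictured as pool-based querying, where crowd labels are requested only for the frames a per-subject model is most uncertain about, starting from the cross-subject scores. The feature extraction, the crowd oracle, the ridge model, and all names below are assumptions for illustration only, not the authors' implementation.

# Minimal, hypothetical sketch of active-learning fine-tuning of a
# per-subject expression scorer from a cross-subject prior.
import numpy as np

rng = np.random.default_rng(0)

def ridge_fit(X, y, lam=1.0):
    # Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def ensemble_predict(X, models):
    preds = np.stack([X @ w for w in models])     # (n_models, n_frames)
    return preds.mean(axis=0), preds.var(axis=0)  # mean score, disagreement

def active_finetune(X_pool, crowd_oracle, prior_scores, budget=25, n_models=8):
    # Query the crowd only on frames the current ensemble disagrees about.
    labeled_idx, labels = [], []
    scores = prior_scores.copy()                  # start from cross-subject model
    for _ in range(budget):
        if labeled_idx:
            Xl, yl = X_pool[labeled_idx], np.array(labels)
            models = []
            for _ in range(n_models):             # bootstrap ensemble -> variance
                b = rng.integers(0, len(yl), len(yl))
                models.append(ridge_fit(Xl[b], yl[b]))
            scores, var = ensemble_predict(X_pool, models)
        else:
            var = rng.random(len(X_pool))         # no labels yet: query randomly
        var[labeled_idx] = -np.inf                # never re-query a labeled frame
        q = int(np.argmax(var))                   # most uncertain frame
        labeled_idx.append(q)
        labels.append(crowd_oracle(q))            # hypothetical crowd score in [0,1]
    return scores                                 # refined per-subject scores

With a small query budget (e.g. a few dozen crowd judgments, in the spirit of the "$5-worth" mentioned in the abstract), the refined scores can then be used to rank and select a subject's most attractive frames.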
Article URL: http://dx.doi.org/10.1145/2661229.2661287
BibTeX format:
@article{Zhu:2014:MMC,
  author = {Jun-Yan Zhu and Aseem Agarwala and Alexei A. Efros and Eli Shechtman and Jue Wang},
  title = {Mirror mirror: crowdsourcing better portraits},
  journal = {ACM Transactions on Graphics},
  volume = {33},
  number = {6},
  pages = {234:1--234:12},
  month = nov,
  year = {2014},
}