SemanticPaint: Interactive 3D Labeling and Learning at your Fingertips
Julien Valentin, Vibhav Vineet, Ming-Ming Cheng, David Kim, Jamie Shotton, Pushmeet Kohli, Matthias Nießner, Antonio Criminisi, Shahram Izadi, Philip Torr
In ACM Transactions on Graphics (TOG), 34(5), October 2015.
Abstract: We present a new interactive and online approach to 3D scene understanding. Our system, SemanticPaint, allows users to simultaneously scan their environment whilst interactively segmenting the scene simply by reaching out and touching any desired object or surface. Our system continuously learns from these segmentations and labels new, unseen parts of the environment. Unlike offline systems, where capture, labeling, and batch learning often take hours or even days to perform, our approach is fully online. This provides users with continuous live feedback of the recognition during capture, allowing them to immediately correct errors in the segmentation and/or learning - a feature that has so far been unavailable to batch and offline methods. This leads to models that are tailored or personalized specifically to the user's environments and object classes of interest, opening up the potential for new applications in augmented reality, interior design, and human/robot navigation. It also provides the ability to capture substantial labeled 3D datasets for training large-scale visual recognition systems.
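The abstract describes an interactive loop: the user touches a surface to label it, the labels are absorbed immediately, and newly scanned geometry is classified on the fly for live feedback. The following is a minimal, hypothetical sketch of that loop only; the class and function names (OnlineLabeler, add_labeled, predict) are illustrative and not the authors' API, and the toy nearest-centroid classifier merely stands in for the paper's far richer pipeline of streaming learning and dense inference over the reconstructed volume.

```python
# Conceptual sketch of an online label-and-propagate loop (hypothetical names,
# not the SemanticPaint implementation).
import numpy as np

class OnlineLabeler:
    """Toy incremental classifier: one running feature centroid per class."""
    def __init__(self):
        self.centroids = {}   # class id -> (feature sum, sample count)

    def add_labeled(self, features, label):
        # Incorporate user-touched voxels immediately (online learning step).
        dim = features.shape[1]
        s, n = self.centroids.get(label, (np.zeros(dim), 0))
        self.centroids[label] = (s + features.sum(axis=0), n + len(features))

    def predict(self, features):
        # Label unseen voxels by nearest class centroid (stand-in for the
        # real system's learned classifier and volumetric inference).
        if not self.centroids:
            return np.full(len(features), -1)
        labels = list(self.centroids)
        means = np.stack([s / n for s, n in self.centroids.values()])
        dists = np.linalg.norm(features[:, None, :] - means[None, :, :], axis=2)
        return np.array(labels)[dists.argmin(axis=1)]

# Usage: simulate one capture / touch-label / feedback cycle on random features.
rng = np.random.default_rng(0)
labeler = OnlineLabeler()
chair_voxels = rng.normal(0.0, 0.1, size=(50, 8))   # features of a touched object
floor_voxels = rng.normal(1.0, 0.1, size=(50, 8))   # features of a touched surface
labeler.add_labeled(chair_voxels, label=1)           # user touches the chair
labeler.add_labeled(floor_voxels, label=2)           # user touches the floor
new_scan = rng.normal(1.0, 0.1, size=(10, 8))        # newly scanned, unlabeled geometry
print(labeler.predict(new_scan))                     # live feedback: mostly label 2
```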
@article{10.1145-2751556,
author = {Julien Valentin and Vibhav Vineet and Ming-Ming Cheng and David Kim and Jamie Shotton and Pushmeet Kohli and Matthias Nie{\ss}ner and Antonio Criminisi and Shahram Izadi and Philip Torr},
title = {SemanticPaint: Interactive 3D Labeling and Learning at your Fingertips},
journal = {ACM Transactions on Graphics (TOG)},
volume = {34},
number = {5},
articleno = {154},
month = oct,
year = {2015},
}