Synthesizing Training Data for Object Detection in Indoor Scenes
DOI:
10.15607/rss.2017.xiii.043
Publication Date:
2017-09-12T14:56:00Z
AUTHORS (4)
ABSTRACT
Detection of objects in cluttered indoor environments is one of the key enabling functionalities for service robots. The best performing object detection approaches in computer vision exploit deep Convolutional Neural Networks (CNNs) to simultaneously detect and categorize objects of interest in cluttered scenes. Training such models typically requires large amounts of annotated training data, which is time consuming and costly to obtain. In this work we explore the use of synthetically generated composite images for training state-of-the-art detectors, especially for object instance detection. We superimpose 2D images of textured object models onto images of real scenes at a variety of locations and scales. Our experiments evaluate different superimposition strategies, ranging from purely image-based blending all the way to depth- and semantics-informed positioning, and demonstrate the effectiveness of these strategies for training object detectors on two publicly available datasets, GMU-Kitchens and Washington RGB-D Scenes v2. As one observation, augmenting a small amount of hand-labeled data with synthetic examples carefully composed onto scenes yields detectors with performance comparable to training on much more hand-labeled data. More broadly, this work charts new opportunities for training detectors by exploiting existing object model repositories, either in a fully automatic fashion or with only a very small number of human-annotated examples.
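The simplest superimposition strategy the abstract mentions, purely image-based blending, can be sketched as pasting a masked object crop onto a background scene and recording the paste region as the detection label. The sketch below is an illustrative minimal version, not the authors' actual pipeline; the function name `composite` and the uniform random placement are assumptions for illustration.

```python
import numpy as np

def composite(background, obj, mask, top, left):
    """Alpha-blend an object crop onto a background scene image.

    background: HxWx3 float array (the real scene).
    obj:        hxwx3 float array (the textured object crop).
    mask:       hxw float array in [0, 1]; 1 marks object pixels.
    Returns the composited image and the pasted bounding box,
    which doubles as the synthetic detection annotation.
    """
    out = background.copy()
    h, w = obj.shape[:2]
    region = out[top:top + h, left:left + w]
    alpha = mask[..., None]            # broadcast mask over channels
    out[top:top + h, left:left + w] = alpha * obj + (1 - alpha) * region
    bbox = (left, top, left + w, top + h)  # (x_min, y_min, x_max, y_max)
    return out, bbox

# Toy example: paste a 16x16 white "object" onto a random 64x64 scene.
rng = np.random.default_rng(0)
bg = rng.random((64, 64, 3))
obj = np.ones((16, 16, 3))
mask = np.ones((16, 16))
img, bbox = composite(bg, obj, mask, top=10, left=20)
```

The depth- and semantics-informed strategies evaluated in the paper would constrain `top`, `left`, and the object scale using scene geometry (e.g. placing objects only on supporting surfaces) rather than sampling them freely.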