Multi-modal uniform deep learning for RGB-D person re-identification

DOI: 10.1016/j.patcog.2017.06.037 Publication Date: 2017-07-04T18:32:32Z
ABSTRACT
In this paper, we propose a multi-modal uniform deep learning (MMUDL) method for RGB-D person re-identification. Unlike most existing person re-identification methods, which use only RGB images, our approach recognizes people from RGB-D images, so that additional information such as anthropometric measures and body shape can be exploited for re-identification. To extract useful information from depth images, we apply a deep network to processed depth maps, which also have three channels, to obtain efficient anthropometric features. Moreover, we design a multi-modal fusion layer that combines the features extracted from depth and RGB images into a uniform latent variable that is robust to noise, and we optimize the fusion layer jointly with the two CNNs. Experimental results on two RGB-D person re-identification datasets demonstrate the effectiveness of the proposed approach.
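The architecture the abstract describes, two CNN branches (one for RGB, one for three-channel processed depth) fused into a single latent representation, can be sketched roughly as below. The branch layout, feature dimensions, and the concatenate-and-project fusion are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn


class BranchCNN(nn.Module):
    """A small stand-in CNN branch (hypothetical sizes, not the paper's network)."""

    def __init__(self, out_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pool to a 32-dim vector
        )
        self.fc = nn.Linear(32, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.features(x).flatten(1))


class MultiModalFusion(nn.Module):
    """Fuses RGB and depth branch features into one shared latent vector.

    A simple concatenate-and-project fusion is used here as an assumption;
    the paper's fusion layer and joint optimization may differ.
    """

    def __init__(self, feat_dim: int = 128, latent_dim: int = 64):
        super().__init__()
        self.rgb_branch = BranchCNN(feat_dim)
        self.depth_branch = BranchCNN(feat_dim)  # depth preprocessed to 3 channels
        self.fusion = nn.Linear(2 * feat_dim, latent_dim)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        f_rgb = self.rgb_branch(rgb)
        f_depth = self.depth_branch(depth)
        # Uniform latent variable shared by both modalities
        return self.fusion(torch.cat([f_rgb, f_depth], dim=1))


rgb = torch.randn(4, 3, 64, 32)    # batch of RGB person crops
depth = torch.randn(4, 3, 64, 32)  # batch of 3-channel processed depth maps
z = MultiModalFusion()(rgb, depth)
print(tuple(z.shape))
```

Because both branches and the fusion layer are ordinary `nn.Module`s in one model, a single optimizer over `MultiModalFusion().parameters()` trains them jointly, mirroring the joint optimization the abstract mentions.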