Tensor low-rank sparse representation for tensor subspace learning

DOI: 10.1016/j.neucom.2021.02.002
Publication Date: 2021-02-17
ABSTRACT
Many subspace learning methods operate on a matrix of sample data. For multi-dimensional data, these methods must first convert the samples into vectors, which often destroys the inherent spatial structure of the data. In this paper, we propose a robust tensor low-rank sparse representation (TLRSR) method that performs subspace learning directly on three-dimensional tensors. First, the dual constraints of low-rankness and sparsity allow the representation tensor to effectively capture the global structure and the local structure of the sample data, respectively. Second, to handle outliers and noise, we adopt the tensor ℓ2,1-norm to characterize the noise of the tensor formed by stacking multiple samples. Third, the denoised tensor, rather than the original tensor, is used as the dictionary for finding the low-rank sparse representation tensor. Finally, an iterative update algorithm is proposed to optimize TLRSR. Compared with state-of-the-art methods, clustering experiments on face images and denoising experiments on real images verify the good performance of the proposed TLRSR in tensor subspace learning.
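As a concrete illustration of the noise model named in the abstract, the minimal Python sketch below shows one common convention for a tensor ℓ2,1-norm (the sum of Frobenius norms of the sample-wise lateral slices) together with its proximal operator, which iterative solvers of LRR-type models typically use to update the noise term. The slicing convention, function names, and parameters are assumptions made for illustration and are not taken from the paper.

import numpy as np

def tensor_l21_norm(E):
    """Tensor l2,1-norm under one common convention: the sum of
    Frobenius norms of the lateral slices E[:, j, :], one slice per
    sample, so sample-specific corruptions are penalized jointly.
    (Assumed definition; the paper may use a different slicing.)"""
    return sum(np.linalg.norm(E[:, j, :]) for j in range(E.shape[1]))

def prox_l21(V, tau):
    """Proximal operator of tau * ||.||_{2,1}: slice-wise shrinkage.
    Each sample slice is scaled toward zero, and slices with small
    energy are set exactly to zero."""
    E = np.zeros_like(V)
    for j in range(V.shape[1]):
        slab = V[:, j, :]
        nrm = np.linalg.norm(slab)
        if nrm > tau:
            E[:, j, :] = (1.0 - tau / nrm) * slab
    return E

# Toy usage: a 10 x 20 x 5 noise tensor in which only two samples are corrupted.
rng = np.random.default_rng(0)
V = np.zeros((10, 20, 5))
V[:, [3, 7], :] = rng.standard_normal((10, 2, 5))
E = prox_l21(V, tau=0.5)
print(tensor_l21_norm(V), tensor_l21_norm(E))

Because the shrinkage acts on whole sample slices rather than individual entries, this penalty encourages the estimated noise to concentrate on a few corrupted samples, which is the usual motivation for ℓ2,1-type terms in robust subspace learning.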