RIConv++: Effective Rotation Invariant Convolutions for 3D Point Clouds Deep Learning

Keywords: 3D point cloud, rotation invariance, deep learning, convolutional neural networks, computer vision (cs.CV), artificial intelligence (cs.AI)
DOI: 10.1007/s11263-022-01601-z Publication Date: 2022-03-18T09:02:33Z
ABSTRACT
Deep learning on 3D point clouds is a promising field of research that allows a neural network to learn features of point clouds directly, making it a robust tool for solving 3D scene understanding tasks. While recent works show that point cloud convolutions can be invariant to translation and point permutation, investigations of the rotation invariance property for point cloud convolution have so far been scarce. Although some existing methods perform point cloud convolutions with rotation-invariant features, they generally do not perform as well as their translation-invariant-only counterparts. In this work, we argue that a key reason is that, compared to point coordinates, the rotation-invariant features consumed by point cloud convolution are not as distinctive. To address this problem, we propose a simple yet effective convolution operator that enhances feature distinction by designing powerful rotation-invariant features from the local regions. We consider the relationship between the point of interest and its neighbors as well as the internal relationships among the neighbors to largely improve feature descriptiveness. Our network architecture can capture both local and global context by simply tuning the neighborhood size in each convolution layer. We conduct several experiments on synthetic and real-world point cloud classification, part segmentation, and shape retrieval to evaluate our method, which achieves state-of-the-art accuracy under challenging rotations.

Authors' version. Accepted to International Journal of Computer Vision (IJCV) 2022.
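The core idea is that distances and angles within a local neighborhood do not change under rigid rotation, so features built from them are rotation-invariant by construction. The following NumPy sketch illustrates that principle only; it is an assumption-laden toy, not the authors' actual RIConv++ operator, and the specific choice of features (distances to the point of interest and the neighborhood centroid, plus an angle) is hypothetical:

```python
import numpy as np

def rotation_invariant_features(p, neighbors):
    """Illustrative rotation-invariant descriptors for one local region.

    Builds, per neighbor q, features from distances and angles among the
    point of interest p, the neighbor q, and the neighborhood centroid m.
    All of these are preserved by any rigid rotation. This is a sketch,
    NOT the exact RIConv++ feature construction.
    """
    m = neighbors.mean(axis=0)              # internal reference: centroid
    feats = []
    for q in neighbors:
        d_pq = np.linalg.norm(q - p)        # point of interest -> neighbor
        d_qm = np.linalg.norm(q - m)        # neighbor -> centroid (internal relation)
        d_pm = np.linalg.norm(m - p)        # point of interest -> centroid
        # angle at q in the triangle (p, q, m); guard degenerate cases
        if d_pq > 1e-9 and d_qm > 1e-9:
            cos_a = np.clip(np.dot(p - q, m - q) / (d_pq * d_qm), -1.0, 1.0)
        else:
            cos_a = 1.0
        feats.append([d_pq, d_qm, d_pm, cos_a])
    return np.asarray(feats)

# Sanity check: features are identical before and after a random rotation.
rng = np.random.default_rng(0)
p = rng.normal(size=3)
nbrs = rng.normal(size=(8, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal matrix
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]                          # ensure a proper rotation
f_orig = rotation_invariant_features(p, nbrs)
f_rot = rotation_invariant_features(Q @ p, nbrs @ Q.T)
print(np.allclose(f_orig, f_rot))               # invariance holds
```

In the paper's setting, such per-neighbor invariant features would replace raw coordinates as the input to the convolution, which is why making them as distinctive as possible matters.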