SAT: Size-Aware Transformer for 3D Point Cloud Semantic Segmentation

DOI: 10.48550/arxiv.2301.06869
Publication Date: 2023-01-01
ABSTRACT
Transformer models have achieved promising performance in point cloud segmentation. However, most existing attention schemes apply the same feature learning paradigm to all points equally and overlook the enormous difference in size among scene objects. In this paper, we propose the Size-Aware Transformer (SAT), which can tailor effective receptive fields to objects of different sizes. Our SAT achieves size-aware learning via two steps: it introduces multi-scale features into each attention layer and allows each point to choose its attentive fields adaptively. It contains two key designs: the Multi-Granularity Attention (MGA) scheme and the Re-Attention module. The MGA addresses two challenges: efficiently aggregating tokens from distant areas and preserving multi-scale features within a single attention layer. Specifically, point-voxel cross attention is proposed to address the first challenge, and a shunted strategy based on standard multi-head self-attention is applied to solve the second. The Re-Attention module dynamically adjusts each point's attention scores over the fine- and coarse-grained features output by the MGA. Extensive experimental results demonstrate state-of-the-art performance on the S3DIS and ScanNetV2 datasets. SAT also achieves the most balanced per-category performance among the referred methods, which illustrates its superiority in modelling objects of different sizes. The code and model will be released after the acceptance of this paper.
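To make the abstract's two-branch idea concrete, here is a minimal NumPy sketch of a size-aware layer under stated assumptions: a fine-grained branch performs self-attention among points, a coarse-grained branch performs point-voxel cross attention against average-pooled voxel tokens, and a per-point gate (standing in for the Re-Attention module) mixes the two outputs. All function names, the single-head formulation, and the softmax gate `w_gate` are illustrative simplifications, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # single-head scaled dot-product attention (simplified from multi-head)
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def voxel_pool(points, feats, voxel_size):
    # average the features of points falling into the same voxel,
    # producing a small set of coarse-grained tokens
    keys = np.floor(points / voxel_size).astype(int)
    uniq, inv = np.unique(keys, axis=0, return_inverse=True)
    pooled = np.zeros((len(uniq), feats.shape[1]))
    np.add.at(pooled, inv, feats)
    counts = np.bincount(inv, minlength=len(uniq)).astype(float)
    return pooled / counts[:, None]

def size_aware_layer(points, feats, voxel_size, w_gate):
    # fine branch: self-attention among all points
    fine = attention(feats, feats, feats)
    # coarse branch: point-voxel cross attention to pooled voxel tokens
    vox = voxel_pool(points, feats, voxel_size)
    coarse = attention(feats, vox, vox)
    # per-point gate over the two granularities (hypothetical stand-in
    # for the Re-Attention module's dynamic score adjustment)
    gates = softmax(feats @ w_gate)              # (N, 2)
    return gates[:, :1] * fine + gates[:, 1:] * coarse
```

A point belonging to a large object can learn a gate that favors the coarse branch (wide receptive field via voxel tokens), while a point on a small object can favor the fine branch; this is the intuition behind letting each point "choose its attentive fields adaptively".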