Fully Attentional Network for Semantic Segmentation
DOI: 10.1609/aaai.v36i2.20126
Publication Date: 2022-07-04
ABSTRACT
Recent non-local self-attention methods have proven to be effective in capturing long-range dependencies for semantic segmentation. These methods usually form a similarity map of R^(CxC) (by compressing spatial dimensions) or R^(HWxHW) (by compressing channels) to describe the feature relations along either the channel or the spatial dimension, where C is the number of channels, and H and W are the spatial dimensions of the input feature map. However, such practices tend to condense feature dependencies along the other dimension, hence causing attention missing, which might lead to inferior results for small/thin categories or inconsistent segmentation inside large objects. To address this problem, we propose a new approach, namely Fully Attentional Network (FLANet), to encode both spatial and channel attentions in a single similarity map while maintaining high computational efficiency. Specifically, for each channel map, our FLANet can harvest feature responses from all other channel maps, and the associated spatial positions as well, through a novel fully attentional block. Our method has achieved state-of-the-art performance on three challenging datasets, i.e., 83.6% on the Cityscapes test set, 46.99% on the ADE20K validation set, and 88.5% on the PASCAL VOC test set, respectively.
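To make the contrast in the abstract concrete, the sketch below shows the two standard non-local similarity maps it refers to: a channel map of shape C x C obtained by compressing the spatial dimensions, and a spatial map of shape HW x HW obtained by compressing the channels. This is a minimal illustration in assumed PyTorch code, not the authors' FLANet implementation; the fully attentional block that merges both attentions into a single map is described in the paper itself, and the tensor names (feat, B, C, H, W) are illustrative.

import torch
import torch.nn.functional as F

def channel_similarity(feat: torch.Tensor) -> torch.Tensor:
    """feat: (B, C, H, W) -> channel similarity map of shape (B, C, C)."""
    b, c, h, w = feat.shape
    flat = feat.view(b, c, h * w)                # compress spatial dimensions
    sim = torch.bmm(flat, flat.transpose(1, 2))  # (B, C, C) channel-to-channel relations
    return F.softmax(sim, dim=-1)

def spatial_similarity(feat: torch.Tensor) -> torch.Tensor:
    """feat: (B, C, H, W) -> spatial similarity map of shape (B, HW, HW)."""
    b, c, h, w = feat.shape
    flat = feat.view(b, c, h * w)                    # (B, C, HW)
    sim = torch.bmm(flat.transpose(1, 2), flat)      # compress channels: (B, HW, HW)
    return F.softmax(sim, dim=-1)

if __name__ == "__main__":
    x = torch.randn(2, 64, 32, 32)
    print(channel_similarity(x).shape)  # torch.Size([2, 64, 64])
    print(spatial_similarity(x).shape)  # torch.Size([2, 1024, 1024])

Each map attends along only one dimension, which is the "attention missing" issue the abstract identifies; FLANet's contribution is to capture both kinds of relations in one similarity map.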