Self-supervised feature adaption for infrared and visible image fusion

DOI: 10.1016/j.inffus.2021.06.002 Publication Date: 2021-06-12T11:13:28Z
ABSTRACT
Benefitting from the strong feature extraction capability of deep learning, infrared and visible image fusion has made great progress. Since infrared and visible images are obtained by different sensors with different imaging mechanisms, a domain discrepancy exists between them, which becomes a stumbling block for effective fusion. In this paper, we propose a novel self-supervised feature adaption framework for infrared and visible image fusion. We implement a self-supervised strategy that encourages the backbone network to extract adapted features while retaining vital information by reconstructing the source images. Specifically, we first adopt an encoder network to extract adapted features. Then, two decoders with attention mechanism blocks are used to reconstruct the source images in a self-supervised manner, forcing the adapted features to retain the vital information of the source images. Further, considering the case in which source images contain low-quality information, we design a novel infrared and visible image fusion and enhancement model, improving the fusion method’s robustness. Experiments are conducted to evaluate the proposed method qualitatively and quantitatively, and they show that it achieves state-of-the-art performance compared with existing infrared and visible image fusion methods. Results are available at https://github.com/zhoafan/SFA-Fuse.
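The core self-supervision signal described above — a shared encoder whose features must let two separate decoders reconstruct the infrared and visible inputs — can be sketched in miniature. The snippet below is an illustrative toy, not the paper's implementation: it uses plain linear maps in place of the convolutional encoder, omits the attention blocks, and all names, shapes, and learning rates are assumptions chosen for clarity.

```python
import numpy as np

# Toy sketch of self-supervised feature adaption via reconstruction.
# A shared "encoder" maps each (flattened) source image to adapted
# features; two branch "decoders" reconstruct the infrared and visible
# inputs, and the summed reconstruction loss is the training signal.
rng = np.random.default_rng(0)
D, F = 64, 16                        # flattened image size, feature size

W_enc = rng.normal(0, 0.1, (F, D))   # shared encoder weights
W_ir  = rng.normal(0, 0.1, (D, F))   # infrared-branch decoder
W_vis = rng.normal(0, 0.1, (D, F))   # visible-branch decoder

ir  = rng.random(D)                  # stand-in infrared image
vis = rng.random(D)                  # stand-in visible image

def recon_loss():
    f_ir, f_vis = W_enc @ ir, W_enc @ vis       # adapted features
    r_ir, r_vis = W_ir @ f_ir, W_vis @ f_vis    # reconstructions
    return 0.5 * (np.mean((r_ir - ir) ** 2) + np.mean((r_vis - vis) ** 2))

lr = 0.05
losses = [recon_loss()]
for _ in range(200):
    # Manual gradients of the mean-squared reconstruction loss.
    f_ir, f_vis = W_enc @ ir, W_enc @ vis
    e_ir  = (W_ir @ f_ir - ir) / D              # per-pixel error terms
    e_vis = (W_vis @ f_vis - vis) / D
    W_ir  -= lr * np.outer(e_ir, f_ir)
    W_vis -= lr * np.outer(e_vis, f_vis)
    W_enc -= lr * (np.outer(W_ir.T @ e_ir, ir) +
                   np.outer(W_vis.T @ e_vis, vis))
    losses.append(recon_loss())

print(f"reconstruction loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Because both decoders read from the same encoder, minimizing the joint reconstruction loss pushes the shared features to carry the vital content of both modalities — the role the paper assigns to its self-supervised reconstruction branches.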