Deep Hyperspectral Unmixing using Transformer Network
Keywords: Autoencoder, Endmember
DOI: 10.48550/arxiv.2203.17076
Publication Date: 2022-01-01
AUTHORS (5)
ABSTRACT
Currently, this paper is under review in IEEE. Transformers have intrigued the vision research community with their state-of-the-art performance in natural language processing. With their superior performance, transformers have found their way into the field of hyperspectral image classification and achieved promising results. In this article, we harness the power of transformers to conquer the task of hyperspectral unmixing and propose a novel deep unmixing model with transformers. We aim to utilize the ability of transformers to better capture global feature dependencies in order to enhance the quality of the endmember spectra and the abundance maps. The proposed model is a combination of a convolutional autoencoder and a transformer. The hyperspectral data are encoded by the convolutional encoder, the transformer captures long-range dependencies between the representations derived from the encoder, and the data are reconstructed using a convolutional decoder. We applied the proposed model to three widely used unmixing datasets, i.e., Samson, Apex, and Washington DC Mall, and compared it with the state-of-the-art in terms of root mean squared error and spectral angle distance. The source code for the proposed model will be made publicly available at https://github.com/preetam22n/DeepTrans-HSU.
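The pipeline described in the abstract (a convolutional encoder, a transformer capturing long-range dependencies between the encoded representations, and a decoder reconstructing the hyperspectral data while producing abundance maps and endmember spectra) can be sketched as follows. This is a minimal PyTorch sketch under assumed layer sizes, patch-token handling, and a softmax sum-to-one constraint; it is not the authors' released implementation, which is available at the linked GitHub repository.

```python
# Minimal sketch of an autoencoder-transformer unmixing network, assuming a
# PyTorch implementation. Layer sizes, patch-token construction, and the
# softmax abundance constraint are illustrative assumptions only.
import torch
import torch.nn as nn


class TransformerUnmixingSketch(nn.Module):
    def __init__(self, num_bands=156, num_endmembers=3, embed_dim=64, patch=4):
        super().__init__()
        # Convolutional encoder: compress the spectral dimension into a
        # low-dimensional feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(num_bands, 128, kernel_size=3, padding=1),
            nn.BatchNorm2d(128), nn.LeakyReLU(),
            nn.Conv2d(128, embed_dim, kernel_size=3, padding=1),
            nn.BatchNorm2d(embed_dim), nn.LeakyReLU(),
        )
        # Transformer encoder: captures long-range dependencies between
        # patch tokens derived from the convolutional features.
        self.patch = patch
        layer = nn.TransformerEncoderLayer(d_model=embed_dim * patch * patch,
                                           nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        # Per-pixel abundances; softmax enforces the sum-to-one constraint.
        self.abundance_head = nn.Sequential(
            nn.Conv2d(embed_dim, num_endmembers, kernel_size=1),
            nn.Softmax(dim=1),
        )
        # Linear decoder: a 1x1 convolution whose weights play the role of
        # endmember spectra under the linear mixing model.
        self.decoder = nn.Conv2d(num_endmembers, num_bands,
                                 kernel_size=1, bias=False)

    def forward(self, x):  # x: (B, bands, H, W), H and W divisible by patch
        feats = self.encoder(x)                  # (B, D, H, W)
        b, d, h, w = feats.shape
        p = self.patch
        # Fold non-overlapping patches into tokens for the transformer.
        tokens = feats.unfold(2, p, p).unfold(3, p, p)   # (B, D, H/p, W/p, p, p)
        tokens = tokens.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, d * p * p)
        tokens = self.transformer(tokens)
        # Unfold the tokens back into a spatial feature map.
        feats = (tokens.reshape(b, h // p, w // p, d, p, p)
                       .permute(0, 3, 1, 4, 2, 5).reshape(b, d, h, w))
        abundances = self.abundance_head(feats)          # (B, E, H, W)
        reconstruction = self.decoder(abundances)        # (B, bands, H, W)
        return abundances, reconstruction


if __name__ == "__main__":
    # Toy example: a 64x64 scene with 156 bands and 3 endmembers.
    model = TransformerUnmixingSketch()
    abundances, reconstruction = model(torch.rand(1, 156, 64, 64))
    print(abundances.shape, reconstruction.shape)
```

Training such a model typically minimizes a reconstruction loss between the input cube and the decoder output, with the learned abundances and decoder weights read off as the unmixing result.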
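The two evaluation metrics named in the abstract, root mean squared error (RMSE) over abundance maps and spectral angle distance (SAD) between endmember spectra, can be computed as in the sketch below. Normalization conventions vary across the unmixing literature, so this follows one common formulation rather than the paper's exact code.

```python
# Sketch of the RMSE and SAD metrics used to evaluate unmixing results.
import numpy as np


def rmse(abundances_est, abundances_ref):
    """Root mean squared error, averaged over all pixels and endmembers."""
    diff = np.asarray(abundances_est) - np.asarray(abundances_ref)
    return float(np.sqrt(np.mean(diff ** 2)))


def sad(endmember_est, endmember_ref):
    """Spectral angle distance (in radians) between two endmember spectra."""
    a = np.asarray(endmember_est, dtype=float)
    b = np.asarray(endmember_ref, dtype=float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```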