Single‐image super‐resolution using lightweight transformer‐convolutional neural network hybrid model
Keywords: Memory footprint, Graphics processing unit
DOI: 10.1049/ipr2.12833
Publication Date: 2023-05-29
AUTHORS (4)
ABSTRACT
Abstract: With constant advances in deep learning methods applied to image processing, convolutional neural networks (CNNs) have been widely explored for single-image super-resolution (SISR) problems and have attained significant success. However, these CNN-based methods cannot fully use the internal and external information of an image. The authors add a lightweight Transformer structure to capture this information. Specifically, they apply a dense block and residual connection to build a residual dense convolution block (RDCB) that reduces the number of parameters somewhat and extracts shallow features. A lightweight transformer block (LTB) further extracts features and learns the texture details between patches through a self-attention mechanism. The LTB comprises an efficient multi-head transformer (EMT) with a small graphics processing unit (GPU) memory footprint, which benefits from feature preprocessing by multi-head attention (MA), reduction, and expansion. The EMT significantly reduces the demand on GPU resources. In addition, a detail-purifying attention block (DAB) is proposed to explore the context of the high-resolution (HR) space and recover more details. Extensive evaluations on four benchmark datasets demonstrate the effectiveness of the authors' model in terms of quantitative metrics and visual effects. The model uses only about 40% as many parameters as other methods while achieving better performance.
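The reduce-then-attend-then-expand idea behind the EMT can be illustrated with a minimal, dependency-free sketch. This is not the authors' implementation: the projection weights, patch count, and channel sizes below are illustrative assumptions, and the real model would use learned projections and multiple heads. The point is that self-attention over n patches costs O(n^2 * d), so reducing the channel dimension d before attention and expanding it afterwards shrinks the memory footprint.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of floats."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def matvec(W, x):
    """Apply a linear projection W (one row per output channel) to vector x."""
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]

def self_attention(X):
    """Scaled dot-product self-attention over n patch vectors.

    For simplicity Q = K = V = X (no learned projections). Each output
    vector is a softmax-weighted combination of all input vectors.
    """
    d = len(X[0])
    out = []
    for q in X:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in X]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, X)) for j in range(d)])
    return out

def emt_like(X, W_reduce, W_expand):
    """Reduce channels -> self-attention -> expand back (EMT-style sketch)."""
    reduced = [matvec(W_reduce, x) for x in X]
    attended = self_attention(reduced)
    return [matvec(W_expand, a) for a in attended]

if __name__ == "__main__":
    # Three "patches" with 4 channels each; attend in a reduced 2-channel space.
    X = [[1.0, 0.0, 2.0, 1.0],
         [0.0, 1.0, 1.0, 2.0],
         [2.0, 1.0, 0.0, 1.0]]
    W_reduce = [[0.5, 0.0, 0.5, 0.0],   # 4 -> 2 (illustrative weights)
                [0.0, 0.5, 0.0, 0.5]]
    W_expand = [[1.0, 0.0], [0.0, 1.0], # 2 -> 4
                [1.0, 1.0], [0.5, 0.5]]
    Y = emt_like(X, W_reduce, W_expand)
    print(len(Y), len(Y[0]))  # same patch count, original channel count
```

With the reduction, the attention score matrix is computed over 2-channel vectors instead of 4-channel ones; in the full model this is where the GPU memory savings come from, since the n x n score computation dominates at SISR-scale patch counts.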