Multi-feature self-attention super-resolution network

The Visual Computer (2023)

Abstract
In recent years, single-image super-resolution (SISR) methods based on the attention mechanism have been widely explored and have achieved remarkable performance. However, most existing networks explore only channel correlations or spatial long-distance dependencies at a single scale while ignoring the mutual guidance of multi-scale information, resulting in the loss of high-frequency information in the reconstructed image. To address this issue, we propose a multi-feature self-attention super-resolution network (MFSN) that embeds multi-scale encoding information into the attention mechanism. Specifically, the network consists of a shallow feature extraction subnetwork, a multi-feature alignment subnetwork (MFAN), and a reconstruction subnetwork. The MFAN is composed of an adjacent feature alignment residual block (AFAB) and a dense backward fusion block (DBFB), where the AFAB exploits multi-scale encoding information, using low-resolution space statistics with larger receptive fields to weight and align the original-scale feature map, so as to adaptively extract more discriminative high-frequency features. Meanwhile, the contrast-aware channel attention module in the AFAB adopts contrast pooling, which is better suited to low-level computer vision tasks, to realize adaptive selection of channel features. Structurally, the DBFB adopts a backward fusion mechanism to fuse the output of each AFAB from the deep layers to the shallow layers, making full use of the hierarchical features. Experimental results demonstrate the superiority of our MFSN network in terms of both quantitative metrics and visual quality.
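The contrast-aware channel attention described above can be illustrated with a minimal sketch. Contrast pooling replaces plain global average pooling with a per-channel summary of standard deviation plus mean, which is then fed through a squeeze-and-excitation-style bottleneck to gate the channels. The function names, bottleneck shape, and use of a ReLU/sigmoid pair here are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def contrast_pooling(x):
    """Contrast pooling: per-channel (std + mean) over spatial dims.

    x: feature map of shape (C, H, W). Returns a (C,) channel descriptor.
    A constant channel contributes only its mean (std is zero).
    """
    mean = x.mean(axis=(1, 2))
    std = x.std(axis=(1, 2))
    return std + mean

def contrast_channel_attention(x, w1, w2):
    """Gate channels with the contrast descriptor (illustrative sketch).

    w1: (C//r, C) reduction weights, w2: (C, C//r) expansion weights.
    Weight matrices stand in for the 1x1 convolutions a real
    implementation would learn.
    """
    d = contrast_pooling(x)                      # squeeze: (C,)
    hidden = np.maximum(w1 @ d, 0.0)             # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # sigmoid gate in (0, 1)
    return x * gate[:, None, None]               # rescale each channel

# Usage: 8 channels, reduction ratio 4, random features and weights.
rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 16, 16))
w1 = rng.standard_normal((2, 8)) * 0.1
w2 = rng.standard_normal((8, 2)) * 0.1
out = contrast_channel_attention(feat, w1, w2)
```

Because the descriptor includes the standard deviation, high-contrast channels (which tend to carry edges and textures) receive larger summary values than flat channels with the same mean, which is why contrast pooling is argued to suit low-level vision tasks better than average pooling alone.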
Keywords
Convolutional neural network, Super-resolution, Multi-scale, Self-attention, Backward fusion