A Simple Transformer-style Network for Lightweight Image Super-resolution

CVPR Workshops (2023)

Abstract
The task of single image super-resolution (SISR) has attracted much attention in recent years due to its wide range of real-world applications. However, most recently developed methods are computationally expensive and require large amounts of memory. To address this issue, we propose a simple Transformer-style network (STSN) for the image super-resolution (SR) task. The method builds on convolutional modulation (Conv2Former), a very simple block whose cost scales linearly with spatial size, compared to the quadratic scaling of self-attention in Transformers. Conv2Former simplifies the self-attention mechanism by using only convolutions and the Hadamard product. We further improve the original Conv2Former so that it can extract local features, which is helpful for the SR task. Based on this Conv2Former and a multi-layer perceptron (MLP), we propose a convolutional modulation block (Conv2FormerB) that is similar to the Transformer block. From this Conv2FormerB, a 3 × 3 convolution, and an enhanced spatial attention (ESA) block, the STSN is designed for the SISR task. STSN achieves good results on multiple SR benchmarks. Finally, our STSN model attains a 5.6× faster run time than LWSwinIR.
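To make the convolutional modulation idea concrete, below is a minimal PyTorch sketch of a Conv2Former-style block: a large-kernel depthwise convolution produces a modulation map that is multiplied element-wise (Hadamard product) with a value branch, and the result feeds a Transformer-style residual layout with an MLP. The layer names, kernel size, channel width, and normalization choice are illustrative assumptions, not the paper's exact STSN configuration.

```python
import torch
import torch.nn as nn


class ConvMod(nn.Module):
    """Convolutional modulation (Conv2Former-style attention substitute).

    A depthwise large-kernel convolution produces a modulation map A,
    which multiplies a value branch V element-wise (Hadamard product).
    Cost grows linearly with H*W, unlike quadratic self-attention.
    Hyperparameters here are illustrative assumptions.
    """

    def __init__(self, dim: int, kernel_size: int = 11):
        super().__init__()
        self.norm = nn.GroupNorm(1, dim)  # channel-wise LayerNorm equivalent
        self.a = nn.Sequential(
            nn.Conv2d(dim, dim, 1),
            nn.GELU(),
            nn.Conv2d(dim, dim, kernel_size,
                      padding=kernel_size // 2, groups=dim),  # depthwise
        )
        self.v = nn.Conv2d(dim, dim, 1)
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shortcut = x
        x = self.norm(x)
        x = self.a(x) * self.v(x)  # Hadamard product: modulation * values
        return shortcut + self.proj(x)


class Conv2FormerB(nn.Module):
    """Transformer-style block: convolutional modulation followed by an MLP,
    mirroring the attention + MLP layout of a standard Transformer block."""

    def __init__(self, dim: int, mlp_ratio: int = 2):
        super().__init__()
        self.mod = ConvMod(dim)
        self.norm = nn.GroupNorm(1, dim)
        self.mlp = nn.Sequential(
            nn.Conv2d(dim, dim * mlp_ratio, 1),
            nn.GELU(),
            nn.Conv2d(dim * mlp_ratio, dim, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.mod(x)
        return x + self.mlp(self.norm(x))


if __name__ == "__main__":
    x = torch.randn(1, 48, 64, 64)            # (batch, channels, H, W)
    print(Conv2FormerB(48)(x).shape)          # torch.Size([1, 48, 64, 64])
```

Because every operation is a 1×1 or depthwise convolution, the block's compute and memory scale linearly with the number of pixels, which is the property the abstract contrasts with the quadratic cost of Transformer self-attention.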
Keywords
Conv2FormerB, convolutional modulation block, image super resolution task, lightweight image super-resolution, original Conv2Former, real-world applications, recently developed methods, self-attention mechanism, simple block, simple Transformer-style network, single image super resolution, SISR task, spatial attention block, SR task, STSN, Transformers block