Progressive Fusion Video Super-Resolution Network Via Exploiting Non-Local Spatio-Temporal Correlations
2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019)
Abstract
How to effectively fuse temporal information from consecutive frames plays an important role in video super-resolution (SR), yet most previous fusion strategies either fail to fully utilize temporal information or cost too much time. In this study, we propose a novel progressive fusion network for video SR, designed to make better use of spatio-temporal information; it proves more efficient and effective than the existing direct fusion, slow fusion, or 3D convolution strategies. Within this progressive fusion framework, we further introduce an improved non-local operation to avoid the complex motion estimation and motion compensation (ME&MC) procedures used in previous video SR approaches. Extensive experiments on public datasets demonstrate that our method surpasses the state of the art by 0.96 dB on average and runs about 3 times faster, while requiring only about half the parameters.
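The non-local operation the abstract refers to aggregates, for each spatial position, features from all other positions weighted by pairwise similarity, which is what lets the network capture long-range spatio-temporal correlations without explicit ME&MC. As a rough illustration only (not the paper's improved variant, whose details are not given here), a minimal embedded dot-product non-local operation in the style of Wang et al. can be sketched as follows; the function name and the flattened `(N, C)` feature layout are illustrative assumptions:

```python
import numpy as np

def nonlocal_op(x):
    """Minimal non-local operation sketch (illustrative, not the paper's variant).

    x: (N, C) array of N flattened spatio-temporal positions with C-dim features.
    Each output position is a softmax-weighted sum of ALL input positions,
    with weights from pairwise dot-product similarity.
    """
    sim = x @ x.T                           # (N, N) pairwise similarities
    sim -= sim.max(axis=1, keepdims=True)   # subtract row max for numerical stability
    w = np.exp(sim)
    w /= w.sum(axis=1, keepdims=True)       # softmax over all positions
    return w @ x                            # globally aggregated features, (N, C)

# toy input: 4 positions, 3-dim features
x = np.arange(12, dtype=float).reshape(4, 3)
y = nonlocal_op(x)
print(y.shape)  # (4, 3)
```

Because every position attends to every other position, this avoids the explicit per-frame alignment (ME&MC) that flow-based video SR pipelines require, at the cost of an N×N similarity matrix.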
Keywords
consecutive frames, spatio-temporal information, direct fusion, slow fusion, 3D convolution strategies, non-local operation, progressive fusion video super-resolution network, fusion strategies, non-local spatio-temporal correlations, video SR approaches, motion estimation, motion compensation