Depth-Aware Unpaired Video Dehazing.

IEEE Transactions on Image Processing (2024)

Abstract
This paper investigates a novel unpaired video dehazing framework, a practical alternative that relieves the pressure of collecting paired data. In this paradigm, two key issues must be addressed for satisfactory performance: 1) temporal consistency, which single-image dehazing does not involve, and 2) stronger dehazing ability. To handle these problems, we introduce depth information to construct additional regularization and supervision. Specifically, we synthesize realistic motions from depth information to improve the effectiveness and applicability of traditional temporal losses, thereby better regularizing spatiotemporal consistency. Moreover, depth information is also incorporated into adversarial learning: for haze removal, it guides the local discriminator to focus on regions where haze residuals are more likely to exist. The dehazing performance is consequently improved by the more pertinent guidance from our depth-aware local discriminator. Extensive experiments validate our effectiveness and superiority over other competitors. To the best of our knowledge, this study is the first foray into the task of unpaired video dehazing. Our code is available at https://github.com/YaN9-Y/DUVD.
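The abstract's key idea of a depth-aware local discriminator can be illustrated with a minimal sketch. The function name, the least-squares GAN loss choice, and the per-patch weighting scheme below are assumptions for illustration only (the paper's actual formulation may differ); the sketch only shows the general principle that haze density grows with scene depth, so far regions, where haze residuals are more likely to survive, receive a larger adversarial weight.

```python
import numpy as np

def depth_weighted_adv_loss(disc_logits, depth):
    """Hypothetical depth-aware weighting for a patch discriminator.

    disc_logits : (H, W) per-patch real/fake logits from a local discriminator
    depth       : (H, W) estimated scene depth, larger values = farther

    In the atmospheric scattering model, transmission t = exp(-beta * d)
    decays with depth d, so distant regions are hazier and more likely to
    retain haze residuals after dehazing; we weight their loss higher.
    """
    # Normalize depth to [0, 1] to use it as a per-patch importance weight.
    w = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    # Least-squares GAN generator loss: push each patch logit toward 1 ("real").
    per_patch = (disc_logits - 1.0) ** 2
    # Depth-weighted average: far (hazier) patches dominate the loss.
    return float(np.sum(w * per_patch) / (np.sum(w) + 1e-8))
```

For example, with `depth = [[0.0, 1.0]]` and `disc_logits = [[1.0, 0.0]]`, the near patch (already judged real) contributes nothing, while the far patch's error is fully counted, yielding a loss of about 1.0.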
Keywords
Video dehazing, Image dehazing, Unpaired learning