
Low-Light Video Enhancement via Spatial-Temporal Consistent Illumination and Reflection Decomposition

CoRR (2024)

Abstract
Low-Light Video Enhancement (LLVE) seeks to restore dynamic and static scenes plagued by severe invisibility and noise. One critical aspect is formulating a consistency constraint on the spatio-temporal illumination and appearance of the enhanced results, a dimension overlooked by existing methods. In this paper, we present an innovative video Retinex-based decomposition strategy that operates without explicit supervision to delineate illumination and reflectance components. We leverage dynamic cross-frame correspondences for intrinsic appearance and enforce a scene-level continuity constraint on the illumination field to yield consistent decomposition results. To further ensure consistent decomposition, we introduce a dual-structure enhancement network featuring a novel cross-frame interaction mechanism. This mechanism integrates seamlessly with encoder-decoder single-frame networks at minimal additional parameter cost. By supervising different frames simultaneously, the network encourages them to exhibit matching decomposition features, achieving the desired temporal propagation. Extensive experiments are conducted on widely recognized LLVE benchmarks covering diverse scenarios. Our framework consistently outperforms existing methods, establishing a new state-of-the-art (SOTA) performance.
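To make the Retinex formulation concrete: the decomposition models each frame as an elementwise product I = L ∘ R, where the illumination L is spatially smooth and the reflectance R carries the scene's intrinsic appearance. The sketch below is a classical, non-learned baseline of that split (smooth illumination from a box blur, reflectance by division); it is not the paper's learned decomposition, and the kernel size `k` is an illustrative choice.

```python
import numpy as np

def box_blur(img, k):
    """Box blur over an H x W x C image with edge padding."""
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    H, W, _ = img.shape
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):          # accumulate the k*k shifted copies
        for dx in range(k):
            out += padded[dy:dy + H, dx:dx + W]
    return out / (k * k)

def retinex_decompose(frame, k=15, eps=1e-6):
    """Classical Retinex split I = L * R (a baseline, not the paper's network).

    L: smooth illumination estimate (blurred frame).
    R: reflectance = I / L, holding the intrinsic appearance.
    """
    frame = frame.astype(np.float64)
    L = box_blur(frame, k) + eps   # eps avoids division by zero in dark pixels
    R = frame / L
    return L, R
```

By construction the product L * R reconstructs the input frame exactly, which is the invariant any Retinex decomposition must satisfy; the learned variant in the paper additionally enforces cross-frame consistency on L and R.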

Highlights: This paper proposes an innovative video Retinex-based decomposition strategy that achieves low-light video enhancement through spatio-temporally consistent illumination and reflectance decomposition, attains satisfactory decomposition results without explicit supervision, and introduces a dual-structure enhancement network to maintain decomposition consistency, reaching state-of-the-art performance.

Method: The study adopts a Retinex-based decomposition strategy that requires no explicit supervision, preserving the intrinsic appearance of frames via dynamic cross-frame correspondences and imposing a scene-level continuity constraint on the illumination field.
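The cross-frame appearance constraint above can be sketched as a simple loss term. The sketch assumes the dynamic correspondences are given as an integer displacement field `flow` (for each pixel in frame t, its (dy, dx) offset into frame t+1); the paper's actual correspondence estimation and loss form are not specified here, so this is only an illustrative L1 consistency penalty on the reflectance.

```python
import numpy as np

def reflectance_consistency_loss(R_t, R_tp1, flow):
    """Illustrative cross-frame reflectance consistency term (an assumption,
    not the paper's exact loss).

    R_t, R_tp1 : H x W x C reflectance maps of consecutive frames.
    flow       : H x W x 2 integer displacements mapping pixels of
                 frame t to their correspondences in frame t+1.
    Returns the mean L1 difference between each pixel's reflectance
    and the reflectance at its corresponding pixel in the next frame.
    """
    H, W, _ = R_t.shape
    ys, xs = np.mgrid[0:H, 0:W]
    y2 = np.clip(ys + flow[..., 0], 0, H - 1)   # clamp correspondences
    x2 = np.clip(xs + flow[..., 1], 0, W - 1)   # to the image bounds
    warped = R_tp1[y2, x2]                      # R_{t+1} at matched pixels
    return np.abs(R_t - warped).mean()
```

A matching scene-level continuity term on the illumination field would analogously penalize differences of L across frames and spatial locations; static scenes with zero flow drive this loss to zero.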

Experiments: The authors conduct extensive experiments on widely recognized LLVE benchmark datasets covering diverse scenarios; the results show that the proposed framework consistently outperforms existing methods and establishes a new state-of-the-art (SOTA) performance level.