Low-Light Video Enhancement via Spatial-Temporal Consistent Illumination and Reflection Decomposition
CoRR (2024)
Abstract
Low-Light Video Enhancement (LLVE) seeks to restore dynamic and static scenes
degraded by severe visibility loss and noise. One critical aspect is formulating
a consistency constraint on the spatial-temporal illumination and appearance of
the enhanced frames, a dimension overlooked by existing methods. In
this paper, we present an innovative video Retinex-based decomposition strategy
that operates without the need for explicit supervision to delineate
illumination and reflectance components. We leverage dynamic cross-frame
correspondences for intrinsic appearance and enforce a scene-level continuity
constraint on the illumination field to yield consistent decomposition results.
To further enforce this consistency, we introduce
a dual-structure enhancement network featuring a novel cross-frame interaction
mechanism. This mechanism integrates seamlessly with single-frame
encoder-decoder networks at minimal additional parameter cost. By
supervising different frames simultaneously, this network encourages them to
exhibit matching decomposition features, thus achieving the desired temporal
propagation. Extensive experiments are conducted on widely recognized LLVE
benchmarks, covering diverse scenarios. Our framework consistently outperforms
existing methods, establishing a new state-of-the-art (SOTA) performance.
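The decomposition the abstract describes builds on the classical Retinex model, which factors each frame into an illumination field and a reflectance (intrinsic appearance) map. The paper learns this split with a network and cross-frame constraints; as a minimal illustration of the underlying model only, the sketch below uses a naive per-pixel channel-maximum heuristic for illumination (an assumption for demonstration, not the paper's method):

```python
import numpy as np

def retinex_decompose(frame: np.ndarray, eps: float = 1e-6):
    """Naive single-frame Retinex split: frame = L * R.

    Illumination L is estimated as the per-pixel maximum over color
    channels (a common hand-crafted heuristic; the paper instead learns
    L with temporal-consistency constraints). Reflectance R is then
    recovered as frame / L.
    """
    # Illumination: channel-wise max, kept as (H, W, 1) for broadcasting.
    L = frame.max(axis=-1, keepdims=True)
    # Reflectance: illumination-normalized appearance in [0, 1].
    R = frame / (L + eps)
    return L, R

# Toy low-light frame with values in [0, 0.2], shape (H, W, 3).
rng = np.random.default_rng(0)
frame = rng.uniform(0.0, 0.2, size=(4, 4, 3))
L, R = retinex_decompose(frame)

# The factors reconstruct the input: L * R ~= frame.
assert np.allclose(L * R, frame, atol=1e-4)
```

A learned version would replace the channel-maximum estimate with a network prediction and add the paper's scene-level continuity loss on L and cross-frame correspondence loss on R across adjacent frames.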