Diving Deep into Regions: Exploiting Regional Information Transformer for Single Image Deraining
CoRR(2024)
Abstract
Transformer-based Single Image Deraining (SID) methods have achieved
remarkable success, primarily attributed to their robust capability in
capturing long-range interactions. However, we observe that current methods
handle rain-affected and unaffected regions concurrently, overlooking the
disparities between these areas. This causes confusion between rain streaks
and background content, prevents effective interactions from being captured,
and ultimately yields suboptimal deraining results. To address this
issue, we introduce the Region Transformer (Regformer), a novel SID method that
issue, we introduce the Region Transformer (Regformer), a novel SID method that
underlines the importance of independently processing rain-affected and
unaffected regions while considering their combined impact for high-quality
image reconstruction. The crux of our method is the innovative Region
Transformer Block (RTB), which integrates a Region Masked Attention (RMA)
mechanism and a Mixed Gate Forward Block (MGFB). The RTB performs attention
selection over rain-affected and unaffected regions and models local features
at mixed scales. The RMA generates attention maps tailored to these two regions and
their interactions, enabling our model to capture comprehensive features
essential for rain removal. To better recover high-frequency textures and
capture more local details, we develop the MGFB as a compensation module to
complete local mixed scale modeling. Extensive experiments demonstrate that our
model reaches state-of-the-art performance, significantly improving the image
deraining quality. Our code and trained models are publicly available.
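To make the region-masking idea concrete: the core of an RMA-style mechanism is restricting which keys each query may attend to, based on a binary mask separating rain-affected from unaffected tokens. The sketch below is a hypothetical, simplified illustration of such region-masked attention (the function name, mask convention, and three-branch output are our assumptions, not the paper's exact formulation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def region_masked_attention(q, k, v, rain_mask):
    """Hypothetical sketch of region-wise masked attention.

    q, k, v: (n, d) token features; rain_mask: (n,) boolean,
    True where a token is judged rain-affected.
    Returns three attention outputs: restricted to rain-affected keys,
    restricted to unaffected keys, and unrestricted (their interaction).
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)           # (n, n) scaled similarity

    def attend(allow):                      # allow: (n,) keys to attend to
        masked = np.where(allow[None, :], scores, -1e9)
        return softmax(masked, axis=-1) @ v

    rain_out = attend(rain_mask)            # attention within rain region
    clean_out = attend(~rain_mask)          # attention within clean region
    mixed_out = softmax(scores, axis=-1) @ v  # full cross-region interaction
    return rain_out, clean_out, mixed_out
```

Each branch yields a separate attention map; in the paper's design these region-specific features are then fused for reconstruction, with the MGFB compensating for local multi-scale detail.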