Contrastive structure and texture fusion for image inpainting

Neurocomputing (2023)

Abstract
Most recent U-Net based models have shown promising results on challenging tasks in the image inpainting field. However, they often generate content with blurred textures and distorted structures due to the lack of semantic consistency and texture continuity in the missing regions. In this paper, we propose to restore the missing areas at both the structural and textural levels. Our method is built upon a U-Net structure, which repairs images by extracting semantic information from high to low resolution and then decoding it back to the original image. Specifically, we utilize the high-level semantic features learned in the encoder to guide the inpainting of structure-aware features in the adjacent low-level feature map. Meanwhile, low-level feature maps have clearer textures than high-level ones, which can be used as a prior for the textural repair of high-level feature maps. Subsequently, a fusion module combines the two repaired feature maps (i.e., structure-aware and texture-aware features) to obtain a feature map with reasonable semantics. Moreover, to learn more representative high-level semantic features, we design the model as a siamese network for contrastive learning. Experiments on practical data show that our method outperforms other state-of-the-art methods. (c) 2023 Published by Elsevier B.V.
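To make the fusion idea concrete, the following is a minimal PyTorch sketch of one plausible structure-texture fusion block together with an InfoNCE contrastive loss for the siamese encoder. The module names, channel sizes, gating scheme, and loss choice are illustrative assumptions for the general technique described in the abstract, not the authors' implementation.

```python
# Hypothetical sketch: fuse a semantic (high-level, low-resolution) feature map
# with a texture (low-level, high-resolution) feature map, and a contrastive
# loss for a siamese encoder.  All design details here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StructureTextureFusion(nn.Module):
    """Combine structure-aware and texture-aware features at the low-level resolution."""

    def __init__(self, high_ch: int, low_ch: int, out_ch: int):
        super().__init__()
        # Structure branch: high-level semantics guide repair of low-level features.
        self.structure = nn.Conv2d(high_ch + low_ch, out_ch, kernel_size=3, padding=1)
        # Texture branch: low-level detail serves as a prior for high-level features.
        self.texture = nn.Conv2d(high_ch + low_ch, out_ch, kernel_size=3, padding=1)
        # Soft gate deciding, per position, how to mix the two repaired maps.
        self.gate = nn.Conv2d(out_ch * 2, out_ch, kernel_size=1)

    def forward(self, high: torch.Tensor, low: torch.Tensor) -> torch.Tensor:
        # Upsample the coarse semantic map to the texture map's resolution.
        high_up = F.interpolate(high, size=low.shape[-2:], mode="bilinear",
                                align_corners=False)
        cat = torch.cat([high_up, low], dim=1)
        structure_feat = F.relu(self.structure(cat))  # structure-aware features
        texture_feat = F.relu(self.texture(cat))      # texture-aware features
        mix = torch.sigmoid(self.gate(torch.cat([structure_feat, texture_feat], dim=1)))
        return mix * structure_feat + (1.0 - mix) * texture_feat


def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Standard InfoNCE loss between two batches of siamese-view embeddings."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature               # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    fuse = StructureTextureFusion(high_ch=256, low_ch=64, out_ch=64)
    high = torch.randn(2, 256, 16, 16)   # deep encoder features (semantic)
    low = torch.randn(2, 64, 64, 64)     # shallow encoder features (texture)
    print(fuse(high, low).shape)         # torch.Size([2, 64, 64, 64])
    print(info_nce(torch.randn(2, 128), torch.randn(2, 128)).item())
```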
Keywords
Image inpainting, Contrastive learning, Attention