Pluralistic Face Inpainting With Transformation of Attribute Information

IEEE TRANSACTIONS ON MULTIMEDIA (2023)

Abstract
Most face-inpainting methods perform well at face restoration; however, they can produce only a single completed face per input. Although various existing image-inpainting methods can achieve pluralistic inpainting, they typically produce faces with distorted structures or nearly identical textures. To resolve these shortcomings and achieve high-quality, diverse face inpainting, we propose PFTANet, a two-stage pluralistic face-inpainting network that transforms attribute information. In the first stage, a face-parsing network is fine-tuned to obtain semantic facial region information. In the second stage, a generator consisting of SNBlock, CF_ShiftBlocks, and CF_MergeBlock generates high-quality pluralistic face results. Specifically, CF_ShiftBlocks achieves pluralistic face generation by transforming the attribute information extracted from the conditional face by the attribute extractor, ensuring that the attribute information is consistent between the conditional and generated faces. CF_MergeBlock ensures structural consistency between the masked and background regions of the generated face using the semantic facial region information. A multi-patch discriminator is used to enhance the generation of facial details. Experimental results on the CelebA and CelebA-HQ datasets indicate that PFTANet achieves pluralistic and visually realistic face inpainting.
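To make the described data flow concrete, the following is a minimal sketch of the two-stage pipeline in a PyTorch-style interface. Only the names SNBlock, CF_ShiftBlock, CF_MergeBlock, the attribute extractor, and the face-parsing stage come from the abstract; the block internals, channel counts, and the 19-class parsing map are hypothetical placeholders, not the authors' actual implementation.

```python
# Hedged sketch of the PFTANet two-stage forward pass (block internals are placeholders).
import torch
import torch.nn as nn

class SimpleBlock(nn.Module):
    """Placeholder standing in for SNBlock / CF_ShiftBlock / CF_MergeBlock internals."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.body(x)

class PFTANetSketch(nn.Module):
    def __init__(self, parsing_net, attribute_extractor):
        super().__init__()
        self.parsing_net = parsing_net                    # stage 1: fine-tuned face-parsing network
        self.attribute_extractor = attribute_extractor    # attribute features from the conditional face
        self.sn_block = SimpleBlock(3 + 1, 64)            # masked image + mask
        self.shift_block = SimpleBlock(64 + 64, 64)       # injects conditional-face attribute features
        self.merge_block = SimpleBlock(64 + 19, 3)        # fuses the semantic parsing map (19 classes assumed)

    def forward(self, masked_img, mask, conditional_face):
        parsing = self.parsing_net(masked_img)             # stage 1: semantic facial region map
        attrs = self.attribute_extractor(conditional_face) # attribute information to transform
        feat = self.sn_block(torch.cat([masked_img, mask], dim=1))
        feat = self.shift_block(torch.cat([feat, attrs], dim=1))
        out = self.merge_block(torch.cat([feat, parsing], dim=1))
        # keep known background pixels, fill only the masked region
        return out * mask + masked_img * (1 - mask)

# Usage example with stand-in networks (single convolutions, for shape checking only).
parsing_net = nn.Conv2d(3, 19, 3, padding=1)
attribute_extractor = nn.Conv2d(3, 64, 3, padding=1)
model = PFTANetSketch(parsing_net, attribute_extractor)

img = torch.randn(1, 3, 256, 256)
mask = torch.zeros(1, 1, 256, 256)
mask[:, :, 64:192, 64:192] = 1                     # square hole to inpaint
cond = torch.randn(1, 3, 256, 256)                 # conditional face providing attributes
out = model(img * (1 - mask), mask, cond)          # -> (1, 3, 256, 256)
```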
Keywords
Attribute transformation, generative adversarial network, pluralistic face inpainting