Facial attribute editing method combined with parallel GAN for attribute separation

Journal of Visual Communication and Image Representation (2024)

Abstract
Facial attribute editing suffers from incorrect changes to face regions and artifacts in generated images. We propose a facial attribute editing method that combines a parallel GAN with attribute separation. First, the method integrates a U2-net encoder and a Trans-GAN decoder as the model encoder to extract and generate facial spatial information effectively. Second, RGB images and semantic mask images are used to train a parallel generator and discriminator, respectively. A semantic consistency loss is introduced so that the two branches produce consistent semantic outputs and the parallel generator and discriminator converge in the same direction. The proposed model, trained on the CelebAMask-HQ dataset and validated on the CelebA dataset, separates the face mask image from the background mask image to improve the accuracy of facial attribute editing. Compared with existing facial attribute editing methods, the proposed method balances attribute editing ability with detail preservation: it accurately edits the target attribute region and greatly improves the quality of facial images.
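The abstract does not give the exact form of the semantic consistency loss, so the following is only a minimal NumPy sketch of one plausible formulation: the per-pixel class distributions predicted by the two parallel branches (the RGB branch and the semantic-mask branch) are compared with a mean L1 distance, which is zero only when the two branches agree semantically. The function and variable names here are hypothetical, not taken from the paper.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the semantic-class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def semantic_consistency_loss(rgb_branch_logits, mask_branch_logits):
    # Hypothetical loss: mean pixel-wise L1 distance between the class
    # distributions predicted by the two parallel branches. Penalizing
    # this distance pushes both branches toward the same semantics.
    p = softmax(rgb_branch_logits)
    q = softmax(mask_branch_logits)
    return np.abs(p - q).mean()

# Toy example: 4x4 feature maps with 3 semantic classes.
rng = np.random.default_rng(0)
a = rng.normal(size=(4, 4, 3))
b = rng.normal(size=(4, 4, 3))
loss_self = semantic_consistency_loss(a, a)  # identical branches
loss_diff = semantic_consistency_loss(a, b)  # disagreeing branches
print(loss_self, loss_diff)
```

When the two branches emit identical logits the loss is exactly zero, and any semantic disagreement yields a positive value, so minimizing it drives the parallel generator and discriminator branches toward consistent semantic output.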
Keywords
Facial attribute editing,Generative adversarial network,Semantic consistency,Attribute mask