Image Translation With Dual-Directional Generative Adversarial Networks

IET Computer Vision (2021)

Cited by 3
Abstract
Image-to-image translation is a class of vision and graphics problems in which the goal is to learn the mapping between input images and output images. However, owing to unstable training and limited training samples, many existing GAN-based methods have difficulty producing photo-realistic images. Herein, dual-directional generative adversarial networks, consisting of four adversarial networks, are proposed to produce images of high perceptual quality. In this framework, a self-reconstruction strategy is used to construct auxiliary sub-networks, which impose more effective constraints on the encoder-generator pairs. With this idea, the model can increase the use ratio of paired data conditioned on the same dataset and obtain well-trained encoder-generator pairs with the help of the proposed cross-network skip connections. Moreover, the proposed framework not only produces realistic images but also addresses the problem whereby conditional GANs produce sharp images containing many small, hallucinated objects. Trained on multiple supervised datasets, the model is shown to achieve compelling results by latently learning a common feature representation. Qualitative and quantitative comparisons against other methods demonstrate the effectiveness and superiority of the proposed approach.
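The abstract describes encoder-generator pairs trained with an auxiliary self-reconstruction constraint alongside the main cross-domain translation path. The paper gives no implementation details here, so the following is only a minimal NumPy sketch of that idea, with toy linear maps standing in for the real encoders and generators; all names, dimensions, and the L1 loss choice are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_linear(d_in, d_out):
    # Small random linear map standing in for a learned encoder/generator.
    return rng.normal(scale=0.1, size=(d_in, d_out))

D, Z = 8, 4  # toy image and latent dimensions (hypothetical)
E_A, G_A = make_linear(D, Z), make_linear(Z, D)  # encoder-generator pair, domain A
E_B, G_B = make_linear(D, Z), make_linear(Z, D)  # encoder-generator pair, domain B

def translate(x, E_src, G_tgt):
    # Encode in the source domain, decode with the target-domain generator.
    return x @ E_src @ G_tgt

def self_reconstruction_loss(x, E, G):
    # Auxiliary constraint: an image encoded and decoded within its own
    # domain should reproduce itself (L1 distance).
    return np.abs(translate(x, E, G) - x).mean()

x_a = rng.normal(size=(2, D))      # toy batch from domain A
fake_b = translate(x_a, E_A, G_B)  # main path: A -> B translation
loss_rec = self_reconstruction_loss(x_a, E_A, G_A)  # auxiliary path: A -> A
print(fake_b.shape, float(loss_rec))
```

In a full training loop, `loss_rec` would be added to the adversarial objectives for each of the four networks, tying the encoder-generator pairs together as the auxiliary sub-networks described above.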