Collocated Clothing Synthesis with GANs Aided by Textual Information: A Multi-Modal Framework

ACM Transactions on Multimedia Computing, Communications and Applications (2024)

Abstract
Synthesizing realistic images of fashion items that are compatible with given clothing images, conditioned on multiple modalities, enables novel and exciting applications with enormous economic potential. In this work, we propose a multi-modal collocation framework based on generative adversarial networks (GANs) for synthesizing compatible clothing images. Given an input clothing item consisting of an image and a text description, our model synthesizes a clothing image that is compatible with the input clothing and guided by a given text description from the target domain. Specifically, a generator synthesizes realistic and collocated clothing images from image- and text-based latent representations learned from the source domain, while an auxiliary text representation from the target domain supervises the generation results. In addition, a multi-discriminator framework determines both the compatibility between the generated and input clothing images and the visual-semantic matching between the generated clothing images and the target textual information. Extensive quantitative and qualitative results demonstrate that our model substantially outperforms state-of-the-art methods in terms of authenticity, diversity, and visual-semantic similarity between image and text.
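The abstract describes a generator conditioned on source-domain image and text latents plus an auxiliary target-domain text representation, paired with separate discriminators for collocation compatibility and visual-semantic matching. Below is a minimal PyTorch sketch of that overall structure; all module names, layer sizes, and the 32x32 resolution are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the multi-modal collocation GAN outlined in the abstract.
# Module names, dimensions, and architectures are illustrative assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Synthesizes a target clothing image from a source image latent, a
    source text latent, and an auxiliary target-domain text latent."""
    def __init__(self, img_dim=128, txt_dim=64, out_ch=3):
        super().__init__()
        self.fc = nn.Linear(img_dim + 2 * txt_dim, 256 * 4 * 4)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, out_ch, 4, 2, 1), nn.Tanh(),  # 32x32 output
        )

    def forward(self, img_latent, src_txt, tgt_txt):
        z = torch.cat([img_latent, src_txt, tgt_txt], dim=1)
        return self.net(self.fc(z).view(-1, 256, 4, 4))

class CompatibilityD(nn.Module):
    """Scores whether a (source, generated) image pair is collocated."""
    def __init__(self, ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * ch, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
        )

    def forward(self, src_img, gen_img):
        # Channel-wise concatenation of the pair before discrimination.
        return self.net(torch.cat([src_img, gen_img], dim=1))

class MatchingD(nn.Module):
    """Scores visual-semantic matching between an image and target text."""
    def __init__(self, ch=3, txt_dim=64):
        super().__init__()
        self.img_enc = nn.Sequential(
            nn.Conv2d(ch, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64 + txt_dim, 1)

    def forward(self, img, tgt_txt):
        return self.head(torch.cat([self.img_enc(img), tgt_txt], dim=1))

# Smoke test with random tensors (batch of 2); real latents would come from
# image/text encoders trained on the fashion data.
G, dc, dm = Generator(), CompatibilityD(), MatchingD()
img_z, s_t, t_t = torch.randn(2, 128), torch.randn(2, 64), torch.randn(2, 64)
fake = G(img_z, s_t, t_t)                        # (2, 3, 32, 32)
src = torch.randn(2, 3, 32, 32)
print(dc(src, fake).shape, dm(fake, t_t).shape)  # (2, 1) each
```

In this reading, the generator's adversarial losses would combine both discriminators' scores, so the synthesized item is pushed to be simultaneously collocated with the input garment and semantically consistent with the target description.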
Keywords
Multi-modal, clothes collocation, generative adversarial networks, image translation, fashion data