Large Scale Qualitative Evaluation of Generative Image Model Outputs

Yannick Assogba, Adam Pearce, Madison Elliott

arXiv (2023)

Abstract
Evaluating generative image models remains a difficult problem. This is due to the high dimensionality of the outputs, the challenging task of representing but not replicating training data, and the lack of metrics that fully correspond to human perception and capture all the properties we want these models to exhibit. Therefore, qualitative evaluation of model outputs is an important part of model development and research publication practice. Qualitative evaluation is currently under-served by existing tools, which do not easily facilitate structured exploration of a large number of examples across the latent space of the model. To address this issue, we present Ravel, a visual analytics system that enables qualitative evaluation of model outputs on the order of hundreds of thousands of images. Ravel allows users to discover phenomena such as mode collapse, and find areas of training data that the model has failed to capture. It allows users to evaluate both quality and diversity of generated images in comparison to real images or to the output of another model that serves as a baseline. Our paper describes three case studies demonstrating the key insights made possible with Ravel, supported by a domain expert user study.
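As a rough illustration of the kind of analysis the abstract describes (and not Ravel's actual pipeline, which is detailed in the paper), the sketch below shows one generic way to surface candidate mode collapse and uncaptured training regions: cluster embeddings of real images, then compare how generated samples distribute across those clusters. The embedding source, cluster count, and array shapes here are all illustrative assumptions.

```python
# Illustrative sketch only; not the method described in the paper.
# Assumes precomputed image embeddings (e.g., from a pretrained network);
# synthetic arrays stand in for them here.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
real_emb = rng.normal(size=(10_000, 64))  # placeholder real-image embeddings
gen_emb = rng.normal(size=(10_000, 64))   # placeholder generated-image embeddings

# Cluster the real data to define "modes", then assign generated samples
# to the nearest real cluster.
n_clusters = 50
kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(real_emb)
gen_labels = kmeans.predict(gen_emb)

# Real clusters that attract few generated samples are candidate regions
# of training data the model failed to capture; clusters attracting a
# disproportionate share of samples hint at collapse onto a few modes.
real_counts = np.bincount(kmeans.labels_, minlength=n_clusters)
gen_counts = np.bincount(gen_labels, minlength=n_clusters)
coverage = gen_counts / gen_counts.sum()
print("Least-covered clusters:", np.argsort(coverage)[:5])
print("Most-covered clusters:", np.argsort(coverage)[-5:])
```

A tool like Ravel would pair this kind of summary with visual inspection of the images in each cluster, which is where the qualitative judgment of quality and diversity actually happens.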
Keywords
large scale qualitative evaluation, qualitative evaluation, image, model