AGTGAN: Unpaired Image Translation for Photographic Ancient Character Generation

International Multimedia Conference (2022)

Cited by 11 | Views: 15
Abstract
The study of ancient writings is of great value to archaeology and philology. Photographic characters are an essential form of source material, but recognizing them manually is extremely time-consuming and expertise-dependent, so automatic classification is greatly desired. However, current performance is limited by the lack of annotated data. Data generation is an inexpensive yet effective remedy for this scarcity. Nevertheless, the diverse glyph shapes and complex background textures of photographic ancient characters make generation difficult, and existing methods produce unsatisfactory results. To this end, we propose an unsupervised generative adversarial network called AGTGAN. By explicitly modeling global and local glyph shape styles, followed by a stroke-aware texture transfer and an associate adversarial learning mechanism, our method can generate characters with diverse glyphs and realistic textures. We evaluate our method on photographic ancient character datasets, e.g., OBC306 and CSDD. It outperforms other state-of-the-art methods on various metrics and produces samples of markedly greater diversity and authenticity. Using our generated images, experiments on the largest photographic oracle bone character dataset show a significant increase in classification accuracy, of up to 16.34%. The source code is available at https://github.com/Hellomystery/AGTGAN.
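The abstract describes a two-stage generation pipeline: a glyph-shape transformation stage followed by a stroke-aware texture transfer, both trained adversarially on unpaired data. The sketch below is not the authors' implementation (see the repository above for that); it is a minimal PyTorch illustration, under assumed network sizes and a plain GAN loss, of how such a two-stage generator pair and a discriminator could be wired together and optimized.

```python
# Minimal sketch, NOT the AGTGAN code: a glyph-shape generator feeding a
# texture-transfer generator, trained against a single discriminator with a
# standard adversarial loss. All module sizes and names are assumptions.
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.InstanceNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.body(x)

class Generator(nn.Module):
    """Tiny encoder-decoder used for both stages of this sketch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            ConvBlock(1, 32), ConvBlock(32, 32),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """PatchGAN-style critic over 64x64 grayscale images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, padding=1),
        )
    def forward(self, x):
        return self.net(x)

glyph_gen, texture_gen, disc = Generator(), Generator(), Discriminator()
opt_g = torch.optim.Adam(
    list(glyph_gen.parameters()) + list(texture_gen.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# One toy step on random tensors standing in for clean glyph images (source)
# and photographic characters (target); real training would loop a dataloader.
source = torch.rand(4, 1, 64, 64) * 2 - 1
target = torch.rand(4, 1, 64, 64) * 2 - 1

# Stage 1 deforms the glyph shape; stage 2 renders photographic texture.
fake = texture_gen(glyph_gen(source))

# Discriminator update: real photographic samples vs. generated samples.
d_real, d_fake = disc(target), disc(fake.detach())
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator update: both stages are pushed to fool the discriminator.
d_fake = disc(fake)
loss_g = bce(d_fake, torch.ones_like(d_fake))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The paper additionally reports glyph-shape modeling at global and local scales and an associate adversarial learning mechanism, which this plain GAN objective does not capture; the sketch only shows the overall shape-then-texture structure.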
Keywords
photographic ancient character