A Masked Self-Supervised Pretraining Method for Face Parsing

Mathematics (2022)

Abstract
Face parsing aims to partition the face into different semantic parts, which can be applied to many downstream tasks, e.g., face makeup, face swapping, and face animation. With the popularity of cameras, facial images are easy to acquire. However, pixel-wise manual labeling is time-consuming and labor-intensive, which motivates us to exploit unlabeled data. In this paper, we present a self-supervised learning method that makes full use of unlabeled facial images for face parsing. In particular, we randomly mask some patches in the central area of facial images, and the model is required to reconstruct the masked patches. This self-supervised pretraining enables the model to learn facial feature representations from unlabeled data. After pretraining, the model is fine-tuned on a small amount of labeled data for the face parsing task. Experimental results show that self-supervised pretraining improves face parsing performance while greatly reducing the labeling cost. Our approach achieves 74.41 mIoU on the LaPa test set when fine-tuned on only 0.2% of the labeled training data, surpassing the directly trained model by a large margin of +5.02 mIoU. In addition, our approach achieves a new state of the art on the LaPa and CelebAMask-HQ test sets.
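As a minimal sketch of the pretraining objective described above, the snippet below masks random patches inside the central region of a batch of face images and computes a reconstruction loss only over the masked patches. The patch size, mask ratio, and central-region extent are illustrative assumptions; the paper's exact values and the model architecture are not specified in this abstract.

```python
import torch

def mask_central_patches(images, patch_size=16, mask_ratio=0.5, central_frac=0.6):
    """Randomly mask patches in the central area of each image.

    images: (B, C, H, W) tensor; H and W must be divisible by patch_size.
    Returns the masked images and a boolean patch mask of shape (B, n_patches).
    Note: mask_ratio and central_frac are illustrative, not the paper's values.
    """
    B, C, H, W = images.shape
    gh, gw = H // patch_size, W // patch_size

    # Central window of the patch grid (hypothetical choice of extent).
    ch, cw = int(gh * central_frac), int(gw * central_frac)
    top, left = (gh - ch) // 2, (gw - cw) // 2

    mask = torch.zeros(B, gh, gw, dtype=torch.bool)
    n_central = ch * cw
    n_masked = int(n_central * mask_ratio)
    for b in range(B):
        # Pick a random subset of central patches to mask for this image.
        idx = torch.randperm(n_central)[:n_masked]
        mask[b, top + idx // cw, left + idx % cw] = True

    # Expand the patch mask to pixel space and zero out the masked patches.
    pixel_mask = mask.repeat_interleave(patch_size, 1).repeat_interleave(patch_size, 2)
    masked = images * (~pixel_mask).unsqueeze(1)
    return masked, mask.flatten(1)

def reconstruction_loss(pred, target, mask, patch_size=16):
    """MSE computed only on masked patches, as in masked-image pretraining."""
    B, C, H, W = target.shape
    # Patchify both tensors to (B, n_patches, C * patch_size * patch_size).
    def patchify(x):
        x = x.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
        return x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * patch_size * patch_size)
    per_patch = ((patchify(pred) - patchify(target)) ** 2).mean(-1)
    return (per_patch * mask).sum() / mask.sum().clamp(min=1)
```

During pretraining, an encoder-decoder would take the masked images and be optimized with this loss; for fine-tuning, the pretrained encoder is reused with a segmentation head on the labeled face parsing data.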
Keywords
face parsing, semantic segmentation, self-supervised learning