Using Self-Supervised Co-Training to Improve Facial Representation

arXiv (2021)

Abstract
Over the past few years, the best self-supervised learning (SSL) methods have gradually moved from pre-text task learning to contrastive learning. However, contrastive methods still have drawbacks that have not been fully resolved, such as performing poorly on fine-grained visual tasks compared to supervised learning methods. In this study, the impact of ImageNet pre-training on fine-grained Facial Expression Recognition (FER) was first tested. The results showed that training from scratch outperforms ImageNet fine-tuning at stronger augmentation levels. A framework was then proposed for standard Supervised Learning (SL), called Hybrid Multi-Task Learning (HMTL), which adds a self-supervised objective as an auxiliary task to the SL training setting. Leveraging Self-Supervised Learning (SSL) can extract additional information from the input data beyond what the labels provide, which helps the main fine-grained SL task. We investigated how this method can be used for FER by designing two customized versions of common pre-text techniques, jigsaw puzzling and in-painting. State-of-the-art performance was reached on AffectNet via two types of HMTL, without pre-training on additional datasets. Moreover, we compared SSL pre-training with HMTL to demonstrate the superiority of the proposed method. Furthermore, the impact of the proposed method was shown on two other fine-grained facial tasks, head pose estimation and gender recognition, where it reduced the error rate by 11% and 1%, respectively.
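To illustrate the general idea of hybrid multi-task training described in the abstract, the following is a minimal PyTorch-style sketch of a shared backbone with a supervised FER head and a self-supervised jigsaw head trained under a joint loss. It is not the paper's implementation: the class and function names (`HMTLModel`, `hmtl_loss`), the number of permutation classes, and the loss weight `lambda_ssl` are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Sketch of a hybrid multi-task setup: a shared backbone feeds both the
# supervised FER classifier and a self-supervised jigsaw head.
# All names and hyperparameters here are illustrative assumptions.

class HMTLModel(nn.Module):
    def __init__(self, backbone, feat_dim, num_emotions=8, num_permutations=24):
        super().__init__()
        self.backbone = backbone                                   # shared feature extractor
        self.fer_head = nn.Linear(feat_dim, num_emotions)          # main supervised task
        self.jigsaw_head = nn.Linear(feat_dim, num_permutations)   # auxiliary SSL task

    def forward(self, x):
        feats = self.backbone(x)
        return self.fer_head(feats), self.jigsaw_head(feats)

def hmtl_loss(emotion_logits, emotion_labels,
              jigsaw_logits, permutation_labels, lambda_ssl=0.5):
    """Joint objective: supervised cross-entropy plus a weighted SSL term."""
    ce = nn.functional.cross_entropy
    loss_sl = ce(emotion_logits, emotion_labels)       # main FER loss
    loss_ssl = ce(jigsaw_logits, permutation_labels)   # pre-text (jigsaw) loss
    return loss_sl + lambda_ssl * loss_ssl
```

In this sketch both heads are optimized simultaneously on the same batch, which is what distinguishes hybrid multi-task training from the two-stage SSL pre-training followed by supervised fine-tuning that the abstract contrasts it with.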