Split-PU: Hardness-aware Training Strategy for Positive-Unlabeled Learning

International Multimedia Conference (2022)

Abstract
Positive-Unlabeled (PU) learning aims to learn a model from rare positive samples and abundant unlabeled samples. Compared with classical binary classification, PU learning is much more challenging due to the existence of many incompletely-annotated data instances: only a subset of the most confident positive samples is labeled, and there is not enough evidence to categorize the remaining samples, many of which may also be positive. Research on this topic is particularly useful and essential for real-world tasks in which labelling is very expensive. For example, recognition tasks in disease diagnosis, recommendation systems, and satellite image recognition may have only a few positive samples that can be annotated by experts. While this problem is receiving increasing attention, most efforts have been dedicated to the design of trustworthy risk estimators such as uPU and nnPU, or to direct knowledge distillation, e.g., Self-PU. These methods largely overlook the intrinsic hardness of some unlabeled data, which can result in sub-optimal performance as a consequence of fitting the easy noisy data while not sufficiently utilizing the hard data. In this paper, we focus on improving the commonly-used nnPU with a novel training pipeline. We highlight the intrinsic difference in hardness among samples in the dataset and the proper learning strategies for easy and hard data. Accordingly, we propose to first split the unlabeled dataset using an early-stop strategy: samples that receive inconsistent predictions from the temporary and base models are considered hard. The model then applies a noise-tolerant Jensen-Shannon divergence loss to the easy data, and a dual-source consistency regularization to the hard data, comprising a cross-consistency between the student and base models on low-level features and a self-consistency on high-level features and predictions, respectively.
Our method achieves much better results than existing methods on CIFAR10 and on two medical datasets, for liver cancer survival time prediction and for diagnosis of low blood pressure in pregnant women, respectively. The experimental results validate the efficacy of our proposed method.
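The two core ingredients described above can be sketched in a minimal NumPy illustration: splitting unlabeled samples into easy and hard sets by prediction disagreement between the temporary (early-stopped) and base models, and a noise-tolerant Jensen-Shannon divergence between two distributions. Function names and the hard-label binary setup are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions.
    Symmetric and bounded by log(2), which makes it more tolerant of
    noisy targets than plain cross-entropy (hence its use on easy,
    possibly mislabeled unlabeled data)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))  # KL(a || b)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def split_by_disagreement(base_preds, temp_preds):
    """Hypothetical hard/easy split: samples on which the early-stopped
    temporary model and the base model disagree are marked hard."""
    base_preds = np.asarray(base_preds)
    temp_preds = np.asarray(temp_preds)
    hard = base_preds != temp_preds
    return ~hard, hard  # (easy mask, hard mask)
```

The easy subset would then be trained with the JS loss, while the hard subset receives the dual-source consistency regularization described in the abstract.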
Keywords
learning, hardness-aware, positive-unlabeled