Regularized Loss for Weakly Supervised Single Class Semantic Segmentation

European Conference on Computer Vision (2020)

Abstract
Fully supervised semantic segmentation is highly successful, but obtaining dense ground truth is expensive. Thus there is an increasing interest in weakly supervised approaches. We propose a new weakly supervised method for training CNNs to segment an object of a single class of interest. Instead of ground truth, we guide training with a regularized loss function. Regularized loss models prior knowledge about likely object shape properties and thus guides segmentation towards more plausible shapes. Training CNNs with regularized loss is difficult. We develop an annealing strategy that is crucial for successful training. The advantage of our method is simplicity: we use standard CNN architectures and an intuitive, computationally efficient loss function. Furthermore, we apply the same loss function to any task/dataset, without any tailoring. We first evaluate our approach on salient object segmentation and co-segmentation. These tasks naturally involve one object class of interest. In some cases, our results are only a few points of the standard performance measure behind those obtained by training the same CNN with full supervision, and are state-of-the-art in the weakly supervised setting. We then adapt our approach to weakly supervised multi-class semantic segmentation and obtain state-of-the-art results.
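The abstract describes the core training recipe: a data term on weak supervision combined with a regularizer encoding shape priors, where the regularizer's weight is annealed during training. The sketch below is an illustrative NumPy mock-up, not the paper's actual loss: the partial cross-entropy on sparse "seed" pixels, the total-variation-style smoothness regularizer, and the linear ramp-up schedule are all assumptions chosen to make the idea concrete.

```python
import numpy as np

def smoothness_regularizer(probs):
    # Illustrative shape prior: penalize differences between neighboring
    # pixel probabilities (total-variation style), favoring smooth masks.
    dy = np.abs(probs[1:, :] - probs[:-1, :]).sum()
    dx = np.abs(probs[:, 1:] - probs[:, :-1]).sum()
    return (dx + dy) / probs.size

def regularized_loss(probs, seeds, lam):
    # Data term: partial cross-entropy evaluated only on labeled "seed"
    # pixels (unlabeled pixels are NaN), plus the weighted regularizer.
    eps = 1e-7
    mask = ~np.isnan(seeds)
    ce = -np.mean(seeds[mask] * np.log(probs[mask] + eps)
                  + (1.0 - seeds[mask]) * np.log(1.0 - probs[mask] + eps))
    return ce + lam * smoothness_regularizer(probs)

def annealed_lambda(epoch, total_epochs, lam_max=1.0):
    # Hypothetical annealing schedule: ramp the regularizer weight up
    # linearly over the first half of training, so early epochs are driven
    # mostly by the data term. The paper's actual schedule may differ.
    return lam_max * min(1.0, epoch / (0.5 * total_epochs))
```

Usage would follow the usual training loop: recompute `lam = annealed_lambda(epoch, total_epochs)` each epoch and backpropagate `regularized_loss` through the network's predicted probability map.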
Keywords
loss, class