Generic network for domain adaptation based on self-supervised learning and deep clustering

Neurocomputing (2022)

Abstract
Domain adaptation methods train a model to find similar feature representations between a source and a target domain. Recent methods leverage self-supervised learning to discover analogous representations of the two domains. However, prior self-supervised methods have three significant drawbacks: (1) they leverage pretext tasks that are prone to learning low-level representations; (2) they align the two domains with an adversarial loss without checking whether the extracted features are low-level representations; and (3) they are not flexible enough to accommodate varying proportions of target labels, i.e., they assume target labels are always available. This paper presents a Generic Domain Adaptation Network (GDAN) to address these issues. First, we introduce a criterion based on instance discrimination to select appropriate pretext tasks that learn high-level, domain-invariant representations. Then, we propose a semantic neighbor cluster to align the features of the two domains. The semantic neighbor cluster applies a clustering technique in a feature embedding space to form clusters according to high-level semantic similarities. Finally, we present a weighted target loss function that balances the model weights according to the available target labels. This loss function makes GDAN flexible for semi-supervised scenarios, i.e., partly labeled target data. We evaluate the proposed methods on four domain adaptation benchmark datasets. The experimental results show that the proposed methods align the two domains well and achieve competitive performance.
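To make the role of the weighted target loss concrete, the sketch below shows one plausible way such a loss could be formed in PyTorch. It is not the authors' implementation: the function name weighted_target_loss, the entropy term used for the unlabeled target portion, and the labeled_fraction weighting are all assumptions introduced purely for illustration.

# Illustrative sketch only: NOT the GDAN code. One possible "weighted target loss"
# for semi-supervised domain adaptation, where a weight derived from the fraction
# of labeled target data balances a supervised term against an unsupervised term.
import torch
import torch.nn.functional as F

def weighted_target_loss(logits_labeled, labels, logits_unlabeled, labeled_fraction):
    """Balance supervised and unsupervised target terms by the share of target labels.

    logits_labeled:   (N_l, C) predictions for labeled target samples (may be empty)
    labels:           (N_l,)   ground-truth classes for those samples
    logits_unlabeled: (N_u, C) predictions for unlabeled target samples
    labeled_fraction: float in [0, 1], proportion of target data that is labeled
    """
    # Supervised cross-entropy on the labeled portion of the target domain.
    if logits_labeled.numel() > 0:
        sup = F.cross_entropy(logits_labeled, labels)
    else:
        sup = logits_unlabeled.new_zeros(())

    # Unsupervised entropy minimization on the unlabeled portion (a common
    # stand-in; the paper's clustering-based objective is different).
    probs = F.softmax(logits_unlabeled, dim=1)
    ent = -(probs * torch.log(probs.clamp_min(1e-8))).sum(dim=1).mean()

    # Weight each term by how much labeled target data is actually available.
    return labeled_fraction * sup + (1.0 - labeled_fraction) * ent

With labeled_fraction = 0 this reduces to a purely unsupervised target objective, and with labeled_fraction = 1 it becomes fully supervised, which mirrors the flexibility across label proportions that the abstract describes.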
Keywords
Domain adaptation, Self-supervised learning, Deep clustering, Image recognition, Pretext task