Universal Model Adaptation by Style Augmented Open-set Consistency

Applied Intelligence (2023)

Abstract
Learning to recognize unknown target samples is of great importance for unsupervised domain adaptation (UDA). Open-set domain adaptation (OSDA) and open-partial domain adaptation (OPDA) are two typical UDA scenarios, and the latter additionally assumes that some source-private categories exist. However, most existing approaches are devised for only one of these scenarios and often perform poorly on the other. Furthermore, they also demand access to source data during adaptation, making them highly impractical under data privacy constraints. To address these issues, we propose a novel universal model framework that can handle both UDA scenarios without prior knowledge of the source-target label-set relationship or access to source data. For source training, we learn a source model with both closed-set and open-set classifiers and provide it to the target domain. For target adaptation, we propose a novel Style Augmented Open-set Consistency (SAOC) objective to minimize the impact of target domain style on model behavior. Specifically, we exploit the proposed Intra-Domain Style Augmentation (IDSA) strategy to generate style-augmented target images. Then we enforce consistency between the open-set classifier's predictions on each image and on its corresponding style-augmented version. Extensive experiments on OSDA and OPDA scenarios demonstrate that our proposed framework achieves comparable or superior performance to recent source-dependent approaches.
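The core ideas in the abstract — transferring the "style" (channel-wise feature statistics) of one target image onto another, then penalizing disagreement between the open-set classifier's predictions on the original and augmented views — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the AdaIN-style statistic swap and the symmetric-KL consistency loss are assumptions standing in for the paper's actual IDSA strategy and SAOC objective, and all function names here are hypothetical.

```python
import numpy as np

def style_augment(x, ref, eps=1e-6):
    # Hypothetical intra-domain style augmentation: replace the
    # channel-wise mean/std ("style") of target image x with that of
    # another target image ref, AdaIN-style. x, ref: (C, H, W) arrays.
    mu_x = x.mean(axis=(1, 2), keepdims=True)
    sd_x = x.std(axis=(1, 2), keepdims=True) + eps
    mu_r = ref.mean(axis=(1, 2), keepdims=True)
    sd_r = ref.std(axis=(1, 2), keepdims=True) + eps
    return (x - mu_x) / sd_x * sd_r + mu_r

def open_set_consistency(p, p_aug, eps=1e-12):
    # Consistency term between the open-set classifier's predicted
    # distributions on an image (p) and its style-augmented version
    # (p_aug), here realized as a symmetric KL divergence.
    p = np.clip(p, eps, 1.0)
    q = np.clip(p_aug, eps, 1.0)
    kl = lambda a, b: np.sum(a * np.log(a / b), axis=-1)
    return 0.5 * (kl(p, q) + kl(q, p)).mean()
```

Minimizing `open_set_consistency` encourages the model's known/unknown decision to depend on image content rather than on target-domain style, which is the stated goal of the SAOC objective.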
Keywords
Universal model adaptation, Unsupervised domain adaptation, Style augmentation, Consistency regularization