Disentangled Text Representation Learning With Information-Theoretic Perspective for Adversarial Robustness

IEEE/ACM Transactions on Audio, Speech, and Language Processing (2024)

Abstract
Adversarial vulnerability remains a major obstacle to building reliable NLP systems. When imperceptible perturbations are added to raw input text, the performance of a deep learning model can drop dramatically under attack. Recent work has argued that a model's adversarial vulnerability is caused by non-robust features learned during supervised training. In this paper, we therefore tackle the adversarial robustness challenge through disentangled representation learning, which explicitly separates robust and non-robust features in text. Specifically, inspired by the variation of information (VI) in information theory, we derive a disentangled learning objective composed of mutual information terms that capture both the semantic representativeness of the latent embeddings and the differentiation between robust and non-robust features. Building on this objective, we design a disentangled learning network that estimates the mutual information terms in practice. Experiments on typical text-based tasks show that our method significantly outperforms representative baselines under adversarial attacks, indicating that discarding non-robust features is critical for improving model robustness.
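For background, the variation of information that inspires the objective is a standard information-theoretic quantity (the paper's specific derived objective is not reproduced here); for random variables X and Y it is defined as:

```latex
\operatorname{VI}(X; Y) = H(X \mid Y) + H(Y \mid X)
                        = H(X) + H(Y) - 2\, I(X; Y)
```

where H denotes (conditional) entropy and I(X; Y) the mutual information; VI is a metric on random variables that vanishes exactly when X and Y determine each other, which motivates expressing disentanglement criteria through mutual information terms.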
Keywords
Adversarial robustness, variation of information, disentangled text representation learning