Do learned representations respect causal relationships?

2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

Abstract
Data often has many semantic attributes that are causally associated with each other. But do attribute-specific learned representations of data also respect the same causal relations? We answer this question in three steps. First, we introduce NCINet, an approach for observational causal discovery from high-dimensional data. It is trained purely on synthetically generated representations and can be applied to real representations, and is specifically designed to mitigate the domain gap between the two. Second, we apply NCINet to identify the causal relations between image representations of different pairs of attributes with known and unknown causal relations between the labels. For this purpose, we consider image representations learned for predicting attributes on the 3D Shapes, CelebA, and the CASIA-WebFace datasets, which we annotate with multiple multi-class attributes. Third, we analyze the effect on the underlying causal relation between learned representations induced by various design choices in representation learning. Our experiments indicate that (1) NCINet significantly outperforms existing observational causal discovery approaches for estimating the causal relation between pairs of random samples, both in the presence and absence of an unobserved confounder, (2) under controlled scenarios, learned representations can indeed satisfy the underlying causal relations between their respective labels, and (3) the causal relations are positively correlated with the predictive capability of the representations. Code and annotations are available at: https://github.com/human-analysis/causal-relations-between-representations.
Keywords
Machine learning, Datasets and evaluation, Explainable computer vision