Shared Interest: Measuring Human-AI Alignment to Identify Recurring Patterns in Model Behavior

ACM Conference on Human Factors in Computing Systems (CHI), 2022

Abstract
Saliency methods, techniques that identify the importance of input features to a model's output, are a common step in understanding neural network behavior. However, interpreting saliency requires tedious manual inspection to identify and aggregate patterns in model behavior, resulting in ad hoc or cherry-picked analyses. To address these concerns, we present Shared Interest: metrics for comparing model reasoning (via saliency) to human reasoning (via ground truth annotations). By providing quantitative descriptors, Shared Interest enables ranking, sorting, and aggregating inputs, thereby facilitating large-scale systematic analysis of model behavior. We use Shared Interest to identify eight recurring patterns in model behavior, such as cases where contextual features or a subset of ground truth features are most important to the model. Working with representative real-world users, we show how Shared Interest can be used to decide whether a model is trustworthy, uncover issues missed in manual analyses, and enable interactive probing.
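As a minimal sketch of the idea (not the authors' released code), the comparison can be expressed as set overlaps between a binarized saliency mask and a human ground-truth mask; the paper's metrics are intersection-over-union plus the two directional coverage ratios (IoU Coverage, Ground Truth Coverage, and Saliency Coverage). The function name and the fixed binarization threshold below are illustrative assumptions; the choice of saliency method and thresholding is left to the user.

import numpy as np

def shared_interest_scores(saliency, ground_truth, threshold=0.5):
    """Compare a model's saliency map against a human annotation mask.

    saliency: float array of per-feature importance scores.
    ground_truth: boolean array marking human-annotated features.
    threshold: illustrative cutoff for binarizing the saliency scores.
    """
    s = np.asarray(saliency) >= threshold      # features the model deems salient
    g = np.asarray(ground_truth, dtype=bool)   # features humans annotated
    inter = np.logical_and(s, g).sum()
    union = np.logical_or(s, g).sum()
    return {
        # Overall alignment between model and human evidence.
        "iou_coverage": inter / union if union else 0.0,
        # Fraction of the human-annotated evidence the model relies on.
        "ground_truth_coverage": inter / g.sum() if g.sum() else 0.0,
        # Fraction of the model's evidence that humans also annotated.
        "saliency_coverage": inter / s.sum() if s.sum() else 0.0,
    }

Because each input reduces to a few scalar scores, a whole dataset can be ranked, sorted, or bucketed by them, which is what enables the large-scale systematic analysis the abstract describes; for example, inputs with high saliency coverage but low ground-truth coverage are cases where the model attends to only a subset of the human-annotated features.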
Keywords
human-computer interaction, interpretability, machine learning, saliency methods