From Discovery to Adoption: Understanding the ML Practitioners' Interpretability Journey

DESIGNING INTERACTIVE SYSTEMS CONFERENCE, DIS 2023 (2023)

Abstract
Models are interpretable when machine learning (ML) practitioners can readily understand the reasoning behind their predictions. Ironically, little is known about ML practitioners' experiences of discovering and adopting novel interpretability techniques in production settings. In a qualitative study with 18 practitioners working with text data at a large technology company, we found that, despite varied tasks, practitioners faced nearly identical challenges with interpretability methods in their model analysis workflows. These challenges stem from problem formulation, the social nature of interpretability investigations, and non-standard practices in cross-functional organizational contexts. A follow-up examination of early-stage design probes with seven practitioners suggests that self-reported experts are "perpetual intermediates" who can benefit from regular, responsive, and in-situ education about interpretability methods across their workflows, regardless of prior experience with models, analysis tools, or interpretability techniques. From these findings, we emphasize the need for multi-stage support for learning interpretability methods in real-world NLP applications.
Keywords
Interpretability, ML practitioners, learnability