Explain to Question not to Justify
CoRR(2024)
Abstract
Explainable Artificial Intelligence (XAI) is a young but very promising field
of research. Unfortunately, the progress in this field is currently slowed down
by divergent and incompatible goals. In this paper, we separate various threads
tangled within the area of XAI into two complementary cultures of
human/value-oriented explanations (BLUE XAI) and model/validation-oriented
explanations (RED XAI). We also argue that the area of RED XAI is currently
under-explored and hides great opportunities and potential for important
research necessary to ensure the safety of AI systems. We conclude this paper
by presenting promising challenges in this area.