
A General-Purpose Method for Applying Explainable AI for Anomaly Detection.

International Symposium on Methodologies for Intelligent Systems (ISMIS), 2022

Abstract
The need for explainable AI (XAI) is well established but relatively little has been published outside of the supervised learning paradigm. This paper focuses on a principled approach to applying explainability and interpretability to the task of unsupervised anomaly detection. We argue that explainability is principally an algorithmic task and interpretability is principally a cognitive task, and draw on insights from the cognitive sciences to propose a general-purpose method for practical diagnosis using explained anomalies. We define Attribution Error, and demonstrate, using real-world labeled datasets, that our method based on Integrated Gradients (IG) yields significantly lower attribution errors than alternative methods.
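The abstract attributes the method's low attribution error to Integrated Gradients (IG). As a minimal sketch of how IG can attribute an unsupervised anomaly score to input features, the following assumes a toy squared-distance-from-the-mean score and a numerical gradient; the score function, baseline choice, and all names here are illustrative, not the paper's implementation.

```python
import numpy as np

def integrated_gradients(f, x, baseline, steps=50):
    """Riemann-sum approximation of Integrated Gradients for a scalar score f:
        IG_i = (x_i - b_i) * (1/steps) * sum_k df/dx_i at b + alpha_k * (x - b),
    with gradients estimated by central finite differences."""
    alphas = (np.arange(steps) + 0.5) / steps  # midpoint rule
    grad_sum = np.zeros_like(x, dtype=float)
    eps = 1e-5
    for a in alphas:
        point = baseline + a * (x - baseline)
        for i in range(len(x)):
            d = np.zeros_like(x, dtype=float)
            d[i] = eps
            grad_sum[i] += (f(point + d) - f(point - d)) / (2 * eps)
    return (x - baseline) * grad_sum / steps

# Toy anomaly score: squared distance from the "normal" data mean (assumption)
mu = np.array([0.0, 0.0, 0.0])
score = lambda x: float(np.sum((x - mu) ** 2))

x = np.array([3.0, 0.1, 0.2])   # anomalous point: feature 0 is the culprit
attr = integrated_gradients(score, x, baseline=mu)

# IG's completeness axiom: attributions sum to score(x) - score(baseline),
# which is one natural yardstick for checking an attribution's error.
print(attr, attr.sum(), score(x) - score(mu))
```

Here the baseline is taken to be the mean of normal data, so each attribution reads as "how much this feature pushed the point away from normal"; the completeness check at the end is the kind of quantity an attribution-error measure can be built around.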
Key words
Anomaly detection, Interpretability, Explainable AI