
Generation and Evaluation of Factual and Counterfactual Explanations for Decision Trees and Fuzzy Rule-based Classifiers

IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2020

University of Santiago de Compostela

Cited 19 | Views 0
Abstract
Data-driven classification algorithms have proven highly effective in a range of complex tasks. However, their output is sometimes questioned, as the reasoning behind it may remain unclear due to the large number of poorly interpretable parameters set during training. Evidence-based (factual) explanations for single classifications answer the question of why a particular class is selected in terms of the given observations. In contrast, counterfactual explanations focus on why the remaining classes are not selected. Accordingly, we hypothesize that providing classifiers with a combination of both factual and counterfactual explanations is likely to make them more trustworthy. To investigate how such explanations can be produced, we introduce a new method to generate factual and counterfactual explanations for the output of pretrained decision trees and fuzzy rule-based classifiers. Experimental results show that unifying factual and counterfactual explanations under the paradigm of fuzzy inference systems is a promising way to explain the reasoning of classification algorithms.
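To make the factual/counterfactual distinction concrete, below is a minimal sketch of how such explanations can be read off a decision tree. This is an illustration under our own assumptions, not the method proposed in the paper (which also covers fuzzy rule-based classifiers and natural language generation): it uses scikit-learn's DecisionTreeClassifier on the Iris data, treats the conditions satisfied along the decision path as the factual explanation, and, for each rejected class, reports the conditions the instance fails on its closest path to a leaf of that class as a counterfactual explanation. The helper names leaf_paths, holds, and describe are hypothetical.

```python
# Minimal sketch (not the paper's algorithm) of factual vs. counterfactual
# explanations for a decision tree, using scikit-learn on the Iris data.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)
tree = clf.tree_

def leaf_paths(node=0, conds=()):
    """Enumerate (leaf_id, conditions); a condition is (feature, op, threshold)."""
    if tree.children_left[node] == -1:  # node is a leaf
        yield node, conds
        return
    f, t = tree.feature[node], tree.threshold[node]
    yield from leaf_paths(tree.children_left[node], conds + ((f, "<=", t),))
    yield from leaf_paths(tree.children_right[node], conds + ((f, ">", t),))

def holds(x, cond):
    f, op, t = cond
    return x[f] <= t if op == "<=" else x[f] > t

def describe(cond):
    f, op, t = cond
    return f"{iris.feature_names[f]} {op} {t:.2f}"

x = iris.data[100]                    # one instance to explain
pred = clf.predict([x])[0]

# Factual explanation: the conditions satisfied along the decision path
# answer "why is this class selected, given the observations?"
for leaf, conds in leaf_paths():
    if all(holds(x, c) for c in conds):
        print(f"Factual ({iris.target_names[pred]}): "
              + " AND ".join(describe(c) for c in conds))

# Counterfactual explanations: for every other class, the conditions the
# instance violates on its "easiest" path to a leaf of that class answer
# "why is that class NOT selected?"
for cls in set(range(len(iris.target_names))) - {pred}:
    paths = [conds for leaf, conds in leaf_paths()
             if np.argmax(tree.value[leaf]) == cls]
    if not paths:
        continue                      # no leaf predicts this class at this depth
    best = min(paths, key=lambda cs: sum(not holds(x, c) for c in cs))
    failed = [describe(c) for c in best if not holds(x, c)]
    print(f"Counterfactual (not {iris.target_names[cls]}): "
          + " AND ".join(failed))
```

The "easiest path" heuristic (fewest violated conditions) is one simple choice of counterfactual; the paper's contribution lies in producing and evaluating such explanations systematically, in natural language, for both trees and fuzzy rule-based classifiers.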
Keywords
Explainable Artificial Intelligence, Counterfactuals, Decision Trees, Fuzzy Inference Systems, Natural Language Generation
Related Papers

Factual and Counterfactual Explanation of Fuzzy Information Granules

Interpretable Artificial Intelligence: A Perspective of Granular Computing (Studies in Computational Intelligence), 2021

Cited 9

Towards a Formulation of Fuzzy Contrastive Explanations

IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2022

Cited 2

Using the K-associated Optimal Graph to Provide Counterfactual Explanations

Ariel Tadeu da Silva, Joao Roberto Bertini Junior
IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2022

Cited 1

Counterfactual Rule Generation for Fuzzy Rule-Based Classification Systems

IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2022

Cited 3

Effective Intrusion Detection and Classification Using Fuzzy Rule Based Classifier in Cloud Environment

C. Veena, S. Ramalakshmi, V. Bhoopathy, Minakshi Dattatraya Bhosale, C. G. Magadum, Abirami S. K.
International Conference on Automation, Computing and Renewable Systems (ICACRS), 2022

Cited 0

Towards Explainable Linguistic Summaries

IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2023

Cited 0

Interpretable Regional Descriptors: Hyperbox-Based Local Explanations

Machine Learning and Knowledge Discovery in Databases: Research Track (ECML PKDD 2023), Part III, 2023

Cited 1

Opacity, Machine Learning and Explainable AI

Ethics of Artificial Intelligence (The International Library of Ethics, Law and Technology), 2023

Cited 0

Chat Paper

Key points: This paper proposes a new method for generating factual and counterfactual explanations for decision trees and fuzzy rule-based classifiers, with the aim of making classification algorithms more trustworthy.

Method: The authors unify factual and counterfactual explanations under the paradigm of fuzzy inference systems, generating explanations for the outputs of pretrained decision trees and fuzzy rule-based classifiers.

Experiments: The datasets used are not specified here; the results show that combining factual and counterfactual explanations is a promising way to explain the reasoning of classification algorithms.