Factual and Counterfactual Explanations in Fuzzy Classification Trees

Classification algorithms have recently gained great popularity due to their ability to generate models capable of solving highly complex problems. In particular, black-box models are the ones that offer the best results, since they greatly benefit from the enormous amount of data available for learning increasingly accurate models. However, their main disadvantage compared to simpler algorithms, e.g., decision trees, is the loss of interpretability for both the model and the individual classifications, which can become a major drawback given the increasing number of applications where providing an explanation is advisable or even compulsory. A well-accepted practice is to build an explainable model that mimics the behavior of the (more complex) classifier in the neighborhood of the instance to be explained. Nonetheless, generating explanations in such white-box models is not trivial either, which has motivated intense research. It is common to generate two types of explanations, factual and counterfactual, which complement each other to justify why the instance has been classified into a certain class or category. In this work, we propose definitions of factual and counterfactual explanations in the framework of fuzzy decision trees, where multiple branches can be fired at once. Our proposal is centered on factual explanations that can contain more than a single rule, in contrast to the current standard, which is limited to considering a single rule as a factual explanation. Moreover, we introduce the idea of a robust factual explanation. Finally, we provide procedures to obtain counterfactual explanations both from the instance and from a factual explanation.
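
As a rough illustration of the setting the abstract describes (not the authors' method), the following Python sketch shows a toy fuzzy rule base in which several rules can fire at once for a single instance, a factual explanation formed by the set of fired rules that support the predicted class (possibly more than one rule), and a naive list of counterfactual candidates taken from rules of other classes. All feature names, membership functions, and helpers (triangular, Rule, firing_degree, etc.) are hypothetical.

    from dataclasses import dataclass

    def triangular(x, a, b, c):
        """Triangular membership function with support (a, c) and peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    # Hypothetical linguistic terms for two features.
    TERMS = {
        "temperature": {
            "low":    lambda x: triangular(x, -10.0, 0.0, 15.0),
            "medium": lambda x: triangular(x, 5.0, 15.0, 25.0),
            "high":   lambda x: triangular(x, 18.0, 30.0, 45.0),
        },
        "humidity": {
            "low":  lambda x: triangular(x, 0.0, 20.0, 50.0),
            "high": lambda x: triangular(x, 40.0, 80.0, 100.0),
        },
    }

    @dataclass
    class Rule:
        antecedent: dict  # feature name -> linguistic term
        consequent: str   # class label

    RULES = [
        Rule({"temperature": "low"},    "heating_on"),
        Rule({"temperature": "medium"}, "heating_off"),
        Rule({"humidity": "high"},      "heating_off"),
    ]

    def firing_degree(rule, instance):
        """Minimum t-norm over the fuzzy conditions in the rule antecedent."""
        return min(TERMS[f][t](instance[f]) for f, t in rule.antecedent.items())

    def classify(instance):
        """Aggregate firing degrees per class; several branches may fire at once."""
        support = {}
        for rule in RULES:
            support[rule.consequent] = support.get(rule.consequent, 0.0) + firing_degree(rule, instance)
        return max(support, key=support.get)

    def factual_explanation(instance, predicted_class):
        """The set of fired rules supporting the prediction (possibly more than one)."""
        return [r for r in RULES
                if r.consequent == predicted_class and firing_degree(r, instance) > 0.0]

    def counterfactual_candidates(predicted_class):
        """Toy counterfactual: rules of other classes indicate which linguistic
        conditions would have to hold for the classification to change."""
        return [r for r in RULES if r.consequent != predicted_class]

    instance = {"temperature": 12.0, "humidity": 70.0}
    pred = classify(instance)
    print("prediction:", pred)
    print("factual explanation:", factual_explanation(instance, pred))
    print("counterfactual candidates:", counterfactual_candidates(pred))

For this instance, both the "temperature is medium" and "humidity is high" rules fire for the predicted class, so the factual explanation contains two rules rather than one, which is the situation the paper's proposal is designed to handle.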

Keywords: factual explanations; counterfactual explanations; fuzzy classification trees

Journal Title: IEEE Transactions on Fuzzy Systems
Year Published: 2022
