Machine learning algorithms are widely applied to create powerful prediction models. With increasingly complex models, humans' ability to understand the decision function (that maps from a high‐dimensional input space) is quickly exceeded. To explain a model's decisions, black‐box methods have been proposed that provide either non‐linear maps of the global topology of the decision boundary, or samples that allow approximating it locally. The former loses information about distances in input space, while the latter only provides statements about given samples, but lacks a focus on the underlying model for precise ‘What‐If'‐reasoning. In this paper, we integrate both approaches and propose an interactive exploration method using local linear maps of the decision space. We create the maps on high‐dimensional hyperplanes—2D‐slices of the high‐dimensional parameter space—based on statistical and personal feature mutability and guided by feature importance. We complement the proposed workflow with established model inspection techniques to provide orientation and guidance. We demonstrate our approach on real‐world datasets and illustrate that it allows identification of instance‐based decision boundary structures and can answer multi‐dimensional ‘What‐If'‐questions, thereby identifying counterfactual scenarios visually.
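The core idea of evaluating the model on an axis-aligned 2D slice through a single instance can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the dataset, model, and the two hand-picked features are placeholder assumptions, whereas the paper selects and orients slices using feature importance and feature mutability.

```python
# Minimal sketch (illustrative assumptions, not the paper's implementation):
# evaluate a classifier on a 2D slice through one instance to visualize the
# local decision boundary for "What-If" reasoning.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

instance = X[0]   # instance of interest (anchor of the "What-If" question)
f1, f2 = 0, 1     # two features spanning the slice (chosen arbitrarily here)

# Build a grid that varies only the two selected features; all remaining
# features stay fixed at the instance's values (an axis-aligned 2D slice).
g1 = np.linspace(X[:, f1].min(), X[:, f1].max(), 100)
g2 = np.linspace(X[:, f2].min(), X[:, f2].max(), 100)
G1, G2 = np.meshgrid(g1, g2)
grid = np.tile(instance, (G1.size, 1))
grid[:, f1] = G1.ravel()
grid[:, f2] = G2.ravel()

# The predicted class probability over the slice reveals the local decision
# boundary; crossing the 0.5 contour corresponds to a counterfactual scenario.
proba = model.predict_proba(grid)[:, 1].reshape(G1.shape)
plt.contourf(G1, G2, proba, levels=20, cmap="RdBu")
plt.contour(G1, G2, proba, levels=[0.5], colors="k")  # decision boundary
plt.scatter(instance[f1], instance[f2], c="yellow", edgecolors="k")
plt.xlabel(f"feature {f1}")
plt.ylabel(f"feature {f2}")
plt.show()
```

In this sketch the slice is spanned by two raw feature axes; the paper's local linear maps additionally preserve distances within the slice, which is what enables reading off how far an instance must move to reach a counterfactual region.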