Towards Explainability in Machine Learning: The Formal Methods Way

Classification is a central discipline of machine learning (ML), and classifiers have become increasingly popular to support or replace human decisions. We encounter them as email spam detectors, as decision support systems, for example in healthcare, as aids in interpreting X-rays in breast cancer detection, or in the financial and insurance sector, for financial and risk analysis. For example, Facebook uses classifiers to predict the likelihood that users will navigate or click in a certain way, at scale, for millions and millions of users every day [9]. They also play a significant role in various areas of computer vision, where traffic signals and other objects need to be identified in order to "read" a situation during assisted or autonomous driving. Because we rely on classifiers not only for ease and comfort but also in business- or safety-critical systems, they need to be precise and reliable.

Classifiers rest on a wide variety of techniques: neural networks, statistical learning as in Bayesian networks, instance learning as in k-nearest neighbors, separability of classes in a vector space as in support vector machines, or logics, as in decision trees, random forests, and rule-based classifiers. ML classifiers were traditionally judged mostly in terms of precision, ease of training, and fast response. In many cases, however, small differences in the sample led to spectacularly wrong decisions. Meanwhile, AI failure stories populate various sites, including failures of popular AI platforms like IBM's Watson.

When something goes wrong, it is good to know why. In cases where legal action follows a misclassification, as in the recent CervicalCheck cancer scandal that rocked Ireland's Health Service, it is important to be able to find out exactly why a certain classification verdict was issued. Ease of explanation is also particularly important when the proposed classification is correct but apparently counter-intuitive. This is why explainability is now a hot topic in ML, and this is where formal methods can play an essential role.

Let us show the power of the formal methods way in combination with random forests. Random forests are among the most popular logic-based classifiers in ML: the larger they are, the more precise their predictions tend to be. Figure 1 shows a random forest with 100 tree elements that was learned from the Iris dataset.
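To make this concrete, here is a minimal sketch of such a forest in practice (assuming scikit-learn; this is illustrative code, not the authors' tooling): it learns a random forest with 100 trees from the Iris dataset and prints the decision rules of one constituent tree, the kind of logical structure that formal methods can analyze when explaining a verdict.

    # Minimal sketch (scikit-learn assumed): learn a 100-tree random
    # forest from the Iris dataset and inspect one tree's logic.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import export_text

    iris = load_iris()
    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    forest.fit(iris.data, iris.target)

    # Each tree in the forest is itself a logic-based classifier; its
    # decision rules read as nested if/else conditions over the features.
    print(export_text(forest.estimators_[0], feature_names=iris.feature_names))

    # The forest's verdict on a single sample.
    sample = iris.data[:1]
    print("prediction:", iris.target_names[forest.predict(sample)[0]])

Printing a single estimator already shows why one tree votes the way it does; reasoning about the joint verdict of all 100 trees at once is where formal techniques come into play.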

Keywords: explainability; formal methods; machine learning

Journal Title: IT Professional
Year Published: 2020
