
A Novel Explainable Deep Belief Network Framework and Its Application for Feature Importance Analysis



Feature analysis and selection are important topics in real-world deep learning (DL) applications. However, most existing methods are manual and offer little insight into training mechanisms. This is because DL is often viewed as a "black box": the mechanisms producing the output are hidden from the user and difficult to understand. Some scientists have utilized visualization, sensitivity analysis, and adversarial machine learning to increase transparency, and have demonstrated successful methods for understanding DL, especially convolutional neural networks. This paper builds on these methods and focuses on deep belief networks (DBNs), which have two training stages: unsupervised learning and supervised learning. First, a novel algorithm named Visual Input-neuron Importance (Vi-II), based on visualization and a feature-importance criterion, is proposed to calculate changes in the importance of the input features. Second, a criterion named Visual Hidden-layer Importance (Vi-HI) is proposed to dynamically display the contribution of each hidden layer. Third, a novel framework is put forward by combining the two techniques to determine the final structure (input and hidden layers) of the DBN across both the unsupervised and supervised training stages. An application to the analysis of a road safety performance function is then demonstrated. The proposed method provides an accurate description of the model's inner workings, identifies significant features, and eliminates irrelevant ones. Finally, the revised dataset and optimized model structure are used for car-collision prediction; the results show that the revised model achieves much better performance than comparable methods.
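The abstract does not spell out how Vi-II scores input features, so the following is only an illustrative sketch: it ranks inputs of a trained visible-to-hidden layer by a common weight-magnitude heuristic (similar in spirit to Garson's algorithm), which is one simple way to flag candidate features for elimination. The matrix shape and normalization here are assumptions, not the paper's method.

```python
import numpy as np

def input_feature_importance(W):
    """Estimate input-feature importance from one layer's weight matrix.

    W: (n_inputs, n_hidden) weights of the first (visible-to-hidden) layer
    of a DBN/RBM. Returns scores per input feature, normalized to sum to 1.
    Note: this weight-magnitude heuristic stands in for the paper's
    (unspecified) Vi-II criterion.
    """
    raw = np.abs(W).sum(axis=1)   # total absolute connection strength per input
    return raw / raw.sum()        # normalize so scores are comparable

# Hypothetical example: 3 input features, 4 hidden units.
# Feature 0 connects strongly; feature 2 barely connects at all.
W = np.array([[2.0, -2.0, 2.0, -2.0],
              [0.5,  0.5, 0.5,  0.5],
              [0.1, -0.1, 0.1, -0.1]])
scores = input_feature_importance(W)
# Low-scoring features (here, feature 2) are candidates for removal
# before retraining, mirroring the paper's feature-elimination step.
```

A usage pattern consistent with the abstract's framework would be to compute such scores after the unsupervised stage, drop the weakest inputs, and retrain on the revised dataset.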

Keywords: importance; deep belief; analysis; feature importance

Journal Title: IEEE Sensors Journal
Year Published: 2021



