Deep learning has achieved impressive prediction accuracies in a variety of scientific and industrial domains. However, the nested nonlinear structure of deep networks makes the learning highly nontransparent: it remains unknown how learning coordinates a huge number of parameters to achieve decision making. To explain this hierarchical credit assignment, we propose a mean-field learning model that assumes an ensemble of subnetworks, rather than a single network, is trained for a classification task. Surprisingly, the model reveals that, apart from some deterministic synaptic weights connecting neurons at neighboring layers, a large number of connections can be absent, while others allow a broad distribution of weight values. Synaptic connections therefore fall into three categories: very important ones, unimportant ones, and variable ones that may partially encode nuisance factors. Our model thus learns the credit assignment underlying the decision and predicts an ensemble of subnetworks that can accomplish the same task, providing insight into the macroscopic behavior of deep learning through the lens of the distinct roles of synaptic weights.
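To make the ensemble-of-subnetworks idea concrete, below is a minimal sketch of training a network whose every synapse carries a weight distribution rather than a point value. This is an illustration under simplifying assumptions, not the authors' exact formulation: it uses an independent Gaussian N(mu, sigma^2) per weight with the reparameterization trick, so each forward pass samples one subnetwork from the ensemble. The names `MeanFieldLinear` and `classify_synapses`, the toy dataset, and the thresholds used to sort connections into the three categories (important, absent, variable) are all hypothetical choices for the example.

```python
# Hedged sketch: a mean-field ensemble over synaptic weights. Each weight is
# Gaussian N(mu, sigma^2); after training, synapses are sorted by their learned
# statistics into important (large |mu|, small sigma), effectively absent
# (both near zero), and variable (large sigma). Thresholds are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MeanFieldLinear(nn.Module):
    """Linear layer whose weights follow independent Gaussians N(mu, sigma^2)."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.mu = nn.Parameter(0.1 * torch.randn(out_features, in_features))
        self.log_sigma = nn.Parameter(torch.full((out_features, in_features), -2.0))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # Reparameterization trick: each forward pass samples one subnetwork.
        sigma = self.log_sigma.exp()
        w = self.mu + sigma * torch.randn_like(sigma)
        return F.linear(x, w, self.bias)

def classify_synapses(layer, mu_thresh=0.2, sigma_thresh=0.2):
    """Crude three-way split of connections; thresholds are arbitrary here."""
    mu, sigma = layer.mu.detach().abs(), layer.log_sigma.detach().exp()
    important = (mu > mu_thresh) & (sigma < sigma_thresh)
    absent = (mu <= mu_thresh) & (sigma < sigma_thresh)
    variable = sigma >= sigma_thresh
    return important, absent, variable

if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy binary classification task (synthetic data, for illustration only).
    X = torch.randn(512, 20)
    y = (X[:, :5].sum(dim=1) > 0).long()

    model = nn.Sequential(MeanFieldLinear(20, 32), nn.ReLU(),
                          MeanFieldLinear(32, 2))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(300):
        opt.zero_grad()
        loss = F.cross_entropy(model(X), y)
        loss.backward()
        opt.step()

    imp, absent, var = classify_synapses(model[0])
    print(f"important: {imp.sum().item()}, absent: {absent.sum().item()}, "
          f"variable: {var.sum().item()}")
```

Any weight configuration sampled from the trained distributions defines one subnetwork of the ensemble, which is how a single fitted model can predict many subnetworks accomplishing the same task.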
               