Statistical Mechanics of Deep Linear Neural Networks: The Backpropagating Kernel Renormalization

The groundbreaking success of deep learning in many real-world tasks has triggered an intense effort to understand theoretically the power and limitations of deep learning in the training and generalization of complex tasks, so far with limited progress. In this work we study the statistical mechanics of learning in Deep Linear Neural Networks (DLNNs) in which the input-output function of an individual unit is linear. Despite the linearity of the units, learning in DLNNs is highly nonlinear, hence studying its properties reveals some of the essential features of nonlinear Deep Neural Networks (DNNs). Importantly, we solve exactly the network properties following supervised learning using an equilibrium Gibbs distribution in the weight space. To do this, we introduce the Back-Propagating Kernel Renormalization (BPKR), which allows for the incremental integration of the network weights layer-by-layer starting from the network output layer and progressing backward until the first layer’s weights are integrated out. This procedure allows us to evaluate important network properties, such as its generalization error, the role of network width and depth, the impact of the size of the training set, and the effects of weight regularization and learning stochasticity. BPKR does not assume specific statistics of the input or the task’s output. Furthermore, by performing partial integration of the layers, the BPKR allows us to compute the emergent properties of the neural representations across the different hidden layers. We have proposed a heuristic extension of the BPKR to nonlinear DNNs with rectified linear units (ReLU). Surprisingly, our numerical simulations reveal that despite the nonlinearity, the predictions of our theory are largely shared by ReLU networks of modest depth, in a wide regime of parameters. Our work is the first exact statistical mechanical study of learning in a family of Deep Neural Networks, and the first successful theory of learning through the successive integration of Degrees of Freedom in the learned weight space.
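As a rough sketch of the setup the abstract describes (the notation below is illustrative and not necessarily the paper's own): a depth-$L$ deep linear network computes its output as a product of weight matrices, and the trained network is described by an equilibrium Gibbs distribution over all weights,

\[
  f(\mathbf{x}) \;=\; W_L W_{L-1} \cdots W_1 \mathbf{x},
  \qquad
  P(W_1,\dots,W_L) \;\propto\; \exp\!\bigl[-\beta\, E(W_1,\dots,W_L)\bigr],
\]

where $E$ is the training error on the data set (possibly including a weight-regularization term) and $\beta$ sets the level of learning stochasticity. The back-propagating kernel renormalization then integrates out the weights one layer at a time, starting from the output layer,

\[
  P(W_1,\dots,W_{L-1}) \;\propto\; \int dW_L \; \exp\!\bigl[-\beta\, E(W_1,\dots,W_L)\bigr],
\]

and repeats this marginalization backward until only the first layer remains, renormalizing the effective kernel of the remaining layers at each step. Stopping the procedure partway through, as the abstract notes, gives access to the emergent representations in the intermediate hidden layers.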

Keywords: deep linear neural networks; statistical mechanics

Journal Title: Physical Review X
Year Published: 2021
