
Online gradient method with smoothing ℓ0 regularization for feedforward neural networks

ℓp regularization has been a popular pruning method for neural networks. The parameter p is usually set with 0 < p ≤ 2 in the literature, and practical training algorithms with ℓ0 regularization are lacking because the ℓ0 regularization problem is NP-hard; however, ℓ0 regularization tends to produce the sparsest solution, corresponding to the most parsimonious network structure, which is desirable for generalization. To this end, this paper considers an online gradient training algorithm with smoothing ℓ0 regularization (OGTSL0) for feedforward neural networks, where the ℓ0 regularizer is approximated by a series of smoothing functions. The underlying principle for the sparsity of OGTSL0 is provided, and the convergence of the algorithm is theoretically analyzed. Simulation examples support the theoretical analysis and illustrate the superiority of the proposed algorithm.
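The abstract only sketches OGTSL0, so the snippet below is a minimal illustrative sketch, not the paper's method: it trains a one-hidden-layer feedforward network one sample at a time (online gradient descent) and penalizes the weights with a smoothed ℓ0 term. The Gaussian approximation ℓ0(w) ≈ Σ(1 − exp(−w²/2σ²)) and the shrinking σ schedule are assumptions borrowed from the smoothed-ℓ0 literature; the paper's actual series of smoothing functions may differ, and all names (train_ogtsl0, smoothed_l0_grad) and hyperparameters here are hypothetical. Shrinking σ over training tightens the approximation toward the true ℓ0 count, which is what drives redundant weights to (near) zero.

```python
# Minimal sketch of online gradient training with a smoothed l0 penalty.
# Assumption: l0(w) ~= sum(1 - exp(-w^2 / (2*sigma^2))), with sigma shrunk
# over epochs so the smooth penalty approaches the true l0 "norm".
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def smoothed_l0_grad(w, sigma):
    # d/dw [1 - exp(-w^2 / (2 sigma^2))] = (w / sigma^2) * exp(-w^2 / (2 sigma^2))
    return (w / sigma**2) * np.exp(-w**2 / (2.0 * sigma**2))

def train_ogtsl0(X, y, hidden=8, lam=1e-3, lr=0.1,
                 epochs=50, sigmas=(1.0, 0.5, 0.2, 0.1)):
    n_in = X.shape[1]
    W1 = rng.normal(scale=0.5, size=(hidden, n_in))   # input -> hidden
    W2 = rng.normal(scale=0.5, size=(1, hidden))      # hidden -> output
    for epoch in range(epochs):
        # Step through the shrinking-sigma schedule as training progresses.
        sigma = sigmas[min(epoch * len(sigmas) // epochs, len(sigmas) - 1)]
        for i in rng.permutation(len(X)):              # online: one sample at a time
            x = X[i]
            h = sigmoid(W1 @ x)
            out = (W2 @ h).item()
            err = out - y[i]                           # gradient of 0.5*(out - y)^2
            g2 = err * h[None, :]
            g1 = (err * W2.T * h[:, None] * (1 - h[:, None])) * x[None, :]
            # Gradient step on loss + lambda * smoothed-l0 penalty.
            W2 -= lr * (g2 + lam * smoothed_l0_grad(W2, sigma))
            W1 -= lr * (g1 + lam * smoothed_l0_grad(W1, sigma))
    return W1, W2

# Toy usage: with only 2 of 5 inputs relevant, the penalty should push the
# weights on the irrelevant inputs toward zero, i.e. a sparser network.
X = rng.normal(size=(200, 5))
y = sigmoid(X[:, 0] - 2 * X[:, 1])
W1, W2 = train_ogtsl0(X, y)
print("near-zero input weights:", (np.abs(W1) < 1e-2).sum(), "of", W1.size)
```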

Keywords: neural networks; smoothing ℓ0 regularization; online gradient method; feedforward neural networks

Journal Title: Neurocomputing
Year Published: 2017
