
Overfitting remedy by sparsifying regularization on fully-connected layers of CNNs

Abstract: Deep learning, especially Convolutional Neural Networks (CNNs), has been widely applied in many domains. The large number of parameters in a CNN allows it to learn complex features; however, those parameters may also hinder generalization by overfitting the training data. Despite many previously proposed regularization methods, overfitting remains a problem in training a robust CNN. Among the many factors that lead to overfitting, the numerous parameters of the fully-connected layers (FCLs) of a typical CNN should be taken into account. This paper proposes SparseConnect, a simple idea that alleviates overfitting by sparsifying connections to FCLs. Experimental results on three benchmark datasets (MNIST, CIFAR10 and ImageNet) show that SparseConnect outperforms several state-of-the-art regularization methods.
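The abstract does not spell out how SparseConnect sparsifies connections, so the sketch below only illustrates the general idea under an assumption: an L1 (lasso) penalty applied to the fully-connected weights of a small PyTorch CNN, which drives many FCL connections toward zero. The names SmallCNN and fc_l1_penalty and the penalty weight lam are illustrative choices, not the paper's implementation.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """A minimal MNIST-sized CNN: one conv block followed by two FCLs."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # 28x28 -> 14x14
        )
        self.classifier = nn.Sequential(
            nn.Linear(16 * 14 * 14, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)

def fc_l1_penalty(model, lam=1e-4):
    """Sum of absolute weights over all fully-connected (nn.Linear) layers.
    Adding this to the loss pushes FCL connections toward zero (sparsity);
    lam is a hypothetical strength, to be tuned per dataset."""
    penalty = sum(m.weight.abs().sum()
                  for m in model.modules() if isinstance(m, nn.Linear))
    return lam * penalty

# One training step on a dummy MNIST-shaped batch.
model = SmallCNN()
criterion = nn.CrossEntropyLoss()
x = torch.randn(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
loss = criterion(model(x), y) + fc_l1_penalty(model)
loss.backward()
```

Note that the penalty is restricted to nn.Linear modules, matching the abstract's focus on FCL parameters rather than convolutional weights; after training, near-zero connections could additionally be pruned outright.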

Keywords: overfitting remedy; sparsifying regularization; fully-connected layers; regularization

Journal Title: Neurocomputing
Year Published: 2019
