
Cross-covariance regularized autoencoders for nonredundant sparse feature representation



Abstract: We propose a new feature representation algorithm using cross-covariance in the context of deep learning. Existing feature representation algorithms based on the sparse autoencoder and the nonnegativity-constrained autoencoder tend to produce duplicative encoding and decoding receptive fields, which leads to feature redundancy and overfitting. We propose using the cross-covariance to regularize the feature weight vector, constructing a new objective function that eliminates feature redundancy and reduces overfitting. Results on the MNIST handwritten digits dataset, the NORB normalized-uniform dataset, and the Yale face dataset indicate that, relative to other algorithms based on the conventional sparse autoencoder and the nonnegativity-constrained autoencoder, our method can effectively eliminate feature redundancy, extract more distinctive features, and improve sparsity and reconstruction quality. Furthermore, this method improves image classification performance and reduces the overfitting of conventional networks without increasing computation time.
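The abstract does not give the exact objective, but a common way to penalize redundancy among learned features is to minimize the squared off-diagonal entries of the covariance matrix of the hidden activations: the penalty is zero exactly when features are pairwise uncorrelated. The sketch below (an assumption, not the paper's verbatim formulation; the function name `xcov_penalty` is hypothetical) illustrates such a cross-covariance term in NumPy:

```python
import numpy as np

def xcov_penalty(h):
    """Cross-covariance penalty on hidden activations h
    (shape: n_samples x n_features): half the sum of squared
    off-diagonal entries of the feature covariance matrix.
    Zero when features are uncorrelated (nonredundant)."""
    hc = h - h.mean(axis=0)           # center each feature column
    cov = hc.T @ hc / h.shape[0]      # feature-by-feature covariance
    off_diag = cov - np.diag(np.diag(cov))
    return 0.5 * np.sum(off_diag ** 2)

# Duplicated (redundant) features incur a much larger penalty
# than independent ones.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 1))
redundant = np.hstack([x, x])          # two identical features
independent = rng.normal(size=(100, 2))
print(xcov_penalty(redundant) > xcov_penalty(independent))
```

In training, this term would be added to the usual reconstruction (and sparsity) losses with a weighting coefficient, so the optimizer trades off reconstruction quality against feature decorrelation.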

Keywords: cross-covariance; feature representation; feature

Journal Title: Neurocomputing
Year Published: 2018



