Sparse Auto-encoder with Smoothed $l_1$ Regularization

Improving an auto-encoder's performance on data representation can help obtain a satisfactory deep network. One strategy for enhancing this performance is to incorporate sparsity into the auto-encoder, which has commonly been achieved by adding a Kullback–Leibler (KL) divergence term to the risk functional. In compressive sensing and machine learning, $l_1$ regularization is a widely used technique known to induce sparsity. This paper therefore introduces a smoothed $l_1$ regularization, in place of the commonly used KL divergence, to enforce sparsity in auto-encoders. Experimental results show that the smoothed $l_1$ regularization works better than the KL divergence.

Keywords: auto-encoder; smoothed $l_1$ regularization; sparsity

Journal Title: Neural Processing Letters
Year Published: 2017
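
Illustrative sketch (not the authors' code): the abstract contrasts two sparsity penalties added to the auto-encoder's risk functional, the classical KL-divergence term and a smoothed $l_1$ term. The minimal PyTorch example below shows one way such a training objective can be assembled. The particular smoothing $\sqrt{h^2 + \varepsilon}$, the layer sizes, and all hyper-parameters (beta, eps, rho) are assumptions for illustration, not values taken from the paper.

# Minimal sparse auto-encoder sketch: reconstruction loss plus a sparsity penalty
# on the hidden activations, selectable between a smoothed l1 surrogate and the
# classical KL-divergence term. Hyper-parameters and smoothing choice are assumptions.
import torch
import torch.nn as nn


class SparseAutoEncoder(nn.Module):
    def __init__(self, n_visible: int, n_hidden: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_visible, n_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_visible), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x)          # hidden representation to be made sparse
        return self.decoder(h), h


def smoothed_l1(h: torch.Tensor, eps: float = 1e-3) -> torch.Tensor:
    # Differentiable surrogate for sum |h_ij|: sqrt(h^2 + eps) -> |h| as eps -> 0.
    return torch.sqrt(h ** 2 + eps).sum()


def kl_sparsity(h: torch.Tensor, rho: float = 0.05) -> torch.Tensor:
    # Classical KL sparsity term for comparison: sum_j KL(rho || rho_hat_j),
    # where rho_hat_j is the mean activation of hidden unit j over the batch.
    rho_hat = h.mean(dim=0).clamp(1e-6, 1 - 1e-6)
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()


def train_step(model, x, optimizer, beta: float = 1e-3, use_smoothed_l1: bool = True):
    optimizer.zero_grad()
    x_rec, h = model(x)
    recon = nn.functional.mse_loss(x_rec, x)                  # reconstruction term
    penalty = smoothed_l1(h) if use_smoothed_l1 else kl_sparsity(h)
    loss = recon + beta * penalty                             # risk functional
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = SparseAutoEncoder(n_visible=784, n_hidden=196)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(64, 784)    # stand-in mini-batch; real data would be e.g. MNIST
    for _ in range(5):
        print(train_step(model, x, opt))

Switching use_smoothed_l1 to False swaps in the KL-divergence penalty, which is the comparison the abstract describes; everything else in the objective stays the same.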
