Improving the data-representation performance of an auto-encoder can help to obtain a satisfactory deep network. One strategy for enhancing this performance is to incorporate sparsity into the auto-encoder. Traditionally, sparsity has been achieved by adding a Kullback–Leibler (KL) divergence term to the risk functional. In compressive sensing and machine learning, it is well known that $l_1$ regularization is a widely used technique for inducing sparsity. This paper therefore introduces a smoothed $l_1$ regularization in place of the commonly used KL divergence to enforce sparsity in auto-encoders. Experimental results show that the smoothed $l_1$ regularization works better than the KL divergence.
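To illustrate the two penalties being compared, here is a minimal NumPy sketch of the standard KL-divergence sparsity term for sparse auto-encoders and one common smoothed $l_1$ surrogate, $\sqrt{a^2 + \varepsilon}$. The exact smoothing used in the paper may differ; function names, the smoothing constant `eps`, and the target sparsity `rho` are illustrative assumptions.

```python
import numpy as np

def kl_sparsity_penalty(activations, rho=0.05):
    """Classical sparse auto-encoder penalty: sum over hidden units of
    KL(rho || rho_hat_j), where rho_hat_j is the mean activation of
    hidden unit j over the batch. rho is the desired sparsity level."""
    rho_hat = np.clip(activations.mean(axis=0), 1e-8, 1 - 1e-8)
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

def smoothed_l1_penalty(activations, eps=1e-4):
    """One common smoothed l1 surrogate: sqrt(a^2 + eps) approximates
    |a| while remaining differentiable at zero, so it can be used
    directly with gradient-based training."""
    return np.sum(np.sqrt(activations ** 2 + eps))
```

The KL term vanishes when every hidden unit's mean activation equals `rho`, whereas the smoothed $l_1$ term penalizes each activation individually, pushing them toward zero.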