Restricted Boltzmann machines (RBMs) are widely used to construct deep architectures because they are expressive and their inference is tractable. In this paper, we propose a model named the self-connected restricted Boltzmann machine (SCRBM), which adds horizontal connections within the hidden layer so that hidden units can exchange information directly. We present a simple and effective training method based on the greedy layer-wise procedure of deep learning algorithms. Under this algorithm, the SCRBM has a three-layer architecture: the first hidden layer extracts features from the data, and the second hidden layer models interactions among units within the layer. Specifically, to simulate the lateral inhibition found in sensory systems, a log-sparsity term is introduced into the second hidden layer of the SCRBM. Our experiments show that the features learned by our algorithm are more vivid and cleaner than those learned by the basic RBM and SparseRBM. Further experiments show that the SCRBM outperforms the basic RBM and SparseRBM in accuracy on several widely used datasets.
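To make the idea of horizontal (lateral) hidden-layer connections concrete, the following is a minimal NumPy sketch, not the paper's implementation: it assumes a symmetric lateral weight matrix `L` with zero diagonal and uses mean-field iterations for the hidden activations, since lateral connections make the hidden units conditionally dependent on one another. All dimensions, the helper names, and the exact form of the log-sparsity penalty are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only (not from the paper)
n_vis, n_hid = 6, 4

W = rng.normal(0.0, 0.1, (n_vis, n_hid))  # visible-to-hidden weights
L = rng.normal(0.0, 0.1, (n_hid, n_hid))  # lateral hidden-to-hidden weights
L = (L + L.T) / 2.0                       # keep lateral weights symmetric
np.fill_diagonal(L, 0.0)                  # no self-loops on hidden units
b = np.zeros(n_vis)                       # visible biases
c = np.zeros(n_hid)                       # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_field_hidden(v, n_iters=20):
    """Estimate hidden activations by mean-field iteration.

    With lateral connections, hidden units are no longer
    conditionally independent given v, so a single sigmoid pass
    is not exact; we iterate until the activations settle.
    """
    h = sigmoid(v @ W + c)                # start from the plain RBM update
    for _ in range(n_iters):
        h = sigmoid(v @ W + c + h @ L)    # lateral input from other units
    return h

def log_sparsity_penalty(h, eps=1e-8):
    """One plausible 'log sparse' regularizer: the log of the mean
    activation per hidden unit (the paper's exact form may differ)."""
    return np.sum(np.log(h.mean(axis=0) + eps))

v = rng.integers(0, 2, (3, n_vis)).astype(float)  # toy binary batch
h = mean_field_hidden(v)
print(h.shape)                                    # one activation row per example
```

A negative lateral weight between two hidden units pushes them not to activate together, which is one way to read the lateral-inhibition motivation in the abstract; the log-sparsity penalty would be subtracted from the training objective to further discourage dense activations.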