Abstract: Neural learning plays an important role in many applications. In this paper, we derive a new learning paradigm for neural networks. Most existing neural models train network parameters, including connection weights and biases, by optimizing a loss or energy function. Inspired by associative learning in the brain, we instead propose to associate different patterns by modeling their joint distribution with a hierarchical architecture. We first define an energy function based on the distance between the hierarchical features of different patterns, and then construct a Gibbs distribution from this energy field. Optimizing the model requires estimating the gradient expectation via sampling. Unlike existing probabilistic neural models with simple architectures, such as the restricted Boltzmann machine, the difficulty of optimizing this model is that the probability needed for sampling from the distribution is intractable; we therefore propose an optimization-based sampling method. After learning, the conditional probability can be derived and an unknown pattern can be generated by sampling as well. Compared with existing neural learning models, the proposed deep associative learning can associate different patterns and can be applied directly to many learning problems. Experiments on classification, image transformation, and image change detection verify the effectiveness of the proposed learning paradigm.
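To make the described pipeline concrete, below is a minimal sketch of an associative energy model along the lines the abstract outlines. It is not the paper's implementation: the hierarchical features are assumed to be small MLPs, the energy is assumed to be the squared distance between feature vectors, the optimization-based sampler is approximated here by noisy gradient descent (a Langevin-style step) on the energy, and the gradient-expectation estimate is approximated by a contrastive positive/negative-phase surrogate. The names `FeatureNet`, `AssociativeEnergy`, and `sample_y_given_x` are hypothetical.

```python
# Minimal sketch (not the paper's code) of an associative energy model.
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """Hierarchical (multi-layer) feature extractor for one pattern."""
    def __init__(self, in_dim, hid_dim=64, out_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, out_dim),
        )

    def forward(self, x):
        return self.net(x)

class AssociativeEnergy(nn.Module):
    """Energy E(x, y) = ||f(x) - g(y)||^2; the Gibbs distribution over
    pattern pairs is then p(x, y) proportional to exp(-E(x, y))."""
    def __init__(self, x_dim, y_dim):
        super().__init__()
        self.f = FeatureNet(x_dim)
        self.g = FeatureNet(y_dim)

    def forward(self, x, y):
        return ((self.f(x) - self.g(y)) ** 2).sum(dim=-1)

def sample_y_given_x(model, x, y_dim, steps=20, lr=0.1, noise=0.01):
    """Approximate a draw from p(y | x) by noisy gradient descent on the
    energy, standing in for the paper's optimization-based sampler."""
    y = torch.randn(x.shape[0], y_dim, requires_grad=True)
    for _ in range(steps):
        energy = model(x, y).sum()
        grad, = torch.autograd.grad(energy, y)
        with torch.no_grad():
            y -= lr * grad                    # move toward low energy
            y += noise * torch.randn_like(y)  # injected noise (Langevin-style)
    return y.detach()

if __name__ == "__main__":
    x_dim, y_dim = 20, 10
    model = AssociativeEnergy(x_dim, y_dim)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x_train, y_train = torch.randn(128, x_dim), torch.randn(128, y_dim)
    for _ in range(50):
        # Positive phase: energy of observed pairs. Negative phase: energy of
        # pairs whose y is drawn from the approximate sampler, mimicking the
        # sampled gradient-expectation estimate used for RBM-like models.
        y_neg = sample_y_given_x(model, x_train, y_dim)
        loss = model(x_train, y_train).mean() - model(x_train, y_neg).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    # Association: generate the unknown pattern y for new inputs x by sampling.
    y_gen = sample_y_given_x(model, torch.randn(4, x_dim), y_dim)
    print(y_gen.shape)  # torch.Size([4, 10])
```

The key design point this sketch illustrates is that, because the normalizing constant of the Gibbs distribution is intractable, new patterns are produced by optimizing the energy with respect to the unknown pattern rather than by exact probabilistic sampling.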