From a Bayesian point of view, the prior distribution over the weights of a multilayer feedforward neural network plays a central role in generalization. In this context, we propose a new prior law for the weight parameters that promotes network regularization more strongly than the previously proposed $\ell_1$ and $\ell_2$ priors. Training is based on Hamiltonian Monte Carlo, which is used to simulate both the prior and the posterior distribution. The generated samples are used to approximate the gradient of the evidence, which allows re-estimating the hyperparameters that balance the trade-off between the likelihood term and the regularization term; the posterior samples are then used to estimate the network output. The case studies in this paper include regression and classification tasks. The obtained results illustrate the advantages of our approach in terms of error rate compared to earlier approaches, although our method requires considerable time to converge.
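The abstract describes the training loop only at a high level. The following Python sketch illustrates a single Hamiltonian Monte Carlo update over the weights of a small one-hidden-layer regression network. The paper's proposed prior is not specified in the abstract, so the sketch falls back to a standard Gaussian ($\ell_2$) log-prior with an assumed precision hyperparameter alpha, and an assumed noise precision beta for the likelihood; the names forward, log_post, and hmc_step are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(w, X, h=10):
    # Unpack a flat weight vector into a one-hidden-layer tanh network.
    d = X.shape[1]
    W1 = w[:d * h].reshape(d, h); b1 = w[d * h:d * h + h]
    W2 = w[d * h + h:d * h + 2 * h]; b2 = w[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def log_post(w, X, y, alpha=1.0, beta=25.0):
    # Log posterior up to a constant: beta weights the likelihood term,
    # alpha the (assumed Gaussian) prior/regularization term.
    err = y - forward(w, X)
    return -0.5 * beta * err @ err - 0.5 * alpha * w @ w

def grad_log_post(w, X, y, eps=1e-5):
    # Finite-difference gradient; autodiff would be used in practice.
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w); e[i] = eps
        g[i] = (log_post(w + e, X, y) - log_post(w - e, X, y)) / (2 * eps)
    return g

def hmc_step(w, X, y, step=1e-3, n_leapfrog=20):
    p = rng.standard_normal(w.size)                # sample momentum
    H0 = -log_post(w, X, y) + 0.5 * p @ p          # initial Hamiltonian
    w_new, p_new = w.copy(), p.copy()
    p_new += 0.5 * step * grad_log_post(w_new, X, y)    # half kick
    for _ in range(n_leapfrog - 1):
        w_new += step * p_new                           # drift
        p_new += step * grad_log_post(w_new, X, y)      # full kick
    w_new += step * p_new                               # final drift
    p_new += 0.5 * step * grad_log_post(w_new, X, y)    # final half kick
    H1 = -log_post(w_new, X, y) + 0.5 * p_new @ p_new
    # Metropolis accept/reject corrects the leapfrog discretization error.
    return w_new if np.log(rng.uniform()) < H0 - H1 else w
```

Repeated calls to hmc_step yield posterior weight samples $w_1, \ldots, w_S$, and the network output is then estimated as the Monte Carlo average of forward($w_s$, X) over those samples, matching the estimator described in the abstract. For one input dimension and h = 10 hidden units, the flat weight vector has 10 + 10 + 10 + 1 = 31 entries.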