ABSTRACT Overfitting occurs when one tries to train a large model on a small amount of data. Regularizing a neural network with prior knowledge remains an open research topic, as there is no consensus on how much prior information should be supplied to the network. In this paper, a novel algorithm is introduced that uses regularization to train a neural network without enlarging the dataset. Trivial prior information, the class label, is supplied to the model during training, and Laplace noise is injected into an intermediate layer for better generalization. The results show a significant improvement in accuracy on standard datasets for a simple Convolutional Neural Network (CNN). The proposed method outperforms previous regularization techniques such as dropout and batch normalization, and it can also be combined with them for further performance gains. On the variants of MNIST, the proposed algorithm achieves an average 48% increase in test accuracy.
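The abstract does not specify where the Laplace noise is applied or at what scale; as a minimal sketch, the noise-injection idea could look like the following, where the function name, the `scale` parameter, and the dropout-style train/inference switch are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def laplace_noise_layer(activations, scale=0.1, training=True, rng=None):
    """Add zero-mean Laplace noise to intermediate activations.

    Noise is injected only during training; at inference time
    (training=False) the activations pass through unchanged,
    mirroring how dropout is disabled at test time.
    """
    if not training:
        return activations
    rng = rng if rng is not None else np.random.default_rng()
    # Laplace(0, scale) noise, one sample per activation value.
    noise = rng.laplace(loc=0.0, scale=scale, size=activations.shape)
    return activations + noise
```

In a CNN this would be slotted between an intermediate convolutional layer and the next one, so the downstream layers must learn features that are robust to heavy-tailed perturbations.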