Deep neural networks have been scaled up to thousands of layers with the intent of improving their accuracy. Unfortunately, beyond a certain depth, doubling the number of layers yields only minor improvements while the training difficulties increase substantially. In this article, we present an approach for constructing high-accuracy deep evolutionary networks and training them by activating and freezing dense networks (AFNets). The activating-and-freezing strategy reduces both the test classification error and the training time required for deeper dense networks. We activate the layers that are currently being trained and construct a freezing box that freezes idle and pretrained network layers, minimizing memory consumption. Training is relatively slow in the early stage because many layers are active for training; as the epochs progress, fewer layers remain active and training speeds up. Our method improves convergence to optimal performance within a limited number of epochs. Comprehensive experiments on a variety of data sets show that the proposed model achieves better performance than other state-of-the-art network models.
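The following is a minimal sketch, not the authors' implementation, of what a progressive activate-and-freeze schedule can look like, assuming a PyTorch-style API. The toy layer stack, the epoch schedule, and the helper `set_active_layers` are illustrative assumptions; only the general idea of freezing earlier layers so they stop receiving gradients reflects the abstract.

```python
# Sketch (assumed, simplified): freeze all but a trailing set of layers so that
# frozen layers keep their pretrained weights and receive no gradients.
import torch.nn as nn


def set_active_layers(net: nn.Sequential, num_active_from_end: int) -> None:
    """Keep only the last `num_active_from_end` layers trainable."""
    total = len(net)
    for idx, layer in enumerate(net):
        active = idx >= total - num_active_from_end
        for p in layer.parameters():
            p.requires_grad_(active)


# A toy stack standing in for a much deeper dense network.
model = nn.Sequential(*[nn.Sequential(nn.Linear(64, 64), nn.ReLU())
                        for _ in range(8)])

# Early epochs: many layers active (slower); later epochs: fewer active (faster).
# The epoch-to-active-layer mapping below is a hypothetical schedule.
schedule = {0: 8, 10: 4, 20: 2}
for epoch in range(30):
    if epoch in schedule:
        set_active_layers(model, schedule[epoch])
    # ... run the usual training loop over mini-batches here ...
```

In this sketch, freezing a layer both skips its gradient computation and removes it from the set of parameters the optimizer must track, which is one plausible way to realize the memory and training-time savings the abstract attributes to the freezing box.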
               