As a research hotspot in machine learning, ensemble learning improves the prediction accuracy of the final model by constructing and combining multiple base models. In recent years, many researchers have worked on combining deep networks with ensemble learning to improve the accuracy of neural network models across various scenarios and tasks. However, not every neural network is suitable for inclusion in an ensemble: a deep-network ensemble requires that each member network have both high accuracy and large discrepancy (diversity) with respect to the other networks. The initial stage of building such an ensemble is generating sets of candidate deep networks. Building on the existing multiobjective deep belief networks ensemble (MODBNE) method, this work uses a Gaussian random field model as a pre-screening strategy during candidate generation: only individuals with high potential for improvement are selected for fitness-function evaluation, so that a large number of network models with high accuracy and large inter-network discrepancy can be obtained. This effectively improves the quality of the solutions and reduces the time spent training the neural networks.
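The pre-screening idea can be sketched with a Gaussian-process surrogate (a Gaussian random field over the search space): fit the surrogate on candidates whose fitness has already been evaluated, predict a mean and uncertainty for new candidates, and send only the most promising ones to the expensive fitness evaluation (here, network training). This is a minimal illustrative sketch, not the MODBNE implementation; the kernel, the lower-confidence-bound criterion, and all function names are assumptions.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    # Squared-exponential kernel between the rows of A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(X_train, y_train, X_cand, length_scale=1.0, noise=1e-6):
    # Standard GP regression: posterior mean and std at the candidate points,
    # conditioned on the already-evaluated (X_train, y_train) pairs.
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_cand, length_scale)
    Kss = rbf_kernel(X_cand, X_cand, length_scale)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - np.sum(v**2, axis=0)
    return mu, np.sqrt(np.maximum(var, 0.0))

def prescreen(X_train, y_train, X_cand, k):
    # Pre-screening: rank candidates by a lower confidence bound (mean - std,
    # assuming fitness is minimized) and keep only the k most promising ones
    # for the expensive fitness evaluation.
    mu, sigma = gp_posterior(X_train, y_train, X_cand)
    lcb = mu - sigma
    return np.argsort(lcb)[:k]
```

In an evolutionary loop like MODBNE's candidate-generation stage, `prescreen` would be called on each batch of offspring so that only the selected `k` individuals are actually trained and evaluated, which is where the reported time savings come from.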
               