Ensemble learning has many successful applications because of its effectiveness in boosting the predictive performance of classification models. In this article, we propose a semisupervised multiple choice learning (SemiMCL) approach to jointly train a network ensemble on partially labeled data. Our model focuses on improving the assignment of labeled data among the constituent networks and on exploiting unlabeled data to capture domain-specific information, so that semisupervised classification is effectively facilitated. Unlike conventional multiple choice learning models, the constituent networks learn multiple tasks during training. Specifically, an auxiliary reconstruction task is included to learn domain-specific representations. To perform implicit labeling on reliable unlabeled samples, we adopt a negative ℓ₁-norm regularization when minimizing the conditional entropy with respect to the posterior probability distribution. Extensive experiments on multiple real-world datasets verify the effectiveness and superiority of the proposed SemiMCL model.
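The unlabeled-data objective described above can be illustrated with a minimal sketch: conditional entropy of the network's posterior is minimized while an ℓ₁-norm term enters with a negative sign. The function below is a hypothetical NumPy rendering under that reading; the weight `lam` and the exact form of the regularizer are assumptions, not the paper's definitive formulation.

```python
import numpy as np

def entropy_with_negative_l1(posteriors, lam=0.1):
    """Sketch of the unlabeled-data loss: mean conditional entropy of the
    posterior distributions minus a weighted l1-norm term (i.e., a negative
    l1 regularizer). `lam` is a hypothetical trade-off weight; SemiMCL's
    actual formulation may differ.

    posteriors : array of shape (n_samples, n_classes), rows sum to 1.
    """
    eps = 1e-12
    p = np.clip(posteriors, eps, 1.0)
    # Conditional entropy H(y|x), averaged over the unlabeled batch.
    entropy = -np.sum(p * np.log(p), axis=1).mean()
    # Negative l1-norm regularization on the posterior.
    l1 = np.abs(posteriors).sum(axis=1).mean()
    return entropy - lam * l1
```

Minimizing this quantity drives each posterior toward a confident (low-entropy) prediction, which is what makes implicit labeling of reliable unlabeled samples possible.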