Multiview learning has been widely studied in various fields and has achieved outstanding performance compared with many single-view approaches. In this paper, a novel multiview learning method based on the Gaussian process latent variable model (GPLVM) is proposed. In contrast to existing GPLVM methods, which assume only transformations from the latent variable to the multiple observed inputs, the proposed method also incorporates a back constraint that encodes the multiple observations into the latent variable under a Gaussian process (GP) prior. In particular, to overcome the difficulty of computing the covariance matrix in the encoder, a linear projection first maps the different observations into a consistent subspace. The resulting variable in this subspace is then mapped to the latent variable in the manifold space under the GP prior. Furthermore, unlike most GPLVM methods, which assume that the covariance matrices follow a single fixed kernel function, for example, the radial basis function (RBF), we introduce a multikernel strategy to construct the covariance matrix, which is more flexible and adaptive for data representation. To apply the presented approach to classification, a discriminative prior is also embedded into the learned latent variables, encouraging samples from the same category to be close and samples from different categories to be far apart. Experimental results on three real-world databases substantiate the effectiveness and superiority of the proposed method compared with state-of-the-art approaches.
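The abstract does not specify the exact kernel combination used by the multikernel strategy. As a minimal sketch, assuming the covariance matrix is built as a weighted sum of standard base kernels (e.g., RBF plus linear, with weights that would be learned jointly with the latent variables in the full model), it might look like the following; all function names, weights, and parameters here are illustrative, not taken from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, lengthscale=1.0, variance=1.0):
    """Standard RBF (squared exponential) kernel between two sets of points."""
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

def linear_kernel(X, Y, variance=1.0):
    """Linear (dot-product) kernel."""
    return variance * (X @ Y.T)

def multikernel_covariance(Z, weights=(0.5, 0.5), jitter=1e-6):
    """Covariance matrix as a weighted sum of base kernels over latent points Z.

    In the full model the kernel weights (and kernel hyperparameters) would be
    optimized together with the latent variables; they are fixed here purely
    for illustration.
    """
    K = weights[0] * rbf_kernel(Z, Z) + weights[1] * linear_kernel(Z, Z)
    return K + jitter * np.eye(Z.shape[0])  # jitter for numerical stability

# Toy usage: covariance over 5 latent points in a 2-D latent space.
Z = np.random.randn(5, 2)
K = multikernel_covariance(Z)
print(K.shape)  # (5, 5)
```

Because the covariance is a nonnegative combination of valid kernels (plus a small jitter), it remains positive semidefinite, which is the property a GP prior requires regardless of how many base kernels are mixed.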
               