The visual-semantic gap between the visual space (visual features) and the semantic space (semantic attributes) is one of the main problems in Generalized Zero-Shot Learning (GZSL). The essence of this problem is that the manifold structures of the two spaces are inconsistent, which makes it difficult to learn embeddings that unify visual features and semantic attributes for similarity measurement. In this work, we tackle this problem by proposing a multi-modal aggregated-posterior-aligning neural network, based on Wasserstein Auto-encoders (WAE), that learns a shared latent space for visual features and semantic attributes. The key to our approach is that the aggregated posterior of the latent representations encoded from each class's visual features is encouraged to align with a Gaussian distribution predicted in the latent space by the corresponding semantic attribute. On one hand, requiring the latent manifolds of visual features and semantic attributes to be consistent preserves the inter-class associations between seen and unseen classes. On the other hand, the aggregated posterior of each class is defined directly as a Gaussian in the latent space, which provides a reliable way to synthesize latent features for training classification models. On the AWA1, AWA2, CUB, aPY, FLO, and SUN benchmarks, we conduct extensive comparative evaluations that demonstrate the advantages of our method over state-of-the-art approaches.
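As a concrete illustration (not taken from the paper itself), the following PyTorch sketch shows one way such an alignment could be set up: an encoder maps visual features to latent codes, an attribute network predicts a per-class diagonal Gaussian in the latent space, and the empirical mean and variance of a class's latent codes are matched to that Gaussian. All module names and dimensions are hypothetical, and the closed-form 2-Wasserstein distance between diagonal Gaussians is used here only as one plausible alignment surrogate; the paper's actual WAE objective (e.g., its reconstruction and regularization terms) is not specified in this abstract.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions -- the abstract does not specify any of these.
VIS_DIM, ATTR_DIM, LATENT_DIM = 2048, 85, 64

class VisualEncoder(nn.Module):
    """Encodes visual features into the shared latent space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(VIS_DIM, 512), nn.ReLU(),
            nn.Linear(512, LATENT_DIM),
        )

    def forward(self, x):
        return self.net(x)

class AttributePredictor(nn.Module):
    """Predicts a diagonal Gaussian (mu, log_var) in the latent space
    from a class-level semantic attribute vector."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(ATTR_DIM, 256), nn.ReLU())
        self.mu = nn.Linear(256, LATENT_DIM)
        self.log_var = nn.Linear(256, LATENT_DIM)

    def forward(self, a):
        h = self.body(a)
        return self.mu(h), self.log_var(h)

def gaussian_w2(mu1, var1, mu2, var2):
    """Closed-form squared 2-Wasserstein distance between two diagonal
    Gaussians -- one possible surrogate for the alignment objective."""
    return ((mu1 - mu2) ** 2 + (var1.sqrt() - var2.sqrt()) ** 2).sum(-1)

def alignment_loss(z, mu_pred, log_var_pred):
    """Matches the empirical (aggregated) posterior of one class's latent
    codes z (a batch of samples from that class) to the Gaussian
    predicted from its semantic attribute."""
    mu_emp = z.mean(dim=0)
    var_emp = z.var(dim=0, unbiased=False) + 1e-6  # avoid zero variance
    return gaussian_w2(mu_emp, var_emp, mu_pred, log_var_pred.exp())

def synthesize(mu, log_var, n):
    """Draws n synthetic latent features from an attribute-predicted
    Gaussian, e.g. for an unseen class, to train the final classifier."""
    eps = torch.randn(n, mu.shape[-1])
    return mu + eps * (0.5 * log_var).exp()
```

Because each class's aggregated posterior is tied to an explicit Gaussian, synthetic latent features for unseen classes can be drawn directly from the attribute-predicted distributions (as in the hypothetical synthesize helper above) and used to train the downstream classifier.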