Current work on zero-shot learning (ZSL) generally does not focus on the discriminative ability of the models, which is important for differentiating between classes, since the human brain focuses on the discriminative parts of an object when classifying it. For generalized ZSL (GZSL), the fact that the model's outputs for seen and unseen classes are not directly comparable leads to degraded performance. We propose a new ZSL method with a center loss that makes instances from the same class more compact by extracting their discriminative parts. Further, we introduce a varying learning rate to accelerate the model selection process. We also demonstrate how to boost GZSL performance by rectifying the model's outputs so that they become comparable. Experimental results on four benchmarks, including SUN, CUB, AWA2, and aPY, demonstrate the superiority of the proposed method, achieving state-of-the-art performance.
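The abstract does not give implementation details, so the following is only a minimal PyTorch-style sketch of the two generic ingredients it mentions: a center-loss term that pulls same-class embeddings toward a learnable class center, and a calibrated-stacking-style rectification that subtracts a constant from seen-class scores so that seen and unseen outputs become comparable. All names here (CenterLoss, rectify_scores, gamma) are hypothetical illustrations, not the paper's actual code.

```python
import torch
import torch.nn as nn


class CenterLoss(nn.Module):
    """Generic center loss: penalizes the distance between each embedding
    and a learnable center for its class, encouraging compact classes."""

    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        # One learnable center per class (hypothetical initialization).
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # features: (batch, feat_dim); labels: (batch,) integer class indices.
        centers_batch = self.centers[labels]                  # center of each sample's class
        return ((features - centers_batch) ** 2).sum(dim=1).mean()


def rectify_scores(scores: torch.Tensor, seen_mask: torch.Tensor, gamma: float = 1.0) -> torch.Tensor:
    """Calibrated-stacking-style rectification (assumed, not from the paper):
    subtract a calibration constant from seen-class scores so seen and
    unseen class outputs are placed on a comparable scale for GZSL."""
    # scores: (batch, num_classes); seen_mask: boolean mask over classes.
    rectified = scores.clone()
    rectified[:, seen_mask] -= gamma
    return rectified
```

In a typical setup of this kind, the center loss would be added to the standard classification loss with a weighting coefficient, and gamma would be tuned on a validation split; both choices are assumptions here rather than details reported in the abstract.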