Feature generating networks face an important issue: the distribution of the generated features often fails to match that of the real data. This inconsistency further degrades model performance because, in zero-shot learning (ZSL), the seen classes that supply training samples are disjoint from the unseen classes that supply testing samples. In generalized zero-shot learning (GZSL), testing samples come from both seen and unseen classes, which is closer to the practical situation. Consequently, most feature generating networks struggle to achieve satisfactory performance on the challenging GZSL task by adversarially learning the distribution of semantic classes. To alleviate the negative influence of this inconsistency on ZSL and GZSL, transfer feature generating networks with semantic classes structure (TFGNSCS) are proposed to improve the performance of both tasks. TFGNSCS not only accounts for the semantic structural relationship between seen and unseen classes, but also learns the difference in generated features by transferring classification-model information from seen to unseen classes within the network. The proposed method integrates a transfer loss, a classification loss, and a Wasserstein distance loss to generate sufficient CNN features, on which softmax classifiers are trained for ZSL and GZSL. Experiments demonstrate that TFGNSCS outperforms state-of-the-art models on four challenging datasets in GZSL: CUB, FLO, SUN, and AwA.
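The combined objective described above can be sketched as a weighted sum of the three loss terms. This is a minimal illustration, not the authors' implementation: the weight values, function names, and the placeholder transfer term are all assumptions, and the Wasserstein term follows the standard WGAN generator form used by feature-generating ZSL methods.

```python
import numpy as np

# Hypothetical loss weights; the actual values used in TFGNSCS are not given here.
LAMBDA_CLS = 0.01  # weight on the classification loss
LAMBDA_TR = 0.1    # weight on the seen-to-unseen transfer loss

def total_generator_loss(critic_fake, cls_log_prob, transfer_term):
    """Sketch of a combined generator objective for a feature generating network.

    critic_fake   -- critic scores on generated features (array)
    cls_log_prob  -- log-probabilities of the correct class for generated
                     features under a pretrained classifier (array)
    transfer_term -- scalar placeholder for the transfer regularizer that
                     moves classification-model information from seen to
                     unseen classes (its exact form is not specified here)
    """
    l_wgan = -np.mean(critic_fake)   # WGAN generator term: fool the critic
    l_cls = -np.mean(cls_log_prob)   # generated features should be classifiable
    l_tr = transfer_term             # transfer regularizer (placeholder)
    return l_wgan + LAMBDA_CLS * l_cls + LAMBDA_TR * l_tr
```

The generated features that minimize such an objective are then used as training data for the softmax classifiers in ZSL and GZSL.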