In recent years, relying on training with thousands of labeled samples, deep learning has achieved remarkable success in the field of computer vision. In practice, however, annotating samples is a time-consuming and laborious task, so obtaining thousands of labeled examples is often impractical. Humans can learn new concepts from only a handful of examples, which makes it easier for them to adapt to new environments. Inspired by this ability, few-shot learning aims to train a classifier that can recognize new classes given only a few labeled samples of those classes. In this paper, we propose a new framework called Adaptive Learning Knowledge Networks (ALKN) for few-shot learning. ALKN learns the knowledge of different classes from the features of labeled samples and stores the learned knowledge in a memory that is dynamically updated during the learning process. We define difficult knowledge and easy knowledge for each class so that, at inference time, our model can leverage the memory of learned knowledge holistically and more efficiently. Considering both standard few-shot learning and semi-supervised few-shot learning, we design different update strategies for the memory of learned knowledge. Extensive experiments are conducted on three datasets: Omniglot, Mini-Imagenet, and CUB. Compared with most existing approaches, ALKN achieves superior results on these benchmarks.
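The abstract does not specify ALKN's memory structure or update rules, but the general idea of storing per-class knowledge from labeled features and updating it dynamically can be illustrated with a minimal sketch. The sketch below is a toy prototype-style memory, not the paper's method: the class `KnowledgeMemory`, the exponential-moving-average update, and the cosine-similarity classification rule are all illustrative assumptions.

```python
import numpy as np

class KnowledgeMemory:
    """Toy per-class feature memory (illustrative, NOT the ALKN design):
    stores one prototype vector per class and updates it as new labeled
    support features arrive."""

    def __init__(self, momentum=0.5):
        self.momentum = momentum   # blend factor for dynamic updates (assumed rule)
        self.prototypes = {}       # class label -> stored feature vector

    def write(self, label, feature):
        """Add or dynamically update the stored knowledge for one class."""
        feature = np.asarray(feature, dtype=float)
        if label not in self.prototypes:
            self.prototypes[label] = feature
        else:
            old = self.prototypes[label]
            # Exponential moving average: a stand-in for the paper's
            # (unspecified) memory-update strategy.
            self.prototypes[label] = self.momentum * old + (1 - self.momentum) * feature

    def classify(self, feature):
        """Return the class whose stored prototype is most similar (cosine)."""
        feature = np.asarray(feature, dtype=float)

        def cos(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

        return max(self.prototypes, key=lambda c: cos(self.prototypes[c], feature))
```

In a few-shot episode, one would write the few labeled support features into the memory and then classify query features against it; a semi-supervised variant could additionally write confidently pseudo-labeled features, mirroring the abstract's distinction between update strategies.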
               