
Curiosity-Driven Class-Incremental Learning via Adaptive Sample Selection

In many real-world applications, modern artificial intelligence systems must learn classes incrementally, yet they suffer from catastrophic forgetting: because knowledge of past data is missing, performance degrades substantially. Recent methods often use knowledge distillation and bias correction to mitigate the forgetting caused by cognitive bias. However, since these methods learn from all samples indiscriminately, it is hard for the model to learn what it truly needs from the data stream to balance new and old knowledge, leading to inevitable forgetting. Instead of treating every sample alike, the model should automatically learn from the samples it is curious about. To tackle this problem, we propose a curiosity-driven class-incremental learning approach with adaptive sample selection, which learns a more generalized model with fewer ineffective updates. Specifically, our method quantifies the model’s curiosity about each sample via two properties: uncertainty and novelty. The uncertainty property lets the model learn selectively from informative samples during training, which benefits the classification decision boundary. To cope with imbalanced data, the novelty property selectively optimizes the model on dissimilar samples, endowing it with more robustness and less cognitive bias. Our method substantially reduces catastrophic forgetting and can be flexibly combined with other techniques. Extensive experiments and in-depth analysis on the CIFAR-100, Tiny-ImageNet, and Caltech-101 datasets show that our approach outperforms competing class-incremental learning methods in preventing catastrophic forgetting.
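To make the selection idea concrete, the sketch below illustrates one plausible reading of curiosity-driven sample selection. It is not the authors' implementation: the entropy-based uncertainty score, the cosine-distance novelty score against stored old-class exemplar features, and all names (uncertainty, novelty, select_curious, alpha, tau) are assumptions made for illustration.

```python
# A minimal, illustrative sketch of curiosity-driven sample selection.
# NOT the paper's actual method: the entropy-based uncertainty term, the
# cosine-distance novelty term, and all names here are assumptions.
import torch
import torch.nn.functional as F


def uncertainty(logits: torch.Tensor) -> torch.Tensor:
    """Predictive entropy per sample; higher = closer to the decision boundary."""
    probs = F.softmax(logits, dim=1)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)


def novelty(feats: torch.Tensor, exemplar_feats: torch.Tensor) -> torch.Tensor:
    """Dissimilarity to stored old-class exemplars (1 - max cosine similarity)."""
    f = F.normalize(feats, dim=1)
    e = F.normalize(exemplar_feats, dim=1)
    return 1.0 - (f @ e.t()).max(dim=1).values


def select_curious(logits, feats, exemplar_feats, alpha=0.5, tau=0.6):
    """Keep samples whose combined curiosity score exceeds a threshold tau."""
    max_entropy = torch.log(torch.tensor(float(logits.size(1))))
    score = (alpha * uncertainty(logits) / max_entropy
             + (1.0 - alpha) * novelty(feats, exemplar_feats))
    return score > tau  # boolean mask over the batch


if __name__ == "__main__":
    torch.manual_seed(0)
    logits = torch.randn(8, 100)      # e.g. CIFAR-100 classifier outputs
    feats = torch.randn(8, 64)        # backbone features for the batch
    exemplars = torch.randn(20, 64)   # features of stored old-class exemplars
    mask = select_curious(logits, feats, exemplars)
    print("selected", int(mask.sum()), "of", mask.numel(), "samples")
```

In a training loop, such a mask would be applied to the batch before computing the incremental-learning loss, so that parameter updates come only from samples the model is "curious" about; how the actual paper weights or thresholds the two properties is not specified in the abstract.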

Keywords: adaptive sample selection; curiosity-driven learning; class-incremental learning; catastrophic forgetting

Journal Title: IEEE Transactions on Circuits and Systems for Video Technology
Year Published: 2022
