
Handling catastrophic forgetting using cross-domain order in incremental deep learning


Abstract. In the present era of big data applications, incremental learning has emerged as a prominent area of research in which never-ending streams of tasks from different application domains arrive over time, and learning models focus mainly on newly arriving tasks while forgetting the knowledge acquired from old tasks. In this context, when a deep neural network (DNN) performs a sequence of learning tasks, it usually forgets the previously learned ones. This phenomenon is known as catastrophic forgetting (CF), and existing state-of-the-art deep learning approaches are unable to handle it. Accordingly, many state-of-the-art DNN-based solutions have been proposed to alleviate the problem and imitate the human brain in a more natural way. During training, deep learning algorithms are subjected to a random sequence of diversified datasets (also referred to as cross-domain datasets) that typically belong to multiple domains and vary in feature representation, distribution, density, and so on. In such cases, CF can become a serious issue because the model tends to forget previously learned features, degrading overall performance. Because there has been very little research in this area, our study proposes a more robust incremental deep learning algorithm to learn the desired tasks incrementally. To overcome this situation, proper sequencing of cross-domain datasets is important so that related datasets are kept together, enhancing the learnability of the model. Our research proposes a cross-domain order approach built on the hard attention to the task (HAT) methodology, which uses an attention-gated mechanism to efficiently learn task-specific features. Extensive experiments have been performed, and the performance of the proposed approach under changing cross-domain orders is compared with baseline models on the MNIST, P-MNIST, CIFAR-10, CIFAR-100, and SVHN datasets. The experimental results show that learning through cross-domain orders yields better incremental learning.
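The attention-gated mechanism the abstract refers to can be summarized in a few lines. The sketch below is a minimal, illustrative take on it, assuming the HAT formulation of Serrà et al. (2018): each layer learns a per-task embedding that is passed through a scaled sigmoid to produce a near-binary gate over that layer's units, so each task activates its own subset of units and leaves the rest free for other tasks. The names (HATLayer, n_tasks, smax) and the single-layer setup are hypothetical, not taken from the paper under discussion.

```python
import torch
import torch.nn as nn

class HATLayer(nn.Module):
    """One fully connected layer with a per-task hard-attention gate.

    Assumed HAT-style gating: a^t = sigmoid(s * e^t), where e^t is a
    learnable task embedding for this layer and s is a positive scale
    that is annealed toward a large value so the gate approaches a
    binary 0/1 mask during training.
    """

    def __init__(self, in_features, out_features, n_tasks):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)
        # One embedding row per task: e^t for this layer.
        self.task_embedding = nn.Embedding(n_tasks, out_features)

    def forward(self, x, task_id, s):
        # Gate in [0, 1]; a large s pushes it toward a hard mask.
        gate = torch.sigmoid(s * self.task_embedding(task_id))
        return torch.relu(self.fc(x)) * gate

# Usage sketch: s is typically annealed from 1/smax up to smax within
# each epoch so the mask hardens as training progresses.
layer = HATLayer(784, 256, n_tasks=5)
x = torch.randn(32, 784)
task = torch.tensor([0])            # current task index
out = layer(x, task, s=400.0)       # smax = 400 as in the HAT paper
print(out.shape)                    # torch.Size([32, 256])
```

At test time the task identifier selects the corresponding gate, so units masked out for a task are protected from gradient updates on later tasks, which is what mitigates catastrophic forgetting.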

Keywords: catastrophic forgetting; cross-domain; deep learning

Journal Title: Journal of Electronic Imaging
Year Published: 2023
