Abstract
Transfer learning focuses on building better predictive models by exploiting knowledge gained in previous related tasks, softening the traditional supervised learning assumption of identical train–test distributions. Most transfer learning efforts either revisit the data from the source tasks or transfer knowledge only between specific models. In this paper, a general framework is proposed for transferring knowledge by including a regularization factor based on the structural model similarity between related tasks. The proposed approach is instantiated for different models in regression, classification, ranking and recommender systems, obtaining competitive results in all of them. We also explore high-level concepts in transfer learning such as sparse transfer, partially observable transfer and cross-model transfer.
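To make the regularization idea concrete, below is a minimal Python/NumPy sketch, not the paper's actual formulation: it assumes the model-similarity term can be illustrated as a squared L2 penalty pulling the target model's weights toward weights learned on a source task. The function name fit_with_transfer and the parameters lam and mu are hypothetical, chosen only for illustration.

```python
import numpy as np

def fit_with_transfer(X, y, w_source, lam=1.0, mu=1.0):
    """Ridge-style regression on the target task with a transfer regularizer.

    Minimizes ||Xw - y||^2 + lam * ||w||^2 + mu * ||w - w_source||^2,
    where the last term (an assumed stand-in for the paper's structural
    similarity factor) encourages the target model to stay close to the
    source model. Closed form:
        w = (X^T X + (lam + mu) I)^{-1} (X^T y + mu * w_source)
    """
    d = X.shape[1]
    A = X.T @ X + (lam + mu) * np.eye(d)
    b = X.T @ y + mu * w_source
    return np.linalg.solve(A, b)

# Example: a small target-task sample reuses knowledge from a related source task.
rng = np.random.default_rng(0)
w_source = np.array([1.0, -2.0, 0.5])            # weights learned on the source task
X = rng.normal(size=(20, 3))                     # few target-task examples
y = X @ np.array([1.1, -1.8, 0.4]) + 0.1 * rng.normal(size=20)
w_target = fit_with_transfer(X, y, w_source, lam=0.1, mu=1.0)
print(w_target)                                  # close to w_source, adapted to target data
```

Setting mu = 0 recovers plain ridge regression with no transfer, while a large mu forces the target model toward the source model; this trade-off is what a similarity-based regularizer controls.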