Abstract Transfer learning, which applies knowledge from related but different source domains to improve learning in the target domain, has attracted much attention in recent years. Many transfer learning methods have been proposed in the literature, and many of them concentrate on selecting related source(-domain) instances or features to boost learning. However, in each source instance, possibly only some of the features are related or helpful for transfer. That is, when a source instance is selected for its relatedness, its unrelated feature knowledge is also introduced; conversely, related feature knowledge may be discarded when an unrelated source instance is dropped. As a result, these methods ignore the partially related/unrelated knowledge within each source instance. In this paper, we attempt to discover such partially related “instance-feature” knowledge for transfer, and propose a new transfer learning method with partial related “instance-feature” knowledge (PRIF for short). Specifically, the partial “instance-feature” structure is first discovered by co-clustering over both instances and features; then the source instances are reconstructed using both the related “instance-feature” knowledge and the related target(-domain) instances, so that the source instances become more related to the target ones. Finally, transfer learning is performed with these newly constructed source instances. Empirical results on several real-world datasets demonstrate the effectiveness of PRIF.
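The first two steps of the pipeline above can be sketched in code. This is only an illustrative approximation, not the authors' PRIF algorithm: it uses scikit-learn's `SpectralCoclustering` as a generic co-clustering step, and the random data, cluster count, and masking rule for "reconstruction" are all hypothetical stand-ins for the paper's actual reconstruction procedure.

```python
import numpy as np
from sklearn.cluster import SpectralCoclustering

rng = np.random.default_rng(0)
# Hypothetical source-domain data: 40 instances x 12 features.
X_src = rng.random((40, 12))

# Step 1 (sketch): co-cluster instances and features jointly, so each
# instance and each feature gets a block label. Block co-membership is
# a stand-in for the partial "instance-feature" relatedness structure.
model = SpectralCoclustering(n_clusters=3, random_state=0)
model.fit(X_src)
row_labels = model.row_labels_      # cluster id per source instance
col_labels = model.column_labels_   # cluster id per feature

# Step 2 (sketch): for each instance, keep only the features whose
# cluster matches the instance's cluster and zero out the rest --
# a crude proxy for reconstructing source instances from their
# related "instance-feature" parts.
related_mask = row_labels[:, None] == col_labels[None, :]
X_src_reconstructed = np.where(related_mask, X_src, 0.0)

print(X_src_reconstructed.shape)  # (40, 12)
```

In the actual method, step 2 would also draw on related target-domain instances rather than simple zeroing, and the reconstructed source instances would then feed a downstream transfer learner.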