In real-world applications, we often encounter multi-view learning tasks in which we must learn from, or make decisions based on, multiple sources of data. Multi-view representation learning, which learns a unified representation from multiple data sources, is a key preliminary task in multi-view learning and plays a significant role in real-world applications. Accordingly, improving the performance of multi-view representation learning is an important issue. In this work, inspired by the human collective intelligence exhibited in group decision making, we introduce the concept of view communication into multi-view representation learning. Furthermore, by simulating the human communication mechanism, we propose a novel multi-view representation learning approach that supports multi-round view communication. Each view in our approach can thus exploit complementary information from the other views to help model its own representation, achieving mutual help between views. Extensive experimental results on six datasets from three significant fields show that our approach substantially improves the average classification accuracy, by 4.536% in the medicine and bioinformatics fields and by 4.115% in the machine learning field.
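The sketch below is a minimal, illustrative take on the idea described above, not the authors' implementation: each view has its own encoder, and over several communication rounds each view's representation is refined using a message aggregated from the other views before the refined representations are fused into a unified one. The class name `ViewCommunicationNet`, the choice of a GRU-style update, mean aggregation of the other views, and the parameters `hidden_dim` and `num_rounds` are all assumptions made for illustration.

```python
# Hypothetical sketch of multi-round "view communication" (assumed design,
# not the paper's actual architecture).
import torch
import torch.nn as nn


class ViewCommunicationNet(nn.Module):
    def __init__(self, view_dims, hidden_dim=128, num_rounds=3):
        super().__init__()
        # One encoder per view maps raw view features into a shared hidden space.
        self.encoders = nn.ModuleList(
            [nn.Linear(d, hidden_dim) for d in view_dims]
        )
        # "Message" layer: summarizes the other views' representations.
        self.message = nn.Linear(hidden_dim, hidden_dim)
        # "Update" cell: refines a view's representation given the received message.
        self.update = nn.GRUCell(hidden_dim, hidden_dim)
        self.num_rounds = num_rounds

    def forward(self, views):
        # views: list of tensors, one per view, each of shape (batch, view_dim).
        h = [torch.relu(enc(x)) for enc, x in zip(self.encoders, views)]
        for _ in range(self.num_rounds):  # multi-round communication
            new_h = []
            for i in range(len(h)):
                # Aggregate complementary information from all other views.
                others = torch.stack([h[j] for j in range(len(h)) if j != i])
                msg = torch.relu(self.message(others.mean(dim=0)))
                # Each view updates its own representation using the message.
                new_h.append(self.update(msg, h[i]))
            h = new_h
        # Unified multi-view representation: concatenation of the refined views.
        return torch.cat(h, dim=-1)


# Example usage: three views with different feature dimensionalities.
model = ViewCommunicationNet(view_dims=[20, 50, 30])
batch = [torch.randn(8, d) for d in (20, 50, 30)]
unified = model(batch)  # shape: (8, 3 * 128)
```

The unified representation produced this way could then feed a downstream classifier, which matches the abstract's framing of representation learning as a preliminary task for multi-view learning.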