With the immersive experience of P4 medicine (predictive, preventive, personalized, and participatory), various medical applications are transforming from periodic in-hospital services into anytime self-monitoring systems, which generate rich personal medical data that must be processed. The emergence of edge computing makes it possible to process such data on site while preserving individual privacy. To analyze the data automatically, machine learning methods have been applied to extract features from sensor readings and then learn over a single feature space containing all extracted features. However, these approaches easily suffer from over-fitting and ignore the differences in physical interpretation between feature groups. In this article, we propose a lightweight multi-stage, multi-view learning approach, called M3E, for processing data on the edge. Each stage of M3E contributes to extracting the desired features from the raw signals and exploits the consistency and complementarity properties of different views to achieve better learning results. To study its performance, we evaluated M3E on a real-world system and two real-world datasets, where prediction accuracy reaches 80.1, 79.11, and 72.63 percent, respectively. Moreover, the experimental results show that our multi-view approach outperforms single-view ones and can easily be extended to other medical cases.
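To make the multi-view idea concrete, below is a minimal sketch, not the authors' M3E implementation: each sensor-derived feature group ("view") gets its own lightweight classifier whose predicted probabilities are fused, compared against a single model trained on the combined feature space. The synthetic dataset, the split into two views, and the averaging fusion rule are all illustrative assumptions.

```python
# Minimal multi-view vs. single-view comparison (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for features extracted from sensor readings;
# on a real edge deployment these would come from the raw signals.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
views = [X[:, :10], X[:, 10:]]  # two hypothetical feature groups ("views")

idx_train, idx_test = train_test_split(np.arange(len(y)), test_size=0.3,
                                       random_state=0)

# Single-view baseline: one model over the combined feature space.
baseline = LogisticRegression(max_iter=1000).fit(X[idx_train], y[idx_train])
print("combined-feature accuracy:", baseline.score(X[idx_test], y[idx_test]))

# Multi-view: one lightweight model per view, fused by averaging probabilities.
view_models = [LogisticRegression(max_iter=1000).fit(v[idx_train], y[idx_train])
               for v in views]
fused_proba = np.mean([m.predict_proba(v[idx_test])
                       for m, v in zip(view_models, views)], axis=0)
fused_pred = fused_proba.argmax(axis=1)
print("multi-view fused accuracy:", (fused_pred == y[idx_test]).mean())
```

Keeping one small model per feature group is also what makes this style of learning attractive on resource-constrained edge devices, since each per-view model is cheaper to train and update than one large model over all features.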
               