The need for computational models that can incorporate imaging data with non-imaging data while investigating inter-subject associations arises in the task of population-based disease analysis. Although off-the-shelf deep convolutional neural networks have empowered representation learning from imaging data, incorporating complementary data of different modalities in a unified model to improve diagnostic quality remains challenging. In this work, we propose a generalizable graph-convolutional framework for population-based disease prediction on multi-modal medical data. Unlike previous methods that construct a static affinity population graph in a hand-crafted manner, the proposed framework automatically learns to build a population graph with variational edges, which we show can be optimized jointly with spectral graph convolutional networks. In addition, to estimate the predictive uncertainty related to the constructed graph, we propose Monte-Carlo edge dropout uncertainty estimation. Experimental results on four multi-modal datasets demonstrate that the proposed method substantially improves predictive accuracy for Autism Spectrum Disorder, Alzheimer's disease, and ocular diseases. A thorough ablation study with in-depth discussion evaluates the effectiveness of each component and the algorithmic choices of the proposed method. The results indicate the potential and extensibility of the proposed framework in leveraging multi-modal data for population-based disease prediction.
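To make the uncertainty-estimation idea concrete, below is a minimal sketch of Monte-Carlo edge dropout at inference time: the learned population graph's edges are randomly dropped across repeated forward passes, and the spread of the resulting predictions serves as a per-subject, graph-related uncertainty signal. This is an illustrative reconstruction, not the authors' released code; it assumes a PyTorch-style spectral GCN with a `(features, edge_index)` forward signature, and the names `mc_edge_dropout_predict`, `n_samples`, and `drop_prob` are hypothetical.

```python
import torch
import torch.nn.functional as F

def mc_edge_dropout_predict(model, x, edge_index, n_samples=20, drop_prob=0.1):
    """Monte-Carlo edge dropout sketch (assumed interface, not the paper's code).

    Repeats inference with randomly dropped edges of the population graph
    and returns the mean class probabilities plus a per-subject predictive
    variance as the graph-related uncertainty estimate.
    """
    model.eval()
    prob_samples = []
    with torch.no_grad():
        for _ in range(n_samples):
            # Keep each edge independently with probability 1 - drop_prob.
            keep = torch.rand(edge_index.size(1)) >= drop_prob
            sampled_edges = edge_index[:, keep]
            logits = model(x, sampled_edges)
            prob_samples.append(F.softmax(logits, dim=-1))
    probs = torch.stack(prob_samples)        # (n_samples, n_subjects, n_classes)
    mean_prob = probs.mean(dim=0)            # averaged prediction over samples
    uncertainty = probs.var(dim=0).sum(-1)   # total variance per subject
    return mean_prob, uncertainty
```

In this reading, high variance for a subject indicates that the prediction depends heavily on which inter-subject edges are present, flagging cases where the constructed graph, rather than the subject's own features, is driving the diagnosis.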
               