Multimodal medical imaging plays a crucial role in the diagnosis and characterization of lesions. However, challenges remain in lesion characterization based on multimodal feature fusion. First, existing fusion methods have not thoroughly studied the relative importance of the modalities used for characterization. In addition, multimodal feature fusion typically cannot quantify the contribution of each modality to inform critical decision-making. In this study, we propose an adaptive multimodal fusion method with an attention-guided deep supervision net for grading hepatocellular carcinoma (HCC). Specifically, our proposed framework comprises two modules: attention-based adaptive feature fusion and an attention-guided deep supervision net. The former applies an attention mechanism at the feature fusion level to generate weights for adaptive feature concatenation, balancing the importance of features across modalities. The latter uses the weights generated by the attention mechanism as the coefficients of the per-modality losses, balancing the contribution of each modality to the total loss function. Experimental results on grading clinical HCC with contrast-enhanced MR demonstrated the effectiveness of the proposed method, with a significant performance improvement over existing fusion methods. In addition, the attention weight coefficients produced during multimodal fusion proved valuable for clinical interpretation.
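To make the two modules concrete, the following is a minimal sketch, not the authors' implementation: a per-sample attention head produces one weight per modality, those weights scale the modality features before concatenation (attention-based adaptive fusion), and the same weights serve as coefficients of auxiliary per-modality losses (attention-guided deep supervision). The backbone design, two-modality setup, and all names (`AttentionFusionNet`, `total_loss`, etc.) are illustrative assumptions.

```python
# A hedged sketch of attention-weighted fusion + attention-guided deep
# supervision; module choices and names are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusionNet(nn.Module):
    def __init__(self, num_modalities=2, feat_dim=256, num_classes=2):
        super().__init__()
        # One feature extractor per modality (a toy backbone; the real
        # backbone is an assumption not specified by the abstract).
        self.backbones = nn.ModuleList(
            [nn.Sequential(nn.Conv2d(1, feat_dim, 3, padding=1),
                           nn.ReLU(),
                           nn.AdaptiveAvgPool2d(1),
                           nn.Flatten())
             for _ in range(num_modalities)]
        )
        # Attention head: maps concatenated features to one weight per modality.
        self.attn = nn.Linear(num_modalities * feat_dim, num_modalities)
        # Grading head on the fused feature, plus one auxiliary
        # (deep-supervision) head per modality.
        self.fused_head = nn.Linear(num_modalities * feat_dim, num_classes)
        self.aux_heads = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(num_modalities)]
        )

    def forward(self, inputs):
        # inputs: list of per-modality tensors, each of shape (B, 1, H, W).
        feats = [bb(x) for bb, x in zip(self.backbones, inputs)]
        weights = torch.softmax(self.attn(torch.cat(feats, dim=1)), dim=1)  # (B, M)
        # Adaptive concatenation: scale each modality's feature by its weight.
        fused = torch.cat(
            [w.unsqueeze(1) * f for w, f in zip(weights.unbind(dim=1), feats)],
            dim=1)
        aux_logits = [h(f) for h, f in zip(self.aux_heads, feats)]
        return self.fused_head(fused), aux_logits, weights

def total_loss(fused_logits, aux_logits, weights, target):
    # Attention-guided deep supervision: the attention weights double as
    # per-modality loss coefficients (batch-averaging them is an assumption).
    loss = F.cross_entropy(fused_logits, target)
    coeffs = weights.mean(dim=0)  # (M,)
    for c, logits in zip(coeffs, aux_logits):
        loss = loss + c * F.cross_entropy(logits, target)
    return loss
```

In this reading, the learned `weights` serve double duty: they steer the fusion itself and, because they gate each modality's loss term, they expose a per-modality contribution score that supports the clinical interpretability claimed in the abstract.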