Multi-modal medical image fusion plays an important role in clinical diagnosis and serves as an assistive tool for clinicians. In this paper, a computed tomography-magnetic resonance (CT-MR) image fusion model is proposed using an optimized bio-inspired spiking feedforward neural network in different decomposition domains. First, the source images are decomposed into base (low-frequency) and detail (high-frequency) layers. The low-frequency subbands are fused using texture energy measures, which capture local energy, contrast, and fine edges in the fused image. The high-frequency coefficients are fused using firing maps obtained from a pixel-activated neural model whose parameters are optimized, individually, with three techniques: differential evolution, cuckoo search, and gray wolf optimization. In the optimization model, the fitness function is computed from the edge index of the resulting fused image, which helps extract and preserve the sharp edges present in the source CT and MR images. To validate the fusion performance, a detailed comparative analysis of the proposed and state-of-the-art methods is presented in terms of quantitative and qualitative measures as well as computational complexity. Experimental results show that the proposed method produces fused images of significantly better visual quality while also outperforming the existing methods.
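The abstract only outlines the pipeline, so the following is a minimal, hedged sketch of a two-scale base/detail fusion in that spirit, not the paper's actual method. Assumptions introduced here: a Gaussian low-pass decomposition, local-variance "texture energy" weights for the base layer, and a simplified pulse-coupled firing map standing in for the paper's optimized spiking feedforward model; all function names and parameter values are illustrative.

```python
# Hedged sketch of a two-scale CT-MR fusion pipeline (assumptions noted above).
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter


def decompose(img, sigma=5.0):
    """Split an image into a base (low-frequency) and detail (high-frequency) layer."""
    base = gaussian_filter(img, sigma)
    return base, img - base


def texture_energy(layer, size=7):
    """Local energy (windowed variance), used here as a base-layer fusion weight."""
    mean = uniform_filter(layer, size)
    return uniform_filter(layer ** 2, size) - mean ** 2


def firing_map(coeff, beta=0.2, alpha_theta=0.2, v_theta=20.0, iters=50):
    """Cumulative firings of a simplified pulse-coupled neuron layer driven by |coeff|."""
    S = np.abs(coeff)
    S = S / (S.max() + 1e-12)
    Y = np.zeros_like(S)            # output pulses
    theta = np.ones_like(S)         # dynamic threshold
    fires = np.zeros_like(S)
    for _ in range(iters):
        L = uniform_filter(Y, 3)                    # linking from neighbouring pulses
        U = S * (1.0 + beta * L)                    # internal activity
        Y = (U > theta).astype(S.dtype)             # fire where activity exceeds threshold
        theta = np.exp(-alpha_theta) * theta + v_theta * Y
        fires += Y
    return fires


def fuse(ct, mr, sigma=5.0):
    """Fuse two registered, same-size CT and MR slices given as float arrays in [0, 1]."""
    base_ct, det_ct = decompose(ct, sigma)
    base_mr, det_mr = decompose(mr, sigma)

    # Base layer: weight each source by its local texture energy.
    e_ct, e_mr = texture_energy(base_ct), texture_energy(base_mr)
    w = e_ct / (e_ct + e_mr + 1e-12)
    fused_base = w * base_ct + (1.0 - w) * base_mr

    # Detail layer: keep the coefficient whose firing map is stronger.
    mask = firing_map(det_ct) >= firing_map(det_mr)
    fused_detail = np.where(mask, det_ct, det_mr)

    return np.clip(fused_base + fused_detail, 0.0, 1.0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ct = rng.random((256, 256))     # stand-ins for registered CT and MR slices
    mr = rng.random((256, 256))
    print(fuse(ct, mr).shape)       # (256, 256)
```

In the paper, the firing-map parameters (analogous to beta, alpha_theta, v_theta, and the iteration count above) are tuned per technique by differential evolution, cuckoo search, or gray wolf optimization against an edge-index-based fitness; the fixed values here are placeholders used only to keep the sketch self-contained.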
               