Multi-modal image co-registration via optimizing mutual information (MI) is based on the assumption that the intensity distributions of multi-modal images follow a consistent relationship. However, images with a substantial difference in appearance violate this assumption, so MI computed directly from image intensity alone may be inadequate to drive similarity-based co-registration. To address this issue, we introduce a novel approach for multi-modal co-registration called Multi-scale Spectral Embedding Registration (MSERg). MSERg constructs multi-scale spectral embedding (SE) representations from multi-modal images via texture feature extraction, scale selection, independent component analysis (ICA) and SE, yielding orthogonal representations that decrease the dissimilarity between the fixed and moving images and thereby facilitate better co-registration. To validate the MSERg method, we aligned 45 pairs of in vivo prostate MRI and corresponding ex vivo histopathology images. The dataset was split into a learning set and a testing set. On the learning set, length scales of 5 × 5, 7 × 7 and 17 × 17 were selected. On the independent testing set, we compared MSERg with intensity-based registration, multi-attribute combined mutual information (MACMI) registration and scale-invariant feature transform (SIFT) flow registration. Our results suggest that the multi-scale SE representations generated by MSERg are more appropriate for radiology-pathology co-registration than the compared approaches.
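The representation-building step described above (texture features at several length scales, ICA decorrelation, then spectral embedding) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the specific texture features (local mean and variance here), the component counts and the scikit-learn/scipy building blocks are all assumptions chosen for brevity.

```python
# Hypothetical sketch of a multi-scale SE representation, loosely following
# the pipeline described in the abstract: texture features -> ICA -> SE.
# The feature choices and parameters below are illustrative assumptions.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.decomposition import FastICA
from sklearn.manifold import SpectralEmbedding

def multiscale_se_representation(image, scales=(5, 7, 17),
                                 n_ica=4, n_se=3, seed=0):
    image = np.asarray(image, dtype=float)
    h, w = image.shape
    feats = []
    for s in scales:  # per-pixel texture features at each length scale
        mean = uniform_filter(image, size=s)               # local mean
        sq_mean = uniform_filter(image ** 2, size=s)
        var = np.clip(sq_mean - mean ** 2, 0.0, None)      # local variance
        feats.extend([mean, var])
    # Stack into a (pixels x features) matrix.
    X = np.stack([f.ravel() for f in feats], axis=1)
    # ICA to obtain statistically independent feature channels.
    X_ica = FastICA(n_components=n_ica, random_state=seed).fit_transform(X)
    # Spectral embedding of pixels into a low-dimensional representation.
    emb = SpectralEmbedding(n_components=n_se,
                            random_state=seed).fit_transform(X_ica)
    return emb.reshape(h, w, n_se)
```

The resulting per-pixel embedding image could then replace raw intensities inside an MI-driven registration loop. Note that spectral embedding over all pixels is expensive, so in practice one would subsample pixels or restrict to small patches.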