
Structural Representation: Reducing Multi-Modal Image Registration to Mono-Modal Problem


Registration of multi-modal images has been a challenging task
due to the complex intensity relationship between images. The
standard multi-modal approach tends to use sophisticated similarity
measures, such as mutual information, to assess the accuracy
of the alignment. Employing such measures increases computational
time and complexity, and makes it difficult for the optimization
process to converge. The presented registration method is based
on structural representations of images captured from different
modalities, converting the multi-modal problem into a mono-modal
one. Two different representation methods are presented. One is
based on a combination of phase congruency and gradient information
of the input images, and the other utilizes a modified version of
entropy images computed in a patch-based manner. Sample results
are illustrated from experiments performed on brain images from
different modalities.
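The patch-based entropy representation mentioned above can be sketched as follows: each pixel is replaced by the Shannon entropy of the intensity histogram in its local neighborhood, yielding a structural image that is comparable across modalities. This is a minimal illustrative sketch, not the paper's exact method; the patch size and bin count are assumed parameters chosen for demonstration.

```python
import numpy as np

def entropy_image(img, patch=9, bins=32):
    """Map each pixel to the Shannon entropy of its local intensity
    histogram (patch and bins are illustrative, assumed parameters)."""
    half = patch // 2
    padded = np.pad(img, half, mode="reflect")
    # Quantize intensities into a fixed number of histogram bins.
    lo, hi = padded.min(), padded.max()
    q = np.floor((padded - lo) / (hi - lo + 1e-12) * (bins - 1)).astype(int)
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = q[i:i + patch, j:j + patch]
            counts = np.bincount(window.ravel(), minlength=bins)
            p = counts / counts.sum()
            p = p[p > 0]
            # Shannon entropy of the local histogram, in bits.
            out[i, j] = -np.sum(p * np.log2(p))
    return out

rng = np.random.default_rng(0)
img = rng.random((32, 32))
ent = entropy_image(img)
```

Because the entropy map depends only on local intensity statistics rather than absolute intensities, two such maps from different modalities can then be aligned with a simple mono-modal similarity measure such as sum of squared differences.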