Enhanced Medical Image Fusion Using Adaptive NSCT

IJEECC Front Page

Multimodal medical image fusion, a powerful tool for clinical applications, has developed alongside the advent of various imaging modalities in medical imaging. The main motivation is to capture the most relevant information from the source images in a single output, which plays an important role in medical diagnosis. In this paper, a novel fusion framework is proposed for multimodal medical images based on the non-subsampled contourlet transform (NSCT). The source medical images are first decomposed by the NSCT into low- and high-frequency components. Two different fusion rules, based on phase congruency and directive contrast, are proposed and used to fuse the low- and high-frequency coefficients, respectively. Finally, the fused image is constructed by the inverse NSCT from the composite coefficients.
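The decompose–fuse–reconstruct pipeline described above can be sketched in code. Since NSCT implementations are not available in standard Python libraries, this minimal sketch substitutes a simple blur-based two-band split for the NSCT, and the averaging and maximum-magnitude rules stand in for the phase-congruency and directive-contrast rules; all function names are illustrative, not from the paper.

```python
import numpy as np

def lowpass(img, k=5):
    """Separable box-blur low-pass filter: a crude stand-in for the NSCT lowpass band."""
    kernel = np.ones(k) / k
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, tmp)

def fuse(img_a, img_b):
    """Two-band fusion: average the low bands, keep the larger-magnitude high band."""
    low_a, low_b = lowpass(img_a), lowpass(img_b)
    high_a, high_b = img_a - low_a, img_b - low_b
    # Placeholder for the phase-congruency rule on low-frequency coefficients.
    fused_low = 0.5 * (low_a + low_b)
    # Placeholder for the directive-contrast rule on high-frequency coefficients.
    fused_high = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
    # Recombining the bands plays the role of the inverse transform here.
    return fused_low + fused_high
```

The structure mirrors the framework: a forward multiscale decomposition, separate rules per frequency band, and an inverse transform over the composite coefficients.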
Keywords: Multimodal medical image fusion, non-subsampled contourlet transform, phase congruency, directive contrast.
In recent years, medical imaging has attracted increasing attention due to its critical role in health care. However, individual imaging techniques such as X-ray, computed tomography (CT), magnetic resonance imaging (MRI), and magnetic resonance angiography (MRA) provide limited information: some of it is common across modalities, and some is unique to each. For example, X-ray and CT can depict dense structures such as bones and implants with little distortion, but they cannot detect physiological changes [1]. Similarly, normal and pathological soft tissue is better visualized in MRI images, whereas PET provides better information on blood flow and metabolic activity, albeit with low spatial resolution. As a result, anatomical and functional medical images need to be combined for a comprehensive view.
The salient contributions of the proposed framework over existing methods can be summarized as follows.
• This paper proposes a new image fusion framework for multimodal medical images that operates in the NSCT domain.
• Two different fusion rules are proposed for combining the low- and high-frequency coefficients.
• For fusing the low-frequency coefficients, a phase congruency based model is used. The main benefit of phase congruency is that it selects and combines the contrast- and brightness-invariant representation contained in the low-frequency coefficients.
• In contrast, a new definition of directive contrast in the NSCT domain is proposed and used to combine the high-frequency coefficients. Using directive contrast, the most prominent texture and edge information is selected from the high-frequency coefficients and carried into the fused ones.
• The definition of directive contrast is consolidated by incorporating a visual constant into the SML (sum-modified-Laplacian) based definition of directive contrast, which provides a richer representation of the contrast.
• Further, the proposed scheme is extended to multispectral fusion in color space, which rectifies the undesirable cross-channel artifacts of the HSI color space and produces output with natural spectral features and improved color information.
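The sum-modified-Laplacian (SML) underlying the directive-contrast rule is straightforward to compute. The sketch below, assuming a single high-frequency coefficient map and illustrative function names (the paper's visual constant and per-direction subbands are omitted), computes the modified Laplacian ML(x, y) = |2f(x,y) − f(x−s,y) − f(x+s,y)| + |2f(x,y) − f(x,y−s) − f(x,y+s)| and sums it over a small window, then selects whichever source coefficient has the larger SML:

```python
import numpy as np

def sml(coeff, step=1, window=3):
    """Sum-modified-Laplacian of a coefficient map, summed over a local window."""
    f = np.pad(coeff, step, mode="edge")
    c = f[step:-step, step:-step]  # center pixels (original array)
    ml = (np.abs(2 * c - f[:-2 * step, step:-step] - f[2 * step:, step:-step])
          + np.abs(2 * c - f[step:-step, :-2 * step] - f[step:-step, 2 * step:]))
    # Box-sum the modified Laplacian over a (window x window) neighborhood.
    r = window // 2
    mlp = np.pad(ml, r, mode="edge")
    out = np.zeros_like(ml)
    for dy in range(window):
        for dx in range(window):
            out += mlp[dy:dy + ml.shape[0], dx:dx + ml.shape[1]]
    return out

def fuse_high(h_a, h_b):
    """Directive-contrast-style selection: keep the coefficient with the larger SML."""
    return np.where(sml(h_a) >= sml(h_b), h_a, h_b)
```

A flat region yields SML of zero while a textured region yields a large SML, which is why the rule favors coefficients carrying prominent edge and texture information.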

References
  1. F. Maes, D. Vandermeulen, and P. Suetens, “Medical image registration using mutual information,” Proc. IEEE, vol. 91, no. 10, pp. 1699–1721, Oct. 2003.
  2. G. Bhatnagar, Q. M. J. Wu, and B. Raman, “Real time human visual system based framework for image fusion,” in Proc. Int. Conf. Signal and Image Processing, Trois Rivieres, Quebec, Canada, 2010, pp. 71–78.
  3. A. Cardinali and G. P. Nason, “A statistical multiscale approach to image segmentation and fusion,” in Proc. Int. Conf. Information Fusion, Philadelphia, PA, USA, 2005, pp. 475–482.
  4. P. S. Chavez and A. Y. Kwarteng, “Extracting spectral contrast in Landsat thematic mapper image data using selective principal component analysis,” Photogrammetric Eng. Remote Sens., vol. 55, pp. 339–348, 1989.
  5. A. Toet, L. V. Ruyven, and J. Velaton, “Merging thermal and visual images by a contrast pyramid,” Opt. Eng., vol. 28, no. 7, pp. 789–792, 1989.
  6. V. S. Petrovic and C. S. Xydeas, “Gradient-based multiresolution image fusion,” IEEE Trans. Image Process., vol. 13, no. 2, pp. 228–237, Feb. 2004.
  7. H. Li, B. S. Manjunath, and S. K. Mitra, “Multisensor image fusion using the wavelet transform,” Graph Models Image Process., vol. 57, no. 3, pp. 235–245, 1995.
  8. A. Toet, “Hierarchical image fusion,” Mach. Vision Appl., vol. 3, no. 1, pp. 1–11, 1990.
  9. X. Qu, J. Yan, H. Xiao, and Z. Zhu, “Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in non-subsampled contourlet transform domain,” Acta Automatica Sinica, vol. 34, no. 12, pp. 1508–1514, 2008.
  10. G. Bhatnagar and B. Raman, “A new image fusion technique based on directive contrast,” Electron. Lett. Comput. Vision Image Anal., vol. 8, no. 2, pp. 18–38, 2009.
  11. Q. Zhang and B. L. Guo, “Multifocus image fusion using the non-subsampled contourlet transform,” Signal Process., vol. 89, no. 7, pp. 1334–1346, 2009.
  12. Y. Chai, H. Li, and X. Zhang, “Multifocus image fusion based on features contrast of multiscale products in non-subsampled contourlet transform domain,” Optik, vol. 123, pp. 569–581, 2012.
  13. G. Bhatnagar and Q. M. J. Wu, “An image fusion framework based on human visual system in framelet domain,” Int. J. Wavelets, Multires., Inf. Process., vol. 10, no. 1, pp. 12500021–30, 2012.
  14. S. Yang, M. Wang, L. Jiao, R. Wu, and Z. Wang, “Image fusion based on a new contourlet packet,” Inf. Fusion, vol. 11, no. 2, pp. 78–84, 2010.