International Journal of Scientific & Engineering Research, Volume 5, Issue 2, February-2014

ISSN 2229-5518

Survey on Different Image Fusion Techniques

Hari Om Shankar Mishra, Smriti Bhatnagar

Hari Om Shankar Mishra is currently pursuing a master's degree in Electronics and Communication Engineering at Jaypee Institute of Information Technology, Deemed University, Noida, India. E-mail: hariommishra62@gmail.com
Smriti Bhatnagar is Assistant Professor in the ECE Department of Jaypee Institute of Information Technology, Deemed University, Noida, India. E-mail: bhatnagar_smriti@yahoo.com

Abstract- The term fusion refers, in general, to the extraction of information acquired in several domains. The objective of image fusion is to combine information from multiple images of the same scene into a single image that retains the important and required features of each original image. The main task of image fusion is to integrate complementary information from multiple images into a single image. The resulting fused image is more informative and complete than any of the input images and is more suitable for human visual and machine perception. Image fusion techniques can improve image quality and extend the range of applications of the image data. The purpose of this paper is to present an overview of different image fusion techniques, such as primitive fusion (averaging method, select maximum, select minimum), discrete wavelet transform based fusion, and principal component analysis based fusion [1].

Keywords: Image Fusion, Fusion Methods, Discrete Wavelet Transform (DWT), Root Mean Square Error (RMSE), Peak Signal to Noise Ratio (PSNR), Principal Component Analysis (PCA).

I. INTRODUCTION

Image fusion is the process of combining information from multiple images of the same scene. The objective of image fusion is to retain the most desirable characteristics of each image, so that the new image contains a more accurate description of the scene than any of the individual images and is more suitable for human visual and machine perception. Fusion also reduces storage cost, since only the single fused image is stored instead of multiple images. In medical image fusion, the fused image provides additional clinical information that is not apparent in the separate images; a single instrument often cannot provide such information, either by design or because of observational constraints, and image fusion is one possible solution. Image fusion techniques are used in navigation guidance, object detection and recognition, medical diagnosis (e.g. CT, MRI, MRA, PET) [1], [2], satellite imaging for remote sensing, computer vision and robotics, and military and civilian surveillance.

II. IMAGE FUSION ALGORITHMS

Any image fusion algorithm must satisfy two main requirements. First, it must identify the most significant features in the input images and transfer them without loss of detail into the fused image. Second, it must not introduce any inconsistencies or artifacts that would distract the human observer [4].
Image fusion methods can be broadly classified into two groups: spatial domain methods and frequency (transform) domain methods.

1) SPATIAL DOMAIN BASED FUSION METHOD

In spatial domain techniques, we deal directly with the image pixels: the pixel values are manipulated to achieve the desired result. Spatial domain based fusion includes the following methods:
A) Intensity-hue-saturation transform based fusion
B) Principal component analysis based fusion
C) Averaging method
D) Select maximum
E) Select minimum

A) INTENSITY-HUE-SATURATION TRANSFORM BASED FUSION

Hue (H) refers to the average wavelength of the light contributing to the colour, intensity (I) to the total brightness of the colour, and saturation (S) to the purity of the colour. The IHS transform fusion isolates the spatial (I) and spectral (H, S) information of an RGB image [10]; it belongs to the colour image fusion algorithms.
Mathematically, the transformation from the standard RGB colour scheme to the IHS scheme is given by:
\begin{bmatrix} I \\ V_1 \\ V_2 \end{bmatrix} =
\begin{bmatrix} 1/3 & 1/3 & 1/3 \\ 1/\sqrt{6} & 1/\sqrt{6} & -2/\sqrt{6} \\ 1/\sqrt{2} & -1/\sqrt{2} & 0 \end{bmatrix}
\begin{bmatrix} R \\ G \\ B \end{bmatrix}    (1)

where H = \tan^{-1}(V_2 / V_1), S = \sqrt{V_1^2 + V_2^2}, and I = (R + G + B)/3; V_1 and V_2 are intermediate variables.

The reverse transformation is given by:


 1 1


R   6

1 

  I

eigenvectors as coordinate axes. The data points are mapped to this space by projecting them onto the two eigenvectors, which consti- tute the final data representation.

G =  1

1 − =1

 = V

(2)

i.e. Fi =Vi T.nDT, i=no. of eigenvalue


   3 6

2   1

 B 

 1 2


 −

 V 2 

0 

Step7 Dimensionality reduction
The reduction in dimensionality is achieved by projecting the trans- formed data points in eigenspace on to the principal component
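In IHS-based fusion these two transforms are typically used by substituting the intensity component: the RGB image is transformed by equation (1), I is replaced with a higher-resolution (e.g. panchromatic) intensity image, and equation (2) restores RGB. The following is a minimal NumPy sketch of that substitution scheme, not code from the paper; the names ihs_fusion, rgb (an M×N×3 float array), and pan (an M×N float array) are illustrative assumptions.

import numpy as np

# Forward IHS transform matrix of equation (1).
M_FWD = np.array([
    [1/3,           1/3,           1/3],
    [1/np.sqrt(6),  1/np.sqrt(6), -2/np.sqrt(6)],
    [1/np.sqrt(2), -1/np.sqrt(2),  0],
])
M_INV = np.linalg.inv(M_FWD)  # reverse transform of equation (2)

def ihs_fusion(rgb, pan):
    # Transform every (R, G, B) pixel into (I, V1, V2).
    ivv = rgb.reshape(-1, 3) @ M_FWD.T
    ivv[:, 0] = pan.reshape(-1)   # substitute the high-resolution intensity
    fused = ivv @ M_INV.T         # back to RGB via equation (2)
    return fused.reshape(rgb.shape)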

B) PRINCIPAL COMPONENT ANALYSIS BASED FUSION

Principal component analysis is a mathematical tool frequently used for reducing the dimensionality of data that have not been separated into classes. Suppose we have a set of 2-dimensional data points whose x-coordinates and y-coordinates are given below:
X = {x1, x2, x3, x4, x5, x6}, Y = {y1, y2, y3, y4, y5, y6}
The original data set D is represented as D = [X^T, Y^T].

Step 1: Calculate the mean of the data points, \mu_X and \mu_Y.

Step 2: Subtract the mean from each data point to generate the normalized data set: nX = X - \mu_X, nY = Y - \mu_Y, so that nD = [nX^T, nY^T].

Step 3: Calculate the covariance matrix C of the normalized data set. Covariance is a measure of how much each of the dimensions varies from the mean with respect to the others, i.e.

cov(X, Y) = \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \mu_X)(Y_i - \mu_Y)

The covariance matrix is given by

C = \begin{bmatrix} cov(X, X) & cov(X, Y) \\ cov(Y, X) & cov(Y, Y) \end{bmatrix}

Step 4: Calculate the eigenvectors and eigenvalues of the covariance matrix C: solve |C - \lambda I| = 0 for the eigenvalues \lambda_i, and (C - \lambda_i I) V_i = 0 for the corresponding eigenvectors V_i.

Step 5: Generate the principal component axis. The eigenvector with the highest eigenvalue is the principal component of the data set; it physically represents the axis of projection that gives the best discrimination. Plotting the eigenvector corresponding to the highest eigenvalue against nX generates this axis.

Step 6: Represent the normalized data points in eigenspace. The normalized data points are transformed so that they can be represented in the eigenspace, which is formed by the two eigenvectors as coordinate axes. The data points are mapped into this space by projecting them onto the eigenvectors, which constitutes the final data representation, i.e. F_i = V_i^T \cdot nD^T, where i indexes the eigenvalues.

Step 7: Dimensionality reduction. The reduction in dimensionality is achieved by projecting the transformed data points in eigenspace onto the principal component axis.
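For image fusion, a common way to apply these steps (consistent with the procedure above, though not spelled out in it) is to treat the two input images as the two variables X and Y, take the normalized components of the dominant eigenvector as weights, and form a weighted sum of the images. A minimal sketch under that assumption, with illustrative names pca_fusion, a and b (same-sized grayscale float arrays):

import numpy as np

def pca_fusion(a, b):
    data = np.stack([a.ravel(), b.ravel()])          # data set D, one row per image
    data = data - data.mean(axis=1, keepdims=True)   # Step 2: subtract the means
    cov = np.cov(data)                               # Step 3: 2x2 covariance matrix C
    eigvals, eigvecs = np.linalg.eigh(cov)           # Step 4: eigenvalues/eigenvectors
    v = eigvecs[:, np.argmax(eigvals)]               # Step 5: principal component
    w = v / v.sum()   # normalize into weights; assumes same-sign components,
                      # which holds for positively correlated input images
    return w[0] * a + w[1] * b                       # weighted fusion F = w1*A + w2*B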
C) AVERAGE METHOD

In this method the resultant fused image is obtained by taking the average intensity of corresponding pixels from the two input images:
F(x, y) = (A(x, y) + B(x, y)) / 2
where A(x, y) and B(x, y) are the input images, F(x, y) is the fused image, and (x, y) denotes the pixel position.
For the weighted average method,
F(x, y) = W A(x, y) + (1 - W) B(x, y),  0 <= x <= m, 0 <= y <= n
where W is the weight factor.

D) SELECT MAXIMUM

In this method, the resultant fused image is obtained by selecting the maximum intensity of corresponding pixels from the two input images:
F(x, y) = max(A(x, y), B(x, y))
where A(x, y) and B(x, y) are the input images and F(x, y) is the fused image.

E) SELECT MINIMUM

In this method, the resultant fused image is obtained by selecting the minimum intensity of corresponding pixels from the two input images:
F(x, y) = min(A(x, y), B(x, y))
where A(x, y) and B(x, y) are the input images and F(x, y) is the fused image.
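The three pixel-level rules above differ only in the per-pixel operator, which the following short NumPy sketch makes explicit (the function name fuse and the arrays a, b, assumed registered and of equal size, are illustrative):

import numpy as np

def fuse(a, b, rule="average", w=0.5):
    if rule == "average":
        return (a + b) / 2.0         # F(x,y) = (A(x,y) + B(x,y)) / 2
    if rule == "weighted":
        return w * a + (1 - w) * b   # weighted average with weight factor W
    if rule == "max":
        return np.maximum(a, b)      # select maximum intensity per pixel
    if rule == "min":
        return np.minimum(a, b)      # select minimum intensity per pixel
    raise ValueError("unknown rule: " + rule)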

2) FREQUENCY DOMAIN BASED FUSION METHOD

In frequency domain methods the image is first transferred into the frequency domain, i.e. the Fourier transform of the image is computed first. All fusion operations are performed on the Fourier transform of the image, and the inverse Fourier transform is then applied to obtain the resultant image.
Frequency domain based fusion includes the following methods:
A) Pyramid decomposition based fusion
i. Laplacian pyramid
ii. Gradient pyramid
iii. Morphological pyramid
iv. Ratio of low pass pyramid
v. Filter-subtract-decimate method
B) Discrete wavelet transform based fusion

A) PYRAMID DECOMPOSITION BASED FUSION

A pyramid decomposition fusion consists of a number of images at different scales which together represent the original image. In general, every pyramid transform consists of three major processes:

1) Decomposition

Decomposition is the process in which a pyramid is generated successively at each level of the fusion; the depth of the fusion, i.e. the number of levels, is predefined. The input images are first passed through a low pass filter, and the pyramid is generated from the filtered images. The input images are then decimated to half their size, and the decimated images act as the input image matrices for the next level of decomposition.

2) Formation of the initial image for re-composition

The input images are merged after the decomposition process, and the resulting image is used as the initial input to the re-composition process. The finally decimated input images are combined either by averaging them, by selecting the minimum, or by selecting the maximum.

3) Re-composition

In the re-composition process, the resultant image is finally created from the pyramids formed at each level of decomposition.

i. LAPLACIAN PYRAMID

The Laplacian pyramid implements a "pattern selective" approach to image fusion, so that the composite image is constructed not a pixel at a time, but a feature at a time. The first step is to construct a pyramid for each source image; the fusion is then implemented for each level of the pyramid using a feature selection decision. There are two modes of combination, averaging and selection. In the selection process the most salient component patterns from the source images are copied, while less salient patterns are discarded. In the averaging case, source patterns are averaged, reducing the noise. Selection is used where the source images are distinctly different and averaging is used where the source images are similar [7].
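As a concrete sketch of this decomposition, selection, and re-composition process, the following uses OpenCV's pyrDown/pyrUp for the pyramid operations, a select-by-larger-magnitude rule for the detail levels, and averaging at the coarsest level. It is an illustrative sketch, not the paper's implementation; inputs are assumed to be float32 arrays, and the level count is a free parameter.

import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    # Decomposition: low-pass filter + decimate, keeping the difference images.
    pyr = []
    for _ in range(levels):
        down = cv2.pyrDown(img)
        up = cv2.pyrUp(down, dstsize=(img.shape[1], img.shape[0]))
        pyr.append(img - up)   # detail (difference) image at this level
        img = down             # decimated image feeds the next level
    pyr.append(img)            # coarsest approximation
    return pyr

def fuse_pyramids(pa, pb):
    # Selection for the detail levels, averaging for the coarsest level.
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pa[:-1], pb[:-1])]
    fused.append((pa[-1] + pb[-1]) / 2.0)
    return fused

def recompose(pyr):
    # Re-composition: expand and add the detail images back, coarse to fine.
    img = pyr[-1]
    for detail in reversed(pyr[:-1]):
        img = cv2.pyrUp(img, dstsize=(detail.shape[1], detail.shape[0])) + detail
    return img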

ii. GRADIENT PYRAMID

A gradient pyramid of an image is obtained by applying gradient operators to the Gaussian pyramid at each level. The gradient operators are used in the horizontal, vertical, and two diagonal directions. At each level, these four directional gradient pyramids are combined to obtain a combined gradient pyramid.

iii. MORPHOLOGICAL PYRAMID

A morphological pyramid is generated by applying morphological filters to the Gaussian pyramid at each level and taking the difference between two neighbouring levels. A morphological filter is generally used for image smoothing.

iv. RATIO OF LOW PASS PYRAMID

The ratio of low pass pyramid is another method, in which at every level the ratio of two successive levels of the low pass pyramid is taken.

v. FILTER-SUBTRACT-DECIMATE METHOD

The filter-subtract-decimate pyramid fusion method is conceptually the same as the Laplacian pyramid fusion method. The sole difference is in the stage at which the difference images are obtained during the creation of the pyramid.

B) DISCRETE WAVELET TRANSFORM BASED METHOD

Wavelets were first introduced in seismology to provide a time dimension to seismic analysis that Fourier analysis lacked. Fourier analysis is ideal for studying stationary data (data whose statistical properties are invariant over time) but is not well suited for studying data with transient events that cannot be statistically predicted from the data's past. Wavelets were designed for such non-stationary data.
"Wavelet transforms allow time-frequency localization."
Wavelet means "small wave", so wavelet analysis is about analyzing signals with short duration, finite energy functions. The transform maps the signal under investigation into another representation, which presents the signal in a more useful form. Mathematically, we denote a wavelet as:

\psi_{a,b}(t) = \frac{1}{\sqrt{a}}\, \psi\!\left(\frac{t-b}{a}\right)    (3)

where b is the location (translation) parameter and a is the scaling parameter.


For a given scaling parameter a, we translate the wavelet by varying the parameter b. We define the wavelet transform as:

w(a, b) = \int f(t)\, \frac{1}{\sqrt{a}}\, \psi\!\left(\frac{t-b}{a}\right) dt    (4)

According to equation (4), for every pair (a, b) we obtain a wavelet transform coefficient, representing how similar the scaled wavelet is to the function near location t = b.
If scale and position are varied very smoothly, the transform is called the continuous wavelet transform. If scale and position are changed in discrete steps, the transform is called the discrete wavelet transform.
For a 2D M×N image array A, the 2D DWT is given by:

W_N A W_N^T = \begin{bmatrix} B & V \\ H & D \end{bmatrix}    (5)

Here, B is called the approximation or blur matrix and represents averages of the elements of A. Suppose A is the 4×4 matrix
A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix}    (6)
Then B is given by

B = \frac{1}{4} \begin{bmatrix} a_{11}+a_{12}+a_{21}+a_{22} & a_{13}+a_{14}+a_{23}+a_{24} \\ a_{31}+a_{32}+a_{41}+a_{42} & a_{33}+a_{34}+a_{43}+a_{44} \end{bmatrix}    (7)
V is called the vertical difference matrix and it is given by

V = \frac{1}{4} \begin{bmatrix} (a_{11}+a_{21})-(a_{12}+a_{22}) & (a_{13}+a_{23})-(a_{14}+a_{24}) \\ (a_{31}+a_{41})-(a_{32}+a_{42}) & (a_{33}+a_{43})-(a_{34}+a_{44}) \end{bmatrix}    (8)
H is called the horizontal difference matrix and is given by

H = \frac{1}{4} \begin{bmatrix} (a_{11}+a_{12})-(a_{21}+a_{22}) & (a_{13}+a_{14})-(a_{23}+a_{24}) \\ (a_{31}+a_{32})-(a_{41}+a_{42}) & (a_{33}+a_{34})-(a_{43}+a_{44}) \end{bmatrix}    (9)
D is called the diagonal difference matrix and is given by

D = \frac{1}{4} \begin{bmatrix} (a_{11}+a_{22})-(a_{12}+a_{21}) & (a_{13}+a_{24})-(a_{14}+a_{23}) \\ (a_{31}+a_{42})-(a_{32}+a_{41}) & (a_{33}+a_{44})-(a_{34}+a_{43}) \end{bmatrix}    (10)
Using these values, the discrete wavelet transform is calculated at multiple levels.
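A small numerical sketch of equations (5)-(10) for the 4×4 case, using an unnormalized averaging/differencing Haar matrix (one common scaling convention; others differ by constant factors):

import numpy as np

# Haar analysis matrix for N = 4: two averaging rows, two differencing rows.
W = 0.5 * np.array([
    [1,  1, 0,  0],
    [0,  0, 1,  1],
    [1, -1, 0,  0],
    [0,  0, 1, -1],
], dtype=float)

A = np.arange(1, 17, dtype=float).reshape(4, 4)  # sample 4x4 image block

T = W @ A @ W.T   # equation (5): one level of the 2D Haar DWT
B = T[:2, :2]     # blur / approximation block,  equation (7)
V = T[:2, 2:]     # vertical difference block,   equation (8)
H = T[2:, :2]     # horizontal difference block, equation (9)
D = T[2:, 2:]     # diagonal difference block,   equation (10)

# For multiple levels, the same construction is applied again to the
# blur block B with the corresponding smaller Haar matrix.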
A wavelet transform is applied to the image, resulting in a four-component image: a low-resolution approximation component (LL) and three images of horizontal (HL), vertical (LH), and diagonal (HH) wavelet coefficients, which contain information on local spatial detail. A selected band of the multispectral image then replaces the low-resolution component. This process is repeated for each band until all bands are transformed. A reverse wavelet transform is applied to the fused components to create the fused multispectral image. This image fusion scheme is shown in Fig. 1 [3].

Fig. 1. The image fusion scheme using the wavelet transform.
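A single-level sketch of this kind of wavelet fusion using the PyWavelets library (pywt.dwt2 and pywt.idwt2 are its standard 2D transform calls); fusing the approximation by averaging and the detail coefficients by a larger-magnitude rule is a common choice here, not one mandated by the scheme above:

import numpy as np
import pywt

def dwt_fusion(a, b, wavelet="haar"):
    ca_a, (ch_a, cv_a, cd_a) = pywt.dwt2(a, wavelet)  # LL and detail bands of A
    ca_b, (ch_b, cv_b, cd_b) = pywt.dwt2(b, wavelet)  # LL and detail bands of B
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    fused = ((ca_a + ca_b) / 2.0,                     # average the approximations
             (pick(ch_a, ch_b), pick(cv_a, cv_b), pick(cd_a, cd_b)))
    return pywt.idwt2(fused, wavelet)                 # inverse DWT gives the fused image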

IJSER © 2014 http://www.ijser.org

International Journal of Scientific & Engineering Research, Volume 5, Issue 2, February-2014 171

ISSN 2229-5518


IV. EXPERIMENTAL RESULTS

Let P(i, j) be the original image and F(i, j) the fused image, where (i, j) are the pixel row and column indices and M and N are the dimensions of the image. The smaller the value of the RMSE, the better the fusion performance [3].

1) The root mean square error (RMSE) is given by:

RMSE = \sqrt{ \frac{ \sum_{i=1}^{M} \sum_{j=1}^{N} \left[ P(i, j) - F(i, j) \right]^2 }{ M \times N } }    (11)
2) The peak signal to noise ratio (PSNR) is given by:

PSNR = 10 \log_{10} \left( \frac{f_{max}^2}{RMSE^2} \right)    (12)
where f_max is the maximum grayscale value of the pixels in the fused image. The higher the value of the PSNR, the better the fusion performance.
Fig. 2. Pair of input images ((a) and (b)).

Fig. 3. Fused images using different algorithms ((a)-(e) are the fused images; the methods used from (a) to (e) are: Average, Maximum, Minimum, PCA, and DWT) [16].

Table-2

Table-3
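Equations (11) and (12) translate directly into code; a short sketch assuming 8-bit images (f_max = 255):

import numpy as np

def rmse(p, f):
    # Root mean square error between reference P and fused F, equation (11).
    p = p.astype(float)
    f = f.astype(float)
    return np.sqrt(np.mean((p - f) ** 2))

def psnr(p, f, f_max=255.0):
    # Peak signal to noise ratio in dB, equation (12); higher is better.
    e = rmse(p, f)
    return float("inf") if e == 0 else 10.0 * np.log10(f_max ** 2 / e ** 2)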

V. CONCLUSION

This paper surveys different image fusion techniques. The various techniques discussed here aim to create a single enhanced image that is more suitable for human visual and machine perception, and the survey indicates which approach performs better among the existing image fusion techniques. Although the selection of a fusion algorithm is problem dependent, this review finds that spatial domain methods provide high spatial resolution but suffer from image blurring. The wavelet transform is a very good technique for image fusion: it gives a better PSNR value than the other fusion methods considered, which shows it to be the better fusion technique, and it also preserves spectral content very well.

REFERENCES

[1] Yong Yang, Dong Sun Park, and Shuying Huang, "Medical Image Fusion via an Effective Wavelet Based Approach".

[2] Bedi S. S., Khandelwal Rati, "Comprehensive and Comparative Study of Image Fusion Techniques", International Journal of Soft Computing and Engineering (IJSCE), ISSN: 2231-2307, Volume 3, Issue 1, March 2013.

[3] Ali F. E., El Dokany I. M., Saad A. A., and Abd El Samie F. E., "Fusion of MR and CT Images Using the Curvelet Transform", 25th National Radio Science Conference (NRSC 2008), March 18-20, 2008, Faculty of Engineering, Tanta Univ., Egypt.

[4] Kulkarni S., Jyoti, "Wavelet Transform Applications", IEEE 978-1-4244-8679-3/11, 2011.

[5] Sekhar Soma A., Prasad Giri M. N., "A Novel Approach of Image Fusion on MR and CT Images Using Wavelet Transforms", IEEE, 2011.

[6] Zhang Y., "Understanding image fusion", Photogramm. Eng. Remote Sens., vol. 70, no. 6, pp. 657-661, Jun. 2004.

[7] Krista Amolins, Yun Zhang, and Peter Dare, "Wavelet based image fusion techniques - an introduction, review and comparison", ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 62, pp. 249-263, 2007.

[8] Li H., Manjunath B. S., and Mitra S. K., "Multisensor image fusion using the wavelet transform", Graphical Models and Image Processing, 57:234-245, 1995.

[9] Jan Flusser, Filip Sroubek, and Barbara Zitová, "Image Fusion: Principles, Methods, and Applications", Tutorial, EUSIPCO 2007.

[10] Kekre H. B., Mishra Dhirendra, Saboo Rakhee, "Review on image fusion techniques and performance evaluation parameters", International Journal of Engineering Science and Technology (IJEST), ISSN: 0975-5462, Vol. 5, No. 4, April 2013.

[11] Rani K., Sharma R., "Study of Different Image Fusion Algorithm", International Journal of Emerging Technology and Advanced Engineering, ISSN 2250-2459, Volume 3, Issue 5, May 2013.

[12] Shih-Gu Huang, "Wavelet for Image Fusion".

[13] Y. Zhang, "Understanding image fusion", Photogramm. Eng. Remote Sens., vol. 70, no. 6, pp. 657-661, Jun. 2004.

[14] Amolins Krista, Yun Zhang, and Peter Dare, "Wavelet based image fusion techniques - an introduction, review and comparison", ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 62, pp. 249-263, 2007.

[15] Zhiming Cui, Guangming Zhang, Jian Wu, "Medical Image Fusion Based on Wavelet Transform and Independent Component Analysis", 2009 International Joint Conference on Artificial Intelligence.

[16] Sruthy S., Latha Parameswaran, Ajeesh P. Sasi, "Image Fusion Technique using DT-CWT", IEEE 978-1-4673-5090-7/13, 2013.
