Remote Sensing Image Restoration Using Various Techniques: A Review

Author : Er.Neha Gulati,Er.Ajay Kaushik
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518

Abstract—In the imaging process of remote sensing, degradation occurs in the acquired images. To reduce the image blur caused by this degradation, remote sensing images are restored so that the characteristic objects in the images stand out. Image restoration is an important problem in high-level image processing. Its purpose is to estimate the original image from the degraded data, and it is widely used in many fields of application, such as medical imaging, astronomical imaging, remote sensing, microscopy imaging, photography deblurring, and forensic science. Restoration aids the interpretation and analysis of remote sensing images: after restoration, the blur in the images is reduced, characteristic features are highlighted, and the visual effect of the images is clearer. This paper reviews different image restoration techniques, including the Richardson-Lucy algorithm, the Wiener filter, neural networks, and blind deconvolution.

Keywords—Image Restoration, Degradation Model, Richardson-Lucy Algorithm, Wiener Filter, Neural Network, Blind Deconvolution.

For a space remote sensing camera, many factors cause image degradation during the image acquisition process, such as aberrations of the optical system, the performance of the CCD sensors, motion of the satellite platform, and atmospheric turbulence [14]. The degradation results in image blur, which hinders the identification and extraction of useful information from the images and can cause serious economic loss. Restoring the degraded images is therefore an urgent task if the uses of the images are to be expanded. Several classical image restoration methods exist, for example Wiener filtering, regularized filtering, and the Lucy-Richardson algorithm. These methods require prior knowledge of the degradation [16][19], expressed as the degradation function of the imaging system, i.e., the point spread function (PSF). Because the operational environment of a remote sensing camera is unusual and the atmospheric conditions during image acquisition vary, it is usually impossible to obtain an accurate degradation function. The field of image restoration (sometimes referred to as image deblurring or image deconvolution) is concerned with the reconstruction or estimation of the uncorrupted image from a blurred and noisy one. Essentially, it tries to perform an operation on the image that inverts the imperfections of the image formation system. The remote sensing images dealt with in this paper have high resolution; with the PSF as a parameter, the images can be restored by the various techniques.

The task of deblurring an image is image deconvolution; if the blur kernel is not known, the problem is said to be "blind". For a survey of the extensive literature in this area, see [Kundur and Hatzinakos 1996]. Existing blind deconvolution methods typically assume that the blur kernel has a simple parametric form, such as a Gaussian or low-frequency Fourier components. However, as illustrated by our examples, the blur kernels induced during camera shake do not have simple forms and often contain very sharp edges. Similar low-frequency assumptions are typically made for the input image, e.g., by applying a quadratic regularization. Such assumptions can prevent high frequencies (such as edges) from appearing in the reconstruction. Caron et al. [2002] assume a power-law distribution on the image frequencies; power laws are a simple form of natural image statistics that do not preserve local structure. Some methods [Jalobeanu et al. 2002; Neelamani et al. 2004] combine power laws with wavelet-domain constraints but do not work for the complex blur kernels in our examples.

Deconvolution methods have been developed for astronomical images [Gull 1998; Richardson 1972; Tsumuraya et al. 1994; Zarowin 1994], which have statistics quite different from the natural scenes we address in this paper. Performing blind deconvolution in this domain is usually straightforward, as the blurry image of an isolated star reveals the point spread function.

Another approach is to assume that multiple images of the same scene are available [Bascle et al. 1996; Rav-Acha and Peleg 2005]. Hardware approaches include optically stabilized lenses [Canon Inc. 2006], specially designed CMOS sensors [Liu and Gamal 2001], and hybrid imaging systems [Ben-Ezra and Nayar 2004]. Since we would like our method to work with existing cameras and imagery, and to work in as many situations as possible, we do not assume that any such hardware or extra imagery is available.

Recent work in computer vision has shown the usefulness of heavy-tailed natural image priors in a variety of applications, including denoising [Roth and Black 2005], superresolution [Tappen et al. 2003], intrinsic images [Weiss 2001], video matting [Apostoloff and Fitzgibbon 2005], inpainting [Levin et al. 2003], and separating reflections [Levin and Weiss 2004]. Each of these methods is effectively "non-blind", in that the image formation process (e.g., the blur kernel in superresolution) is assumed to be known in advance. Miskin and MacKay [2000] perform blind deconvolution on line-art images using a prior on raw pixel intensities; results are shown for small amounts of synthesized image blur. We apply a similar variational scheme to natural images, using image gradients in place of intensities, and augment the algorithm to achieve results for photographic images with significant blur.

A. Image degradation model
As Fig. 1 shows, the image degradation process can be modeled as a degradation function, together with an additive noise term, operating on an input image f(x,y) to produce a degraded image g(x,y) [4]. As a result of the degradation process and the added noise, the original image becomes a degraded image, exhibiting blur to varying degrees. If the degradation function h(x,y) is linear and spatially invariant, the degradation process in the spatial domain is expressed as the convolution of f(x,y) and h(x,y), given by

g(x,y)=f(x,y) * h(x,y)+n(x,y)               (1)

Figure1. Image degradation model

According to the convolution theorem, the convolution of two spatial functions corresponds to the product of their Fourier transforms in the frequency domain. Thus, the degradation process in the frequency domain can be written as

G(u,v)=F(u,v)H(u,v)+N(u,v)                (2)
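As a sanity check of Eqs. (1) and (2), the following sketch (in Python with NumPy; the small image, Gaussian PSF, and noise values are made-up illustrations, not from the paper) degrades an image both ways and confirms that the spatial circular convolution matches the frequency-domain product:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 8x8 "scene" f(x,y), a small Gaussian PSF h(x,y), and noise n(x,y).
f = rng.random((8, 8))
x = np.arange(8)
xx, yy = np.meshgrid(x, x)
h = np.exp(-((xx - 4) ** 2 + (yy - 4) ** 2) / 4.0)
h /= h.sum()                      # normalize the PSF to unit energy
n = 0.01 * rng.standard_normal((8, 8))

# Spatial domain, Eq. (1): g = f * h + n (circular convolution, computed directly)
g_spatial = np.zeros_like(f)
for u in range(8):
    for v in range(8):
        for i in range(8):
            for j in range(8):
                g_spatial[u, v] += f[i, j] * h[(u - i) % 8, (v - j) % 8]
g_spatial += n

# Frequency domain, Eq. (2): G = F H + N
G = np.fft.fft2(f) * np.fft.fft2(h) + np.fft.fft2(n)
g_freq = np.real(np.fft.ifft2(G))

print(np.allclose(g_spatial, g_freq))   # True: the two routes agree
```

The agreement of the two routes is exactly the convolution theorem the text invokes; in practice the FFT route is used because it is far cheaper than the direct quadruple loop.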
B.  Image restoration theory
The objective of image restoration is to reduce the image blur introduced during the imaging process. If we have prior knowledge of the degradation function and the noise, the inverse of the degradation process can be applied for restoration, involving denoising and deconvolution. In the frequency domain, the restoration process is given by the expression

  F(u,v)=[G(u,v)-N(u,v)]/H(u,v)                 (3)

Because restoration amplifies noise, denoising is performed before restoration to remove the noise. Denoising can be carried out in either the spatial domain or the frequency domain; the usual method is to select an appropriate filter according to the characteristics of the noise. Since spatial convolution corresponds to multiplication in the frequency domain, its inverse operation is division, so deconvolution is as a rule carried out in the frequency domain. Finally, the inverse Fourier transform is applied to F(u,v) to complete the restoration.
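The frequency-domain restoration described above can be sketched as a minimal inverse filter (Python/NumPy; the scene, the PSF, and the small eps safeguard against division by near-zero H are assumptions for the example, and the noise-free case is shown for clarity):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scene f and a known Gaussian PSF h, as in Eqs. (1)-(2).
f = rng.random((16, 16))
x = np.arange(16)
xx, yy = np.meshgrid(x, x)
h = np.exp(-((xx - 8) ** 2 + (yy - 8) ** 2))
h /= h.sum()

# Degrade in the frequency domain: G = F H (noise-free, so N = 0 in Eq. (3))
F, H = np.fft.fft2(f), np.fft.fft2(h)
G = F * H

# Restore by division, Eq. (3); eps guards against division by near-zero H.
eps = 1e-12
F_hat = G / (H + eps)
f_hat = np.real(np.fft.ifft2(F_hat))

print(np.max(np.abs(f_hat - f)))   # close to zero in the noise-free case
```

With noise present, plain division amplifies N(u,v)/H(u,v) wherever |H| is small, which is precisely why the text insists on denoising first and why regularized methods such as the Wiener filter are preferred in practice.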

C.  Blurring
Blur is an unsharp image area caused by camera or subject movement, inaccurate focusing, or the use of an aperture that gives shallow depth of field [7]. Blur effects are filters that create smooth transitions and decrease contrast by averaging the pixels next to the hard edges of defined lines and areas where there are significant color transitions [15].
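The averaging behaviour described in [15] can be illustrated with a simple 3x3 mean filter (a hypothetical Python/NumPy sketch; the step image is made up for illustration):

```python
import numpy as np

# A hard vertical edge: a step image (0 on the left, 1 on the right).
img = np.zeros((5, 6))
img[:, 3:] = 1.0

# 3x3 mean filter: each output pixel is the average of its neighborhood,
# which lowers contrast across the edge (borders replicated, direct loops).
padded = np.pad(img, 1, mode="edge")
blurred = np.zeros_like(img)
for i in range(img.shape[0]):
    for j in range(img.shape[1]):
        blurred[i, j] = padded[i:i + 3, j:j + 3].mean()

print(img[0])       # [0. 0. 0. 1. 1. 1.]
print(blurred[0])   # the step becomes a ramp: values near 1/3 and 2/3 appear
```

The abrupt 0-to-1 transition is replaced by intermediate values, which is the contrast decrease at hard edges that the text describes.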
