Region Filling and Object Removal by Exemplar-Based Image Inpainting

Authors: Mrs. Waykule J.M., Ms. Patil V.A.
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract— A new algorithm is proposed for removing large objects from digital images. The challenge is to fill in the hole that is left behind in a visually plausible way. In the past, this problem has been addressed by two classes of algorithms: (i) “texture synthesis” algorithms for generating large image regions from sample textures, and (ii) “inpainting” techniques for filling in small image gaps. The former has been demonstrated for “textures” – repeating two-dimensional patterns with some stochasticity; the latter focus on linear “structures” which can be thought of as one-dimensional patterns, such as lines and object contours. This paper presents a novel and efficient algorithm that combines the advantages of these two approaches.

Index Terms— Image Inpainting, Texture Synthesis, Simultaneous Texture and Structure Propagation.
 
1   INTRODUCTION                                                                    
A new algorithm is proposed for removing large objects from digital images. The challenge is to fill in the hole that is left behind in a visually plausible way.
Figure 1 shows an example of this task, where the foreground person (manually selected as the target region) is automatically replaced by data sampled from the remainder of the image.
Fig. 1 Removing large objects from images. (a) Original photograph. (b) The region corresponding to the foreground person (covering about 19% of the image) has been manually selected and then automatically removed. Notice that the horizontal structures of the fountain have been synthesized in the occluded area together with the water, grass and rock textures.

2. PRESENT THEORY AND PRACTICES
In the past, this problem has been addressed by two classes of algorithms: (i) "texture synthesis" algorithms for generating large image regions from sample textures, and (ii) "inpainting" techniques for filling in small image gaps. The former work well for "textures" -- repeating two-dimensional patterns with some stochasticity; the latter focus on linear "structures" which can be thought of as one-dimensional patterns, such as lines and object contours.

Fig. 2 Removing large objects from photographs. (a) Original image. (b) The result of region filling by traditional image inpainting. Notice the blur introduced by the diffusion process and the complete lack of texture in the synthesized area. (c) The final image, where the bungee jumper has been completely removed and the occluded region reconstructed by our automatic algorithm.

3. KEY OBSERVATIONS
 3.1 Exemplar-based synthesis suffices
The core of our algorithm is an isophote-driven image sampling process. It is well understood that exemplar-based approaches perform well for two-dimensional textures [1], [11], [17]. But we note, in addition, that exemplar-based texture synthesis is sufficient for propagating extended linear image structures as well; i.e., a separate synthesis mechanism is not required for handling isophotes.
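As an illustration of what "isophote-driven" means in practice, the short sketch below estimates isophote directions for a grayscale image by rotating the intensity gradient by 90 degrees; the NumPy-based helper and its name are illustrative assumptions, not code from the paper.

import numpy as np

def isophote_field(img):
    # Intensity gradient of a 2-D grayscale image: gy along rows, gx along columns.
    gy, gx = np.gradient(img.astype(float))
    # The isophote (direction of constant intensity) is perpendicular to the
    # gradient: rotate (gx, gy) by 90 degrees to get (-gy, gx).
    iso_x, iso_y = -gy, gx
    return iso_x, iso_y            # per-pixel isophote components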
Figure 3 illustrates this point. For ease of comparison, we adopt notation similar to that used in the inpainting literature.
The region to be filled, i.e., the target region, is indicated by Ω, and its contour is denoted δΩ. The contour evolves inward as the algorithm progresses, and so we also refer to it as the “fill front”. The source region, Φ, which remains fixed throughout the algorithm, provides samples used in the filling process.
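As a concrete reading of this notation, the sketch below derives the fill front δΩ and the source region Φ from a boolean mask of Ω. The mask representation, the SciPy dilation, and the variable names are assumptions made for illustration, not part of the paper.

import numpy as np
from scipy.ndimage import binary_dilation

def split_regions(omega):
    # omega is an assumed boolean mask, True inside the target region Ω.
    phi = ~omega                          # source region Φ: the known pixels
    # Fill front δΩ: target pixels that touch at least one known pixel.
    front = omega & binary_dilation(phi)
    return front, phi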

Fig. 3 Structure propagation by exemplar-based texture synthesis. (a) Original image, with the target region Ω, its contour δΩ, and the source region Φ clearly marked. (b) We want to synthesize the area delimited by the patch Ψp centered on the point p ∈ δΩ. (c) The most likely candidate matches for Ψp lie along the boundary between the two textures in the source region, e.g. Ψq′ and Ψq′′. (d) The best matching patch in the candidate set has been copied into the position occupied by Ψp, thus achieving partial filling of Ω. Notice that both texture and structure (the separating line) have been propagated inside the target region. The target region Ω has now shrunk and its front δΩ has assumed a different shape.
The user will be asked to select a target region, Ω, manually. (a) The contour of the target region is denoted δΩ. (b) For every point p on the contour δΩ, a patch Ψp is constructed, with p at the centre of the patch. A priority is calculated based on how much reliable information surrounds the pixel, as well as on the isophote at this point. (c) The patch with the highest priority is the next target to fill. A global search is performed over the whole image to find a patch Ψq that is most similar to Ψp. (d) The last step is to copy the pixels from Ψq to fill Ψp. With the new contour, the next round of finding the patch with the highest priority continues, until all the gaps are filled.
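Putting steps (a)-(d) together, the heavily simplified sketch below fills a grayscale image with a brute-force SSD patch search. The 9x9 patch size, the gradient-magnitude stand-in for the isophote data term, the confidence update, and all helper names are assumptions made for illustration; the paper's actual priority uses the isophote component normal to the fill front.

import numpy as np
from scipy.ndimage import binary_dilation

HALF = 4          # assumed patch radius, i.e. 9x9 patches

def patch(a, y, x):
    # View of the (2*HALF+1)-square patch of array a centred at (y, x).
    return a[y - HALF:y + HALF + 1, x - HALF:x + HALF + 1]

def fill_region(img, omega):
    # img: 2-D grayscale array; omega: boolean mask, True inside Ω.
    # Assumes Ω stays at least HALF pixels away from the image border.
    img, omega = img.astype(float).copy(), omega.copy()
    conf = (~omega).astype(float)          # confidence: 1 on known pixels, 0 in Ω
    h, w = img.shape
    while omega.any():
        # (a)/(b): fill front and a priority per front pixel
        # (confidence term times a crude gradient-magnitude data term).
        front = omega & binary_dilation(~omega)
        gy, gx = np.gradient(img)
        ys, xs = np.nonzero(front)
        pri = [patch(conf, y, x).mean() * (np.hypot(gy[y, x], gx[y, x]) + 1e-6)
               for y, x in zip(ys, xs)]
        i = int(np.argmax(pri))
        y, x = ys[i], xs[i]
        known = ~patch(omega, y, x)        # already-filled pixels of Ψp
        tgt = patch(img, y, x)
        # (c): exhaustive SSD search over candidate patches lying entirely in Φ.
        best, best_q = np.inf, None
        for qy in range(HALF, h - HALF):
            for qx in range(HALF, w - HALF):
                if patch(omega, qy, qx).any():
                    continue
                ssd = ((patch(img, qy, qx) - tgt)[known] ** 2).sum()
                if ssd < best:
                    best, best_q = ssd, (qy, qx)
        # (d): copy the missing pixels of the best match Ψq into Ψp and update.
        qy, qx = best_q
        hole = ~known
        patch(img, y, x)[hole] = patch(img, qy, qx)[hole]
        patch(conf, y, x)[hole] = patch(conf, y, x).mean()   # propagate confidence
        patch(omega, y, x)[:] = False
    return img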
