Author Topic: Processing of Images Based on Segmentation Models  (Read 2959 times)

« on: April 23, 2011, 10:57:01 am »
Processing of Images Based on Segmentation Models for Extracting Textured Component
Authors: V M Viswanatha, Nagaraj B Patil, Sanjay Pande MB
International Journal of Scientific & Engineering Research, IJSER - Volume 2, Issue 4, April-2011
ISSN 2229-5518

Abstract— The proposed method for segmenting color regions in images whose adjacent regions carry different textures proceeds in two steps: color quantization and spatial segmentation. First, the colors in the image are quantized to a few representative classes that can be used to differentiate regions in the image. The image pixels are then replaced by the labels assigned to their color classes, forming a class-map of the image. A mathematical criterion based on aggregation and mean value is then calculated; applying this criterion over windows of selected sizes in the class-map highlights boundaries, with high values corresponding to possible boundaries and low values to the interiors of color-texture regions. A region-growing method is finally used to segment the image.

Key Words: Texture segmentation, clustering, spatial segmentation, slicing, texture composition, boundary value image, median-cut.

Segmentation is the low-level operation concerned with partitioning an image by determining disjoint and homogeneous regions or, equivalently, by finding edges or boundaries. The regions of an image segmentation should be uniform and homogeneous with respect to some characteristic such as gray tone or texture. Region interiors should be simple and without many small holes. Adjacent regions should have significantly different values with respect to the characteristic on which they are uniform, and the boundaries of each segment should be simple, not ragged, and spatially accurate.
Thus, in a large number of applications in image processing and computer vision, segmentation plays a fundamental role as the first step before applying the higher-level operations such as recognition, semantic interpretation, and representation.
Earlier segmentation techniques were proposed mainly for gray-level images, on which rather comprehensive surveys can be found. The reason is that, although color information permits a more complete representation of images and a more reliable segmentation of them, processing color images requires considerably larger computation times than gray-level images do. With increasing speed and decreasing cost of computation, and with relatively inexpensive color cameras now available, these limitations have been removed. Accordingly, there has been a remarkable growth of algorithms for the segmentation of color images. Most of the time these are "dimensional extensions" of techniques devised for gray-level images, and thus exploit the well-established background laid down in that field. In other cases they are ad hoc techniques tailored to the particular nature of color information and to the physics of the interaction of light with colored materials. More recently, Yining Deng and B. S. Manjunath [1][2] used the basic idea of separating the segmentation process into color quantization and spatial segmentation, where the quantization is performed in the color space without considering the spatial distribution of the colors. S. Belongie et al. [3] present a new image representation that provides a transformation from the raw pixel data to a small set of image regions that are coherent in color and texture space. A new method of color image segmentation based on the K-means algorithm is proposed in [4], in which both the hue and the intensity components are fully utilized.
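The class-map criterion outlined in the abstract can be sketched as follows. This is a minimal illustration in the spirit of the JSEG-style measure that Deng and Manjunath's work builds on, not the authors' exact formula: within a window of the class-map, it compares the total spatial scatter of pixel positions with the scatter around each color class's own mean, so that spatially separated classes (a likely boundary window) score high while finely interleaved classes (a texture interior) score near zero. The function name `j_value` is my own.

```python
import numpy as np

def j_value(class_map):
    """Illustrative JSEG-style criterion for a window of a class-map.

    High values suggest a boundary between color-texture regions; low
    values suggest a region interior. Sketch only; the paper's exact
    criterion may differ.
    """
    ys, xs = np.indices(class_map.shape)
    pts = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    labels = class_map.ravel()
    # Total scatter of all pixel positions about the window centre.
    s_t = ((pts - pts.mean(axis=0)) ** 2).sum()
    # Within-class scatter: positions about each class's own mean.
    s_w = 0.0
    for lab in np.unique(labels):
        p = pts[labels == lab]
        s_w += ((p - p.mean(axis=0)) ** 2).sum()
    return (s_t - s_w) / s_w if s_w > 0 else 0.0
```

A window split cleanly into a left and a right class scores well above a checkerboard-like mixture of the same two classes, which is exactly the boundary-versus-interior contrast the method exploits.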

Color quantization is a form of image compression that reduces the number of colors used in an image while maintaining, as much as possible, the appearance of the original. The optimal goal of the color quantization process is to produce an image that cannot be distinguished from the original; this level of quantization may in fact never be achieved. Thus, a color quantization algorithm attempts to approximate the optimal solution.
The process of color quantization is often broken into four phases:
1)   Sample the image to determine its color distribution.
2)   Select a colormap based on the distribution.
3)   Compute a quantization mapping from 24-bit colors to the representative colors.
4)   Redraw the image, quantizing each pixel.
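The four phases above can be sketched with a toy quantizer. The colormap-selection rule used here (keeping the k most frequent colors, a popularity-style choice) is an assumption for illustration only; the text does not fix one:

```python
import numpy as np

def quantize(image, k):
    """Toy four-phase color quantizer (illustrative sketch)."""
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3)
    # Phase 1: sample the image to determine its color distribution.
    colors, counts = np.unique(pixels, axis=0, return_counts=True)
    # Phase 2: select a colormap -- here, the k most frequent colors.
    cmap = colors[np.argsort(counts)[::-1][:k]]
    # Phase 3: map every distinct color to its nearest colormap entry.
    d = ((colors[:, None, :].astype(int)
          - cmap[None, :, :].astype(int)) ** 2).sum(axis=-1)
    lut = {tuple(c): cmap[i] for c, i in zip(colors, d.argmin(axis=1))}
    # Phase 4: redraw the image, quantizing each pixel.
    out = np.array([lut[tuple(p)] for p in pixels], dtype=image.dtype)
    return out.reshape(h, w, 3)
```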
Choosing the colormap is the most challenging task. Once this is done, computing the mapping table from colors to pixel values is straightforward.
In general, algorithms for color quantization can be broken into two categories: uniform and non-uniform. In uniform quantization the color space is broken into equal-sized regions, where the number of regions NR is less than or equal to the number of colors K. Uniform quantization, though computationally much faster, leaves much room for improvement. In non-uniform quantization the manner in which the color space is divided depends on the distribution of colors in the image. By adapting a colormap to the color gamut of the original image, one is assured of using every color in the colormap, and thereby of reproducing the original image more closely.
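A uniform quantizer, by contrast, needs no knowledge of the image at all; a minimal sketch for 8-bit channels, mapping each value to the centre of its equal-sized bin:

```python
import numpy as np

def uniform_quantize(image, levels_per_channel=4):
    """Uniform quantization sketch: split each 8-bit channel into
    equal-sized bins and map every value to its bin centre, giving at
    most levels_per_channel**3 colors regardless of image content."""
    bin_size = 256 // levels_per_channel
    centers = (image // bin_size) * bin_size + bin_size // 2
    return centers.astype(image.dtype)
```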
The most popular algorithm for color quantization, invented by Paul Heckbert in 1980, is the median-cut algorithm, and many variations on this scheme are in use. Before this time, most color quantization was done using the popularity algorithm, which essentially constructs a histogram of equal-sized ranges and assigns colors to the ranges containing the most points. A more modern method is clustering using an octree.
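The median-cut idea can be sketched as follows: repeatedly split the color box with the widest channel range at that channel's median, then take each final box's mean color as a palette entry. This simplified version omits the refinements of practical implementations:

```python
import numpy as np

def median_cut(pixels, k):
    """Simplified median-cut sketch over an (N, 3) array of colors."""
    boxes = [pixels]
    while len(boxes) < k:
        # Pick the box whose widest channel range is largest.
        idx = max(range(len(boxes)),
                  key=lambda i: (boxes[i].max(0) - boxes[i].min(0)).max())
        widest = boxes.pop(idx)
        # Split it at the median of its widest channel.
        ch = (widest.max(0) - widest.min(0)).argmax()
        order = widest[:, ch].argsort()
        mid = len(widest) // 2
        boxes += [widest[order[:mid]], widest[order[mid:]]]
    # Each box's mean color becomes one palette entry.
    return np.array([b.mean(axis=0) for b in boxes])
```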

3.   Segmentation
The division of an image into meaningful structures, image segmentation, is often an essential step in image analysis. A great variety of segmentation methods has been proposed in the past decades. They can be categorized into:
Threshold-based segmentation: histogram thresholding and slicing techniques are used to segment the image.
Edge-based segmentation: detected edges in an image are assumed to represent object boundaries and are used to identify these objects.
Region-based segmentation: the process starts in the middle of an object and grows outwards until it meets the object boundaries.
Clustering techniques: clustering methods attempt to group together patterns that are similar in some sense.
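As an illustration of the region-based category, here is a minimal region grower for a gray-level image. The 4-connectivity and the fixed intensity tolerance are both assumptions of this sketch, not choices made by the paper:

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=10):
    """Flood-fill outward from a seed pixel, absorbing 4-connected
    neighbours whose intensity lies within `tol` of the seed's."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = int(image[seed])
    q = deque([seed])
    mask[seed] = True
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(int(image[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask
```

Growth stops wherever the intensity jump exceeds the tolerance, which is exactly how the region meets the object boundary.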
Perfect image segmentation can usually not be achieved, because of oversegmentation or undersegmentation. In oversegmentation, pixels belonging to the same object are classified as belonging to different segments; in undersegmentation, pixels belonging to different objects are classified as belonging to the same segment.
