International Journal of Scientific & Engineering Research, Volume 4, Issue 4, April-2013

ISSN 2229-5518


Review on Image Search Engines

Ms. Kshitija A. Upadhyay, Mrs. Gyankamal J. Chhajed, Ms. Jyoti D. Gavade

Abstract— In this paper we review and analyze different techniques for searching images in a database. We review techniques such as text based image retrieval (TBIR), content based image retrieval (CBIR), and CBIR with relevance feedback. We also review the generic CBIR systems (QBIC, Virage, Photobook and FourEyes, MARS, VisualSeek, Netra) and the World Wide Web image search engines (AltaVista Photo and Media Finder, WebSeek, ImageRover, WebSeer) that are available today. The variety of features used by these content-based image retrieval systems is also overviewed in this paper. Finally, we present an analysis of these systems by considering different factors such as number of reference images, relevance feedback, user-provided reference images, sketch support, and implementation.

Index Terms— Image search, text based image retrieval (TBIR), content based image retrieval (CBIR), CBIR with relevance feedback, relevance feedback, generic CBIR, World Wide Web image search engines, reference image.


1 INTRODUCTION

An image retrieval system is a computer system for browsing, searching, and retrieving images from a large database of digital images. Text based image retrieval (TBIR) and content-based image retrieval (CBIR) in particular are well-known fields of research in information management in which a large number of methods have been proposed and investigated, but for which no satisfying general solutions yet exist. The need for adequate solutions is growing due to the increasing amount of digitally produced images in areas like journalism, medicine, and private life, requiring new ways of accessing images. For example, medical doctors have to access large amounts of images daily [1], home users often have image databases of thousands of images [2], and journalists also need to search for images by various criteria [3, 4]. In the past, several CBIR systems have been proposed, and all these systems have one thing in common: images are represented by numeric values, called features or descriptors, that are meant to represent the properties of the images so as to allow meaningful retrieval for the user.

2 IMAGE RETRIEVAL TECHNIQUES

2.1 Text based image search

Text based image search engines use only keywords as queries. Users type query keywords in the hope of finding a certain type of image.

————————————————

Kshitija A. Upadhyay is currently pursuing the master's degree program in computer engineering at VPCOE Baramati, Pune University, Maharashtra, India. E-mail: kshitija20@gmail.com

Gyankamal J. Chhajed is currently Assistant Professor and Head of Computer Engineering at VPCOE Baramati, Pune University, Maharashtra, India. E-mail: gjchhajed@gmail.com

Jyoti D. Gavade is currently pursuing the master's degree program in computer engineering at VPCOE Baramati, Pune University, Maharashtra, India. E-mail: jyotiatole@gmail.com

The search engine returns thousands of images ranked by the keywords extracted from the surrounding text. Text based image search engines rely on text for the indexing of images. As a consequence, the quality of an engine's image results depends on the quality of the textual information that surrounds or is associated with the images (e.g. filename, nearby text, page title, or picture tags within the HTML code).
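To make the indexing idea concrete, here is a minimal Python sketch of a keyword index over image metadata; the image names and surrounding texts are hypothetical, and real engines add ranking, stemming, and weighting of tags on top of this:

```python
# Minimal sketch of text-based image indexing: each image is keyed by
# the words in its surrounding text (filename, alt text, nearby words).
from collections import defaultdict

def build_index(images):
    """images: dict mapping image URL -> associated text."""
    index = defaultdict(set)
    for url, text in images.items():
        for word in text.lower().split():
            index[word].add(url)
    return index

def search(index, query):
    """Return images whose surrounding text contains every query keyword."""
    words = query.lower().split()
    results = [index.get(w, set()) for w in words]
    return set.intersection(*results) if results else set()

images = {"cat1.jpg": "a black cat on a sofa",
          "dog1.jpg": "brown dog playing in park"}
idx = build_index(images)
print(search(idx, "black cat"))   # {'cat1.jpg'}
```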

Advantages:

Text based image search is easy to implement and does not require the user to have a similar image to search with. TBIR is user-friendly, though not developer-friendly, and it is easy to conceptualize because the indexing is done manually.

Limitation:

Text-based image search suffers from the ambiguity of query keywords. The keywords provided by users tend to be short, and it is sometimes hard for users to accurately describe the visual content of target images using keywords. Text based image search results are therefore noisy and consist of images with quite different semantic meanings. The user gets relevant images from a text based image search engine only if the images are annotated correctly. Manual annotation is needed, and in order to fully describe the content of images, a human annotator must provide a description of every object's characteristics. A comprehensive description of images is usually impossible, as images contain much detail. This method becomes impractical as the database grows in size. If the database is large, the annotations are probably not made by a single indexer, and the interpretation of images may vary. The user must know the exact terms the annotator used in order to retrieve the images he wants.

2.2 Content based image retrieval:

Content-based image retrieval (CBIR) is a technique for retrieving images on the basis of automatically derived features such as color, texture, and shape. "Content-based" means that the search analyzes the actual contents of the image. The term 'content' in this context might refer to colors, shapes, textures, or any other information that can be derived from the image itself. The user can either browse an image from the hard disk or select one of the example images provided by the system to search for images of that kind.

CBIR is based on automated matching of the features of a query image with those of the database images through some image-to-image similarity evaluation. Images are therefore indexed according to their own visual content, such as color, texture, shape, any other feature, or a combination of visual features.
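A toy sketch of this query-by-example matching, using a coarse joint RGB histogram and Euclidean distance as illustrative stand-ins (an assumption, not the exact features or metric of any surveyed system):

```python
# Query-by-example loop: every image is reduced to a feature vector
# (here a coarse RGB histogram) and the database is ranked by distance
# to the query image's vector.
import numpy as np

def rgb_histogram(image, bins=8):
    """image: H x W x 3 uint8 array -> normalized joint color histogram."""
    hist, _ = np.histogramdd(image.reshape(-1, 3),
                             bins=(bins, bins, bins), range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def rank(query_img, database):
    """database: list of (name, image). Returns names sorted most-similar first."""
    q = rgb_histogram(query_img)
    dists = [(name, np.linalg.norm(q - rgb_histogram(img)))
             for name, img in database]
    return sorted(dists, key=lambda pair: pair[1])
```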

Advantages: One of the main advantages of the CBIR approach is the automatic retrieval process, as opposed to the traditional keyword-based approach, which usually requires very laborious and time-consuming prior annotation of database images. CBIR retrieves relevant images quickly and does not need manual annotation of images.

Limitation: High feature similarity may not always correspond to semantic similarity. Different users at different times may give different interpretations of the same image.

CBIR systems can be classified into two categories:

a) General Content Based Image Retrieval System.

b) WWW Image Search Engine.

General Content Based Image Retrieval systems need to be locally installed. These systems also operate on fixed and predetermined image databases, as opposed to the WWW image search engines. In this section, a few representative CBIR systems are introduced.

2.2.1 General Content Based Image Retrieval System:

i) QBIC:

The best-known system for content-based image retrieval is probably IBM's QBIC (Query By Image Content) (Niblack et al. 1993, Flickner et al. 1995, Niblack et al. 1997), developed at the IBM Almaden Research Center. QBIC was the first commercial CBIR application. QBIC supports queries based on example images, user-constructed sketches and drawings, selected color and texture patterns, etc. The color features used in QBIC are the average (R,G,B), (Y,i,q), (L,a,b), and MTM (mathematical transform to Munsell) coordinates, and a k-element color histogram [5]. Its texture feature is an improved version of the Tamura texture representation [6], i.e. combinations of coarseness, contrast, and directionality. Its shape feature consists of shape area, circularity, eccentricity, major axis orientation, and a set of algebraic moment invariants [5]. QBIC is one of the few systems which take high dimensional feature indexing into account. The image query is based on one reference image and one feature at a time. The visual queries can also be combined with textual keyword predicates. The QBIC home page is available at [7].

ii) Virage:

The Virage Image Engine (Bach et al. 1996, Gupta 1997) is a commercial content-based search engine developed at Virage Technologies Inc. Virage supports visual queries based on color, composition (color layout), texture, and structure (object boundary information). But Virage goes one step further than QBIC: it also supports arbitrary combinations of the above four atomic queries, and users can adjust the weights associated with the atomic features according to their own emphasis. In [8], Jeffrey et al. further proposed an open framework for image management. They classified the visual features ("primitives") as general (such as color, shape, or texture) and domain specific (face recognition, cancer cell detection, etc.). Virage is intended as a portable framework for different CBIR applications. The Virage Technologies Inc. home page is located at [9]. The Virage Image Engine has also been licensed into the AltaVista Photo & Media Finder.

iii) Photobook and FourEyes:

MIT Media Lab's Photobook [10] (Pentland et al. 1994) is a set of interactive tools for searching and querying images. Photobook is divided into three separate image descriptions, namely Appearance Photobook (face recognition), Texture Photobook, and Shape Photobook, which can also be combined. The features are compared using one of the matching algorithms that Photobook provides: Euclidean, Mahalanobis, divergence, vector space angle, histogram, Fourier peak, and wavelet tree distances, as well as any linear combination of these. The latest version of Photobook also allows user-defined matching algorithms via dynamic code loading. The Photobook WWW home page is located at [11].

Photobook also includes FourEyes (Minka 1996), an interactive tool for image segmentation and annotation. The user selects some image regions and gives them labels, and FourEyes extrapolates the labels to other regions in the image and in the database. As no single label model is suitable for all kinds of labels in all kinds of images, FourEyes contains a learning agent which selects and combines appropriate models from a society of models, based on examples from the user. In the FourEyes approach, relevance feedback techniques are applied in image retrieval, and the system tries to self-improve its responses to the queries.

iv) MARS:

MARS, or Multimedia Analysis and Retrieval System (Huang et al. 1996), is an interdisciplinary research effort involving multiple research communities at the University of Illinois. The main focus of MARS is to develop methods to organize the various features into an adaptive retrieval architecture, instead of finding the best representation for any particular application area. MARS also includes a relevance feedback architecture for image retrieval (Rui et al. 1997a). This technique presents users with a list of images and asks them to select the one closest to the desired image. As this process is repeated and more images are selected, the MARS engine improves its list of suggested images. The MARS home page is located at [12].

v) VisualSeek:

VisualSeek (Smith and Chang 1996c) is a content-based image and video query system developed at the Image and Advanced Television Lab of Columbia University. It integrates feature-based image indexing by color with region-based spatial query methods. This enables queries with multiple color regions in the sketch image. Queries may be conducted by sketching a layout of color regions or by providing the URL of a seed image. The home page of VisualSeek is located at [13].


vi) NETRA:

NETRA [14] is a prototype image retrieval system developed at the University of California, Santa Barbara (UCSB) (Ma and Manjunath 1997). NETRA uses the color, texture, shape, and spatial location information of segmented image regions to search and retrieve similar regions from the database. The color indexing scheme used for region-based image retrieval is presented in (Deng and Manjunath 1999).

The system incorporates an automated image segmentation algorithm that allows region-based search. Images are segmented into homogeneous regions at the time they are added to the database, and image attributes that represent each of these regions are computed. The user is then able to compose queries such as "retrieve all images containing regions having the color of object A, the texture of object B, and the shape of object C, and lying within the upper one-third of the image", in which the individual objects can be regions from different images. The NETRA WWW home page is available at [15].

2.2.2 WWW Image Search Engine:

Another important and related area is the retrieval of multimedia and visual information from the World Wide Web. WWW image search engines with efficiency comparable to their textual counterparts do not yet exist, but a number of experimental projects have been started. Image search engines face the same problems as text-based search engines, such as the immense size, diversity, and dynamic nature of the WWW. In addition, these systems must cope with some unique difficulties, as computer-based general image analysis is a very difficult task. The various issues of indexing and retrieving images on the WWW have been discussed, for example, by Agnew et al. (1997) and La Cascia et al. (1998).

i) AltaVista Photo and Media Finder:

The popular AltaVista WWW search engine currently incorporates an image retrieval engine called the AltaVista Photo & Media Finder (Swain 1999, Eberman et al. 1999), developed at the Compaq Cambridge Research Laboratory. The system contains technologies from the Virage Image Engine and from WebSeer. It can be found and used at [16].

The initial query is text-based, matching relevant text extracted from the WWW pages containing the images. Optionally, the query can be narrowed to include only photos or graphics, or only color or black & white images. In addition, images likely to be page decorations, i.e. small images, wide banners, and very tall and thin images, as well as objectionable images, can be discarded from the query results. The system returns a set of images as the result of the initial query. These images can then be used as example images to search for visually similar images based on color and texture distributions, color layout, and image structure.

ii) WebSeek:

WebSEEk (Smith and Chang 1996a) is a WWW-oriented image search engine developed at Columbia University along with the VisualSeek image query system. It uses both textual keywords, for example from URL addresses and HTML tags, and color information to categorize images. WebSEEk consists of three modules: the image collecting module, the classification and indexing module, and the image browsing and retrieval module. Currently, WebSEEk has catalogued over 665,000 images and videos on the WWW. The user interface of the search engine is available online at [17].

iii) ImageRover:

ImageRover combines textual and visual statistics in a single index for content-based search of a WWW image database. Textual statistics are captured in vector form using latent semantic indexing (LSI) based on text in the containing HTML document. The visual statistics used include color and texture. To begin a search with ImageRover, the user first enters a few keywords describing the desired images. After that, the user can refine this initial query through relevance feedback. During relevance feedback, both visual and textual cues are combined to gain better search performance. The ImageRover home page is available at [18].

iv) WebSeer:

WebSeer (Frankel et al. 1996) was a World Wide Web image retrieval project at the University of Chicago, but it has since been terminated. With WebSeer, one could search for images using keywords describing the contents of the image and also, optionally, by the visual content of the image. The content properties included whether or not the image is a photograph (Athitsos et al. 1997) and how many faces the image contains (Rowley et al. 1998).

2.3 CBIR with relevance feedback:

In CBIR, the user is an inseparable part of the process. As retrieval systems are usually not capable of returning the wanted images in their first response, the image query becomes an iterative and interactive process toward the desired image or images. The relevance feedback approach has also been applied to content-based image retrieval (Rui et al. 1997b, Taycher et al. 1997, Minka 1996).

Some visual features may be more effective for certain query images than others. In order to make the visual similarity metric more specific to the query, relevance feedback [19] is widely used to expand the visual examples: the user is asked to select multiple relevant and irrelevant image examples from the image pool, and a query-specific similarity metric is learned from the selected examples.
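As one concrete, assumed formulation (the surveyed systems each use their own, more elaborate learning schemes), a Rocchio-style update pulls the query's feature vector toward the marked relevant examples and away from the irrelevant ones:

```python
# Rocchio-style relevance feedback sketch over feature vectors.
import numpy as np

def refine_query(query, relevant, irrelevant,
                 alpha=1.0, beta=0.75, gamma=0.25):
    """query: feature vector; relevant/irrelevant: lists of feature vectors.
    The weights alpha, beta, gamma are illustrative defaults."""
    q = alpha * np.asarray(query, dtype=float)
    if relevant:
        q = q + beta * np.mean(relevant, axis=0)    # move toward relevant
    if irrelevant:
        q = q - gamma * np.mean(irrelevant, axis=0) # move away from irrelevant
    return q
```

Iterating this update with fresh user feedback on each result list is what makes the query converge toward the desired images.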

Advantages: In CBIR with relevance feedback, the user is allowed to interact with the system to "refine" the results of a query until he or she is satisfied.

Limitation: The requirement of more user effort makes it unsuitable for web-scale commercial systems like Bing image search and Google image search, in which user feedback has to be minimized.

3 FEATURES USED IN EXISTING IMAGE RETRIEVAL TECHNIQUES

Image features refer to characteristics which describe the contents of an image. Visual feature extraction is the foundation for all kinds of applications of content-based image retrieval and, therefore, various types of features have been studied extensively. The different visual features can be categorized into distinct feature types, which include color, texture, shape, and structure.

3.1. Color Feature Extraction:

3.1.1 Color Histogram:

The histogram of an image is a plot of the gray level values, or the intensity values of a color channel, versus the number of pixels at each value. The shape of the histogram provides information about the nature of the image, or of a subimage if we are considering an object within the image. For example, a very narrow histogram implies a low contrast image, a histogram skewed toward the high end implies a bright image, and a histogram with two major peaks, called bimodal, implies an object that is in contrast with the background. The histogram features considered here are statistics-based features, where the histogram is used as a model of the probability distribution of the intensity levels. These statistical features provide information about the characteristics of the intensity level distribution of the image.

The features based on the first order histogram probability are the mean, standard deviation, skew, energy, and entropy [20].

i) Mean:

The mean is the average intensity value, so it tells us something about the general brightness of the image: a bright image has a high mean, and a dark image has a low mean. We use L as the total number of intensity levels available, so the gray levels range from 0 to L - 1. For example, for typical 8-bit image data, L is 256 and the gray levels range from 0 to 255. The mean can be defined as

$$\bar{g} = \frac{1}{M}\sum_{r}\sum_{c} I(r,c) = \sum_{g=0}^{L-1} g\,P(g)$$

where r is the number of rows, c is the number of columns, I(r,c) is the intensity of the pixel (r,c), M is the total number of pixels, g is the gray level, and P(g) is the histogram probability of level g.
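A minimal NumPy sketch of these first-order histogram features (entropy is covered separately in the Color Entropy subsection below), assuming an 8-bit grayscale image given as an array:

```python
# First-order histogram statistics: mean, standard deviation, skew, energy.
import numpy as np

def first_order_stats(image, L=256):
    hist, _ = np.histogram(image, bins=L, range=(0, L))
    P = hist / image.size                 # histogram probability P(g)
    g = np.arange(L)
    mean = np.sum(g * P)                  # the mean formula above
    std = np.sqrt(np.sum((g - mean) ** 2 * P))
    skew = np.sum((g - mean) ** 3 * P) / std ** 3 if std > 0 else 0.0
    energy = np.sum(P ** 2)
    return mean, std, skew, energy
```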

ii) Color Moments:

Color moments have been successfully used in many retrieval systems, especially when the image contains just the object. The first order (mean), second order (variance), and third order (skewness) color moments have been proved efficient and effective in representing the color distributions of images. Mathematically, the first three moments are defined as

$$E_i = \frac{1}{N}\sum_{j=1}^{N} f_{ij}, \qquad \sigma_i = \left(\frac{1}{N}\sum_{j=1}^{N}\left(f_{ij}-E_i\right)^2\right)^{1/2}, \qquad s_i = \left(\frac{1}{N}\sum_{j=1}^{N}\left(f_{ij}-E_i\right)^3\right)^{1/3}$$

where f_ij is the value of the i-th color component of image pixel j, and N is the number of pixels in the image.

Using the additional third-order moment improves the overall retrieval performance compared to using only the first and second order moments. However, this third-order moment sometimes makes the feature representation more sensitive to scene changes and thus may decrease the performance. Since only 9 numbers (three moments for each of the three color components) are used to represent the color content of each image, color moments are a very compact representation compared to other color features. Due to this compactness, it may also lower the discrimination power. Usually, color moments are used as a first pass to narrow down the search space before other, more sophisticated color features are used for retrieval.
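A compact sketch of the resulting 9-element color-moments feature, assuming an H x W x 3 color image array:

```python
# Color moments: mean, standard deviation, and skewness per channel.
import numpy as np

def color_moments(image):
    """image: H x W x 3 array -> 9-element feature vector."""
    feats = []
    for i in range(3):
        f = image[:, :, i].astype(float).ravel()   # f_ij for channel i
        mean = f.mean()                            # E_i
        sigma = np.sqrt(np.mean((f - mean) ** 2))  # sigma_i
        skew = np.cbrt(np.mean((f - mean) ** 3))   # s_i (cube root keeps sign)
        feats.extend([mean, sigma, skew])
    return np.array(feats)
```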

iii) Color Entropy:

The entropy is a measure that tells us how many bits are needed to code the image data, and is given by

$$\text{entropy} = -\sum_{g=0}^{L-1} P(g)\,\log_2 P(g)$$

As the pixel values in the image are distributed among more intensity levels, the entropy increases. A complex image has higher entropy than a simple image. This measure tends to vary inversely with the energy.
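A matching sketch for one 8-bit channel given as a NumPy array:

```python
# Entropy of a channel's intensity histogram, per the formula above.
import numpy as np

def channel_entropy(channel, L=256):
    hist, _ = np.histogram(channel, bins=L, range=(0, L))
    P = hist / channel.size
    P = P[P > 0]                   # skip empty bins to avoid log(0)
    return -np.sum(P * np.log2(P))
```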

3.2. Shape Feature Extraction:

Edge detection is useful for locating the boundaries of objects within an image. Any abrupt change in image frequency over a relatively small area within an image is defined as an edge. Image edges usually occur at the boundaries of objects within an image, where the amplitude of the object abruptly changes to the amplitude of the background or of another object.

The shape representations can be divided into two general categories: boundary-based and region-based. Boundary-based methods utilize only information on the boundary of an object, whereas region-based methods describe the shape based on the whole area of the object. A boundary based method may also contain a description of inner boundaries. The difference between the two representations is that boundary based methods model the object as a one-dimensional curve, whereas region based methods operate on a two-dimensional field. In the following sections, a number of both boundary and region-based shape descriptors suitable for content-based image retrieval are briefly introduced. A more detailed coverage of using shape features in CBIR can be found in the Master's Thesis of Brandt (1999), and further in (Sonka et al. 1993).

3.2.1 Boundary-Based Shape Features

i) Chain code:

The chain code (a.k.a. Freeman code or F-code) can be used to represent the boundary of any object (Freeman 1974). The boundary is traced in either direction and a code is assigned to each pixel depending on the direction of the next boundary pixel. Both 4- and 8-neighborhoods can be used in the construction of the chain code. Ordinary chain codes are sensitive to noise and not invariant to scaling and rotation, and hence chain codes need to be modified before being used as shape descriptors.
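An illustrative 8-neighborhood implementation, for a boundary given as an ordered list of (row, col) pixel coordinates:

```python
# Freeman chain coding over an 8-neighborhood.
# Direction codes: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE.
DIRECTIONS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
              (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code(boundary):
    """boundary: ordered list of (row, col) points of a traced contour."""
    codes = []
    for (r0, c0), (r1, c1) in zip(boundary, boundary[1:]):
        codes.append(DIRECTIONS[(r1 - r0, c1 - c0)])
    return codes

# A 2x2 square traced clockwise from its top-left corner:
print(chain_code([(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]))  # [0, 6, 4, 2]
```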

ii) Fourier descriptor:

Another basic boundary-based shape representation is the Fourier descriptor (Zahn and Roskies 1972, Persoon and Fu 1977), in which the complex coefficients of the Fourier transform of the boundary trace are used as shape features. The Fourier descriptor is invariant to scaling, rotation, and reflection. The UNL (Universidade Nova de Lisboa) Fourier descriptor (Rauber and Steiger Garcao 1992) is an extension of the Fourier descriptor which can also handle open curves. It is computed in two stages: first the image is transformed into polar coordinates by the UNL transform, and then a Fourier transform is computed on the result. Furthermore, as a part of the MARS research project, Rui et al. (1996) proposed the Modified Fourier Descriptor (MFD), which is robust to affine transformations and to the noise generated by spatial discretization.
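A basic sketch of the classical Fourier descriptor; the normalization shown (dropping the DC term, dividing by the first coefficient's magnitude, keeping magnitudes only) is one common way to obtain translation, scale, and rotation invariance, not the specific variants cited above:

```python
# Fourier descriptor: the boundary is a complex signal z(k) = x(k) + i*y(k).
import numpy as np

def fourier_descriptor(boundary, n_coeffs=16):
    """boundary: ordered (x, y) points of a closed contour
    (needs more than n_coeffs points)."""
    pts = np.asarray(boundary, dtype=float)
    z = pts[:, 0] + 1j * pts[:, 1]
    F = np.fft.fft(z)
    # F[0] encodes translation, so it is dropped; dividing by |F[1]|
    # normalizes scale; magnitudes discard rotation and starting point.
    mags = np.abs(F[1:n_coeffs + 1])
    return mags / mags[0] if mags[0] != 0 else mags
```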

iii) Wavelet descriptor:

The wavelet transform can also be used to describe object boundaries, analogously to the Fourier transform. Wavelets are effective in representing local properties of a boundary due to the localization properties of wavelet bases. Chuang and Kuo (1996) used wavelets to construct a descriptor which has many desirable properties, like multiresolution representation, invariance, uniqueness, stability, and spatial localization.
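A small sketch of the idea, assuming the PyWavelets package is available: the boundary is reduced to a 1-D centroid-distance signature and described by per-level coefficient energies, which is a simplification of the descriptor cited above:

```python
# Multiresolution boundary description via a wavelet decomposition.
import numpy as np
import pywt

def wavelet_descriptor(boundary, wavelet="db2", level=3):
    """boundary: ordered (x, y) points of a contour."""
    pts = np.asarray(boundary, dtype=float)
    centroid = pts.mean(axis=0)
    signature = np.linalg.norm(pts - centroid, axis=1)  # 1-D boundary signal
    coeffs = pywt.wavedec(signature, wavelet, level=level)
    # Energy per level gives a coarse-to-fine description of the shape.
    return np.array([np.sum(c ** 2) for c in coeffs])
```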

3.2.2 Region based shape features [21]:

i) Heuristic region descriptors:

A simple shape feature can be constructed from a combination of heuristic region descriptors, such as area, Euler's number, circularity, eccentricity, elongatedness, rectangularity, and the orientation of the major axis (Sonka et al. 1993).
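For instance, area and circularity for a binary region mask might be sketched as follows; the pixel-count perimeter estimate is deliberately crude:

```python
# Two heuristic region descriptors for a binary mask: area and
# circularity (4*pi*area / perimeter^2, which is 1.0 for an ideal disc).
import numpy as np

def area_and_circularity(mask):
    """mask: 2-D boolean array marking the region."""
    area = mask.sum()
    # Boundary pixels: region pixels with at least one 4-neighbor outside.
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = (mask & ~interior).sum()
    circ = 4 * np.pi * area / perimeter ** 2 if perimeter else 0.0
    return area, circ
```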

ii) Moment Invariants:

The moment invariant method (Hu 1962) is a common region-based shape representation. It uses as shape features region-based moments which are invariant to transformations. Hu identified seven such moments, and subsequently many improved versions have been proposed.
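A self-contained sketch of the first two of Hu's seven invariants, computed from normalized central moments of a grayscale region:

```python
# Normalized central moments and the first two Hu moment invariants.
import numpy as np

def hu_first_two(image):
    img = image.astype(float)
    m00 = img.sum()                              # zeroth raw moment
    ys, xs = np.indices(img.shape)
    xbar, ybar = (xs * img).sum() / m00, (ys * img).sum() / m00

    def eta(p, q):
        """Normalized central moment eta_pq."""
        mu = ((xs - xbar) ** p * (ys - ybar) ** q * img).sum()
        return mu / m00 ** (1 + (p + q) / 2)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2
```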

iii) Finite element method:

In Shape Photobook, Pentland et al. (1995) used a shape representation based on a physical model of the object. The tool used was a standard engineering method called the finite element method (FEM). In FEM, a positive definite symmetric matrix, called the stiffness matrix, is defined; it describes how each point in an object is connected to the other points in the object. The eigenvectors of the stiffness matrix are then used as the shape feature.

iv) Keyimages:

Tegolo (1994) developed a method to locate subimages in the stored images of a database and match them with the query image. The image characteristics used were designed to allow image retrieval from a minimum set of image descriptors, called key images. First, both the query image and the stored images are segmented. Then, moments and geometrical features are computed from the segmented regions, and finally a matching process is run on the resulting features.

3.3. Texture Feature Extraction:

3.3.1 Tamura Features:

The Tamura features, including coarseness, contrast, directionality, linelikeness, regularity, and roughness [22], are designed in accordance with psychological studies on the human perception of texture. The computations of these features are given as follows.

i) Coarseness:

Coarseness is a measure of the granularity of the texture. To calculate the coarseness, moving averages A_k(x, y) are computed first, using windows of size 2^k x 2^k (k = 0, 1, ..., 5) at each pixel (x, y):

$$A_k(x,y) = \sum_{i=x-2^{k-1}}^{x+2^{k-1}-1}\ \sum_{j=y-2^{k-1}}^{y+2^{k-1}-1} \frac{g(i,j)}{2^{2k}}$$

where g(i, j) is the pixel intensity at (i, j). Then, the differences between pairs of non-overlapping moving averages in the horizontal and vertical directions are computed for each pixel:

$$E_{k,h}(x,y) = \left|A_k(x+2^{k-1},y) - A_k(x-2^{k-1},y)\right|, \qquad E_{k,v}(x,y) = \left|A_k(x,y+2^{k-1}) - A_k(x,y-2^{k-1})\right|$$

After that, the value of k that maximizes E in either direction is used to set the best window size for each pixel:

$$S_{best}(x,y) = 2^{k}, \qquad k = \arg\max_k \max\big(E_{k,h}(x,y),\, E_{k,v}(x,y)\big)$$

The coarseness is then computed by averaging S_best over the entire image:

$$F_{crs} = \frac{1}{m \times n}\sum_{x=1}^{m}\sum_{y=1}^{n} S_{best}(x,y)$$

Instead of taking the average of S_best, an improved version of the coarseness feature can be obtained by using a histogram to characterize the distribution of S_best. Compared with using a single value to represent coarseness, the histogram-based coarseness representation can greatly increase the retrieval performance. This modification makes the feature capable of dealing with an image or region which has multiple texture properties, and is thus more useful for image retrieval applications.
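A direct (unoptimized) sketch of these formulas; for clarity it loops over interior pixels and assumes the image is larger than 2^(kmax+1) pixels on each side:

```python
# Tamura coarseness: moving averages at dyadic window sizes, directional
# differences, and the best scale S_best per pixel, averaged into F_crs.
import numpy as np

def coarseness(img, kmax=4):
    g = img.astype(float)
    h, w = g.shape
    margin = 2 ** kmax
    s_best = []
    for y in range(margin, h - margin):
        for x in range(margin, w - margin):
            e_best, k_best = -1.0, 1
            for k in range(1, kmax + 1):
                d = 2 ** (k - 1)              # half the 2^k window size

                def avg(cy, cx):              # mean of 2^k x 2^k window at (cy, cx)
                    return g[cy - d:cy + d, cx - d:cx + d].mean()

                e = max(abs(avg(y, x + d) - avg(y, x - d)),   # horizontal E
                        abs(avg(y + d, x) - avg(y - d, x)))   # vertical E
                if e > e_best:
                    e_best, k_best = e, k
            s_best.append(2 ** k_best)        # S_best(x, y)
    return float(np.mean(s_best))             # F_crs
```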

ii) Contrast:


The formula for the contrast is as follows:

$$F_{con} = \frac{\sigma}{(\alpha_4)^{1/4}}, \qquad \alpha_4 = \frac{\mu_4}{\sigma^4}$$

where α4 is the kurtosis, μ4 is the fourth moment about the mean, and σ² is the variance. This formula can be used for both the entire image and a region of the image.
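The same formula in a short sketch:

```python
# Tamura contrast: standard deviation normalized by the fourth root
# of the kurtosis.
import numpy as np

def contrast(img):
    g = img.astype(float)
    mean = g.mean()
    sigma2 = np.mean((g - mean) ** 2)       # variance
    if sigma2 == 0:
        return 0.0                          # flat image has no contrast
    mu4 = np.mean((g - mean) ** 4)          # fourth central moment
    alpha4 = mu4 / sigma2 ** 2              # kurtosis
    return np.sqrt(sigma2) / alpha4 ** 0.25
```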

iii) Directionality:

To compute the directionality, the image is convolved with the following two 3x3 operators:

$$\Delta_H:\ \begin{pmatrix} -1 & 0 & 1\\ -1 & 0 & 1\\ -1 & 0 & 1 \end{pmatrix} \qquad\text{and}\qquad \Delta_V:\ \begin{pmatrix} 1 & 1 & 1\\ 0 & 0 & 0\\ -1 & -1 & -1 \end{pmatrix}$$

and a gradient vector at each pixel is computed. The magnitude and angle of this vector are defined as

$$|\Delta G| = \frac{|\Delta H| + |\Delta V|}{2}, \qquad \theta = \tan^{-1}\!\left(\frac{\Delta V}{\Delta H}\right) + \frac{\pi}{2}$$

where ΔH and ΔV are the horizontal and vertical differences of the convolution. Then, by quantizing θ and counting the pixels whose magnitude |ΔG| is larger than a threshold, a histogram of θ, denoted H_D, can be constructed. This histogram will exhibit strong peaks for highly directional images and will be relatively flat for images without strong orientation. The entire histogram is then summarized to obtain an overall directionality measure based on the sharpness of the peaks:

$$F_{dir} = 1 - r\,n_p \sum_{p}^{n_p}\ \sum_{\phi \in w_p} (\phi - \phi_p)^2\, H_D(\phi)$$

In this sum p ranges over the n_p peaks; for each peak p, w_p is the set of bins distributed over it, φ_p is the bin that takes the peak value, and r is a normalizing factor related to the quantization levels of φ.
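A sketch of the computation up to the orientation histogram H_D; peak detection and the final sharpness sum are left out for brevity, and the threshold value is an illustrative assumption:

```python
# Directionality: gradient operators, magnitude/angle, orientation histogram.
import numpy as np

def conv3(img, k):
    """'Valid' 3x3 correlation implemented with array slicing."""
    out = np.zeros((img.shape[0] - 2, img.shape[1] - 2))
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * img[dy:dy + out.shape[0], dx:dx + out.shape[1]]
    return out

def orientation_histogram(img, n_bins=16, threshold=12.0):
    g = img.astype(float)
    kh = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], float)
    kv = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]], float)
    dh, dv = conv3(g, kh), conv3(g, kv)
    mag = (np.abs(dh) + np.abs(dv)) / 2.0
    theta = np.arctan2(dv, dh) + np.pi / 2          # folded to [0, pi) below
    bins = (((theta % np.pi) / np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins[mag > threshold], minlength=n_bins)
    return hist / max(hist.sum(), 1)                # normalized H_D
```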

iv) Linelikeness:

To capture the texture element composed of lines, i.e. linelikeness, an M x M sized direction co-occurrence matrix P_Dd is defined. The element P_Dd(i, j) of the matrix is the relative frequency with which two pixels separated by a distance d occur, one with direction code i and the other with direction code j. The linelikeness is defined as

$$F_{lin} = \frac{\sum_{i=1}^{M}\sum_{j=1}^{M} P_{Dd}(i,j)\,\cos\!\left[(i-j)\frac{2\pi}{M}\right]}{\sum_{i=1}^{M}\sum_{j=1}^{M} P_{Dd}(i,j)}$$
v) Regularity:

The regularity of repeated patterns in a texture is calculated by taking the sum of the standard deviations of the measures above, computed over subimages:

$$F_{reg} = 1 - r\,(\sigma_{crs} + \sigma_{con} + \sigma_{dir} + \sigma_{lin})$$

where r is a normalizing factor and each σ is the standard deviation of the corresponding feature.

vi) Roughness:

The roughness of an image is calculated as the sum of its contrast and coarseness values:

$$F_{rgh} = F_{crs} + F_{con}$$

3.3.2 Wold Decomposition:

According to a psychological study by Rao and Lohse (1993), the main components of texture perception can be described as periodicity, directionality, and randomness. Hence, it is justified to apply texture models which relate to these perceptual components in content-based image retrieval. To capture these properties of human texture perception for image retrieval, a set of texture features based on the two-dimensional (2D) Wold decomposition was proposed by Liu and Picard (1996).

The Wold theory allows a given 2D random field y(m, n) to be represented with three mutually orthogonal components by the following decomposition:

$$y(m,n) = w(m,n) + p(m,n) + e(m,n)$$

where w(m, n) is the indeterministic (purely random) field, whereas p(m, n) and e(m, n) are the deterministic harmonic and evanescent fields that approximate y. The perceptual properties of these components have been shown to closely agree with the components of human texture perception (Liu and Picard 1996).

4 ANALYSIS

An analysis of these systems by considering different factors, namely number of reference images, relevance feedback, user-provided reference images, sketch support, and implementation, is given in Table 1. The feature types used in these systems are summarized in Table 2.


| Factor | QBIC | Virage | Photobook | MARS | VisualSeek | Netra | AltaVista | WebSeek | ImageRover | WebSeer |
|---|---|---|---|---|---|---|---|---|---|---|
| No. of reference images | 1 | 1 | 1 | Many | 1 | Many | 1 | Many | Many | None |
| Relevance feedback | No | No | Yes | Yes | No | No | No | Yes | Yes | No |
| User-provided reference images | Yes | No | No | No | Yes | No | No | Yes | Yes | No |
| Sketch support | Yes | No | No | No | Yes | No | No | No | No | No |
| Implementation | Both | Web | Local | Web | Web | Web | Web | Web | Web | Web |

Table 1: Analysis of image search engines based on factors.

| Feature | QBIC | Virage | Photobook | MARS | VisualSeek | Netra | AltaVista | WebSeek | ImageRover | WebSeer |
|---|---|---|---|---|---|---|---|---|---|---|
| Color | Yes | Yes | | Yes | Yes | Yes | Yes | Yes | Yes | |
| Color layout | Yes | Yes | | Yes | Yes | Yes | Yes | Yes | Yes | |
| Texture | Yes | Yes | Yes | Yes | | Yes | Yes | | Yes | |
| Shape | Yes | Yes | Yes | Yes | | Yes | Yes | | | |
| Keywords | Yes | | | Yes | Yes | | Yes | Yes | Yes | Yes |

Table 2: Analysis of image search engines based on features used.


REFERENCES

[1] J.S. Bridle, "Probabilistic Interpretation of Feedforward Classification Network Outputs, with Relationships to Statistical Pattern Recognition," Neurocomputing: Algorithms, Architectures and Applications, F. Fogelman-Soulie and J. Herault, eds., NATO ASI Series F68, Berlin: Springer-Verlag, pp. 227-236, 1989.
[2] W.-K. Chen, Linear Networks and Systems. Belmont, Calif.: Wadsworth, pp. 123-135, 1993.
[3] H. Poor, "A Hypertext History of Multiuser Dimensions," MUD History, http://www.ccs.neu.edu/home/pb/mud-history.html, 1986.
[4] K. Elissa, "An Overview of Decision Theory," unpublished.
[5] R. Nicole, "The Last Word on Decision Theory," J. Computer Vision, submitted for publication.
[6] C.J. Kaufman, Rocky Mountain Research Laboratories, Boulder, Colo., personal communication, 1992.
[7] D.S. Coming and O.G. Staadt, "Velocity-Aligned Discrete Oriented Polytopes for Dynamic Collision Detection," IEEE Trans. Visualization and Computer Graphics, vol. 14, no. 1, pp. 1-12, Jan/Feb 2008, doi:10.1109/TVCG.2007.70405.
[8] S.P. Bingulac, "On the Compatibility of Adaptive Controllers," Proc. Fourth Ann. Allerton Conf. Circuits and Systems Theory, pp. 8-16, 1994.
[9] H. Goto, Y. Hasegawa, and M. Tanaka, "Efficient Scheduling Focusing on the Duality of MPL Representation," Proc. IEEE Symp. Computational Intelligence in Scheduling (SCIS '07), pp. 57-64, Apr. 2007, doi:10.1109/SCIS.2007.367670.
[10] J. Williams, "Narrow-Band Analyzer," PhD dissertation, Dept. of Electrical Eng., Harvard Univ., Cambridge, Mass., 1993.
[11] E.E. Reber, R.L. Michell, and C.J. Carter, "Oxygen Absorption in the Earth's Atmosphere," Technical Report TR-0200 (420-46)-3, Aerospace Corp., Los Angeles, Calif., Nov. 1988.
[12] L. Hubert and P. Arabie, "Comparing Partitions," J. Classification, vol. 2, no. 4, pp. 193-218, Apr. 1985.
[13] R.J. Vidmar, "On the Use of Atmospheric Plasmas as Electromagnetic Reflectors," IEEE Trans. Plasma Science, vol. 21, no. 3, pp. 876-880, Aug. 1992, available at http://www.halcyon.com/pub/journals/21ps03-vidmar.
[14] J.M.P. Martinez, R.B. Llavori, M.J.A. Cabo, and T.B. Pedersen, "Integrating Data Warehouses with Web Data: A Survey," IEEE Trans. Knowledge and Data Eng., preprint, 21 Dec. 2007, doi:10.1109/TKDE.2007.190746.
