Author : Hamideh Hamidian, Ali Akbar Jalali
International Journal of Scientific & Engineering Research, IJSER - Volume 2, Issue 3, March-2011
ISSN 2229-5518

Abstract— In this paper, a numerical approach is proposed for designing a fractional order proportional-integral-derivative (FO-PID) controller for unstable first order time delay systems. The design is based on the system time delay. To obtain the relation between the controller parameters and the time delay, the ranges of stabilizing controller parameters are determined for several values of the plant time delay and of the fractional derivative and integral orders. First, for a typical time delay plant and fractional order controller, the D-decomposition technique is used to plot the stability region(s); the controller derivative gain is fixed at one. By changing the fractional derivative order μ and the integral order λ in small steps, ranges of proportional and integral gains are found that stabilize the system independently of λ and μ. A set of different controllers is thus obtained for any specified time delay system. This procedure has been repeated for several systems with different time delays, and the proportional and integral gains of the stabilizing controllers have been calculated. These values have then been fitted to exponential functions, so that the proportional and integral gains are expressed in terms of the system time delay. Using these relations, ranges of the proportional and integral gains can be specified, yielding a set of stabilizing controllers for any given system with a certain time delay. Because the fractional derivative and integral orders do not appear in these relations, they can be applied to any fractional order controller design. Thus the graphical D-decomposition method has been turned into a numerical approach. In this method, λ and μ can be chosen freely in the range [0.1, 0.9], and there is no need to plot the stability boundaries and check the different regions to determine the stable one.
This numerical method does not provide the complete set of stabilizing controllers. The larger the system time delay, the smaller the specified range of proportional and integral gains; in other words, the extent of the obtained stability region is inversely proportional to the system time delay. Finally, the introduced numerical approach is used to stabilize an unstable first order time delay system.
Index Terms—Fractional order PID controller, numerical approach, time delay.
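The fitting step described in the abstract (expressing the stabilizing gains as exponential functions of the time delay) can be sketched as follows. This is a minimal illustration: the model form `a*exp(-b*L)` and all numbers are assumptions for demonstration, since the paper's fitted coefficients are not reproduced in this excerpt.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical model: a stabilizing gain limit decays exponentially
# with the plant time delay L (illustrative form, not the paper's).
def gain_model(L, a, b):
    return a * np.exp(-b * L)

# Synthetic (illustrative) proportional-gain limits for several delays,
# standing in for the values tabulated from the stability regions.
L_vals = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
kp_max = 2.0 * np.exp(-1.5 * L_vals)

params, _ = curve_fit(gain_model, L_vals, kp_max, p0=(1.0, 1.0))
a_fit, b_fit = params

# Predict an upper proportional gain for a new plant with L = 0.25.
kp_limit = gain_model(0.25, a_fit, b_fit)
```

Once such relations are fitted, a stabilizing gain range for any given delay follows by evaluation, with no need to redraw stability regions.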

Although great advances have been achieved in control science, the proportional-integral-derivative (PID) controller is still the most widely used industrial controller.
   According to the Japan Electric Measuring Instrument Manufacturers' Association, in 1989 the PID controller was used in more than 90% of control loops [1], [2]. Slow industrial processes are one example of PID application in industry: a low percentage overshoot and a small settling time can be obtained with this controller [1]. The widespread application of the PID controller is due to its simple, implementable structure and its robust performance over a wide range of working conditions [3], [4]. This controller provides feedback, can eliminate steady-state offsets through integral action, and can anticipate the future through derivative action. These benefits have led to the widespread use of PID controllers. The derivative action in the control loop improves damping, and therefore, by accelerating the transient response, a higher proportional gain can be used. Careful attention must be paid to setting the derivative gain, because it can amplify high-frequency noise. In this paper, the derivative gain is set to 1 in the fractional order PID controller design, which simplifies the design. Most available commercial PID controllers have a limitation on the derivative gain [2]. During the past half century, many theoretical and industrial studies have been devoted to PID controller tuning rules and stabilizing methods [3]. Several different techniques have been proposed to obtain the PID controller parameters, and research continues in order to improve system performance and increase control quality. Ziegler and Nichols proposed a method to set the PID controller parameters in 1942; Hagglund and Astrom in 1995, and Cheng-Ching in 1999, introduced other techniques [5]. By generalizing the derivative and integral orders from integers to non-integer numbers, the fractional order PID controller is obtained.
In fractional order PID controller design there is more freedom in selecting the parameters and more flexibility in tuning them, because both integer and non-integer values can be chosen for the integral and derivative orders. Control requirements are therefore easier to meet [6], [7].
   Before fractional order controllers can be used in design, an introduction to fractional calculus is required. More than 300 years have passed since fractional calculus was introduced. The generalization of calculus to fractional orders was first proposed by Leibniz and l'Hôpital, and systematic studies in this field were later carried out by many researchers such as Liouville (1832), Holmgren (1864) and Riemann (1853) [8]. Fractional calculus is used in many fields, such as electrical transmission loss systems and the analysis of mechatronic systems. Some controller design techniques are based on generalizing classic PID control theory [7]. Due to recent advances in fractional calculus and the emergence of the fractance electrical element, implementing fractional order controllers has become more feasible [6], [9], [10]. Consequently, fractional order PID controller analysis and synthesis have received more attention [11], [12], [13], [14], [15], [16]. Results published in this field indicate that fractional order PID controllers enhance the stability, performance and robustness of the feedback control system [6], [11], [12]. Maiti, Biswas and Konar [1], using a fractional order PID controller, significantly reduced the overshoot percentage and the rise and settling times compared to the classic PID controller. Applying the fractional order PID controller, the system dynamic characteristics can be adjusted better [17]. Many dynamic processes can be described by a first order time delay transfer function [18]. The need to control time delay processes arises in different industries, such as rolling mills. Controlling processes with varying time delay is difficult using classical control methods [19].
Simple formulas are available for setting the PID controller parameters of stable first order time delay systems, but when the system is unstable the problem is more difficult, and unstable systems therefore require more attention; many attempts have been made in the field of their stabilization [20], [21], [22], [23], [24]. So far, various design techniques have been suggested for fractional order controller design [13], [14], [25], [26]. It has been shown that fractional order PID controllers perform better than integer order ones, for both integer and fractional order control systems.
   In controller design for an unstable system, the most important issue is stabilizing the closed-loop system [6]. As an example of previous research on stabilizing unstable processes, De Paor and O'Malley discussed the stabilization of unstable open loop systems with a PID or PD controller in 1989 [23]. Hamaci [3] concluded that the fractional order PID controller gives a better response than the classic one. In this paper, a numerical method is introduced to design fractional order controllers for any unstable first order system with a specified time delay.

2.1 A Review of Design Methods
Hamamci and Koksal [4] designed a fractional order PD controller to stabilize integrating time delay systems, finding that the extent of the stability region is inversely proportional to the system time delay. Maiti, Biswas, and Konar, in 2008, significantly reduced the overshoot percentage, the rise time, and the settling time by using fractional order PID controllers. They introduced the PSO (particle swarm optimization) technique for fractional order PID controller design; in their method, the controller is designed based on the required maximum overshoot and rise time. In this technique, the closed loop system characteristic equation is minimized in order to obtain an optimal set of controller parameters [1].
    One method to obtain the complete set of stabilizing PID controllers is to plot the global stability regions in the controller parameter space, which is called the D-decomposition technique [3], [4], [6], [8]. This technique is used in the analysis and design of both fractional and integer order systems.
   Cheng and Hwang [6] designed a fractional order proportional-derivative controller to stabilize the unstable first order time delay system using the D-decomposition method. The graphical D-decomposition technique yields simple results for such systems.
   The D-decomposition technique can be applied to fractional order time delay systems and fractional order chaotic systems. In this method, the stability region boundaries are obtained; they are described by the real root boundary (RRB), the infinite root boundary (IRB), and the complex root boundary (CRB). These boundaries divide the controller parameter space into several regions. The stability of each region is tested by choosing an arbitrary point from it and checking that point's stability: if the selected point is stable, the whole region containing it is stable; otherwise the region is unstable. By obtaining the stability boundaries and plotting the stability regions, a complete set of stabilizing fractional order controller parameters is obtained. This algorithm is simple and effective.
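As a sketch of how the CRB can be traced numerically, consider an unstable first order time delay plant G(s) = K e^{-Ls}/(Ts - 1) with the controller C(s) = kp + ki/s^λ + s^μ (derivative gain fixed at one, as above). On s = jω, the characteristic condition C(jω) = -1/G(jω) splits into real and imaginary parts, which can be solved for (kp, ki) at each frequency. The plant numbers and orders below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative plant and fractional orders (not the paper's numbers).
K, T, L_delay = 1.0, 1.0, 0.2
lam, mu = 0.5, 0.5

def crb_point(w):
    """Solve Re/Im of C(jw) = -1/G(jw) for (kp, ki) at frequency w."""
    jw = 1j * w
    D = -(T * jw - 1.0) * np.exp(jw * L_delay) / K   # = -1/G(jw)
    # (jw)^(-lam) = w^(-lam) (cos(lam*pi/2) - j sin(lam*pi/2)),
    # (jw)^mu     = w^mu     (cos(mu*pi/2)  + j sin(mu*pi/2))
    c_l, s_l = np.cos(lam * np.pi / 2), np.sin(lam * np.pi / 2)
    c_m, s_m = np.cos(mu * np.pi / 2), np.sin(mu * np.pi / 2)
    # Imaginary part gives ki, real part then gives kp.
    ki = (w**mu * s_m - D.imag) * w**lam / s_l
    kp = D.real - ki * w**(-lam) * c_l - w**mu * c_m
    return kp, ki

omega = np.linspace(0.01, 10.0, 500)
boundary = np.array([crb_point(w) for w in omega])
# The RRB for this plant family is the line ki = 0; the regions cut out
# by these curves are then tested point-by-point for stability.
```

The numerical approach of the paper replaces the final plotting-and-checking step with precomputed gain ranges, but the boundary computation above is the common starting point.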


Author : R.Elumalai, Dr.A.R.Reddy
International Journal of Scientific & Engineering Research, IJSER - Volume 2, Issue 3, March-2011
ISSN 2229-5518

Abstract— AES Rijndael is a block cipher standardized by NIST as the Advanced Encryption Standard (AES), replacing DES, and published as FIPS 197 in November 2001 [5] to address the threatened key size of the Data Encryption Standard (DES). Rijndael was developed by Joan Daemen and Vincent Rijmen [4, 5] and was selected from five finalists. Daily advances in computation speed put great pressure on AES, and AES may not withstand attack for much longer. This work focuses on improving the security of an encryption algorithm beyond AES. Although various techniques are available to enhance security, an attempt is made here to improve the diffusion strength of the algorithm. To enhance the diffusion power of AES Rijndael in the MixColumn operation, the branch number of the MDS matrix is raised from 5 to 9 using a new 8x8 MDS matrix, with a trade-off in speed [8, 9], and the result is implemented on an R8C microcontroller.
Index Terms— diffusion, MDS matrix, AES Rijndael, security, encryption standard, R8C, microcontroller.

The AES Rijndael algorithm basically consists of four byte-oriented transformations for encryption, and their inverse transformations for decryption, applied over a number of rounds depending on the plain text size and key length, namely [1, 2]:

1) Byte substitution (S-box), a non-linear operation operating on each of the State bytes independently.

2) Shifting rows (row transformation), obtained by shifting the rows of the State cyclically.

3) Mix Column transformation: the columns of the State are considered as polynomials over GF(2^8) and multiplied modulo x^4 + 1 with a fixed polynomial c(x), given by
c(x) = '03' x^3 + '01' x^2 + '01' x + '02'.
The inverse of MixColumn is similar to MixColumn: every column is transformed by multiplying it with a specific multiplication polynomial d(x), given by
d(x) = '0B' x^3 + '0D' x^2 + '09' x + '0E'.

4) Add round key, a Round Key is applied to the State by a simple bitwise EXOR. The Round Key is derived from the Cipher Key by means of the key schedule. The Round Key length is equal to the block length.
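The MixColumn step and its inverse, with the c(x) and d(x) coefficients quoted above, can be sketched on a single 4-byte State column as follows (a minimal illustration, not the paper's microcontroller implementation):

```python
def xtime(b):
    """Multiply a GF(2^8) element by x (i.e. by '02'), reducing by the AES polynomial."""
    b <<= 1
    return (b ^ 0x1B) & 0xFF if b & 0x100 else b

def gmul(a, b):
    """Multiply two GF(2^8) elements by shift-and-add, using xtime."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        a, b = xtime(a), b >> 1
    return p

def mix_column(col, row=(0x02, 0x03, 0x01, 0x01)):
    """Multiply the column polynomial by c(x) modulo x^4 + 1 (circulant matrix form)."""
    return [
        gmul(row[0], col[i]) ^ gmul(row[1], col[(i + 1) % 4])
        ^ gmul(row[2], col[(i + 2) % 4]) ^ gmul(row[3], col[(i + 3) % 4])
        for i in range(4)
    ]

def inv_mix_column(col):
    """Multiply by d(x) = '0B' x^3 + '0D' x^2 + '09' x + '0E' (the inverse of c(x))."""
    return mix_column(col, row=(0x0E, 0x0B, 0x0D, 0x09))

# The FIPS 197 worked example: column db 13 53 45 maps to 8e 4d a1 bc.
mixed = mix_column([0xDB, 0x13, 0x53, 0x45])
```

Because c(x) is coprime to x^4 + 1, applying `inv_mix_column` to `mixed` recovers the original column exactly.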

The round transformation can be written in C-like pseudocode as [1]:
Round (State, RoundKey)
{
   ByteSub(State);
   ShiftRow(State);
   MixColumn(State);
   AddRoundKey(State, RoundKey);
}
The final round of the cipher is slightly different: it omits the MixColumn step. It is defined by:
FinalRound (State, RoundKey)
{
   ByteSub(State);
   ShiftRow(State);
   AddRoundKey(State, RoundKey);
}

In AES Rijndael, confusion and diffusion are obtained by the non-linear S-box operation and by the linear mixing layer over the rounds, respectively.

The linear mixing layer guarantees high diffusion over multiple rounds. In the Rijndael proposal, approved by NIST in 2001 as the replacement for DES, MixColumn was chosen from the space of 4-byte to 4-byte linear transformations according to the following criteria [1, 2]:
1. Invertibility;
2. Linearity in GF(2);
3. Relevant diffusion power;
4. Speed on 8-bit processors;
5. Symmetry;
6. Simplicity of description.
Criteria 2, 5 and 6 have led to the choice of polynomial multiplication modulo x^4 + 1. Criteria 1, 3 and 4 impose conditions on the coefficients. Criterion 4 requires that the coefficients have small values, in order of preference '00', '01', '02', '03', ... The value '00' implies no processing at all; for '01' no multiplication needs to be executed; '02' can be implemented using xtime; and '03' can be implemented using xtime and an additional EXOR. Criterion 3 induces more complicated conditions on the coefficients.
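The xtime trick described above can be shown directly; the test values are the standard worked examples from FIPS 197 ('57' * '02' = 'ae', '57' * '03' = 'f9'):

```python
def xtime(b):
    """Multiply by '02' in GF(2^8): shift left, then reduce by 0x1B if the high bit overflowed."""
    b <<= 1
    return (b ^ 0x1B) & 0xFF if b & 0x100 else b

def mul03(b):
    """'03' * b = ('02' xor '01') * b = xtime(b) ^ b -- one extra EXOR."""
    return xtime(b) ^ b
```

This is why coefficients '01', '02' and '03' are so cheap on 8-bit processors: each MixColumn output byte costs only shifts and EXORs.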

In Mix Column, the columns of the State are considered as polynomials over GF(2^8) and multiplied modulo x^4 + 1 with a fixed polynomial c(x) [1, 2]. The Mix Column transformation operates independently on every column of the State and treats each byte of the column as a term of a(x) in the operating equation b(x) = c(x) ⊗ a(x), where c(x) = '03' x^3 + '01' x^2 + '01' x + '02'. This polynomial is coprime to x^4 + 1. For example, in Figure 1, a(x) is a0,j x^3 + a1,j x^2 + a2,j x + a3,j and is used as the multiplicand of the operation.


Image Processing / Color Image Segmentation – An Approach
Author : S.Pradeesh Hosea,  S. Ranichandra, T.K.P.Rajagopal
International Journal of Scientific & Engineering Research, IJSER - Volume 2, Issue 3, March-2011
ISSN 2229-5518

Abstract— The literature on color image segmentation is very large, and this review has been delimited to some important literature in order to trace the core issues. On the basis of the identified issues, objectives were drawn up for a fresh study of color image segmentation. This literature review helps researchers understand the various techniques, themes, methodologies, approaches and controversies so far applied to color image segmentation. One reviewed algorithm combines color and texture information for the segmentation of color images: it uses maximum likelihood classification combined with a certainty-based fusion criterion, and was validated using mosaics of real color textures, as presented in "Color and texture fusion: application to aerial image segmentation".

Index Terms— color images, segmentation, image processing. 

H. D. Cheng, X. H. Jiang, Y. Sun and Jingli Wang, in 2001, described monochrome image segmentation approaches operating in different color spaces, such as histogram thresholding, characteristic feature clustering, edge detection, region-based methods, fuzzy techniques and neural networks, in "Color image segmentation" [2].

In 2001, F. Kurugollu, B. Sankur and A. E. Harmanci proposed techniques for multiband image segmentation based on segmenting subsets of bands using multithresholding, followed by fusion of the resulting segmentation "channels". For color images the band subsets are chosen as the RB, RG and BG pairs, whose two-dimensional histograms are processed via a peak-picking algorithm to effect multithresholding, as presented in "Color image segmentation using histogram multithresholding and fusion" [3].

A simple and efficient implementation of Lloyd's k-means clustering algorithm, called the filtering algorithm, is easy to implement and requires a kd-tree as the only major data structure; it is presented in "An efficient k-means clustering algorithm". It was proposed by T. Kanungo, D. M. Mount, N. Netanyahu, C. Piatko, R. Silverman, and A. Y. Wu in 2002 [4].
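For readers unfamiliar with the underlying algorithm, a plain Lloyd-style k-means on pixel color vectors can be sketched as below. This is an illustrative sketch only, not the kd-tree filtering algorithm of Kanungo et al.; the tiny two-blob "image" is synthetic.

```python
import numpy as np

def kmeans_pixels(pixels, k, iters=20, seed=0):
    """Cluster pixel color vectors with plain Lloyd iterations."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest center (squared Euclidean distance).
        d = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # Recompute each center as the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(0)
    return labels, centers

# Segment a tiny synthetic "image": two well-separated color blobs.
img = np.vstack([np.full((50, 3), 10.0), np.full((50, 3), 200.0)])
labels, centers = kmeans_pixels(img, k=2)
```

The filtering algorithm of [4] computes the same assignments, but prunes candidate centers while descending a kd-tree instead of testing every center against every pixel.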

Chi Zhang and P. Wang, in 2002, described a method based on the K-means algorithm in HSI space, which has an advantage over those based on RGB space: both the hue and the intensity components are fully utilized, and in the hue clustering process the special cyclic property of the hue component is taken into consideration, as presented in "A New Method of Color Image Segmentation Based on Intensity and Hue Clustering" [5].

A color image segmentation system that performs color clustering in a color space, followed by color region segmentation in the image domain, is presented in "Color Image Segmentation in the Color and Spatial Domains"; the region segmentation algorithm merges clusters in the image domain based on color similarity and spatial adjacency. It was proposed by Tie Qi Chen, Yi L. Murphey, Robert Karlsen and Grant Gerhart in 2002 [6].
Faguo Yang and Tianzi Jiang, in 2003, described a novel pixon-based adaptive scale method for image segmentation. The key idea of their approach is that a pixon-based image model is combined with a Markov random field (MRF) model under a Bayesian framework, as presented in "Pixon-Based Image Segmentation With Markov Random Fields" [7].

A blind colour image segmentation method, whose only input is the image to be segmented, consists of four subsystems: preprocessing, cluster detection, cluster fusion and postprocessing. It is presented in "A four-stage system for blind colour image segmentation" and was proposed by Ezequiel López-Rubio, José Muñoz-Pérez and José Antonio Gómez-Ruiz in 2003 [8].

Dmitriy Fradkin and Ilya Muchnik, in 2004, described approaches to constructing hierarchical classifiers using cluster analysis, and suggested new methods and improvements in each of these approaches, as well as a new method for constructing features that improves classification accuracy, in "A Study of K-Means Clustering for Improving Classification Accuracy of Multi-Class SVM" [9].

Cheolha Pedro Lee, in 2005, described a concept based on the statistics of image intensity, where the statistical information is represented as a mixture of probability density functions defined in a multi-dimensional image intensity space. Depending on the method used to estimate the mixture density functions, three active contour models are proposed: an unsupervised multi-dimensional histogram method, a half-supervised multivariate Gaussian mixture density method, and a supervised multivariate Gaussian mixture density method, as presented in "Robust Image Segmentation using Active Contours" [10].

In 2007, Chris Vutsinas described "Image Segmentation: K-Means and EM Algorithms", in which two algorithms for image segmentation are studied: K-means and an Expectation Maximization algorithm are each considered for their speed, complexity, and utility, and the implementation of each algorithm is then discussed [11].

Ahmed Rekik, Mourad Zribi, Ahmed Ben Hamida and Mohammed Benjelloun, in 2007, described image analysis as a process applied to images by a computer in order to find the objects within an image; it consists of subdividing an image into its constituent parts and extracting them, as presented in "Review of satellite image segmentation for an optimal fusion system based on the edge and region approaches" [12].

In 2008, Milind M. Mushrif and Ajoy K. Ray introduced a new color image segmentation algorithm using the concept of the histon, based on rough-set theory, in "Color image segmentation: Rough-set theoretic approach". The histon is an encrustation of the histogram such that the elements in the histon are the set of all pixels that can possibly be classified as belonging to the same segment. In the rough-set theoretic sense, the histogram correlates with the lower approximation and the histon with the upper approximation [13].

The fusion of a multispectral image with a hyperspectral image generates a composite image that preserves the spatial quality of the high-resolution (MS) data and the spectral characteristics of the hyperspectral data, as presented in "Performance analysis of high-resolution and hyperspectral data fusion for classification and linear feature extraction". It was proposed by Shashi Dobhal in 2008 [14].

Sheng-xian Tu, Su Zhang, Ya-zhu Chen, Chang-yan Xiao and Lei Zhang, in 2008, presented a new hierarchical approach for color image segmentation called bintree energy segmentation. The image features are extracted by adaptive clustering on multi-channel data at each level and used as criteria to dynamically select the best chromatic channel, where the segmentation is carried out. In this approach, an extended direct energy computation method based on the Chan-Vese model is used to segment the selected channel; the segmentation outputs are then fused with the other channels into new images, from which a new channel with better features is selected for a second round of segmentation. This procedure is repeated until a preset condition is met. Finally, a binary segmentation tree is formed, in which each leaf represents a class of objects with a distinctive color [15].

A novel method of colour image segmentation based on fuzzy homogeneity and data fusion techniques is presented, in which the general idea of mass function estimation in the Dempster-Shafer evidence theory of the histogram is extended to the homogeneity domain. The fuzzy homogeneity vector is used to determine the fuzzy region in each primitive colour, while the evidence theory is employed to merge different data sources in order to increase the quality of the information and obtain an optimally segmented image. Segmentation results from the proposed method are validated, the classification accuracy on the available test data is evaluated, and a comparative study versus existing techniques is presented. It was described by Salim Ben Chaabane, Mounir Sayadi, Farhat Fnaiech and Eric Brassart in 2009 [16].

Fahimeh Salimi and Mohammad T. Sadeghi, in 2009, introduced a new histogram-based lip segmentation technique that considers local kernel histograms in different illumination-invariant colour spaces. The histogram is computed in local areas using two Gaussian kernels, one in the colour space and the other in the spatial domain. Using the estimated histogram, the posterior probability of the non-lip class is computed for each pixel. This process is performed in different colour spaces, and a weighted averaging method is then used to fuse the posterior probability values. The result is a new score used to label the pixels as lip or non-lip. The advantage of the proposed method is that the segmentation process is totally unsupervised [17].

In 2009, Damir Krstinic, Darko Stipanicev and Toni Jakovcevic described pixel-level analysis and segmentation of smoke-colored pixels for automated forest fire detection. Variations in smoke color tones, environmental illumination and atmospheric conditions, together with the low quality of images of wide outdoor areas, make smoke detection a complex task. In order to find an efficient combination of a color space and a pixel-level smoke segmentation algorithm, several color space transformations are evaluated by measuring the separability between the smoke and non-smoke classes of pixels [18].

A new color thresholding method for detecting and tracking multiple faces in video sequences first calculates the color centroids of the image in RGB color space and segments the centroid region to obtain an ideal binary image; it then analyzes the facial feature structure of candidate face regions to fix the face region. The novel contribution of this paper is creating the color triangle from the RGB color space and analyzing the character of the centroid region for color segmentation. It was proposed by Jun Zhang, Qieshi Zhang and Jinglu Hu in 2009 [19].


A Distributed Administration Based Approach for Detecting and Preventing Attacks on Mobile Ad Hoc Networks

Author : Himadri Nath Saha , Prof. (Dr.) Debika Bhattacharyya , Prof.(Dr.) P. K. Banerjee
International Journal of Scientific & Engineering Research, IJSER - Volume 2, Issue 3, March-2011
ISSN 2229-5518


Abstract - Certain security attacks specific to Mobile Ad Hoc Networks (MANETs), such as black hole, gray hole, blackmail and flooding attacks, are lethal in terms of hampering the availability of network services. In this paper, we propose a protocol for detecting flooding, black hole, gray hole and blackmail attacks and taking measures against the nodes committing them. Our scheme is based on an underlying backbone network of administrator nodes that we assume to be trustworthy and honest throughout. These administrators have greater transmission and reception range than the general nodes in the MANET and have the power to take corrective actions on the basis of reports sent by the other nodes. The set of administrator nodes is dynamically enlarged to ensure better network coverage, by upgrading certain general nodes to administrators subject to constraints such as their transmission and reception range and their performance over a sufficiently long period of time. We model a possible life cycle for a general node in the network and show how our protocol, unlike existing ones, is resilient and conservative in taking action against any node, emphasizing that an honest node should not be penalized by mistake. We give an elaborate description of the procedures and how they lead to detection of the attacks.

Keywords: Black hole attack, Blackmail attack, MANET, Gray hole attack, Flooding, Watch Node, Administrator.

THE security of communication in ad hoc wireless networks is very important and, at the same time, much more challenging than in structured networks. Security attacks on MANETs can be broadly classified into active and passive attacks. In passive attacks, malicious nodes attempt to obtain information from the network without disrupting network operations. Active attacks, on the other hand, hamper network operations and can be carried out by nodes external or internal to the network. Internal attacks are harder to tackle, as the nodes carrying them out have already been accepted as part of the network and are associated with other nodes through established trust relationships. We are concerned with such internal nodes that carry out active attacks like flooding, black hole, gray hole and blackmail attacks after establishing themselves in the network. Before we proceed to the detection and prevention of these attacks, it is important to understand them and their characteristics thoroughly.
Flooding attack: In a flooding attack, a malicious node sends a huge number of junk packets to a node to keep it busy, with the aim of preventing it from participating in other network activities. This leads to an obvious disruption of network availability, as the nodes communicating with the victim will not be attended to. Apart from this threat, other complications can arise:

•   Two malicious nodes can cooperate to carry out an attack in which one floods an honest node in their vicinity while the other carries out a packet dropping (black hole) attack, thereby preventing the honest node from detecting the black hole attack.
•   In critical situations, where a node comes up and is waiting to receive its identity from the existing nodes, a malicious node can flood this node or its neighbors to delay the acceptance of the new node into the network.
These are two small examples among many possible ones that justify the need for a protocol that detects and takes action against nodes trying to flood the network.

Black hole attack: In a black hole attack, a malicious node, upon receiving a route request packet, replies with a false routing reply to misguide the sender into routing its data through it, and then drops the data packets. This is the simplest form of black hole attack: it is trivially easy to detect a node that drops all packets and consequently isolate it in the network. A more complex scenario arises when a group of nodes cooperates to create a black hole, where the data packet is transferred and retransferred within the black hole until it runs out of its time to live (TTL) and is eventually dropped, without the dropping node being blamed in any way. This is how a cooperative black hole attack is carried out, and it is considerably more challenging to detect the chain of responsible nodes and take corrective measures.

Gray hole attack: In a gray hole attack, the malicious nodes are harder to detect, as they drop packets selectively. Such a malicious node can pretend to be honest for a time while observing patterns in the traffic flow. For instance, after a new node comes into the network and an existing node receives its request for an identity, the existing node is expected to reply with identification data. At this moment, a malicious node can drop packets meant for the new node, thereby preventing or delaying its participation in the network.

Blackmail attack: In a blackmail attack, or more effectively a cooperative blackmail attack, malicious nodes complain against an honest node to make nodes that need to send data believe that routing through the victim is harmful. Such attacks can prevent senders from choosing the best route to the destination, thereby hampering the efficiency and throughput of the network.

Having discussed the threat areas, we now introduce our approach to fighting these attacks. The crude definition of a MANET calls for a cluster of mobile nodes with equal or different computing power but equal status, inter-communicating without the aid of any central authority. An ideal MANET as per the basic definition incorporates only peer-to-peer communication; in other words, such networks are structureless. We have, however, been deeply influenced by the concept of using a logical structure over an infrastructure-less network, as in [1].

We emphasize deploying MANETs for definite purposes such as military activities, fighting disasters in calamity-struck areas, and so on. It is therefore not unjust to assume that there exists a logical authority, in the form of an individual or a group, who sets up the communication network for a purpose and will always be honest. On the basis of this assumption, we mark these nodes as administrator nodes, which place themselves in positions that ensure maximum network coverage. They have greater transmission and reception power than the general nodes that participate in network activities. We make the general nodes go through four phases in their network lives, namely WHITE, GRAY, BLACK and BLUE. As said earlier, our scheme takes special care to avoid rash decisions, so that honest nodes are not misjudged as malicious. A WHITE node is honest with high probability; this is the default phase of a new node in the network. A node under suspicion is made GRAY, and a GRAY node that does not improve its behavior is made BLACK and isolated. A WHITE node with transmission and reception powers comparable to those of the administrators, which has been honest for a sufficiently long time, can be upgraded to the status of a BLUE node. We give the general nodes the power to watch the activities of other nodes and judge independently whether nodes in their vicinity are carrying out malicious activities. On detecting such activities, the general nodes can send complaints to the administrators, who have the authority to take corrective measures on the basis of the received complaints.
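The node life cycle just described can be sketched as a small state machine. The event names and transition triggers below are hypothetical stand-ins for illustration; the paper's exact rules (complaint thresholds, observation windows) are not given in this excerpt.

```python
from enum import Enum

class Phase(Enum):
    WHITE = "default; honest with high probability"
    GRAY = "under suspicion"
    BLACK = "confirmed malicious; isolated"
    BLUE = "upgraded toward administrator status"

def next_phase(phase, event):
    """Advance a node's phase on a reported event; unknown events leave it unchanged."""
    transitions = {
        (Phase.WHITE, "complaint_verified"): Phase.GRAY,       # suspicion raised
        (Phase.GRAY, "behavior_improved"): Phase.WHITE,        # cleared
        (Phase.GRAY, "complaint_verified"): Phase.BLACK,       # isolated
        (Phase.WHITE, "long_honest_high_power"): Phase.BLUE,   # promotion
    }
    return transitions.get((phase, event), phase)
```

The conservative character of the protocol shows up here as the mandatory pass through GRAY: a single verified complaint never moves an honest WHITE node straight to BLACK.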


Mn DOPED SnO2 Semiconducting Magnetic Thin Films Prepared by Spray Pyrolysis Method
Author : K.Vadivel, V.Arivazhagan, S.Rajesh
International Journal of Scientific & Engineering Research, IJSER - Volume 2, Issue 4, April-2011
ISSN 2229-5518
Download Full Paper -

Abstract -- Semiconducting magnetic thin films of SnO2 doped with Mn were prepared by the spray pyrolysis method. The polycrystalline nature of the films, with tetragonal structure, was observed by X-ray diffraction. The calculated crystallite size was 16-22 nm, and the lattice constants are a = 4.73 Å and c = 3.17 Å. The compositional studies give the weight percentage of the materials used. The absorption edge starts at 294 nm, and the rise in the transmittance spectra shows the nanocrystalline effect of the as-deposited films. The band gap calculated from the absorption coefficient is 3.25 eV, which is greater than the bulk band gap of tin oxide. The electrical properties of the prepared films are also reported in this paper.
Keywords-- Mn doped SnO2, Spray Pyrolysis, XRD, UV, Electrical studies.

THE study of SnO2 transparent conducting oxide thin films is of great interest due to their unique attractive properties: high optical transmittance, uniformity, nontoxicity, good electrical conductivity, low resistivity, chemical inertness, stability to heat treatment, mechanical hardness, piezoelectric behavior and low cost. SnO2 thin films have vast applications as window layers, heat reflectors in solar cells, flat panel displays, electro-chromic devices, LEDs, liquid crystal displays, invisible security circuits, various gas sensors, etc. Undoped and Cu-, Fe- and Mn-doped SnO2 thin films have been prepared by the vapor deposition technique, and it has been reported that SnO2 is an n-type semiconductor with a direct optical band gap of about 4.08 eV [6]. To improve the quality of the films as well as their physical and chemical properties, the addition of some metal ions as impurities is expected to play an important role in changing the charge carrier concentration of the metal oxide matrix, the catalytic activity, the surface potential, the phase composition, the size of crystallites, and so on [8-10]. It is expected that various concentrations of Mn in SnO2 may affect the structural, optical and magnetic properties of the films. From the band gap engineering point of view, a suitable band gap is essential for the fabrication of optical devices. As far as our knowledge is concerned, there are very few reports available on the deposition of Mn-doped SnO2 thin films by the spray pyrolysis method. Considering the importance of these materials in the field of magnetic materials, we have prepared Mn-doped SnO2 films using a simple, locally fabricated spray pyrolysis system at a relatively low temperature of 450 °C.
Mn-doped SnO2 thin films were prepared by the spray pyrolysis method. The starting materials were SnCl4.5H2O for tin and Mn(CHOO3)2.4H2O for manganese. Solutions of 0.5 M stannous chloride and 0.1 M manganese acetate were prepared in two different beakers with double-distilled water. Then 98% of the stannous chloride solution and 2% of the manganese acetate solution were mixed together, stirred using a magnetic stirrer for 4 hours, and allowed to age for ten days. The clear solution of the mixture was taken for film preparation by spray pyrolysis. The substrate temperature plays an important role in preparing nanocrystalline films by this method; here it was kept at 450 °C, and the solution was sprayed using atmospheric air as the carrier gas. The film was then allowed to cool down naturally. The structural properties of the as-deposited manganese-doped tin oxide thin films were analyzed using an X-ray diffractometer (Shimadzu XRD-6000). The elemental composition of the films was determined using EDAX (JSM 6390). The optical and electrical properties of the films were measured with a UV-Vis spectrometer (Jasco-570 UV/VIS/NIR) and a Hall measurement system (Ecopia HMS-3000).
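The 16-22 nm crystallite size quoted above is typically extracted from XRD peak broadening via the Scherrer equation, D = Kλ/(β cos θ). A quick sketch of that calculation follows; the peak width and position used are illustrative values of ours (the measured widths are not given in this excerpt), with the Cu Kα wavelength assumed for the diffractometer:

```python
import math

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)).

    fwhm_deg      : peak full width at half maximum, in degrees 2-theta
    two_theta_deg : peak position, in degrees 2-theta
    wavelength_nm : X-ray wavelength (Cu K-alpha = 0.15406 nm assumed)
    K             : dimensionless shape factor (~0.9)
    """
    beta = math.radians(fwhm_deg)              # FWHM converted to radians
    theta = math.radians(two_theta_deg / 2.0)  # Bragg angle theta
    return K * wavelength_nm / (beta * math.cos(theta))

# Illustrative values only: a 0.5-degree-wide peak near 2theta = 26.6 deg
# (the (110) reflection of tetragonal SnO2) gives a size of roughly 16 nm,
# at the lower end of the range reported above.
print(round(scherrer_size(0.5, 26.6), 1))
```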


Author : Ajay Vikram Singh, Dr. Sameer Sinha
International Journal of Scientific & Engineering Research, IJSER - Volume 2, Issue 4, April-2011
ISSN 2229-5518
Download Full Paper -

Abstract- We investigate and study X-ray plasmon satellites in the rare earth compounds La2CuO4, Nd2CuO4, Gd2CuO4, PrNiSb2, NdNiSb2, Pr(OH)3, Nd(OH)3 and Sm(OH)3.

Keywords- Surface Plasmon Satellites, Relative Intensity & Energy Separation.

IN the characteristic X-ray spectra, diagram as well as non-diagram lines are present. Those lines which fit in the conventional energy level diagram are called diagram lines, and those which do not are called non-diagram lines, also known as satellites or second-order lines. Satellites are generally weak-intensity lines found close to a more intense parent line. Satellites observed on the higher-energy side are called high-energy satellites (HES), whereas those observed on the lower-energy side are called low-energy satellites (LES). Siegbahn and Stenstroem first observed these satellites in the K-spectra of the elements from Cr (24) to Ge (32), Coster, Theraeus and Richtmyer in the L-spectra of the elements from Cu (29) to Sb (51), and Hajlmar, Hindberg and Hirsch in the M-spectra of the elements from Yb (70) to U (92). Several theories were proposed from time to time to explain the origin of these satellites. Among them, the plasmon theory is found to be the most suitable, especially for these satellites.

Plasmon theory was first proposed by Bohm and Pines and was extended by Housten, Ferrel, Noziers and Pines. According to this theory, low-energy plasmon satellites are emitted when a valence electron excites a plasmon during the annihilation of a core hole; conversely, if a plasmon pre-exists, its energy adds up to the energy of the diagram line.

The radiationless reorganization of the electronic shell of an atom is known as the Auger effect. Auger satellites have been observed by Korbar and Mehlhorn [1], Haynes et al. [2], and Edward and Rudd [3]. A theoretical explanation for the K-series Auger spectrum was given by Burhop and Asaad [4] using intermediate coupling. Later, a more refined theory using relativistic and configuration-interaction effects was applied by Listengarter [5] and Asaad [6].
In Auger primary spectra, one can also observe secondary electron peaks close to the primary peaks; these are produced by incident electrons which have undergone well-defined energy losses. The most common source of such energy loss is the excitation of collective plasma oscillations of the electrons in the solid. This gives rise to a series of plasma peaks of decreasing magnitude, spaced by the energy ħωp, where ωp is the frequency of plasma oscillation. ( Download Full Paper to View Equations )
Auger peaks are also broadened by small energy losses suffered by the escaping electrons. This gives rise to a satellite on the low-energy side of the Auger peak. Energy-loss peaks have well-defined energies with respect to the primary energy.
The involvement of plasmon oscillations in the X-ray emission or absorption spectra of solids has been widely studied during the last few decades, and it has been recognized that the electron-electron interaction plays an important role.

This paper is devoted to the investigation and study of X-ray plasmon satellites in the rare earth compounds La2CuO4, Nd2CuO4, Gd2CuO4, PrNiSb2, NdNiSb2, Pr(OH)3, Nd(OH)3 and Sm(OH)3.

According to plasmon theory, if the valence electron, before filling the core vacancy, also excites a plasmon, then the energy ħωp needed for the excitation of the plasmon oscillation is taken from the transiting valence electron, so that the emitted radiation is deprived of an energy ħωp and a low-energy satellite is emitted whose separation from the main X-ray line corresponds to ħωp. On the other hand, if the plasmon pre-exists during the X-ray emission process, then on its decay it can give its energy to the transiting valence electron before it annihilates the core vacancy. Thus the energy of the emitted X-ray photon will be higher than that of the main emission line by an amount ħωp, giving rise to a high-energy satellite.
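In the free-electron picture, the satellite separation ħωp is fixed by the valence-electron density n through ωp = sqrt(n e² / (ε0 m_e)). A small sketch of that evaluation follows; the electron density used is an illustrative metallic value of ours, not one taken from the compounds studied here:

```python
import math

# CODATA physical constants (SI units)
E_CHARGE = 1.602176634e-19      # elementary charge, C
EPS0 = 8.8541878128e-12         # vacuum permittivity, F/m
M_E = 9.1093837015e-31          # electron mass, kg
HBAR = 1.054571817e-34          # reduced Planck constant, J*s

def plasmon_energy_eV(n_per_m3):
    """Free-electron plasmon energy hbar*omega_p in eV for density n."""
    omega_p = math.sqrt(n_per_m3 * E_CHARGE**2 / (EPS0 * M_E))
    return HBAR * omega_p / E_CHARGE

# Illustrative: a typical metallic valence-electron density of 5e28 m^-3
# gives a plasmon energy (and hence satellite separation) near 8 eV.
print(round(plasmon_energy_eV(5e28), 2))
```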

In order to confirm the involvement of plasmons in the emission of X-ray satellites, the relative intensity of single-plasmon satellites must be calculated. In this process we first deal with the mathematical details of a canonical transformation carried out over the model Hamiltonian of the system.


Others / Impact of Leverage on Firms Investment Decision
« on: April 23, 2011, 05:42:39 pm »
Author : Franklin John. S, Muthusamy. K
International Journal of Scientific & Engineering Research, IJSER - Volume 2, Issue 4, April-2011
ISSN 2229-5518
Download Full Paper -

Abstract - The present paper analyzes the impact of leverage on the investment decisions of Indian pharmaceutical companies during the period from 1998 to 2009. To measure this impact, pooled regression, random effect and fixed effect models are used, taking leverage, sales, cash flow, return on assets, Tobin's Q, liquidity and retained earnings as independent variables and investment as the dependent variable. In addition, we demarcate between three types of firms: (i) small firms, (ii) medium firms and (iii) large firms. The results reveal a significant positive relationship between leverage and investment for small firms, a negative relationship between leverage and investment for medium firms, and a positive relationship between leverage and investment for large firms. Our econometric results reveal an insignificant relationship between the two variables for medium and large firms.

Index Terms-- Investment, Tobin’s Q, Cash flow, Liquidity, ROA, Size and Retained Earnings.
Investment is a crucial economic activity in corporate financial management. Such activity leads to the country's economic development, provides employment to the people and helps eliminate poverty. This paper investigates the effect of debt financing on the investment decisions of firms in the pharmaceutical industry in India. The industry plays a significant role in the country's economic and industrial development and trade, and helps prevent disease and increase life expectancy. It also provides basic materials to other industrial sectors and requires capital for financing the firms' assets. Among the different sources of funds, debt is cheaper because of its lower cost of capital. The investment decision is one of three categories of decisions that can be adopted by a firm's management, alongside the financing decision and the net profit allocation decision. The investment decision has a direct influence on the firm's asset structure, and moreover on its degree of liquidity; it consists of spending financial funds on the purchase of real and financial assets for the firm, in order to gain cash and grow the wealth of the firm's owners. The investment decision and the financing decision are interdependent: the investment decision is adopted in relation to the level of financing sources, but the option to invest is also crucial in order to calculate the level of financing capital and the need for finding its sources.
As far as the hierarchy of financing sources in the economic literature is concerned, cash flow is the cheapest financing source, followed by debt and, in the end, by the issue of new shares. Debt can be cheaper than the issue of new shares because the loan contract can be designed so as to minimize the consequences of information problems. Given that the degree of information asymmetry and the agency costs depend on the peculiarities of every firm, some firms are more sensitive to financial factors than others. The debt limit of a firm is determined as follows: since interest payment is tax deductible, the firm prefers debt financing to equity and would rather have an infinite amount of debt. However, this leads to negative equity value in some states, so that the firm would rather go bankrupt than pay its debt. Therefore, for debt to remain risk-free, lenders will limit its amount. They can limit the debt by accepting the resale value of capital as collateral and ensuring that this value is not lower than the amount of debt, so that they can recover their money in case of bankruptcy. Alternatively, lenders may limit the amount of debt in order to ensure that the market value of equity is always non-negative and bankruptcy is sub-optimal for the firm.
While there is by now a rapidly expanding literature on the presence of finance constraints on the investment decisions of firms in developed countries, limited empirical research has been forthcoming in the context of developing countries, for two main reasons. First, until recently the corporate sector in emerging markets encountered several constraints in accessing equity and debt markets. As a consequence, any research on the interface between the capital structure of firms and finance constraints would have been largely constraint-driven and hence less illuminating. Second, several emerging economies, even until the late 1980s, suffered from financial repression, with negative real rates of interest as well as high levels of statutory pre-emption. This could have meant a restricted play of market forces in resource allocation.
Issues regarding the interaction between financing constraints and corporate finance have, however, gained prominence in recent years, especially in the context of the fast-changing institutional framework in these countries. Several emerging economies have introduced market-oriented reforms in the financial sector. More importantly, the institutional set-up within which corporate houses operated in the regulated era has undergone substantial transformation since the 1990s. The move towards market-driven allocation of resources, coupled with the widening and deepening of financial markets, has provided greater scope for corporate houses to determine their capital structure.
The rest of the paper unfolds as follows. Section II discusses the historical background of the study. Section III explains the methodology, the data and the variable descriptions employed in the paper. Section IV presents the results and discusses robustness checks, followed by concluding remarks in the final section.
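The fixed-effect estimation mentioned in the abstract amounts to removing each firm's own mean from every variable (the "within" transformation) before running OLS, so that firm-specific effects cannot bias the leverage coefficient. A minimal sketch of this on synthetic data follows; the variable names, panel dimensions and the true slope of -0.5 are all illustrative assumptions of ours, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic panel: 3 firms observed over 10 years. The true within-firm
# effect of leverage on investment is -0.5; each firm also has its own
# fixed effect that would bias a naive pooled regression.
n_firms, n_years = 3, 10
firm = np.repeat(np.arange(n_firms), n_years)
leverage = rng.normal(size=n_firms * n_years)
fixed_effect = np.array([1.0, 5.0, -2.0])[firm]
investment = fixed_effect - 0.5 * leverage + 0.05 * rng.normal(size=firm.size)

def within_transform(x, groups):
    """Subtract each group's mean: the fixed-effects 'within' transform."""
    out = x.astype(float).copy()
    for g in np.unique(groups):
        out[groups == g] -= x[groups == g].mean()
    return out

y = within_transform(investment, firm)
x = within_transform(leverage, firm)
slope = (x @ y) / (x @ x)          # univariate OLS on the demeaned data
print(round(slope, 2))             # recovers approximately -0.5
```

In practice one would use a panel-econometrics package with multiple regressors and proper standard errors; the demeaning step shown here is the core of what the fixed-effect model does.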

Several authors have studied the impact of financial leverage on investment. They reached conflicting conclusions using various approaches. When we talk about investment, it is important to differentiate between over-investment and under-investment. Modigliani and Miller (1958) argued that the investment policy of a firm should be based only on those factors that would increase the profitability, cash flow or net worth of the firm. A large empirical literature has challenged the leverage irrelevance theorem of Modigliani and Miller. Their irrelevance proposition is valid only if the perfect-market assumptions underlying their analysis are satisfied. However, the corporate world is characterized by various market imperfections: transaction costs, institutional restrictions and asymmetric information. The interaction between management, shareholders and debt holders generates frictions due to agency problems, which may result in under-investment or over-investment incentives. As stated earlier, one of the main issues in corporate finance is whether financial leverage has any effect on investment policies.
According to Myers (1977), a high leverage overhang reduces the incentive of the shareholder-management coalition in control of the firm to invest in positive net present value investment opportunities, since the benefits accrue to the bondholders rather than the shareholders. Thus, highly levered firms are less likely to exploit valuable growth opportunities than firms with low levels of leverage. A related under-investment theory centers on a liquidity effect, in that firms with large debt commitments invest less, no matter what their growth opportunities. Theoretically, even if leverage creates potential under-investment incentives, the effect can be reduced by corrective measures on the firm's part. Ultimately, leverage is lowered if future growth opportunities are recognized sufficiently early.
Another problem which has received much attention is the over-investment theory. Over-investment can be explained as investment expenditure beyond that required to maintain assets in place and to finance expected new investment in positive-NPV projects, where there is a conflict between managers and shareholders. Managers perceive opportunities to expand the business even if this means undertaking poor projects and reducing shareholder welfare. The managers' ability to carry out such a policy is restrained by the availability of cash flow, and further tightened by debt financing. Hence, leverage is one mechanism for overcoming the over-investment problem, suggesting a negative relationship between debt and investment for firms with low growth opportunities. Does debt financing induce firms to over-invest or under-invest? The issuance of debt commits a firm to pay cash as interest and principal, and managers are forced to service such commitments. Too much debt is also not considered good, as it may lead to financial distress and agency problems.

Hite (1977) demonstrates a positive relationship, because at a given level of financial leverage an investment increase would lower financial risk and hence the cost of bond financing. In contrast, DeAngelo and Masulis (1980) claim a negative relationship, since the tax benefit of debt would compete with the tax benefit of capital investment. Dotan and Ravid (1988) also show a negative relationship, because an investment increase would raise financial risk and hence the cost of bond financing. How an investment increase affects financial risk, and the substitutability between tax shields and financial leverage, may depend on firm-specific factors.


Others / New Formula of Nuclear Force
« on: April 23, 2011, 05:38:24 pm »
Author : Md.  Kamal Uddin
International Journal of Scientific & Engineering Research, IJSER - Volume 2, Issue 4, April-2011
ISSN 2229-5518
Download Full Paper -

Abstract - It is well established that the forces between nucleons are transmitted by mesons. The quantitative explanation of nuclear forces in terms of meson theory was extremely tentative and incomplete, but this theory supplies a valuable point of view. It is fairly certain now that the nucleons within nuclear matter are in a state made rather different from their free condition by the proximity of other nucleons. Charge independence of nuclear forces demands the existence of a neutral meson for interactions between the same type of nucleons, (P-P) or (N-N). This force demands the same spin and orbital angular momentum. The exchange interaction is produced only by a neutral meson. Mesons without electric charge give exchange forces between proton and neutron and therefore maintain the charge-independence character. It is evident from the nature of the products that neutral mesons decay by the electromagnetic interaction. It means that neutral meson constituents are responsible for the electromagnetic interaction. Remarkably, neutral mesons play an important role in both the electromagnetic and the nuclear force.

Index Terms - Rest mass energy, mesons, photons, protons, neutrons, velocity of light, differentiation.

IT is well established that the forces between nucleons are transmitted by mesons. The quantitative explanation of nuclear forces in terms of meson theory was extremely tentative and incomplete, but this theory supplies a valuable point of view. Yukawa first pointed out that nuclear force can be explained by assuming that particles of mass about 200 times the electron mass (mesons) exist and can be emitted and absorbed by nuclear particles (neutrons and protons). With such an assumption, a force between nuclear particles of the right range and the right shape (rapid decrease at large distances) is obtained.
Now we have the rest mass energy E = m0c2.
Differentiating with respect to r (the inner radius at which the nuclear force comes into play):

dE/dr = d(m0c2)/dr = c2 (dm0/dr) + m0 (d(c2)/dr) = c2 (dm0/dr) + 2m0c (dc/dr)
This force is short-range, attractive and along the line joining the two particles (a central force). (The wide success of this first application of quantum mechanics to nuclear phenomena gives us confidence in the general use of quantum mechanics for the description of the force between heavy particles in nuclei.)

where dm0c2 = either the rest mass energy of the π0 meson (for the neutral theory) or the rest mass energy of the π+, π- and π0 mesons (for the symmetrical theory)
dm0 = either the mass of the π0 meson or the mass of the π+, π- and π0 mesons
m0 = mass of the nucleon
m0cdc = rest mass energy of the nucleon
dr = range of the nuclear force, which can be calculated by differentiating the nuclear radius. (The force between two nucleons is attractive for distances r (radius) greater than dr (range) and is repulsive otherwise.) This strongly suggests, and it is well proved, that to some degree of approximation the total isotopic spin T is a constant of the motion and is conserved in all processes, at least with high probability.

dc = the average velocity of the neutron and proton. A large velocity is used in nuclear disintegration.
c = velocity of light
2 = multiplicity of the interacting particles, given by (2T+1); the isotopic spin has no such meaning for leptons or gamma rays
1 = multiplicity of either the π0 meson or the π+, π- and π0 mesons (evidence of the involvement of mesons of all types)

where T = vector sum of the isotopic spins of proton and proton, neutron and neutron, or neutron and proton. The success of these applications supplies additional support for the hypothesis of the charge independence and charge symmetry of the nuclear force. The nuclear interactions do not extend to very large distances beyond the nuclear radius, and this character is useful for solving the problem. Full charge independence holds for any system in which the number of neutrons equals the number of protons. Charge symmetry merely means that the neutron-neutron and proton-proton interactions are equal, but says nothing about the relation of the neutron-proton interaction to the others. Nuclear forces are symmetrical in neutrons and protons, i.e. the forces between two protons are the same as those between two neutrons. This identity refers to the magnitude as well as the spin dependence of the forces.
Now consider the following reactions (written so that charge is conserved):
P + P → P + P + π0
P + N → N + N + π+
N + P → P + P + π-
Such reactions are also seen in photoproduction, as γ + P → P + π0.
The capture of photons can effect the production of mesons through the electromagnetic interaction; such mesons can decay electromagnetically, since these processes involve no change of strangeness.
P + P → P + P + π0
P + N → P + N + π0

It is found that only two assumptions are in agreement with the theoretical and experimental facts, notably the equality of the forces between two like and two unlike nuclear particles in the singlet state. These assumptions are either (1) that nuclear particles interact only with neutral mesons (neutral theory), or (2) that they interact equally strongly with neutral, positive and negative mesons (symmetrical theory). It is obvious that the part of the force which does not depend on the spin of the nuclear particles does not fulfill any useful function in the theory. The force between proton and neutron results from the transfer of a positive meson from the former to the latter, or of a negative meson in the opposite direction, so the vector sum of the components of the isotopic spin of these particles must be zero. The charges on charged mesons must be equal in magnitude.
Charge independence of nuclear forces demands the existence of the π0 meson for interactions between the same type of nucleons, (P-P) or (N-N). This force demands the same spin and orbital angular momentum. Positive pions are not able to surmount the nuclear Coulomb barrier and therefore undergo spontaneous decay, while negative pions are captured by nuclei. The exchange of a pion is thus equivalent to charge exchange; we can think of the nucleons as exchanging their space and spin coordinates. In the neutral theory, therefore, neutrons and protons are completely equivalent and indistinguishable as far as the associated meson fields are concerned. Such a particle decays into two gamma rays. These gamma rays, in the π0 rest system, are emitted in opposite directions, and therefore the spin of the π0 must be zero, as the spin of the photon is unity. It is evident from the nature of the products that neutral mesons decay by the electromagnetic interaction, while charged pions decay by both the strong and weak interactions. It means that neutral meson constituents are responsible for the electromagnetic interaction. We know that neutron and proton can change into one another by meson capture. Protons and neutrons can transform into each other by capture of positive and negative pions respectively, or transform into the same particle through neutral meson interaction. During these transformations, either an emission or an absorption of a meson is essential. The attraction between any nucleons can arise from the transfer of a neutral meson from one nucleon to the other. If the meson were assumed to be charged (positive or negative), the resulting force between nuclear particles would turn out to be of the exchange type, which had been successful in interpretation in nuclear physics.
The mesons must obey Bose statistics, because they are emitted in the transformation of a neutron into a proton (or vice versa), both of which obey Fermi statistics.


Author : Sumita Mishra, Prabhat Mishra, Naresh K Chaudhary, Pallavi Asthana
International Journal of Scientific & Engineering Research, IJSER - Volume 2, Issue 4, April-2011
ISSN 2229-5518
Download Full Paper -

Abstract— This article describes a comprehensive system for surveillance and monitoring applications. The development of an efficient real-time video motion detection system is motivated by its potential for deployment in areas where security is the main concern. The paper presents a platform for real-time video motion detection and subsequent generation of an alarm condition as set by the parameters of the control system. The prototype consists of a mobile platform mounted with an RF camera which provides continuous feedback of the environment. The received visual information is then analyzed by the user for appropriate control action, thus enabling the user to operate the system from a remote location. The system is also equipped with the ability to process the image of an object and generate control signals which are automatically transmitted to the mobile platform to track the object.

Index Terms— Graphic User Interface, object tracking, Monitoring, Spying, Surveillance, video motion detection.

Video Motion Detection Security Systems (VMDSs) have been available for many years. Motion detection is a feature that allows the camera to detect any movement in front of it and transmit the image of the detected motion to the user. VMDSs are based on the ability to respond to temporal and/or spatial variations in contrast caused by movement in a video image. Several techniques for motion detection have been proposed; among them, the three widely used approaches are background subtraction, optical flow and temporal differencing. Background subtraction is the most commonly used approach in present systems. The principle of this method is to use a model of the background and compare the current image with a reference; in this way the foreground objects present in the scene are detected. Optical flow is an approximation of the local image motion and specifies how much each image pixel moves between adjacent images. It can detect motion successfully in the presence of camera motion or background change. According to the smoothness constraint, the corresponding points in two successive frames should not move more than a few pixels; for an uncertain environment, this means that the camera motion or background change should be relatively small. Temporal differencing, based on frame difference, attempts to detect moving regions by making use of the difference of consecutive frames (two or three) in a video sequence.

This method is highly adaptive to dynamic environments and hence is suitable for the present application with certain modifications. Advanced surveillance systems are presently available in the market at a very high cost. This paper aims at a low-cost, efficient security system with user-friendly functional features which can also be controlled from a remote location. In addition, the system can be used to track an object of a predefined color, rendering it useful for spying purposes.
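The temporal-differencing approach described above can be sketched in a few lines: subtract consecutive grayscale frames and raise an alarm when enough pixels change. The paper's implementation is in MATLAB; the following is an independent illustration of ours in Python, with the threshold values chosen arbitrarily:

```python
import numpy as np

def motion_detected(frames, diff_threshold=25, pixel_fraction=0.01):
    """Temporal differencing: flag motion when the difference between
    consecutive grayscale frames exceeds a threshold in enough pixels.

    frames         : list of 2-D uint8 arrays (consecutive video frames)
    diff_threshold : per-pixel intensity change counted as motion
    pixel_fraction : fraction of changed pixels that raises the alarm
    """
    for prev, cur in zip(frames, frames[1:]):
        diff = np.abs(cur.astype(int) - prev.astype(int))
        changed = (diff > diff_threshold).mean()   # fraction of moving pixels
        if changed > pixel_fraction:
            return True
    return False

# Static scene: no frame-to-frame change, so no motion.
still = np.zeros((120, 160), dtype=np.uint8)
# Same scene with a bright "object" appearing in one corner.
moved = still.copy()
moved[10:40, 10:40] = 200

print(motion_detected([still, still, still, still]))  # False
print(motion_detected([still, still, moved, moved]))  # True
```

The `pixel_fraction` threshold is what makes the method tolerant of sensor noise: isolated flickering pixels never accumulate to an alarm.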

The proposed system comprises two sections. The transmitter section consists of a computer, an RS232 interface, a microcontroller, an RF transmitter and an RF video receiver. The receiver section consists of a mobile platform, an RF receiver, a microcontroller, an RF camera, a motor driver and IR LEDs. The computer at the transmitter section, which receives the visual information from the camera mounted on the mobile platform, works as the control centre. Another function of the control centre is to act as the web server that enables access to the system from a remote location over the internet. The control centre is also responsible for transmitting the necessary control signals to the mobile platform.

The system can operate in four independent modes.

3.1 PC Controlled Mode
In this mode the mobile platform is directly controlled by the control centre using a GUI program developed in Microsoft Visual Studio 6.0 (Visual Basic programming language). The user can control the mobile platform after analyzing the received video.

3.2 Internet Controlled Mode

This mode is an extension of the PC Controlled mode in which client-server architecture is incorporated. It enables an authorized client computer to control the mobile platform from a remote location via the internet. The client logs onto the control centre, which provides all the control tools for maneuvering the mobile platform. Instant images of the environment, transmitted from the camera mounted on the mobile platform, are used to generate appropriate control signals.

3.3 Tracing Mode
In this mode the system is made to follow an object whose color information has been stored at the control centre in a program developed in MATLAB. The program performs image processing on the object and generates the control signals needed to make the mobile platform trace the object.

3.4 Motion Detection Mode
In this mode the platform is made to focus on a particular object whose security is our concern. The mobile platform transmits the visual information of the object to the control centre for analysis. A program developed in MATLAB at the control centre is then used to analyze four consecutive frames, and based on this analysis a security alarm is raised if required.

4.1 Program for mode 1

This program has been developed in Microsoft Visual Studio 6.0 (Visual Basic). It consists of 12 buttons, 2 checkboxes, 1 video box, and 1 picture box. These 14 controls are configured as:
•   7 buttons to control the direction of the platform.
•   2 checkboxes to control the lights and night vision respectively.
•   2 buttons for camera control (start, stop).
•   2 buttons for capturing video.
•   1 button for capturing a snapshot.
The video box displays the video received from the camera mounted on the mobile platform using the VideoCapPro ActiveX control, and the picture box displays the snapshot taken when the snapshot button is pressed. The program transmits the control signals via the serial port using the MSComm (Microsoft Communications Control) component.
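The byte-level protocol between the control centre and the microcontroller is not given in the paper. A hypothetical sketch of such a command encoding (in Python; every byte value and name below is an assumption, not the authors' protocol) might look like:

```python
# Hypothetical command encoding for the serial link between the control
# centre GUI and the mobile platform. The actual byte values, names and
# baud rate are not specified in the paper; these are assumptions.

COMMANDS = {
    "forward": b"F",
    "reverse": b"B",
    "left": b"L",
    "right": b"R",
    "stop": b"S",
    "lights_on": b"l1",
    "lights_off": b"l0",
    "night_vision_on": b"n1",
    "night_vision_off": b"n0",
}

def encode_command(name):
    """Translate a GUI action into the bytes written to the serial port."""
    try:
        return COMMANDS[name]
    except KeyError:
        raise ValueError("unknown command: %s" % name)

# With a serial library such as pyserial (not used here), the bytes
# would be sent roughly as:
#   port = serial.Serial("COM1", 9600)
#   port.write(encode_command("forward"))
```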


Networking / Handoff Analysis for UMTS Environment
« on: April 23, 2011, 05:28:48 pm »
Author : Pankaj Rakheja, Dilpreet Kaur, Amanpreet Kaur
International Journal of Scientific & Engineering Research, IJSER - Volume 2, Issue 4, April-2011
ISSN 2229-5518
Download Full Paper -

Abstract— UMTS is one of the third-generation mobile telecommunication technologies. It supports various multimedia applications and services at an enhanced data rate with better security. It also supports mobile users; for that there is a process called handover, in which new channels are assigned to a user moving from the region covered by one node to the region covered by another. In this paper we analyse the effect of handover on the performance of the system.

Index Terms— DPCH, Handover, UTRA. 

Universal Mobile Telecommunications System (UMTS) [1-2] is a third-generation broadband technology that supports packet-based transmission of text, digitized voice and video, with multimedia data rates of up to 2 megabits per second (Mbps). It offers a consistent set of services to mobile computer and phone users no matter where they are located in the world. It is based on the Global System for Mobile Communications (GSM) standard, i.e. it is overlaid on GSM, and it is endorsed by major standards bodies and manufacturers as the planned standard for mobile users around the world. It can ensure a better Grade of Service and Quality of Service on roaming to both mobile and computer users. Users have access through a combination of terrestrial wireless and satellite transmissions.

Cellular telephone systems used previously [3] were mainly circuit-switched, meaning connections were always dependent on the availability of circuits. A packet-switched connection uses the Internet Protocol (IP) [4-5], which uses the concept of a virtual circuit: a virtual connection is always available to connect an endpoint to the other endpoint in the network. UMTS has made it possible to provide new services such as alternative billing methods or calling plans; for instance, users can now choose pay-per-bit, pay-per-session, flat-rate, or asymmetric-bandwidth options. The higher bandwidth of UMTS also enabled other new services such as video conferencing. It may allow the Virtual Home Environment to fully develop, where a roaming user can have the same services whether at home, in the office or in the field, through a combination of transparent terrestrial and satellite connections.

The term handover [6] is also known as handoff. Whenever a user terminal moves into an area covered by a different RNC while a conversation is still going on, new channels are allocated to the user terminal, which is now under a different control node or MSC. This is carried out to ensure continuity of communication and to avoid call dropping. For this to take place, the system needs to identify the user terminal, monitor its signal strength, set a threshold value below which a call or application drops, and allocate new channels before the signal falls to this level.

There is a handoff margin which needs to be optimized for proper synchronization. It is the difference between the signal strength at which handover should occur and the minimum required signal strength. If it is too small, there will be insufficient time to complete the process; if it is too large, unnecessary handovers will occur. Most importantly, handovers should not be visible to users.
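The margin logic described above can be sketched as a simple decision rule. The dBm thresholds and the function name below are illustrative assumptions, not values from the paper:

```python
# Sketch of the handoff-margin decision described above. Signal levels
# are in dBm; the numeric thresholds are illustrative, not from the paper.

MIN_USABLE_DBM = -100.0   # below this level the call drops

def should_handover(serving_dbm, candidate_dbm, margin_db=6.0):
    """Hand over when the serving signal has fallen to within margin_db
    of the minimum usable level AND a stronger candidate cell exists.
    A small margin_db risks dropping the call before handover completes;
    a large margin_db triggers unnecessary handovers."""
    near_drop = serving_dbm <= MIN_USABLE_DBM + margin_db
    better_cell = candidate_dbm > serving_dbm
    return near_drop and better_cell
```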

Handover types

Handovers can be broadly classified into two types: intersystem and intercell. In an intersystem handover, the mobile or user terminal moves from one cellular system to another; in an intercell handover, the user terminal moves from one cell to another. Handover is further classified into soft and hard handover.

Soft handover

Here we follow a make-before-break approach: the user terminal is allocated new channels first, and the previous channels are then withdrawn. The chances of losing continuity are very low, but the user terminal must be capable of tuning to two different frequencies, which considerably increases complexity at the user end. It is quite a reliable technique, but channel capacity is reduced.

Hard Handover

Here we follow a break-before-make approach: the previously allocated channels are first withdrawn from the user terminal, and new channels are then allocated. The chances of call termination are higher than in soft handover. Complexity at the user terminal is lower, as it need not be capable of tuning to two different frequencies. Hard handover has an advantage over soft handover in terms of channel capacity, but it is not as reliable.

Prioritizing handoffs

Handoff requests are more important than new call or application requests, because dropping a call in progress is more annoying to the user than being unable to make a new call. A guard channel is therefore reserved especially for handoffs. Requests are also queued for proper flow and order control.
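A minimal sketch of guard-channel admission with a handoff queue, as described above. The channel counts, class name and queue capacity are illustrative assumptions:

```python
# Sketch of guard-channel admission control with a queue for handoffs.
# All numeric parameters are illustrative, not values from the paper.

from collections import deque

class Cell:
    def __init__(self, channels=10, guard=2, queue_size=5):
        self.channels = channels
        self.guard = guard          # channels reserved for handoffs only
        self.in_use = 0
        # When the queue overflows, the oldest queued handoff is dropped.
        self.handoff_queue = deque(maxlen=queue_size)

    def admit_new_call(self):
        # New calls may not use the guard channels.
        if self.in_use < self.channels - self.guard:
            self.in_use += 1
            return True
        return False

    def admit_handoff(self, call_id):
        # Handoffs may use any free channel; otherwise they are queued.
        if self.in_use < self.channels:
            self.in_use += 1
            return True
        self.handoff_queue.append(call_id)
        return False

    def release(self):
        if self.in_use:
            self.in_use -= 1
        # Serve a queued handoff as soon as a channel frees up.
        if self.handoff_queue and self.in_use < self.channels:
            self.handoff_queue.popleft()
            self.in_use += 1
```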

The most obvious reason for performing a handover is that, due to its movement, a user can be served more efficiently in another cell (less power emission, less interference, etc.). A handover may, however, also be performed for other reasons, such as system load control.

Classification of cells

Active Set: It is defined as the set of Node-Bs the UE is simultaneously connected to (i.e., the UTRA cells currently assigning a downlink DPCH to the UE constitute the active set).

Monitored Set: It is defined as the set of nodes not in the active set but included in CELL_INFO_LIST.

Detected Set: It is defined as the set of nodes neither in the active set nor in CELL_INFO_LIST, but which are detected by the UE.
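The three-way classification above can be sketched as a simple set computation; the cell identifiers below are illustrative, and in practice the sets come from RRC measurement control messages:

```python
# Sketch of the active / monitored / detected cell classification
# described above. Cell identifiers are illustrative assumptions.

def classify_cells(active_set, cell_info_list, detected_ids):
    """Split every cell the UE can see into the three sets."""
    active = set(active_set)
    monitored = set(cell_info_list) - active
    detected = set(detected_ids) - active - set(cell_info_list)
    return active, monitored, detected

active, monitored, detected = classify_cells(
    active_set={"A", "B"},
    cell_info_list={"A", "B", "C", "D"},
    detected_ids={"A", "C", "E"},
)
```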

In UMTS environment the different types of air interface measurements are:

Intra-frequency measurements: Those measurements which are carried out on downlink physical channels at the same frequency as that of the active set. The measurement object here corresponds to one cell.

Inter-frequency measurements: Those measurements which are carried out on downlink physical channels at frequencies that differ from the frequency of the active set. The measurement object here corresponds to one cell.

Inter-RAT measurements: Those measurements which are carried out on downlink physical channels belonging to another radio access technology than UTRAN, e.g. GSM. The measurement object here corresponds to one cell.

Traffic volume measurements: Those measurements which are carried out on uplink channels to analyse the volume of traffic on them. The measurement object here corresponds to one cell.

Quality measurements: These measurements are carried out on downlink channels to obtain the various quality parameters, e.g. downlink transport block error rate. The measurement object here corresponds to one transport channel in case of BLER. A measurement object corresponds to one timeslot in case of SIR (TDD only).

UE-internal measurements: Measurements of UE transmission power and UE received signal level.

UE positioning measurements: Measurements of UE position.
The UE supports a number of measurements running in parallel. The UE also supports that each measurement is controlled and reported independently of every other measurement.


Performance and Emission Characteristics of Stationary CI Engine with Cardnol Bio Fuel Blends
Author : Mallikappa, Rana Pratap Reddy, Ch.S.N.Muthy
International Journal of Scientific & Engineering Research, IJSER - Volume 2, Issue 4, April-2011
ISSN 2229-5518
Download Full Paper -

Abstract— The compression ignition engine is the most popularly used prime mover. The compression ignition (CI) engine moves a large portion of the world's goods and generates electricity more economically than any other device in its size range [1]. Almost all CI engines use diesel as fuel, but diesel is one of the largest contributors to environmental pollution problems. The application of biodiesel as a substitute for conventional petroleum fuel in diesel engines has gained ever-increasing demand worldwide, because it is produced from renewable resources, is biodegradable, has the potential to reduce exhaust emissions, and its use generates rural employment opportunities through the cultivation of oil-producing crops [1-5]. In this research work the performance and emission characteristics of a four-stroke single-cylinder engine were studied in detail under variable loads, with cardanol biofuel volumetric blends of 0, 10, 15, 20, and 25%. The results indicate that brake power increases (by approximately 76%) as load increases, brake specific energy conversion decreases (by approximately 30-40%) with increase in load, brake thermal efficiency increases at higher loads, and emission levels (HC, CO, NOx) were nominal up to 20% blends.

Keywords: compression ignition, characteristics, cardanol biofuel, performance, emissions

1. Introduction
In today's world the majority of automotive and transportation vehicles are powered by compression ignition engines. The compression ignition engine moves a large portion of the world's goods and generates electricity more economically than any other device in its size range. Almost all CI engines use diesel as fuel, but diesel is one of the largest contributors to environmental pollution problems. Biofuel is an alternative to petroleum-based fuel: a renewable, biodegradable and non-toxic energy source that is beneficial for reservoirs, lakes, marine life and other environmentally sensitive places such as large cities and mines, and whose use in diesel engines generates rural employment opportunities through the cultivation of oil-producing crops [1-5].

The issue of energy security has led governments and researchers to look for renewable and environment-friendly alternative fuels, and biofuel has been one of the most promising and economically viable alternatives. The fuel and energy crisis and society's concern over the depletion of the world's non-renewable resources have driven various sectors to look for alternative fuels. One of the most promising alternatives is vegetable oils and their derivatives. Plenty of scientific articles and research activities from around the world have been published and recorded. Oils from coconut, soy bean, sunflower, safflower, peanut, linseed and palm have been used, depending on which country grows them abundantly. It has been reported that vegetable oils can be used as fuel in diesel engines, both straight and in blends with diesel. It is evident [2] that various problems are associated with vegetable oils used as fuel in compression ignition engines, mainly caused by their high viscosity. The high viscosity is due to the molecular mass and chemical structure of vegetable oils, which leads to problems in pumping, combustion and atomization in the injector system of a diesel engine. Due to the high viscosity, vegetable oils normally promote gumming, the formation of injector deposits, and ring sticking, as well as incompatibility with conventional lubricating oils in long-term operation.

India is the largest producer, processor and exporter of cashews (Anacardium occidentale Linn.) in the world [6]. Cashew was brought to India during the 1400s by Portuguese missionaries; it conquered and took deep root in the entire coastal region of India. While the tree is native to Central and South America, it is now widely distributed throughout the tropics, particularly in many parts of Africa and Asia. In India, cashew cultivation now covers a total area of 0.70 million hectares of land, producing over 0.40 million metric tons of raw cashew nuts. The cashew (Anacardium occidentale) is a tree in the flowering plant family Anacardiaceae. The plant is native to northeastern Brazil, where it is called by its Portuguese name Caju (the fruit) or Cajueiro (the tree). It is now widely grown in tropical climates for its cashew "nuts" and cashew apples.

1.1 Specification of Cashew nut shell
The shell is about 0.3 cm thick, having a soft feathery outer skin and a thin hard inner skin. Between these skins is a honeycomb structure containing the phenolic material known as CNSL. Inside the shell is the kernel, wrapped in a thin skin known as the testa.
1.2 Composition of cashew nut
The nut consists of: kernel, 20 to 25%; kernel liquid, 20 to 25%; testa, 2%; the rest being shell. The raw material for the manufacture of CNSL is the cashew.
Properties                  Diesel   B10     B15     B20     B25     B30
Flash point (°C)            50       53      55      56      58      61
Density (kg/m³)             817      823     829     836     841     846
Viscosity at 40 °C (cSt)    2        2.5     3.1     3.5     4.2     5.5
Calorific value (kJ/kg)     40000    40130   40196   40261   40326   40392
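For illustration, the fuel-property table above can be encoded for programmatic lookup. The values are taken from the table; the linear interpolation between listed blends is an added convenience for the sketch, not a procedure from the paper, and B0 is taken to be plain diesel:

```python
# Lookup over the blend-property table above. Interpolation between
# tabulated blends is an illustrative addition, not from the paper.

BLENDS = [0, 10, 15, 20, 25, 30]          # blend percentage (B0 = diesel)
FLASH_POINT_C = [50, 53, 55, 56, 58, 61]
VISCOSITY_CST = [2.0, 2.5, 3.1, 3.5, 4.2, 5.5]
CALORIFIC_KJ_PER_KG = [40000, 40130, 40196, 40261, 40326, 40392]

def interpolate(blend_pct, xs=BLENDS, ys=FLASH_POINT_C):
    """Linearly interpolate a property between tabulated blends."""
    if not xs[0] <= blend_pct <= xs[-1]:
        raise ValueError("blend outside tabulated range")
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= blend_pct <= x1:
            return y0 + (y1 - y0) * (blend_pct - x0) / (x1 - x0)
```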
According to the invention [6], CNSL is subjected to fractional distillation at 200° to 240°C under a reduced pressure not exceeding 5 mm of mercury in the shortest possible time, which gives a distillate containing cardol and a residual tarry matter; for example, for a small quantity of oil, say 200 ml, the distillation period is about 10 to 15 minutes. A semi-commercial or commercial scale distillation of CNSL may, however, take longer. Certain operational difficulties have been found with the single-stage fractional distillation method, namely frothing of the oil, which makes the fractionation of cardol difficult, and the formation of polymerised resin. These difficulties can be overcome in a two-stage distillation if care is taken not to prolong the heating; this avoids undue formation of polymerised resins and the possible partial or complete destruction of the cardol or anacardol. When CNSL is distilled at a reduced pressure of about 2 to 2.5 mm of mercury, the distillate containing anacardol and cardol distils first at about 200°C to 240°C. This first distillate is then subjected to a second distillation under the same conditions of temperature and pressure, in which the anacardol distils over at 205°C to 210°C and the cardol at 230°C to 235°C. In practice it has been found that a preliminary decarboxylation of the oil is essential, since otherwise there will be excessive frothing, which renders the distillation procedure unproductive and uneconomical. A specific feature of this invention is that both cardol and anacardol may be obtained by a three-step process: the first step is to obtain the decarboxylated oil by heating the oil to 170°C to 175°C under a reduced pressure of 30-40 mm of mercury; the next two steps are the same as above for the production of cardol (or cardanol) and anacardol.


Electronics / Enhancement of Person Identification using Iris Pattern
« on: April 23, 2011, 05:20:54 pm »
Author : Vanaja roselin.E.Chirchi, Dr.L.M.Waghmare, E.R.Chirchi
International Journal of Scientific & Engineering Research, IJSER - Volume 2, Issue 4, April-2011
ISSN 2229-5518
Download Full Paper -

Abstract — The biometric person identification technique based on the pattern of the human iris is well suited to access control. Security systems have realized the value of biometrics for two basic purposes: to verify or identify users. In this busy world, identification should be fast and efficient. In this paper we focus on an efficient methodology for identification and verification by iris detection using the Haar wavelet, with a minimum Hamming distance classifier, even when the images have obstructions, visual noise and different levels of illumination.

Index Terms—Biometrics, Iris identification, Haar wavelet, occluded images, veriEye.

1.   Introduction
Biometrics, which refers to identifying an individual by his or her physiological or behavioral characteristics, has the capability to distinguish between an authorized user and an imposter. An advantage of biometric authentication is that it cannot be lost or forgotten, as the person has to be physically present at the point of identification [9]. Biometrics is inherently more reliable and capable than traditional knowledge-based and token-based techniques. Commonly used biometric features include speech, fingerprint, face, iris, voice, hand geometry, retinal identification, and body odor identification [10], as in Fig. 1.

Fig. 1: Examples of Biometrics ( Download Full Paper to View )

To choose the biometric best suited to a particular situation, one has to navigate through complex vendor products and keep an eye on future developments in technology and standards. Here is a comparative list of biometrics:

Facial Recognition: Facial recognition records the spatial geometry of distinguishing features of the face. Different vendors use different methods of facial recognition, but all focus on measures of key features of the face. Facial recognition has been used in projects to identify card counters and other undesirables in casinos, shoplifters in stores, and criminals and terrorists in urban areas. This biometric can easily be spoofed by criminals or malicious intruders to fool the recognition system; the iris cannot be spoofed so easily.

Palm Print: Palm print verification is a slightly modified form of fingerprint technology. Palm print scanning uses an optical reader very similar to that used for fingerprint scanning; however, its size is much bigger, which is a limiting factor for use in workstations or mobile devices.
Signature Verification: This is an automated method of examining an individual's signature. The technology measures dynamics such as the speed, direction and pressure of writing, and the time the stylus is in and out of contact with the "paper". Signature verification templates are typically 50 to 300 bytes. Disadvantages include problems with long-term reliability, lack of accuracy and cost.

Fingerprint: A fingerprint recognition system (Fig. 1) consists of a fingerprint acquisition device, a minutia extractor and a minutia matcher. It is the most common biometric, used in banking, the military, etc., but its major limitation is that it can be spoofed easily. Other limitations are caused by particular usage factors such as wearing gloves, using cleaning fluids, and general user difficulty in scanning.

Iris Scan: The iris, as shown in Fig. 2, is a biometric feature found to be more reliable and accurate for authentication than the other biometric features available today. Even the iris patterns of a person's left and right eyes are different, as are the iris patterns of identical twins. Iris templates are typically around 256 bytes. Iris scanning can be used quickly for both identification and verification applications because of its large number of degrees of freedom. The iris is like a diaphragm between the pupil and the sclera, and its function is to control the amount of light entering through the pupil. It is composed of elastic connective tissue such as the trabecular meshwork. The agglomeration of pigment is formed during the first year of life, and pigmentation of the stroma occurs in the first few years [7][8].

Fig. 2:  Structure of Eye ( Download Full Paper to View )

The highly randomized appearance of the iris makes its use as a biometric well recognized. Its suitability as an exceptionally accurate biometric derives from [4]:
i.   The difficulty of forging it and using it as an imposter;
ii.   Its intrinsic isolation and protection from the external environment;
iii.   Its extremely data-rich physical structure;
iv.   Its genetic properties: no two eyes are the same. The characteristic that depends on genetics is the pigmentation of the iris, which determines its color and the gross anatomy; details of development, unique to each case, determine the detailed morphology;
v.   Its stability over time, the impossibility of surgically modifying it without unacceptable risk to vision, and its physiological response to light, which provides a natural test against artifice.
After the discovery of the iris's suitability as a biometric, John G. Daugman, a professor at Cambridge University [8],[9], suggested an image-processing algorithm that can encode the iris pattern into 256 bytes based on the Gabor transform.
In general, an iris recognition system is composed of the five steps depicted in Fig. 3; according to this flow chart, preprocessing includes image enhancement.
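As an illustration of the matching stage (not the authors' implementation), a one-level Haar step can turn a normalized iris-intensity signal into detail coefficients, whose signs form a binary code that is compared by Hamming distance. The signal values, sizes and the particular 1-D scheme below are toy assumptions:

```python
# Illustrative sketch of Haar-based iris coding and minimum-Hamming
# matching, as named in the paper. All signal values are toy data.

def haar_step(signal):
    """One level of the Haar transform: pairwise averages and differences."""
    approx = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def iris_code(signal):
    """Binarize the Haar detail coefficients into an iris code."""
    _, detail = haar_step(signal)
    return [1 if d > 0 else 0 for d in detail]

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits; a low distance suggests a match."""
    diff = sum(a != b for a, b in zip(code_a, code_b))
    return diff / len(code_a)

probe = iris_code([10, 4, 8, 9, 3, 7, 12, 5])
same = iris_code([11, 5, 8, 10, 3, 8, 13, 5])    # same iris, slight noise
other = iris_code([1, 9, 2, 11, 15, 3, 4, 14])   # a different iris
```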

2.   Image Acquisition.
An image of the eye to be analyzed must first be acquired in a digital form suitable for analysis. In further implementation we will use the Chinese Academy of Sciences Institute of Automation (CASIA) iris image database, available in the public domain [7].


Others / Application of Reliability Analysis: A Technical Survey
« on: April 23, 2011, 05:15:07 pm »
Author : Dr. Anju Khandelwal
International Journal of Scientific & Engineering Research, IJSER - Volume 2, Issue 4, April-2011
ISSN 2229-5518
Download Full Paper -

Abstract— The objective of this paper is to present a survey of recent high-quality research dealing with reliability in different fields of engineering and the physical sciences. The paper covers several important areas of reliability in which significant research efforts are being made all over the world. The survey provides insight into past, current and future trends of reliability in different fields of engineering, technology and the medical sciences, with applications to specific problems.

Index Terms— CCN, Coherent Systems, Distributed Computing Systems, Grid Computing, Nanotechnology, Network Reliability, Reliability.

Traditional system-reliability measures include reliability, availability, and interval availability. Reliability is the probability that a system operates without interruption during an interval of interest under specified conditions. Reliability can be extended to include several levels of system performance. A first performance-oriented extension of reliability is to replace a single acceptable level of operation by a set of performance levels. This approach is used for evaluating network performance and reliability. The performance level is based on metrics derived from an application-dependent performance model; for example, the performance level might be the rate of job completion, the response time, or the number of jobs completed in a given time interval. Availability is the probability that the system is in an operational state at the time of interest; it can be computed by summing the state probabilities of the operational states. Reliability is the probability that the system stays in an operational state throughout an interval. In a system without repair, reliability and availability are easily related. In a system with repair, if any repair transitions that leave failed states are deleted, making failure states absorbing states, reliability can be computed using the same methods as availability. Interval availability is the fraction of time the system spends in an operational state during an interval of interest. The mean interval availability can be computed by determining the mean time the system spends in operational states; it is a cumulative measure that depends on the cumulative amount of time spent in a state.
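These notions can be made concrete for the simplest repairable system: one up state and one down state, with constant failure rate lam and repair rate mu. The rate values below are illustrative:

```python
# Worked example of the reliability and availability measures above for
# a two-state (up/down) system. Rate values are illustrative.

import math

def steady_state_availability(lam, mu):
    """Long-run probability of being in the operational (up) state."""
    return mu / (lam + mu)

def reliability_no_repair(lam, t):
    """Probability of no failure in [0, t] when repair is ignored."""
    return math.exp(-lam * t)

# With one failure per 100 h (lam = 0.01) and one repair per 10 h
# (mu = 0.1), the system is up about 91% of the time in the long run.
availability = steady_state_availability(0.01, 0.1)
```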
For example, how can traditional reliability assessment techniques determine the dependability of a manned space vehicle designed to explore Mars, given that humanity has yet to venture that far into space? How can one determine the reliability of a nuclear weapon, given the world's test-ban treaties and international agreements? And, finally, how can one decide which artificial heart to place in a patient, given that neither has ever been inside a human before? To resolve this dilemma, reliability must be 1) reinterpreted and then 2) quantified. Using the scientific method, researchers use evidence to determine the probability of success or failure; reliability can thus be seen as a form of probability. The redefined concept of reliability incorporates auxiliary sources of data, such as expert knowledge, corporate memory, and mathematical modeling and simulation. By combining both types of data, reliability assessment is ready to enter the 21st century. Reliability is thus a quantified measure of uncertainty about a particular type of event (or events).

Reliability is a charged word, guaranteed to get attention at its mere mention. Bringing with it a host of connotations, reliability, and in particular its appraisal, faces a critical dilemma at the dawn of a new century. Traditional reliability assessment consists of various real-world assessments driven by the scientific method; i.e., conducting extensive real-world tests over long time periods (often years) enabled scientists to determine a product's reliability under a host of specific conditions. In the 21st century, humanity's technological advances walk hand in hand with myriad testing constraints, such as political and societal principles, economic and time considerations, and gaps in scientific and technological knowledge. Because of these constraints, the accuracy and efficiency of traditional methods of reliability assessment become much more questionable. Applications are an important part of research: any theory has importance if it is useful and applicable. Many researchers are currently applying concepts of reliability in various fields of engineering and the sciences. Some important applications are given here:
2.1   Nano-Technology
Nano-reliability measures the ability of a nano-scaled product to perform its intended function. At the nano scale, the physical, chemical, and biological properties of materials differ in fundamental, valuable ways from the properties of individual atoms, molecules, or bulk matter. Conventional reliability theories need to be restudied before being applied to nano-engineering. Research on nano-reliability is extremely important because nano-structured components account for a high proportion of costs and serve critical roles in newly designed products. Shuen-Lin Jeng et al. [1] introduce the concepts of reliability to nanotechnology and present work on identifying various physical failure mechanisms of nano-structured materials and devices during fabrication and operation. Modeling techniques for degradation, reliability functions and failure rates of nano-systems are also discussed in their paper.
Engineers are required to help increase reliability while maintaining effective production schedules to produce current and future electronics at the lowest possible cost. Without effective quality control, devices dependent on nanotechnology will experience high manufacturing costs, including transistors, which could disrupt the continually steady Moore's law. Nanotechnology can potentially transform civilization. Realization of this potential needs a fundamental understanding of friction at the atomic scale. Furthermore, the tribological considerations of these systems are expected to be an integral aspect of system design and will depend on the training of both existing and future scientists and engineers at the nano scale. As nanotechnology is gradually integrated into new product design, it is important to understand the mechanical and material properties, for the sake of both scientific interest and engineering usefulness. The development of nanotechnology will lead to the introduction of new products to the public. In the modern large-scale manufacturing era, reliability issues have to be studied, and the results incorporated into the design and manufacturing phases of new products. Measurement and evaluation of the reliability of nano-devices is an important subject, and new technology is being developed to support this task. As noted by Keller et al. [2], with ongoing miniaturization from MEMS towards NEMS, there is a need for new reliability concepts making use of meso-type (micro to nano) or fully nano-mechanical approaches. Experimental verification will be the major method for validating theoretical models and simulation tools; therefore, there is a need for measurement techniques capable of evaluating strain fields with very local (nano-scale) resolution.

2.2   Computer Communication Network
Network analysis is also an important approach to modeling real-world systems. System reliability and system unreliability are two related performance indices used to measure the quality level of a supply-demand system. For a binary-state network without flow, the system unreliability is the probability that the system cannot connect the source and the sink. Extending to a limited-flow network in the single-commodity case, the arc capacities are stochastic and the system capacity (i.e. the maximum flow) is not a fixed number. The system unreliability for demand (d + 1), the probability that the upper bound of the system capacity equals d, can be computed in terms of upper boundary points. An upper boundary point is a maximal system state in which the system fulfills the demand. Yi-Kuei Lin [3] discusses the multicommodity limited-flow network (MLFN), in which multiple commodities are transmitted through unreliable nodes and arcs. Here the system capacity cannot be treated as the maximal sum of the commodities, because each commodity consumes capacity differently; Lin therefore defines the system capacity as a demand vector if the system fulfils at most that demand vector. The main problem of the paper is to measure the quality level of an MLFN. For this he proposes a new performance index, the probability that the upper bound of the system capacity equals the demand vector subject to a budget constraint, to evaluate the quality level of an MLFN. A branch-and-bound algorithm based on minimal cuts is also presented to generate all upper boundary points in order to compute the performance index.

In a computer network there are several reliability problems. The probabilistic events of interest are:
* Terminal-pair connectivity
* Tree (broadcast) connectivity
* Multi-terminal connectivity

These reliability problems depend on the network topology, the distribution of resources, the operating environment, and the probabilities of failure of computing nodes and communication links. The computation of the reliability measures for these events requires the enumeration of all simple paths between the chosen set of nodes; the complexity of these problems therefore increases very rapidly with network size and topological connectivity. The reliability analysis of computer communication networks is generally based on Boolean algebra and probability theory. Raghavendra et al. [4] discuss various reliability problems of computer networks, including terminal-pair connectivity, tree connectivity, and multi-terminal connectivity. They also study dynamic computer network reliability by deriving time-dependent expressions for reliability measures, assuming Markov behavior for failures and repairs. This allows computation of task- and mission-related measures such as the mean time to first failure (MTFF) and mean time between failures (MTBF).
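Terminal-pair reliability can be computed exactly for small networks by brute-force enumeration of all edge states, which also illustrates the rapid growth in complexity noted above (the state space is 2 to the power of the number of links). The graph and probabilities below are illustrative:

```python
# Exact terminal-pair reliability by enumerating all 2^|E| edge-state
# vectors; feasible only for small networks, which illustrates the
# complexity remark above. Graph and probabilities are illustrative.

from itertools import product

def connected(up_edges, s, t):
    """Is t reachable from s over the currently-up edges?"""
    frontier, seen = [s], {s}
    while frontier:
        node = frontier.pop()
        for a, b in up_edges:
            if a == node and b not in seen:
                seen.add(b)
                frontier.append(b)
            elif b == node and a not in seen:
                seen.add(a)
                frontier.append(a)
    return t in seen

def terminal_pair_reliability(edges, s, t):
    """Sum the probability of every edge-state vector that connects s-t.
    Each edge is a tuple (node_a, node_b, probability_up)."""
    total = 0.0
    for states in product([True, False], repeat=len(edges)):
        prob = 1.0
        up = []
        for (a, b, p), is_up in zip(edges, states):
            prob *= p if is_up else 1.0 - p
            if is_up:
                up.append((a, b))
        if connected(up, s, t):
            total += prob
    return total

# Two parallel links between s and t, each up with probability 0.9:
R = terminal_pair_reliability([("s", "t", 0.9), ("s", "t", 0.9)], "s", "t")
```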


Author : Ramkumar P.B, Pramod K.V
International Journal of Scientific & Engineering Research, IJSER - Volume 2, Issue 4, April-2011
ISSN 2229-5518
Download Full Paper

Abstract— Mathematical Morphology in its original form is a set-theoretical approach to image analysis. It studies image transformations with a simple geometrical interpretation, and their algebraic decomposition and synthesis in terms of elementary set operations. Mathematical Morphology has taken concepts and tools from different branches of mathematics such as algebra (lattice theory), topology, discrete geometry, integral geometry, geometrical probability, and partial differential equations. In this paper, a generalization of morphological terms is introduced. In connection with the algebraic generalization, morphological operators can easily be defined using this structure. This can provide information about operators and other tools within the system.

Index Terms—morphological space, transform systems, slope transforms, Legendre, kernel.


Author : Prof. A.P. Thakare, Mr. Vinod H. Yadav
International Journal of Scientific & Engineering Research, IJSER - Volume 2, Issue 4, April-2011
ISSN 2229-5518
Download Full Paper

Abstract— A bus commuter waiting at a stop benefits from knowing how far away the bus is. If his route of travel is served by more than one bus-route number, it is even better for him to know which bus is nearest or will arrive earliest, so that he can choose between the bus and some other mode of commuting. This is especially useful for a physically challenged commuter, who, knowing the arrival in advance, can be ready to board the bus.
The project “Bus Proximity Indicator” is a solution for the above situation and is well suited to the B.E.S.T. (Brihanmumbai Electric Supply & Transport) undertaking. A wireless RF link between a bus and a bus stop is used to determine the bus proximity, helping the commuter know how far his bus is. The system displays the bus number, bus name, and approach time on an LCD mounted at the bus stop. The project also addresses the need for automation in bus services.

Index Terms— Amplitude Shift Keying, Atmel’s AT89C52 Microcontroller, RF encoder/ decoder IC ST12CODEC, C51 Cross Compiler, Radio frequency transmitter, Timer astable multivibrator. 

THE Bus Proximity Indicator presented in this section uses a radio frequency of 433 MHz. The prefixed code of the bus is generated by the ST12CODEC encoder. This code is transmitted after Amplitude Shift Keying. The receiver positioned at the bus stop detects the radio frequency signal, and the bus identification is done by the ST12CODEC decoder.

The block diagram and the relevant description, covering both the transmitter and receiver sections, are given below.

Fig.1. (Download Full Paper for Diagram View )

2.1 Transmitter Section
The basic block diagram for the Transmitter section is as shown in the block diagram. It consists of the following blocks:
a)   TAMV 555
b)   Encoder
c)   RF Transmitter
d)   Battery

 TAMV – Timer astable multivibrator,
 RF TX – Radio frequency transmitter

a)   TAMV 555:
The 555 timer IC is used as an astable multivibrator and as an address setter for triggering the ST12CODEC IC, which is used as an encoder.
(Figure 1: Transmitter Section of Bus Proximity Indicator)

b)   RF Encoder:
A logic circuit that produces coded binary outputs from uncoded inputs. This design uses the ST12CODEC for encoding the data. The encoder encodes the data and sends it to the RF transmitter. The ST12CODEC is a single-chip telemetry device that can act as either an encoder or a decoder. When combined with a radio transmitter/receiver, it may be used to provide an encryption standard for a data communication system. The ST12CODEC performs all the necessary data manipulation and encryption for a reliable radio link of optimum range.
The transmitter and receiver use the same ST12CODEC IC (in encoder and decoder modes, respectively) for serial communication. This IC is capable of transmitting 12 bits, comprising a 4-bit address and 8 data bits. The transmitted information is sent by RF with a 434 MHz RF transmitter. The ST12CODEC works on 5 V.
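The 12-bit frame described above (4 address bits plus 8 data bits, as stated in the paper; the actual ST12CODEC wire format may differ) can be modelled with a couple of bit operations. A minimal host-side sketch:

```python
# Pack/unpack the 12-bit frame: 4-bit address in the high bits,
# 8-bit data byte in the low bits (field layout assumed for illustration).

def pack_frame(address, data):
    """Pack a 4-bit address and an 8-bit data byte into one 12-bit word."""
    assert 0 <= address < 16 and 0 <= data < 256
    return (address << 8) | data

def unpack_frame(word):
    """Split a 12-bit word back into (address, data)."""
    return (word >> 8) & 0xF, word & 0xFF

word = pack_frame(0b1010, 0x5A)
assert unpack_frame(word) == (0b1010, 0x5A)
```

The address field lets several transmitter/receiver pairs share the band: a receiver simply discards frames whose address does not match its own.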
c)   RF Transmitter:
The RF transmitter uses ASK (Amplitude Shift Keying) to modulate the data sent by the ST12CODEC. The modulated information is then transmitted at 433 MHz through the RF antenna to the receiver. It thus transmits the data present in the encoder via the antenna at the designated frequency.
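ASK in its simplest on-off-keying form just gates the carrier with the bit stream. The sketch below is a scaled-down numerical illustration (sample and bit rates are arbitrary; the real modulation happens inside the RF module at 433 MHz):

```python
import math

# Minimal ASK (on-off keying) model: each bit switches the carrier on or off.

def ask_modulate(bits, samples_per_bit=8, cycles_per_bit=2):
    """Return carrier samples: a sine burst for a 1 bit, silence for a 0 bit."""
    samples = []
    for bit in bits:
        for n in range(samples_per_bit):
            phase = 2 * math.pi * cycles_per_bit * n / samples_per_bit
            samples.append(math.sin(phase) if bit else 0.0)
    return samples

wave = ask_modulate([1, 0, 1])
# The middle '0' bit produces a silent interval between two carrier bursts.
assert all(s == 0.0 for s in wave[8:16])
```

A non-coherent receiver (such as the heterodyne ASK receiver described later) recovers the bits by detecting the presence or absence of carrier energy in each bit interval.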

d)   Battery:
A single 9V battery is used to supply power to the transmitter section.

2.2 Receiver Section
The basic block diagram for the Receiver section is as shown below. It consists of the following blocks:
a)   RF Receiver
b)   RF Decoder
c)   Microcontroller
d)   Power supply
e)   LCD

RF RX: Radio frequency receiver
LCD: Liquid crystal display
RFDC: RF Decoder
µC: Microcontroller AT89C52

(Figure 2: Receiver Section of Bus Proximity Indicator) (Download Full Paper to View )
a)   RF Receiver:
It is an enhanced single-chip IC, the RWS 434, which receives the 433.92 MHz signal sent by the RF transmitter. It is a conventional ASK (Amplitude Shift Keying) heterodyne receiver IC for remote wireless applications.

b)   RF Decoder:
A logic circuit used to decode a coded binary word. This design uses the ST12CODEC IC to decode the data received by the RWS 434. The decoder converts the serial data sent from the RF receiver to parallel form and passes it to the microcontroller; the decoded data is ultimately presented on the LCD.

c)   Microcontroller (AT89C52):
This is the most important block of the entire system. The microcontroller works at a crystal frequency of 11.0592 MHz. It receives the parallel data from the ST12CODEC IC and compares it with the codes already stored in its program. The serial interface uses a baud rate of 9600 bits/s.
The 89C52 is a low-power, high-performance CMOS 8-bit microcomputer with 8K bytes of flash programmable and erasable read-only memory (PEROM). The device is manufactured using Atmel’s high-density nonvolatile memory technology and is compatible with the industry-standard 89C51/89C52 instruction set and pinout.
The on-chip flash allows the program memory to be reprogrammed in-system or by a conventional nonvolatile memory programmer. By combining a versatile 8-bit CPU with flash on a monolithic chip, Atmel’s AT89C52 is a powerful microcomputer that provides a highly flexible and cost-effective solution for many embedded control applications.
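The code-matching step the microcontroller performs can be modelled as a simple table lookup. The sketch below is a host-side Python model, not the actual 89C52 firmware, and the bus codes and names in the table are made-up placeholders:

```python
# Hypothetical table mapping a received data byte to (bus number, bus name),
# standing in for the codes stored in the AT89C52's program memory.
BUS_TABLE = {
    0x21: ("Bus 33",  "Andheri Stn"),
    0x5A: ("Bus 201", "Dadar TT"),
}

def handle_code(data_byte):
    """Return the LCD text for a received code, or None if unknown."""
    entry = BUS_TABLE.get(data_byte)
    if entry is None:
        return None   # unknown code: the firmware would ignore the frame
    number, name = entry
    return f"{number}: {name} approaching"

assert handle_code(0x5A) == "Bus 201: Dadar TT approaching"
assert handle_code(0xFF) is None
```

On the real hardware this lookup runs on each decoded frame, and the resulting string is written to the LCD at the bus stop.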
d)   Power Supply:
The performance of the master box depends on the proper functioning of the power supply unit. The power supply not only converts AC into DC but also provides a regulated output of 5 V at 1 A. Its essential components are a transformer, four diodes forming a bridge rectifier, a capacitor working as a filter, and a positive voltage regulator IC 7805. It provides 5 V to each block of the receiver.

e)   16 X 2 LCD:
LCD modules are useful for displaying information from a system.
These modules are of two types, text LCDs and graphical LCDs. In this project a text LCD of size 16 x 2, i.e., two lines of sixteen characters each, is used to display the various sequences of operations during the running of the project. It serves as the visual indication, displaying data coming from a keyboard or from the microcontroller.
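Whatever text the firmware produces must be fitted into the display's fixed two-row, sixteen-column grid. A small sketch of that formatting step (a host-side illustration; the example strings are hypothetical):

```python
# Fit arbitrary strings into the 16x2 character grid of the text LCD:
# each row is truncated or space-padded to exactly 16 characters.

LCD_COLS, LCD_ROWS = 16, 2

def format_lcd(line1, line2=""):
    """Return exactly LCD_ROWS strings of exactly LCD_COLS characters."""
    return [text[:LCD_COLS].ljust(LCD_COLS) for text in (line1, line2)]

rows = format_lcd("Bus 201 Dadar TT", "Arriving: 3 min")
assert all(len(r) == LCD_COLS for r in rows)
```

Fixing the row width up front keeps the display routine trivial: it can write each row character by character without worrying about overflow or stale characters from the previous message.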

