Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - IJSER Content Writer

Pages: 1 ... 20 21 [22]
316
Author : R.Divya, T.Thirumurugan
International Journal of Scientific & Engineering Research, Volume 2, Issue 5, May-2011
ISSN 2229-5518
Download Full Paper : PDF

Abstract - Numerous key management schemes have been proposed for sensor networks. The objective of key management is to dynamically establish and maintain secure channels among communicating nodes. Many schemes, referred to as static schemes, have adopted the principle of key predistribution with the underlying assumption of a relatively static, short-lived network (node replenishments are rare, and keys outlive the network). An emerging class of schemes, dynamic key management schemes, assumes long-lived networks with more frequent addition of new nodes, thus requiring network rekeying for sustained security and survivability. This paper proposes a dynamic key management scheme that combines the advantages of simple cryptography and random key distribution schemes. When the Hamming distance between two nodes is found to be high, only the unique key is changed instead of changing the set of keys, and communication takes place using any one of the set of keys XORed with the new unique key. The security and performance of the proposed algorithm are compared with an existing dynamic key management scheme based on the Exclusion Basis System, showing that the proposed scheme performs better than the existing scheme when the number of colluding nodes over time is considered. Simulation results also show that the proposed scheme provides a security solution and performs better than the existing scheme.
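For illustration, here is a minimal sketch of the rekeying idea described above, assuming the Hamming distance is taken over the nodes' key identifiers and that each node stores a set of predistributed keys plus one unique key; the threshold and helper names are hypothetical, not the paper's notation.

```python
# A minimal sketch of the rekeying idea, under the assumptions stated in the
# lead-in above. Names and the threshold value are illustrative only.
import secrets

HAMMING_THRESHOLD = 20          # illustrative threshold

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two identifiers."""
    return bin(a ^ b).count("1")

def maybe_rekey(key_id_a: int, key_id_b: int, unique_key: bytes) -> bytes:
    """Replace only the unique key when the Hamming distance is high."""
    if hamming_distance(key_id_a, key_id_b) > HAMMING_THRESHOLD:
        unique_key = secrets.token_bytes(len(unique_key))
    return unique_key

def communication_key(set_key: bytes, unique_key: bytes) -> bytes:
    """Session key: one key from the predistributed set XORed with the unique key."""
    return bytes(x ^ y for x, y in zip(set_key, unique_key))
```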

Index Terms - WSNs, dynamic key management, collusion, Hamming distance, security.
 
1.   INTRODUCTION   
The envisioned growth in utilizing sensor networks in a wide variety of sensitive applications ranging from healthcare to warfare is stimulating numerous efforts to secure these networks. Sensor networks comprise a large number of tiny sensor nodes that collect and (partially) process data from the surrounding environment. The data is then communicated, using wireless links, to aggregation and forwarding nodes (or gateways) that may further process the data and communicate it to the outside world through one or more base stations (or command nodes). Base stations are the entry points to the network where user requests begin and network responses are received. Typically, gateways and base stations are higher-end nodes. It is to be noted, however, that various sensor, gateway, and base station functions can be performed by the same or different nodes. The sensitivity of collected data makes encryption keys essential to secure sensor networks.

1.1   Key Management
The term key may refer to a simple key (e.g., a 128-bit string) or a more complex key construct (e.g., a symmetric bivariate key polynomial). A large number of keys need to be managed in order to encrypt and authenticate the sensitive data exchanged. The objective of key management is to dynamically establish and maintain secure channels among communicating parties.
   
Typically, key management schemes use administrative keys (key encryption keys) for the secure and efficient (re-)distribution and, at times, generation of the secure channel communication keys (data encryption keys) to the communicating parties. Communication keys may be pair-wise keys used to secure a communication channel between two nodes that are in direct or indirect communication, or they may be group keys shared by multiple nodes. Network keys (both administrative and communication keys) may need to be changed (re-keyed) to maintain secrecy and resilience to attacks, failures, or network topology changes. Numerous key management schemes have been proposed for sensor networks. Most existing schemes build on the seminal random key pre-distribution scheme introduced by Eschenauer and Gligor [1]. Subsequent extensions to that scheme include using deployment knowledge [2] and key polynomials [3] to enhance scalability and resilience to attacks. This set of schemes is referred to as static key management schemes since they do not update the administrative keys after network deployment.

An example of dynamic keying schemes is proposed by Jolly et al. [4], in which a key management scheme based on identity-based symmetric keying is given. This scheme requires very few keys (typically two) to be stored at each sensor node and shared with the base station as well as the cluster gateways. Rekeying involves reestablishment of clusters and redistribution of keys. Although the storage requirement is very affordable, the rekeying procedure is inefficient due to the large number of messages exchanged for key renewals. Another emerging category of schemes employs a combinatorial formulation of the group key management problem to effect efficient rekeying [5, 6]. These are examples of dynamic key management schemes. While static schemes primarily assume that administrative keys will outlive the network and emphasize pair-wise communication keys, dynamic schemes advocate rekeying to achieve resilience to attack in long-lived networks and primarily emphasize group communication keys. Since dynamic schemes offer the advantages of long-lived networks and rekeying compared to static schemes, dynamic key management is chosen as the security scheme for WSNs.

2.   KEY MANAGEMENT SCHEMES IN SENSOR NETWORKS

The success of a key management scheme is determined in part by its ability to efficiently survive attacks on highly vulnerable and resource challenged sensor networks. Key management schemes in sensor networks can be classified broadly into dynamic or static solutions based on whether rekeying (update) of administrative keys is enabled post network deployment.

2.1   Static Key Management Schemes

The static schemes assume that once administrative keys are predeployed in the nodes, they will not be changed. Administrative keys are generated prior to deployment, assigned to nodes either randomly or based on some deployment information, and then distributed to nodes. For communication key management, most static schemes use the overlapping of administrative keys to determine the eligibility of neighboring nodes to generate a direct pair-wise communication key. Communication keys are assigned to links rather than nodes. In order to establish and distribute a communication key between two non neighboring nodes and/or a group of nodes, that key is propagated one link at a time using previously established direct communication keys.

2.2   Dynamic Key Management Schemes

Dynamic key management schemes may change administrative keys periodically, on demand, or on detection of node capture. The major advantage of dynamic keying is enhanced network survivability, since any captured key is replaced in a timely manner in a process known as rekeying. Another advantage of dynamic keying is better support for network expansion: unlike static keying, which uses a fixed pool of keys, adding new nodes does not increase the probability of network capture. The major challenge in dynamic keying is to design a secure yet efficient rekeying mechanism. A proposed solution to this problem is the use of exclusion-based systems (EBSs), a combinatorial formulation of the group key management problem.

3.   SENSOR NETWORK MODEL

Both the proposed and the existing security algorithms are based on a wireless sensor network consisting of a command node and numerous sensor nodes which are grouped into clusters. The clusters of sensors can be formed based on various criteria such as capabilities, location, communication range, etc. Each cluster is controlled by a cluster head, also known as a gateway, which can broadcast messages to all sensors in the cluster. We assume that the sensor and gateway nodes are stationary and that the physical location and communication range of all nodes in the network are known. Each gateway is assumed to be reachable by all sensors in its cluster, either directly or over multiple hops. Sensors perform two main functions: sensing and relaying. The sensing component is responsible for probing the environment to track a target/event. The collected data are then relayed to the gateway. Nodes that are more than one hop away from the gateway send their data through relaying nodes. Sensors communicate only via short-haul radio communication. The gateway fuses reports from different sensors, processes the data to extract relevant information and transmits it to the command node via long-haul transmission.

Read More: Click here...

317
Author : Amrendar Kumar, Abhilasha Singh, Biplab Bhattacharjee
International Journal of Scientific & Engineering Research, Volume 2, Issue 5, May-2011
ISSN 2229-5518
Download Full Paper : PDF

Abstract— Traditionally, drugs were discovered by testing compounds manufactured in time-consuming multi-step processes against a battery of in vivo biological screens. Promising compounds were then further studied in development, where their pharmacokinetic properties, metabolism and potential toxicity were investigated. Here we present a study on herbal lead compounds and their potential binding affinity to the effector molecules of a major disease, Alzheimer's disease. Clinical studies demonstrate a positive correlation between acetylcholinesterase enzyme activity and Alzheimer's disease. Therefore, identification of effective, well-tolerated acetylcholinesterase inhibitors represents a rational chemopreventive strategy. This study has investigated the effects of the naturally occurring nonprotein compounds polygala and Jatrorrhizine, which inhibit the acetylcholinesterase enzyme. The results reveal that these compounds use less energy to bind to the acetylcholinesterase enzyme and inhibit its activity. Their high ligand binding affinity to the acetylcholinesterase enzyme introduces the prospect of their use in chemopreventive applications; in addition, they are freely available natural compounds that can be safely used to prevent Alzheimer's Disease.
 
Index Terms— Alzheimer's Disease, Acetylcholinesterase, Binding Affinity, Jatrorrhizine, Clinical Studies, Docking, Rational, Toxicity

1   INTRODUCTION
Alzheimer's disease, the most common form of dementia, is an incurable, degenerative, and terminal disease mostly diagnosed in people over 65 years of age. The disease advances with symptoms including confusion, irritability and aggression, mood swings, language breakdown, long-term memory loss, and the general withdrawal of the sufferer as their senses decline. Gradually, bodily functions are lost, ultimately leading to death. In advanced stages of the disease, all memory and mental functioning may be lost. The condition predominantly affects the cerebral cortex and hippocampus, which lose mass and shrink (atrophy) as the disease advances. These changes, occurring in the association area of the cerebral cortex, the hippocampus and the middle and temporal lobes, are accompanied by decreased concentrations of the neurotransmitter acetylcholine. Acetylcholinesterase is also known as AChE. An acetylcholinesterase inhibitor or anti-cholinesterase is a chemical that inhibits the cholinesterase enzyme from breaking down acetylcholine, increasing both the level and duration of action of the neurotransmitter acetylcholine.

2 METHODOLOGY
Several small molecules responsible for inhibiting biological processes in AD were taken as targeting agents. The investigational drug Galantamine was used as a reference drug in the studies.
2W9I was taken as the target protein, and its structure was obtained from the PDB.
Initial screening was then done using Lipinski's rule of five (sketched below). The energies of the accepted compounds that showed better interaction with the target protein were minimized using MarvinSketch.
The selected conformations were then saved in three formats:
1.   SDF format
2.   PDB format
3.   MOL format
Docking between the protein and the small molecules was then performed with QUANTUM. The best three results obtained were analysed under HEX and ARGUS. IC50 values were taken from QUANTUM, and graphs were then plotted from these values.
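For reference, a minimal sketch of the Lipinski rule-of-five screen used in the initial screening step; the property values passed in are illustrative placeholders, not values computed for the compounds studied here.

```python
# A minimal sketch of a Lipinski rule-of-five screen. In practice the property
# values would come from a cheminformatics toolkit; here they are passed in
# directly as an illustration.
def passes_lipinski(mol_weight, logp, h_bond_donors, h_bond_acceptors):
    """Return True if the compound violates at most one of Lipinski's rules."""
    violations = sum([
        mol_weight > 500,        # molecular weight <= 500 Da
        logp > 5,                # octanol-water partition coefficient <= 5
        h_bond_donors > 5,       # <= 5 hydrogen-bond donors
        h_bond_acceptors > 10,   # <= 10 hydrogen-bond acceptors
    ])
    return violations <= 1

print(passes_lipinski(mol_weight=338.4, logp=2.1, h_bond_donors=1, h_bond_acceptors=5))
```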

After this, ADME-Tox analysis was performed. In this analysis, the molecules showing interaction with 2W9I in both the Argus and Quantum analyses were assessed under the ADME test for toxicity prediction.
The AMES test was considered for initial screening of the molecules based on their ability to induce mutation.
The AMES test is used to determine whether a chemical is a mutagen. Molecules showing the ability to induce mutation were rejected in the toxicity-based screening.

Using the ADME-Tox analysis, it was found that Jatrorrhizine showed lower AMES test values than the reference molecules. Further, the health effects of these molecules on the blood, cardiovascular system, gastrointestinal system, kidney, liver and lungs were predicted. LD50 values were also predicted for selecting a reliable molecule for ADME analysis.
After the ADME analysis, a graph was plotted of the ADME-Tox values.

After all the analyses, one compound, Jatrorrhizine, emerged as the best molecule.
This molecule is considered a better ligand for 2W9I based on its interaction, pharmacokinetic, and pharmacodynamic features and can be used for chemotherapeutic purposes.

Read More: Click here...

318
Author : Gove Nitinkumar Rajendra, Bedi Rajneesh kaur
International Journal of Scientific & Engineering Research, Volume 2, Issue 5, May-2011
ISSN 2229-5518
Download Full Paper : PDF

Abstract: In today's computing world, the security, integrity, and confidentiality of an organization's data are the most important issues. This paper deals with the confidentiality of the data that an organization manages and works with. It proposes a new approach to data security using the concepts of genetic algorithms and brain mu waves with a pseudorandom binary sequence to encrypt and decrypt the data. The features of this approach include high data security and high feasibility for practical implementation.
 
Index Terms—Mu waves, Genetic algorithms, Pseudorandom binary sequence, Encryption, Crossover operator, Data security, Confidentiality.

INTRODUCTION
Recently, due to large data losses from illegal data access, data security has become an important issue for public, private and defense organizations. In order to protect this valuable data or information from unauthorized readers and from illegal modification and reproduction, various types of cryptographic techniques are used.
There are two basic types of cryptographic techniques [1],[2]: symmetric and asymmetric cryptography. In symmetric cryptography, the same key is used for encryption and decryption, while in asymmetric cryptography, two different keys are used: one for encryption, called the public key, and another for decryption, called the private key.
Symmetric key algorithms are typically fast and are suitable for processing large streams of data. Some of the popular and efficient symmetric algorithms include Twofish, Serpent, AES, Blowfish and IDEA. Other encryption techniques have also been proposed; genetic algorithms [4] are among them.
Generally, genetic algorithms contain three basic operators: reproduction, crossover and mutation [5].
Reproduction and crossover together give genetic algorithms most of their power.

This paper proposes a new approach for encrypting large volumes of organizational data and highly secret personal data or information. First, an 8-character string is interpreted from the mu waves generated by the brain. Second, a pseudorandom binary sequence is generated from the string obtained after processing the above string. Third, the first character string and the pseudorandom sequence are applied to a crossover operator, which outputs two keys that are then concatenated to obtain a final 512-bit key.
This key is then used to encrypt and decrypt data.
The rest of the paper is organized as follows. In section 2, the proposed method is introduced. Section 3 gives the analysis of proposed method. Section 4 concludes the paper.
2.   THE PROPOSED METHOD
The block diagram of the proposed method is shown in Fig. 1. It consists of the key generation logic and the encryption and decryption modules, which are explained in the following subsections.
2.1   The Key Generation Logic
Fig. 2 shows the model of the key generation logic. It consists of a sensory input detection unit, which is responsible for detecting the mu waves of the pass thought of the user (the pass thought is the thought that is used as a key in later processing) and interpreting the appropriate characters that the user is thinking about pressing. This involves signal acquisition, feature extraction, and finally translation.
The other modules involved are the character swapper, the pseudorandom binary sequence generator and the crossover operator; these are explained in the following subsections.
2.1.1 Character Swapper
This unit performs mainly two functions. First, it separates the characters according to their position, i.e., odd or even. Then each two consecutive odd/even-positioned characters are swapped with the next character.
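A minimal sketch of one possible reading of this description; the grouping into odd/even positions and the pairwise swap within each group are assumptions made for illustration.

```python
# A minimal sketch of one possible reading of the character swapper: split the
# string into odd- and even-positioned characters, then swap each pair of
# consecutive characters within each group.
def swap_pairs(chars):
    out = list(chars)
    for i in range(0, len(out) - 1, 2):
        out[i], out[i + 1] = out[i + 1], out[i]
    return out

def character_swapper(s: str) -> str:
    odd = swap_pairs(s[0::2])     # characters at positions 1, 3, 5, ... (1-based)
    even = swap_pairs(s[1::2])    # characters at positions 2, 4, 6, ...
    return "".join(odd) + "".join(even)

print(character_swapper("ABCDEFGH"))   # illustrative 8-character pass thought
```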
2.1.2 Pseudorandom Binary Sequence Generator
Fig. 3 shows a general model of the PRBSG [3]. It is a nonlinear forward feedback shift register with a feedback function f and a nonlinear function.
When the register is loaded with a non-zero value, a pseudorandom sequence with very good randomness and statistical properties is generated.
The only signal required for the operation of this module is the clock pulse. The balance, run and correlation properties of the generated sequence make it well suited for generating the private key.
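A minimal sketch of a pseudorandom binary sequence generator; note that it uses a plain linear-feedback shift register purely to illustrate the idea, whereas the generator described above uses a nonlinear feedback function.

```python
# A minimal sketch of a PRBS generator using a 16-bit Fibonacci LFSR
# (x^16 + x^14 + x^13 + x^11 + 1). This is a simplified linear variant,
# shown only to illustrate the shift-register idea.
def prbs(seed: int, taps=(16, 14, 13, 11), nbits=16, length=32):
    """Generate `length` output bits from the shift register."""
    state = seed & ((1 << nbits) - 1)
    assert state != 0, "register must be loaded with a non-zero value"
    out = []
    for _ in range(length):
        fb = 0
        for t in taps:                      # XOR the tapped bits
            fb ^= (state >> (nbits - t)) & 1
        out.append(state & 1)               # output the low bit
        state = (state >> 1) | (fb << (nbits - 1))
    return out

print(prbs(seed=0xACE1))
```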
2.1.3 Crossover Operator
Crossover, in simple words, is a process in which two strings are mixed such that they combine their desirable qualities in a random fashion.
The crossover operator proceeds in three steps, as given below (see the sketch after the list):
1. Two new strings are selected.
2. A random location within the strings is selected.
3. The portions of the strings to the right of that location are swapped.
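A minimal sketch of the single-point crossover listed above; the example strings are arbitrary.

```python
# A minimal sketch of single-point crossover: pick a random location and swap
# the right-hand portions of the two strings.
import random

def crossover(s1: str, s2: str):
    point = random.randint(1, min(len(s1), len(s2)) - 1)   # random location
    return s1[:point] + s2[point:], s2[:point] + s1[point:]

random.seed(1)
print(crossover("10110011", "01101100"))
```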
 
2.1.4 Key Generation Process
The key generation algorithm used here produces a very strong key which is very difficult to guess even with exhaustive search. The process of key generation is as given below:
1. Scan the pass thought. Take the string generated after sensing, filtering and processing the mu waves. This is key1.
2. Pass this string to the character swapper.
3. Pass the non-zero output of the character swapper to the pseudorandom binary sequence generator.
4. The output of the PRBSG is key2.
5. Both key1 and key2 are 256 bits long.
6. Apply both these keys to the crossover operator.
7. Finally, concatenate the two strings generated by the crossover operation.
This whole process is depicted in Fig. 2.
This whole process is depicted in fig 2.
2.2 The Encryption Process
The encryption process builds on the key generator and crossover operator. It comprises the following steps (see the sketch after the list):
1. Generate the key using the key generator logic as Kn.
2. Take mod 8 of the generated key to get a decimal value ranging from 0 to 7:
3. Kn = mod(Kn, 8)
4. Initialize i = 0.
5. Take two consecutive bytes of the data file as A1 and A2.
6. Crossover these two consecutive bytes to obtain B1 and B2, using the number Ki.
7. Encrypt the data as C1 and C2. This is done as follows:
Xi = Ki XOR (Ki << 4)
Xi+1 = Ki+1 XOR (Ki+1 << 4)
C1 = B1 XOR Xi
C2 = B2 XOR Xi+1
and i = i + 2.
Repeat steps 4 to 6 until the end of the file.
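A minimal sketch of one possible reading of steps 5-7 above, assuming byte-level single-point crossover and a 4-bit left shift within a byte; the helper names and sample key bytes are hypothetical, not the paper's exact cipher.

```python
# A minimal, assumption-laden sketch of encrypting one pair of plaintext bytes.
def byte_crossover(a1: int, a2: int, point: int):
    """Swap the low `point` bits of two bytes (single-point crossover)."""
    mask = (1 << point) - 1
    return (a1 & ~mask) | (a2 & mask), (a2 & ~mask) | (a1 & mask)

def encrypt_pair(a1: int, a2: int, k1: int, k2: int):
    point = k1 % 8                                   # steps 2-3: mod 8 of the key
    b1, b2 = byte_crossover(a1, a2, point)           # step 6: crossover
    x1 = k1 ^ ((k1 << 4) & 0xFF)                     # step 7: Xi = Ki XOR (Ki << 4)
    x2 = k2 ^ ((k2 << 4) & 0xFF)
    return b1 ^ x1, b2 ^ x2                          # C1, C2

print(encrypt_pair(0x41, 0x42, k1=0x3C, k2=0xA7))
```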
2.3 The Decryption Process
The steps for decryption are just the reverse of encryption. First extract key1 from the sensory output, then obtain key2 through the character swapper and the pseudorandom binary sequence generator, regenerate the key, and apply the crossover-based process to decrypt the data.

Read More: Click here...

319
Author : Dr. Sunil Kumar Singh, Dr. Shekh Aqeel
International Journal of Scientific & Engineering Research, Volume 2, Issue 5, May-2011
ISSN 2229-5518
Download Full Paper : PDF

Abstract- This paper is concerned with the various effects of disease-caused death on the host population in an epidemic model of SIR type. The basic problem discussed in this paper is to describe the spread of an infection causing death within a population. It is further assumed that there is no substantial development of immunity and that removed infectives are in effect cured of the disease. The rates of natural birth and death are assumed to be balanced.

Keywords - Mathematical modeling, population size, birth rate, death rate, infection rate.
 
INTRODUCTION
Anderson and May [7] studied the effects of disease-caused death on population size in a model for a disease which spreads through direct infection within a population whose size is allowed to vary in time. Two important new phenomena were revealed by their study.

A threshold for the population size exists that determines whether the population can sustain an epidemic; fatal diseases are found to have a regulating effect on the growth of the population. Many subsequent works have followed this line of research [5]. Another characteristic of this body of research is that the emphasis is on the interplay between the net intrinsic growth rate r and the rate of disease-caused death: if r exceeds the disease-caused death rate, then the disease is likely to become endemic. To explain this phenomenon, potential mechanisms leading to endemicity other than a large intrinsic growth rate r need to be studied. In a recent study, we discovered that a long incubation period incorporated into an SIR model may provide an explanation for the concurrence of high pathogenicity and a long life span of infectiousness. We took a different approach to the study of epidemic models by assuming that the population has a small intrinsic growth rate r, so that the disease-caused mortality rate is relatively large. This approach has the following advantages.
1.   It greatly reduces the technicality of the mathematical analysis; one may start with the case r = 0 and then consider the case of small positive r.
2.   It enables us to isolate those effects on the population that are directly related to the disease; e.g., we discovered an essential difference between a model that incorporates an incubation period and one that does not. Even in a simple model that does not contain an incubation period, this new approach leads to the discovery of several interesting details not found in the literature.

3.   By keeping the mathematical technicalities to a minimum, this approach may make our models more accessible to field epidemiologists and hence encourage the application of mathematics in epidemiological studies.

In the present work we demonstrate our approach through a very simple model. We assume that the disease spreads through direct contact among the hosts, that the disease has no incubation period, as considered in most previous works [2,4], and that the intrinsic growth rate of the host population is zero, i.e., r = 0, so that in the absence of disease the population size remains constant. The mathematical analysis of the model is very elementary, and it provides epidemiologically interesting details about an epidemic. Also, we demonstrate that the kind of phenomenon one may observe in the case of a small positive intrinsic growth rate is essentially the same as obtained here. In particular, this seems to suggest that an SIR model is essentially a model for an epidemic; it does not provide an epidemiologically relevant mechanism for disease endemicity.
For other studies on epidemiological models with varying population size closely related to the one we consider here, see Greenhalgh [1] and Mena-Lorca and Hethcote [6] and the references therein. Other models with varying population and disease-caused death have been studied by Brauer [2], Bremerman and Thieme [3], Gao and Hethcote [4], Hethcote [6], and Pugliese [7].

FORMULATION OF THE MATHEMATICAL MODEL

The population is partitioned into classes of susceptible, infectious and immune individuals, of sizes x(t), y(t) and z(t) respectively, so that the total population is
           n(t) = x(t) + y(t) + z(t)                  (1)
Let the per capita birth rate be a constant b, with all newborns susceptible. The per capita natural death rate is also assumed to be b, so that the total population remains constant in the absence of disease. Suppose the disease spreads through direct contact between susceptible and infectious individuals. We assume that the transmission rate per unit time is proportional to x(t)y(t), with a constant transmission coefficient. This is equivalent to assuming that the contact rate between individuals is proportional to n(t).
   The disease is assumed to cause death of infected individuals at a constant death rate. Infectious individuals transfer to the immune class at a constant rate, the reciprocal of the average infectious period. It is also assumed that the disease confers permanent immunity, so that there is no transfer from the immune class back to the susceptible class. Since vaccination is one of the major means of control and prevention of many viral infections, the effect of a vaccination strategy is also considered: all susceptible individuals are vaccinated at a constant per capita rate p. Based on these modeling hypotheses, the following set of differential equations is derived.
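A minimal sketch of the resulting system of differential equations under the stated assumptions, using placeholder symbols beta (transmission coefficient), alpha (disease-caused death rate), gamma (recovery rate) and p (vaccination rate); the numeric values are illustrative only.

```python
# A minimal sketch of the model described above; parameter names and values
# are illustrative placeholders, not the paper's notation or data.
from scipy.integrate import solve_ivp

b, beta, alpha, gamma, p = 0.02, 0.5, 0.1, 0.2, 0.0   # example rates

def sir_with_disease_death(t, state):
    x, y, z = state                      # susceptible, infectious, immune
    n = x + y + z
    dx = b * n - b * x - beta * x * y - p * x
    dy = beta * x * y - (b + alpha + gamma) * y
    dz = gamma * y + p * x - b * z
    return [dx, dy, dz]

sol = solve_ivp(sir_with_disease_death, (0, 100), [0.99, 0.01, 0.0], max_step=0.1)
print(sol.y[:, -1])                      # final class sizes
```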

Read More: Click here...

320
Author : Ali Akbar Jalali, Shabnam Khosravi
International Journal of Scientific & Engineering Research, Volume 2, Issue 5, May-2011
ISSN 2229-5518
Download Full Paper : PDF

Abstract— In this paper, a direct synthesis approach to fractional order controller design is investigated. The proposed algorithm makes use of the Taylor series of both the desired closed-loop and the actual closed-loop transfer function, truncated to the first five terms. The FOPID controller parameters are synthesized in order to match the closed-loop response of the plant to the desired closed-loop response. The standard, stable second-order model is considered for both the plant and the desired closed-loop transfer function. Therefore, for a given plant with a given damping ratio and natural frequency, the tuned FOPID controller results in the desired closed-loop response with the desired damping ratio and natural frequency. An example is presented indicating that the designed FOPID controller yields an actual closed-loop response much closer to the desired response than a PID controller does. It is shown that the proposed method performs better than a Genetic Algorithm in obtaining the desired response.
Index Terms— FOPID controller, Taylor series expansion, second order model. 

1   INTRODUCTION                                                                     
For many decades, proportional-integral-derivative (PID) controllers have been very popular in industries for process control applications. The popularity and widespread use of PID controllers are attributed primarily to their simplicity and performance characteristics. Owing to the paramount importance of PID controllers, continuous efforts are being made to improve their quality and robustness [1], [2].
An elegant way of enhancing the performance of PID controllers is to use fractional order controllers, where the integral and derivative operators have non-integer orders. Podlubny proposed the concept of fractional order control in 1999 [3]. In the FOPID controller, besides the proportional, integral and derivative constants, there are two more adjustable parameters: the orders of s in the integral and derivative operators (commonly denoted λ and μ). Therefore this type of controller is a generalization of the PID and consequently has a wider scope of design, while retaining the advantages of the classical one.
Several methods have been reported for FOPID design. Vinagre, Podlubny, Dorcak and Feliu [4] proposed a frequency domain approach based on expected crossover frequency and phase margin. Petras came up with a method based on the pole distribution of the characteristic equation in the complex plane [5]. In recent years, evolutionary algorithms have been used for FOPID tuning. YICAO, LIANG, CAO [6] presented optimization of FOPID controller parameters based on a Genetic Algorithm. A method based on Particle Swarm Optimization has also been proposed [7]. In this paper a tuning method for the FOPID controller is proposed. Consider a standard, stable second-order plant for which the desired response is not available; tuning the FOPID controller by the proposed method results in the desired closed-loop response. The standard second-order model is considered for the desired response. It is shown that the proposed method performs better than the Genetic Algorithm in obtaining the desired response. The rest of the paper is organized as follows: in section 2 the tuning method for the FOPID controller is described, an example is investigated in section 3, and section 4 draws some conclusions.
2    OBTAINING THE TUNING METHOD FOR FOPID CONTROLLER
Consider the block diagram of the feedback control system in Fig. 1. The objective is to design a FOPID controller such that, for a given plant with a standard second-order model, the actual closed-loop response matches the desired closed-loop response. The desired closed-loop response is described by the standard second-order model as follows.
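A minimal sketch of the series-matching idea, not the paper's algorithm: it computes the first five Maclaurin coefficients of the standard second-order desired closed-loop model, which the five FOPID parameters (Kp, Ki, Kd, λ, μ) would then be chosen to reproduce in the actual closed loop.

```python
# A minimal sketch: truncated Taylor (Maclaurin) coefficients of the desired
# standard second-order closed-loop transfer function.
import sympy as sp

s, zeta, wn = sp.symbols('s zeta omega_n', positive=True)
T_desired = wn**2 / (s**2 + 2*zeta*wn*s + wn**2)

series = sp.series(T_desired, s, 0, 5).removeO()      # first five terms
coeffs = [sp.simplify(series.coeff(s, k)) for k in range(5)]
for k, c in enumerate(coeffs):
    print(f"coefficient of s^{k}:", c)
```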

Read More: Click here...

321
Author : Sami H. O. SALIH, Mamoun M. A. SULIMAN
International Journal of Scientific & Engineering Research, Volume 2, Issue 5, May-2011
ISSN 2229-5518
Download Full Paper : PDF

Abstract—Different order modulations combined with different coding schemes allow sending more bits per symbol, thus achieving higher throughputs and better spectral efficiencies. However, it must also be noted that when using a modulation technique such as 64-QAM with fewer overhead bits, better signal-to-noise ratios (SNRs) are needed to overcome any intersymbol interference (ISI) and maintain a certain bit error ratio (BER). The use of adaptive modulation allows wireless technologies to yield higher throughputs while also covering long distances. The aim of this paper is to implement the Adaptive Modulation and Coding (AMC) features of the WiMAX and LTE access layers using SDR technologies in Matlab. This paper focuses on the physical layer design (i.e., modulation); the various modulation types used will be implemented in a single Matlab function that can be called with the appropriate coefficients. A comparison with hardware approaches will be made in terms of the SNR vs. BER relation.
Index Terms— Adaptive Modulation and Coding (AMC), Cognitive Radio (CR), LTE, Software Defined Radio (SDR), WiMAX.

1   INTRODUCTION                                                                      
The growth in the use of information networks has led to the need for new communication networks with higher data rates. The telecommunication industry is also changing, with a demand for a greater range of services, such as video conferences, or applications with multimedia content. The increased reliance on computer networking and the Internet has resulted in a wider demand for connectivity to be provided "anywhere, any time", leading to a rise in the requirements for higher-capacity and high-reliability Broadband Wireless Access (BWA) telecommunication systems.
BWA has been intensively studied in the last few years. Thus, various new technologies with high transmission abilities have been designed. BWA has become the best way to meet escalating business demand for rapid Internet connection and integrated "triple play" services. That is the very basis of the HSPA, WiMAX, and LTE concept: a wireless transmission infrastructure that allows fast deployment as well as low maintenance costs.
The emerging demand for all types of services, not only voice and data but also multimedia services, calls for the design of increasingly intelligent and agile communication systems, capable of providing spectrally efficient and flexible data rate access. These systems are able to adapt and adjust the transmission parameters based on the link quality, improving the spectrum efficiency of the system and reaching, in this way, the capacity limits of the underlying wireless channel.
Link adaptation techniques, often referred to as adaptive modulation and coding (AMC), are a good way of reaching the cited requirements. They are designed to track the channel variations, changing the modulation and coding scheme to yield a higher throughput by transmitting with high information rates under favorable channel conditions and reducing the information rate in response to channel degradation.
2   BWA DEVELOPMENT ROADMAP
2.1 Preface
The current WiMAX revision is based upon IEEE 802.16e-2005, approved in December 2005. It is a supplement to IEEE 802.16-2004 [1]. Thus, IEEE 802.16e-2005 improves on it by:
   Adding support for mobility
   Scaling of the Fast Fourier transform (FFT) to the channel bandwidth in order to keep the carrier spacing constant across different channel bandwidths (typically 1.25 MHz, 5 MHz, 10 MHz or 20 MHz)
   Advanced antenna diversity schemes, and hybrid automatic repeat-request (HARQ)
   Adaptive Antenna Systems (AAS) and MIMO technology
   Denser sub-channelization, thereby improving indoor penetration
   Introducing Turbo Coding and Low-Density Parity Check (LDPC)
   Introducing downlink sub-channelization, allowing administrators to trade coverage for capacity or vice versa
   Adding an extra QoS class for real-time applications
On the other hand, Long Term Evolution (LTE) is the latest standard in the 3rd Generation Partnership Project (3GPP) mobile network technology tree that produced the GSM/EDGE and UMTS/HSPA network technologies.[1][2]
The LTE specification provides downlink peak rates of at least 100 Mbps, an uplink of at least 50 Mbps and RAN round-trip times of less than 10 ms. LTE supports scalable carrier bandwidths, from 1.4 MHz to 20 MHz and supports both frequency division duplexing (FDD) and time division duplexing (TDD).
The main advantages with LTE are high throughput, low latency, plug and play, FDD and TDD in the same platform, an improved end-user experience and a simple architecture resulting in low operating costs. LTE will also support seamless passing to cell towers with older network technology such as GSM, cdmaOne, UMTS, and CDMA2000. The next step for LTE evolution is LTE Advanced and is currently being standardized in 3GPP Release 10. [3]

The most important similarity between LTE and WiMAX is orthogonal frequency division multiplexing (OFDM) signaling. Both technologies also employ Viterbi and turbo accelerators for forward error correction. From a chip designer's perspective, that makes the extensive reuse of gates highly likely if one had to support both schemes in the same chip or chip-set. From a software defined radio (SDR) perspective, the opportunity is even more enticing. Flexibility, gate reuse and programmability seem to be the answers to the WiMAX-LTE multimode challenge.
2.2 Hypothesis of AMC
In traditional communication systems, the transmission is designed for the "worst case" channel scenario, thus coping with the channel variations and still delivering an error rate below a specific limit. Adaptive transmission schemes, however, are designed to track the channel quality by adapting the channel throughput to the actual channel state. These techniques take advantage of the time-varying nature of the wireless channel to vary the transmitted power level, symbol rate, coding scheme, constellation size, or any combination of these parameters, with the purpose of improving the link average spectral efficiency (bits/s/Hz).
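A minimal sketch of the link-adaptation decision described above: the highest-rate modulation and coding scheme whose SNR threshold the current channel meets is selected. The threshold values and the table itself are illustrative assumptions, not WiMAX or LTE specification values.

```python
# A minimal sketch of SNR-threshold-based AMC scheme selection.
MCS_TABLE = [            # (min SNR in dB, modulation, code rate, bits/symbol)
    (22.0, "64-QAM", 3/4, 6),
    (16.0, "16-QAM", 3/4, 4),
    (9.0,  "QPSK",   1/2, 2),
    (3.0,  "BPSK",   1/2, 1),
]

def select_mcs(snr_db: float):
    """Return the most spectrally efficient scheme supported at this SNR."""
    for threshold, modulation, rate, bits in MCS_TABLE:
        if snr_db >= threshold:
            return modulation, rate, bits * rate   # effective bits/symbol
    return "no transmission", 0.0, 0.0

print(select_mcs(18.5))   # -> ('16-QAM', 0.75, 3.0)
```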

Read More: Click here...

322
Author : Subhransu Sekhar Dash, C.Nalini Kiran, S.Prema Latha
International Journal of Scientific & Engineering Research, Volume 2, Issue 5, May-2011
ISSN 2229-5518
Download Full Paper : PDF

Abstract- Power quality standards (IEEE-519) require that the total harmonic distortion be limited to within an acceptable range. This paper mainly deals with the shunt active power filter, which has been widely used for harmonic elimination. The active power filter used here monitors the load current constantly and continuously adapts to changes in the load harmonics. The performance of a three-phase shunt active power filter using instantaneous power theory with PI and hysteresis current controllers is explained in this paper.
Index Terms- Active power filters (APF), composite load, harmonic compensation, linear and non linear load, reactive power.
 
1   INTRODUCTION
A harmonic is a component of a periodic wave having a frequency that is an integral multiple of the fundamental power line frequency. Harmonics are multiples of the fundamental frequency, whereas total harmonic distortion is the contribution of all the harmonic frequency currents relative to the fundamental. Harmonics are the by-products of modern electronics. They occur frequently when there are large numbers of personal computers (single-phase loads), uninterruptible power supplies (UPSs), variable frequency drives (AC and DC) or any electronic devices using solid-state power switching supplies [1] to convert incoming AC to DC. Non-linear loads create harmonics by drawing current in abrupt short pulses, rather than in a smooth sinusoidal manner. The terms "linear" and "non-linear" define the relationship of current to the voltage waveform. A linear relationship exists between the voltage and current, which is typical of an across-the-line load. A non-linear load has a discontinuous current relationship that does not correspond to the applied voltage waveform. All variable frequency drives cause harmonics because of the nature of the front-end rectifier.
1.1   Need For Harmonic Compensation:
The implementation of active filters in this modern electronic age has become an increasingly essential element of the power network. With advancements in technology since the early eighties and the significant spread of power electronic devices among consumers and industry, utilities are continually pressured to provide a quality and reliable supply. Power electronic devices [2] such as computers, printers, faxes, fluorescent lighting and most other office equipment all create harmonics. These types of devices are commonly classified collectively as 'nonlinear loads'. Nonlinear loads create harmonics by drawing current in abrupt short pulses rather than in a smooth sinusoidal manner. The major issues associated with the supply of harmonics to nonlinear loads are severe overheating and insulation damage. Increased operating temperatures of generators and transformers degrade the insulation material of their windings. If this heating continued to the point at which the insulation fails, a flashover may occur should it be combined with leakage current from the conductors. This would permanently damage the device and result in loss of generation, causing widespread blackouts.
One solution to this foreseeable problem is to install active filters for each nonlinear load in the power system network. Although presently very uneconomical, the installation of active filters proves indispensable for solving power quality [1][2] problems in distribution networks, such as harmonic current compensation, reactive current compensation, voltage sag compensation, voltage flicker compensation and negative phase sequence current compensation. Ultimately, this would ensure a pollution-free system with increased reliability and quality.
The objective of this project is to understand the modeling and analysis of a shunt active power filter. In doing so, the accuracy of current compensation for the current harmonics found at a nonlinear load under the p-q theory control technique is demonstrated, which also substantiates the reliability and effectiveness of this model for integration into a power system network. The model is implemented across a two-bus network, from generation to the application of the nonlinear load.
The aim of the system simulation is to verify the active filters effectiveness for a nonlinear load. In simulation, total harmonic distortion measurements are undertaken along with a variety of waveforms and the results are justified accordingly.
One of the most important features of the proposed shunt active filter system is its versatility over a variety of different conditions. The positive sequence voltage detector within the active filter controller is the key component of the system. It gives great versatility to the application of the active filter, because the filter can be installed and compensate for load current harmonics even when the input voltage is highly distorted. When similar filters do not contain this feature and are installed with a distorted voltage input, the outcome is an inefficient current harmonic compensator with poor accuracy in determining the compensation current.
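A minimal sketch of the instantaneous power (p-q) theory computation underlying the controller discussed above: Clarke-transform the measured three-phase voltages and currents and form the instantaneous real and imaginary powers. The sample values are arbitrary; a full controller would additionally filter out the oscillating power components and synthesize the compensation current references.

```python
# A minimal sketch of the p-q theory quantities from three-phase measurements.
import numpy as np

CLARKE = np.sqrt(2/3) * np.array([[1, -0.5, -0.5],
                                  [0,  np.sqrt(3)/2, -np.sqrt(3)/2]])

def instantaneous_powers(v_abc, i_abc):
    v_ab = CLARKE @ np.asarray(v_abc)      # [v_alpha, v_beta]
    i_ab = CLARKE @ np.asarray(i_abc)      # [i_alpha, i_beta]
    p = v_ab[0]*i_ab[0] + v_ab[1]*i_ab[1]  # instantaneous real power
    q = v_ab[1]*i_ab[0] - v_ab[0]*i_ab[1]  # instantaneous imaginary power
    return p, q

print(instantaneous_powers([230.0, -115.0, -115.0], [10.0, -4.0, -6.0]))
```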
1.2   Harmonic filters:
Harmonic filters are used to eliminate the harmonic distortion caused by nonlinear loads. Specifically, harmonic filters are designed to attenuate or, in some filters, eliminate the potentially dangerous effects of the harmonic currents active within the power distribution system. Filters can be designed to trap these currents and, through the use of a series of capacitors, coils, and resistors, shunt them to ground. A filter may contain several of these elements, each designed to compensate a particular frequency or an array of frequencies.
1.3 Types of harmonic filters involved in harmonic compensation:
Filters are often the most common solution used to mitigate harmonics in a power system. Unlike other solutions, filters offer a simpler, inexpensive alternative with high benefits. There are three different types of filters, each offering its own solution to reduce and eliminate harmonics. These harmonic filters are broadly classified into passive, active and hybrid structures. The choice of filter used depends upon the nature of the problem and the economic cost associated with implementation.
A passive filter is composed of only passive elements such as inductors, capacitors and resistors, thus not requiring any operational amplifiers. Passive filters are inexpensive compared with most other mitigating devices. Their structure may be either of the series or parallel type. The structure chosen for implementation depends on the type of harmonic source present. Internally, they cause the harmonic current to resonate at its frequency. Through this approach, the harmonic currents are attenuated in the LC circuits tuned to the harmonic orders requiring filtering. This prevents severe harmonic currents from traveling upstream to the power source and causing widespread problems.
An active filter is implemented when the orders of the harmonic currents are varying. One case that demands varying harmonics from the power system is variable speed drives. Its structure may be either of the series or parallel type. The structure chosen for implementation depends on the type of harmonic sources present in the power system and the effects that different filter solutions would have on the overall system performance.
Active filters use active components such as IGBT transistors to inject negative harmonics into the network, effectively replacing a portion of the distorted current wave coming from the load.
 This is achieved by producing harmonic components of equal amplitude but opposite phase shift, which cancel the harmonic components of the non-linear loads. Hybrid filters combine an active filter and a passive filter. Its structure may be either of the series or parallel type. The passive filter carries out basic filtering (5th order, for example) and the active filter, through precise control, covers higher harmonics.
1.4 Passive Filters:
Passive filters are generally constructed from passive elements such as resistances, inductances, and capacitances. The values of the elements of the filter circuit are designed to produce the required impedance pattern. There are many types of passive filters; the most common ones are single-tuned filters and high-pass filters. This type of filter removes the harmonics by providing a very low impedance path to ground for harmonic signals.
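A minimal sketch of sizing a single-tuned passive filter of the kind described above, assuming a 50 Hz fundamental, a 5th-harmonic target and a placeholder capacitance; the inductance is chosen so the L-C branch resonates at the harmonic to be trapped.

```python
# A minimal sketch: tune L so the series L-C branch resonates at the target
# harmonic, giving it a low-impedance path to ground. Values are placeholders.
import math

f1 = 50.0              # fundamental frequency, Hz
harmonic = 5           # harmonic order to trap (e.g. 5th)
C = 100e-6             # chosen filter capacitance, F (placeholder)

w_h = 2 * math.pi * f1 * harmonic
L = 1.0 / (w_h**2 * C)                 # series resonance at the 5th harmonic
print(f"L = {L*1e3:.2f} mH, resonant at {w_h/(2*math.pi):.0f} Hz")
```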

Read More: Click here...

323
Author : Chirag I Patel, Ripal Patel, Palak Patel
International Journal of Scientific & Engineering Research Volume 2, Issue 5, May-2011
ISSN 2229-5518
Download Full Paper : PDF

Abstract— The objective of this paper is to recognize the characters in a given scanned document and to study the effects of changing the models of the ANN. Today, neural networks are mostly used for pattern recognition tasks. The paper describes the behavior of different models of neural network used in OCR, which is a widespread application of neural networks. We have considered parameters such as the number of hidden layers, the size of the hidden layers and the number of epochs. We have used a multilayer feed-forward network with backpropagation. In preprocessing we have applied some basic algorithms for segmentation of characters, normalization of characters and de-skewing. We have used different models of neural network and applied the test set to each to find the accuracy of the respective neural network.
Index Terms— Optical Character Recognition, Artificial Neural Network, Backpropagation Network, Skew Detection.

1   INTRODUCTION                                                                      
Such software is useful when we want to convert hard copies into soft copies. It reduces almost 80% of the conversion work, although some verification is always required.
    Optical character recognition, usually abbreviated to OCR, involves computer software designed to translate images of typewritten text (usually captured by a scanner) into machine-editable text, or to translate pictures of characters into a standard encoding scheme representing them (ASCII or Unicode). OCR began as a field of research in artificial intelligence and machine vision. Though academic research in the field continues, the focus on OCR has shifted to the implementation of proven techniques [4].

2 ARTIFICIAL NEURAL NETWORK
Pattern recognition is extremely difficult to automate. Animals recognize various objects and make sense out of a large amount of visual information, apparently requiring very little effort. Simulating the recognition task performed by animals, to the extent allowed by physical limitations, would be enormously profitable for the system. This necessitates the study and simulation of artificial neural networks. In a neural network, each node performs some simple computation, and each connection conveys a signal from one node to another, labeled by a number called the "connection strength" or weight, indicating the extent to which a signal is amplified or diminished by the connection.
 
Different choices of weights result in different functions being evaluated by the network. If the weights of a given network are initially random and the task to be accomplished by the network is known, a learning algorithm must be used to determine the values of the weights that will achieve the desired task. The learning algorithm is what qualifies the computing system to be called an artificial neural network. The node function is predetermined to apply a specific function to the inputs, imposing a fundamental limitation on the capabilities of the network.
Typical pattern recognition systems are designed using two passes. The first pass is a feature extractor that finds features within the data which are specific to the task being solved (e.g., finding bars of pixels within an image for character recognition). The second pass is the classifier, which is more general purpose and can be trained using a neural network and sample data sets. Clearly, the feature extractor typically requires the most design effort, since it usually must be hand-crafted based on what the application is trying to achieve.

One of the main contributions of neural networks to pattern recognition has been to provide an alternative to this design: properly designed multi-layer networks can learn complex mappings in high-dimensional spaces without requiring complicated hand-crafted feature extractors. Thus, rather than building complex feature detection algorithms, this paper focuses on implementing a standard backpropagation neural network. It also encapsulates the preprocessing that is required for effective recognition.

2.1 Backpropagation
Backpropagation was created by generalizing the Widrow-Hoff learning rule to multiple-layer networks and nonlinear differentiable transfer functions. Input vectors and the corresponding target vectors are used to train a network until it can approximate a function, associate input vectors with specific output vectors, or classify input vectors in an appropriate way as defined by the user. Networks with biases, a sigmoid layer, and a linear output layer are capable of approximating any function with a finite number of discontinuities.
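A minimal sketch of a multilayer feed-forward network trained with backpropagation, in the spirit described above; the layer sizes, learning rate and XOR-style toy data are illustrative, not the OCR configuration used in the paper.

```python
# A minimal backpropagation sketch: one sigmoid hidden layer, sigmoid output,
# plain batch gradient descent on a tiny toy problem.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)   # toy inputs
T = np.array([[0], [1], [1], [0]], float)               # toy targets

W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)       # input -> hidden
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)       # hidden -> output
lr = 0.5

for epoch in range(5000):
    h = sigmoid(X @ W1 + b1)          # forward pass
    y = sigmoid(h @ W2 + b2)
    err = y - T                       # output error
    d2 = err * y * (1 - y)            # delta at output layer
    d1 = (d2 @ W2.T) * h * (1 - h)    # backpropagated delta at hidden layer
    W2 -= lr * h.T @ d2; b2 -= lr * d2.sum(0)
    W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(0)

print(np.round(y, 2))                 # should approach the targets
```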

3   ANALYSIS
By analyzing OCR we have found some parameters which affect the accuracy of an OCR system [1][5]. The parameters listed in these papers are skewing, slanting, thickening, cursive handwriting and joined characters. If all these parameters are taken care of in the preprocessing phase, then the overall accuracy of the neural network will increase.

Read More: Click here...

324
Electronics / Electrical Power Generation Using Piezoelectric Crystal
« on: August 17, 2011, 06:06:47 am »
Author : Anil Kumar
International Journal of Scientific & Engineering Research Volume 2, Issue 5, May-2011
ISSN 2229-5518
Download Full Paper : PDF

Abstract- The usefulness of most high technology devices such as cell phones, computers, and sensors is limited by the storage capacity of batteries. In the future, these limitations will become more pronounced as the demand for wireless power outpaces battery development, which is already nearly optimized. Thus, new power generation techniques are required for the next generation of wearable computers, wireless sensors, and autonomous systems to be feasible. Piezoelectric materials are excellent power generation devices because of their ability to couple mechanical and electrical properties. For example, when an electric field is applied to a piezoelectric material, a strain is generated and the material is deformed. Consequently, when a piezoelectric material is strained, it produces an electric field; therefore, piezoelectric materials can convert ambient vibration into electrical power. Piezoelectric materials have long been used as sensors and actuators; however, their use as electrical generators is less established. A piezoelectric power generator has great potential for some remote applications such as in vivo sensors, embedded MEMS devices, and distributed networking. Developing piezoelectric generators is challenging because of their poor source characteristics (high voltage, low current, high impedance) and relatively low power output. This paper presents a theoretical analysis to increase piezoelectric power generation that is verified with experimental results.

Index Terms-Piezoelectric materials, piezoelectricity, power generation, PZT ceramics.

1   INTRODUCTION                                                                      
Mechanical stresses applied to piezoelectric materials distort internal dipole moments and generate electrical potentials (voltages) in direct proportion to the applied forces. These same crystalline materials also lengthen or shorten in direct proportion to the magnitude and polarity of applied electric fields.
Because of these properties, these materials have long been used as sensors and actuators. One of the earliest practical applications of piezoelectric materials was the development of the first SONAR system in 1917 by Langevin, who used quartz to transmit and receive ultrasonic waves [1]. In 1921, Cady first proposed the use of quartz to control the resonant frequency of oscillators. Today, piezoelectric sensors (e.g., force, pressure, acceleration) and actuators (e.g., ultrasonic, micro positioning) are widely available.
The same properties that make these materials useful for sensors can also be utilized to generate electricity. Such materials are capable of converting the mechanical energy of compression into electrical energy, but developing piezoelectric generators is challenging because of their poor source characteristics (high voltage, low current, high impedance). This is especially true at low frequencies and relatively low power output.
These challenges have limited the use of such generators primarily because the relatively small amount of available regulated electrical power has not been useful. The recent advent of extremely low power electrical and mechanical devices (e.g., micro electromechanical systems or MEMS) makes such generators attractive in several applications where remote power is required. Such applications are sometimes referred to as power scavenging and include in vivo sensors, embedded MEMS devices, and distributed networking.
Several recent studies have investigated piezoelectric power generation. One study used lead zirconate titanate (PZT) wafers and flexible, multilayer polyvinylidene fluoride (PVDF) films inside shoes to convert mechanical walking energy into usable electrical energy [2], [3]. This system has been proposed for mobile computing and was ultimately able to provide 1.3 mW continuously at 3 V when walking at a rate of 0.8 Hz.
Other projects have used piezoelectric films to extract electrical energy from mechanical vibration in machines to power MEMS devices [4]. This work extracted a very small amount of power (<5 µW) from the vibration, and no attempt was made to condition or store the energy. Similar work has extracted slightly more energy (70 µW) from machine and building vibrations [5].
Piezoelectric materials have also been studied to generate electricity from pressure variations in micro hydraulic systems [6]. The power would presumably be used for MEMS, but this work is still in the conceptual phase. Other work has used piezoelectric materials to convert kinetic energy into a spark to detonate an explosive projectile on impact [7]. Still other work has proposed using flexible piezoelectric polymers for energy conversion in windmills [8], and to convert flow in oceans and rivers into electric power [9]. A recent medical application has proposed the use of piezoelectric materials to generate electricity to promote bone growth [10]. This work uses an implanted bone prosthesis containing a piezoelectric generator configured to deliver electric current to specific locations around the implant. This device uses unregulated (high voltage) energy, and it is not clear if the technique has advanced beyond the conceptual phase. The above studies have all had some success in extracting electrical power from piezoelectric elements. However, many issues such as efficiency, conditioning and storage have not been fully addressed.
This paper presents an approach to increase the power generated by piezoelectrics. A few researchers have used single off-the-shelf piezoelectric devices to harvest electrical power, yet little has been done to overcome the main weaknesses associated with piezoelectric power harvesting. This research seeks to systematically overcome the weaknesses associated with cantilever-mounted piezoelectrics used for mobile power harvesting. To maximize the power from a piezoelectric device, the load impedance must match the impedance of the device. This is problematic for frequencies between 10-100 Hz because a single piezoelectric may have an impedance in the range of several hundred thousand ohms to ten million ohms. Thus, little current can be produced, and battery charging is diminished due to low current production. To reduce the impedance and increase the electrical current, two off-the-shelf actuators (8 piezoelectric elements in total) are connected electrically in parallel and tuned to resonate in the frequency range of an ambient vibration similar to that produced by a person walking. A picture of the experimental setup may be seen in Figure 1.

To demonstrate the power harvesting advantage, 40 and 80 mAh nickel metal hydride batteries are recharged with each individual actuator and then charged with both actuators connected in parallel. For a 1.4 Hz frequency (a brisk walking pace), the parallel combination charges two 40 mAh batteries in 3.09 hours, and two 80 mAh batteries in 5.64 hours. The individual actuators require 16.1 hours to charge a 40 mAh battery, and 22.7 hours to charge an 80 mAh battery. Clearly, the parallel combination of multiple off-the-shelf piezoelectric actuators reduces battery charge times, and adding more parallel devices could increase power production so long as the total voltage exceeds the charged voltage of the battery. Since most production piezoelectric devices are designed as actuators, research is being conducted to optimize piezoelectrics for power harvesting. Specifically, the locations of the piezoelectrics on the cantilevered structure are being studied, and designs that reduce electrical cancellation due to out-of-phase motions are ongoing.
On October 6, 2009, the Hefer intersection along the old coastal road of Route 4 in Israel became the site where a piezoelectric generator was put to the test and generated some 2,000 watt-hours of electricity. The setup consists of a ten-meter strip of asphalt with generators lying underneath and batteries in the road's proximity. As this is the first practical test of the system, the researchers are still awaiting energy-yield and feasibility results. The Technion was assisted by Innowattech, an Israeli company, in completing the pilot project. The project manager, Dr. Lucy Edery-Azulay, explained that the generators developed by Innowattech are embedded about five centimeters beneath the upper layer of asphalt. “The technology is based on piezoelectric materials that enable the conversion of mechanical energy exerted by the weight of passing vehicles into electrical energy. As far as the drivers are concerned, the road is the same,” she says. Edery-Azulay added that expanding the project to a length of one kilometer along a single lane would produce 200 kWh, while a four-lane highway could produce about 1 MWh, sufficient electricity to cover the average consumption of 2,500 households.


Read More: Click here...

325
Engineering, IT, Algorithms / Translation of Software Requirements
« on: August 17, 2011, 06:03:09 am »
Quote
Author : Hamideh Hamidian, Ali Akbar Jalali
International Journal of Scientific & Engineering Research, IJSER - Volume 2, Issue 3, March-2011
ISSN 2229-5518
Download Full Paper : PDF

Abstract—Stakeholders typically speak and express software requirements in their native languages. On the other hand, software engineers typically express software requirements in English to programmers who program using English-like programming languages. Translation of software requirements between the native languages of the stakeholders and English introduces further ambiguities. This calls for a system that simplifies the translation of software requirements while minimizing ambiguities. Unfortunately, this problem has been overlooked in the literature. This paper introduces a system designed to facilitate the translation of requirements between English and Arabic. The system can also facilitate the analysis of software requirements written in Arabic. This is achieved by requiring that software requirement statements be written using templates. The templates are selected such that they enforce adherence to best practices in writing requirements documents.
Index Terms— Requirements, Software Engineering, Translation

1   INTRODUCTION
SOFTWARE requirements engineering is concerned with understanding and specifying the services and constraints of a given software system. It involves software requirements elicitation and specification [1]. Elicitation of software requirements from stakeholders typically results in user requirements, which are natural language statements that describe the high-level goals of a given software system [2]. Analysis of natural language user requirements is an important activity since imprecision in this stage causes errors in later stages. Requirements imprecision is at least an order of magnitude more expensive to correct when undetected until late software engineering stages [3]. Thus, focusing on improving the precision of the elicited user requirements in the first cycle is one of the ambitious aims of software engineering [4]. One of the main causes of imprecision is the ambiguity of the natural languages used to express the user requirements [5]. To minimize ambiguity, a number of best practices in writing requirements documents have been proposed by experts [6-9]. These practices include:
•   Maintain terminological consistency and clarity by restricting action and actor descriptions to terms that are clearly defined in a glossary.
•   Do not use different phrases to refer to the same entity (For example, do not use Order Processing System and Order Entry System to refer to the same system).
•   Avoid using phrases, such as “easy to use”, whose meaning is subjective and leads to ambiguity.
•   Write each requirement as a single separate sentence.
•   Write complete sentences rather than bulleted buzz phrases.
•   Write complete active-voice sentences which clearly specify the actor/agent and the action.
•   Write requirements sentences in a consistent fashion using a standard set of syntaxes with each syntax-type corresponding to and signaling different kinds of requirements.
•   Associate a unique identifier with each requirement.
While these practices are relatively easy to state and understand, it seems fairly difficult for requirements engineers to consistently apply them throughout requirements documents with thousands of requirements. Thus, some tools in the literature have been developed to help users adhere to best practices in writing requirements documentation. This also simplifies the automatic analysis of requirements documents written in natural language and allows generating warning messages when the requirements do not conform to best practices [9].
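As an illustration of how such a tool can warn about non-conforming requirements, here is a minimal sketch of a pattern-based checker. The sentence template and the list of vague phrases are assumptions made for this example; they are not the actual rules implemented in RAT or in the ARAT system introduced below.

```python
# Minimal sketch of a template-based requirements checker (illustrative only;
# the pattern and vague-phrase list are assumptions, not the RAT/ARAT rules).
import re

# One assumed template: "<ID>: The <actor> shall <action>."
REQ_TEMPLATE = re.compile(
    r"^(?P<id>[A-Z]+-\d+):\s+The\s+(?P<actor>[\w ]+?)\s+shall\s+(?P<action>.+)\.$"
)
VAGUE_PHRASES = ("easy to use", "user friendly", "as appropriate", "fast")

def check_requirement(text: str) -> list[str]:
    """Return a list of warnings for a single requirement statement."""
    warnings = []
    if REQ_TEMPLATE.match(text.strip()) is None:
        warnings.append("does not follow the '<ID>: The <actor> shall <action>.' template")
    for phrase in VAGUE_PHRASES:
        if phrase in text.lower():
            warnings.append(f"contains the subjective phrase '{phrase}'")
    return warnings

if __name__ == "__main__":
    print(check_requirement("REQ-001: The Order Entry System shall log every rejected order."))
    print(check_requirement("The system should be easy to use"))
```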
One of the problems that has been overlooked in the literature is the translation of software requirements. Stakeholders typically speak their native languages, while requirements documents are typically written in English and software programs are typically developed in English-like programming languages. The problem is that the translation of software requirements from the native language of the stakeholders to English introduces further ambiguities. Whenever errors are discovered in later stages of the software engineering process, modifications to the software requirements are suggested. To negotiate these modifications with the stakeholders, the modified requirements need to be translated back and forth between English and the native language of the stakeholders. This can introduce more ambiguities that complicate the problem even further.
To address this problem, we suggest implementing systems that help users adhere to best practices in writing requirements documents in different natural languages. This simplifies analyzing requirements documents in the natural language of the stakeholders. By specifying the mappings between the different developed systems, we allow translation of software requirements between different natural languages while minimizing ambiguities. In this paper, we introduce the Arabic Requirements Analysis Tool (ARAT), a system designed to handle software requirements in Arabic. The reason for selecting Arabic is that it is the official language of hundreds of millions of people in the Middle East and North Africa, and a large number of these targeted users are expected to benefit from ARAT. We also specify mappings between our system and a similar system called the Requirements Analysis Tool (RAT) [9], which handles requirements written in English. These mappings simplify the translation of natural language requirements between English and Arabic while minimizing ambiguities.
The paper is organized as follows: Section 2 describes related research in the literature. Section 3 describes the RAT system and Section 4 describes the ARAT system and how the Arabic requirements are analyzed using it. Section 5 provides examples that illustrate the mappings between the English syntaxes in the RAT system and the Arabic syntaxes in the ARAT system. The examples also illustrate how translation is performed between English and Arabic requirements accordingly while minimizing ambiguities. Finally, Section 6 provides conclusions and directions for future research.
2   RELATED WORK
Many tools in the literature have been developed to automatically analyze natural language software requirements. Lami [10] and Hussain et al. [11] developed systems that can automatically detect potential imprecision in natural language software requirements through indicators such as weak verbs. However, these systems do not assist in correcting any imprecision.
Another approach in the literature attempts to avoid the introduction of imprecision while the software requirements are being written by imposing the use of natural language patterns. Some of these efforts have focused on developing natural language patterns for specific domains such as database systems [12], scenarios [13], and embedded systems [5]. General-purpose systems include Raven [14], which can analyze use cases. Jain et al. [9] developed the general-purpose RAT system, which imposes the use of specific natural language patterns that help users adhere to best practices in writing software requirements in different situations and can provide useful advice.
The REAS system [15] attempts to integrate these two approaches intelligently to exploit their advantages and avoid their disadvantages, but it cannot help in the translation of requirements. Thus, the proposed system emulates the RAT system and uses the analogy to simplify the translation of requirements between Arabic and English while minimizing ambiguities.


Read More: Click here...

326
Quote
Author : Surendra bilouhan, Prof.Roopam Gupta
International Journal of Scientific & Engineering Research Volume 2, Issue 5, May-2011
ISSN 2229-5518
Download Full Paper : PDF

Abstract—In this paper, we consider the problem of discovery of information in a densely deployed Wireless Sensor Network (WSN), where the initiator of the search is unaware of the location of the target information. We propose Increasing Ray Search (IRS), an energy-efficient and scalable search protocol. IRS prioritizes energy efficiency at the expense of latency. The basic principle of this protocol is to route the search packet along a set of trajectories called rays that maximizes the likelihood of discovering the target information while consuming the least amount of energy. The rays are organized such that if the search packet travels along all of them, the entire terrain area will be covered by its transmissions while the overlap of these transmissions is minimized. In this way, only a subset of the sensor nodes transmits the search packet to cover the entire terrain area while the others listen. We believe that query resolution based on the principles of area coverage provides a new dimension for conquering the scale of WSNs. We compare IRS with existing query resolution techniques for unknown target locations, such as Round Robin Search, and show by simulation the improvement in the total number of transmitted bytes, energy consumption, and latency with terrain size.
Index Terms—Wireless sensor networks, energy efficiency, scalability, CSMA, SensorSim, SIR, low-power optimization, transmission strategy.

1. INTRODUCTION
A sensor network is a group of specialized transducers with a communications infrastructure intended to monitor and record conditions at diverse locations. Commonly monitored parameters are temperature, humidity, pressure, wind direction and speed, illumination intensity, vibration intensity, sound intensity, power-line voltage, chemical concentrations, pollutant levels and vital body functions.
A sensor network consists of multiple detection stations called sensor nodes, each of which is small, lightweight, and portable. Every sensor node is equipped with a transducer, microcomputer, transceiver, and power source. The transducer generates electrical signals based on sensed physical effects and phenomena. The microcomputer processes and stores the sensor output. The transceiver, which can be hard-wired or wireless, receives commands from a central computer and transmits data to that computer. The power for each sensor node is derived from the electric utility or from a battery. This paper provides an analytical model for the study of energy consumption in multihop wireless embedded and sensor networks where nodes are extremely power constrained. Low-power optimization techniques developed for conventional ad hoc networks are not sufficient, as they do not properly address particular features of embedded and sensor networks. It is not enough to reduce overall energy consumption; it is also important to maximize the lifetime of the entire network, that is, to maintain full network connectivity for as long as possible. This paper considers different multihop scenarios to compute the energy per bit, the efficiency, and the energy consumed by individual nodes and by the network as a whole. The analysis uses a detailed model of the energy consumed by the radio at each node. Multihop topologies with equidistant and optimal node spacing are studied. Numerical computations illustrate the effects of packet routing and explore the effects of coding and medium access control. These results show that a simple multihop message relay strategy is not always the best procedure.
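This kind of per-hop accounting is often illustrated with a first-order radio model, in which transmitting k bits over distance d costs E_elec·k + ε_amp·k·d² and receiving costs E_elec·k. The sketch below uses that generic model with assumed constants; it is not the specific radio model or parameter set used in this paper.

```python
# First-order radio energy model, commonly used for multihop WSN analysis
# (assumed constants; not the specific radio parameters used in this paper).
E_ELEC = 50e-9      # J/bit spent in transmit/receive electronics
EPS_AMP = 100e-12   # J/bit/m^2 spent in the transmit amplifier (d^2 path loss)

def tx_energy(bits: int, distance_m: float) -> float:
    return E_ELEC * bits + EPS_AMP * bits * distance_m ** 2

def rx_energy(bits: int) -> float:
    return E_ELEC * bits

def multihop_energy(bits: int, total_distance_m: float, hops: int) -> float:
    """Energy to relay a packet over equidistant hops (sink's receive excluded)."""
    d = total_distance_m / hops
    return hops * tx_energy(bits, d) + (hops - 1) * rx_energy(bits)

if __name__ == "__main__":
    bits = 1024 * 8
    for hops in (1, 2, 4, 8):
        print(hops, multihop_energy(bits, 100.0, hops))
```

With these assumed constants the total energy first drops and then rises again as the hop count grows, illustrating why a simple relay strategy is not always optimal.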

2. BACKGROUND
Recent advances in sensor technology (in terms of size, power consumption, wireless communication, and manufacturing costs) have enabled the prospect of deploying large quantities of sensor nodes to form Wireless Sensor Networks (WSNs). These networks are created by distributing large quantities of usually small, inexpensive sensor nodes over a geographical region of interest with a view to collecting data relating to one or more variables. These nodes are primarily equipped with the means to sense, process, and communicate data to other nodes and ultimately to remote users. WSN nodes can also have mobility capabilities, which enable them to move around and roam the region of interest to harvest information. Figure 1 shows a typical sensor node hardware architecture. Sensor nodes may cooperate with their neighbors (within communication range) to form an ad hoc network. WSN topologies are generally dynamic and decentralized. Figure 2 gives a general overview of a WSN.
Fig. 2. Execution phase of the sensor signal.
 
3. RELATED WORK
Understanding the performance and behavior of sensor networks requires simulation tools that can scale to very large numbers of nodes. Traditional network simulation environments, such as ns2 [6], are effective for understanding the behavior of network protocols, but generally do not capture the operation of endpoint nodes in detail. Also, while ns2 provides implementations of the 802.11 MAC/PHY layers, many sensor networks employ nonstandard wireless protocols that are not implemented in ns2. A number of instruction-level simulators for sensor network nodes have been developed, including ATEMU [11] and Simulavr [20]. These systems provide a very detailed trace of node execution, although only ATEMU provides a simulation of multiple nodes in a networked environment. The overhead required to simulate sensor nodes at the instruction level considerably limits scalability. Other sensor network simulation environments include PROWLER [21], TOSSF [19] (based on SWAN [14]), SensorSim [17], and SENS [23]. Each of these systems provides a different level of scalability and realism, depending on the goals of the simulation environment. In some cases, the goal is to work at a very abstract level, while in others, the goal is to accurately simulate the behavior of sensor nodes. Few of these simulation environments have considered power consumption. SensorSim [17] and SENS [23] incorporate simple power usage and battery models, but they do not appear to have been validated against actual hardware and real applications [18]. Also, SensorSim does not appear to be publicly available. Several power simulation tools have also been developed for energy profiling in the embedded systems community.
Performance Metrics Used
•   Number of transmitted bytes: the average number of bytes transmitted by all the nodes in the network for finding the target information. As the message formats are not uniform across protocols, we measured the number of bytes transmitted instead of the number of messages transmitted.
•   Number of transmitted and received bytes: the average number of bytes transmitted and received by all the nodes in the network for finding the target information.
•   Energy consumed: the total energy consumed by all the nodes in the network for finding the target information.
•   Latency: the time taken to find the target information, i.e., the difference between the time at which the search is initiated by the sink node transmitting the search packet and the time at which the search packet is received by the target node.
•   Probability of finding the target information: a measure of the success probability of the search protocols, and also a measure of their non-determinism. (A small sketch showing how such metrics can be computed from a simulated event log follows this list.)
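The sketch below shows one way such metrics could be computed from a simulated event log; the record format and field names are assumptions for illustration, not the authors' simulator output. The probability of finding the target information would be estimated across repeated simulation runs.

```python
# Minimal sketch computing the listed metrics from a simulated event log.
# The event/record format is an assumption for illustration, not the format
# used by the authors' simulator.
from dataclasses import dataclass

@dataclass
class Event:
    node_id: int
    kind: str          # "tx" or "rx"
    nbytes: int
    energy_j: float
    time_s: float

def metrics(events: list[Event], search_start_s: float, target_rx_s: float) -> dict:
    tx_bytes = sum(e.nbytes for e in events if e.kind == "tx")
    txrx_bytes = sum(e.nbytes for e in events)
    energy = sum(e.energy_j for e in events)
    latency = target_rx_s - search_start_s
    return {"tx_bytes": tx_bytes, "tx_rx_bytes": txrx_bytes,
            "energy_j": energy, "latency_s": latency}

if __name__ == "__main__":
    log = [Event(1, "tx", 64, 2.1e-4, 0.00), Event(2, "rx", 64, 1.6e-4, 0.02),
           Event(2, "tx", 64, 2.1e-4, 0.03), Event(3, "rx", 64, 1.6e-4, 0.05)]
    print(metrics(log, search_start_s=0.00, target_rx_s=0.05))
```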
4. Method of Reducing Power Consumption in WSNs
A method for reducing power consumption in a wireless sensor network is provided. An optimized path destined for the sink node is set up using a common channel in which the first and second nodes use a CSMA scheme. A first channel is set up, and transmission/reception slots for packet transmission/reception are allocated in this channel. A packet is transmitted to the second node through a first transmission slot using a TDMA scheme. When a packet is not received from the second node through a first reception slot within a first set amount of time, the first reception slot is allowed to transition to an inactive state. The first node is one of the sink node, at least one parent node, and at least one child node of the parent node, and the second node is one of the child nodes of the first node.
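A schematic sketch of the reception-slot timeout behaviour described above is given below; the class, method names, and timing values are illustrative assumptions rather than the protocol's actual interface.

```python
# Schematic sketch of the reception-slot timeout described above: if nothing
# arrives in the slot within the configured time, the slot goes inactive to
# save power.  All names and timing values are illustrative assumptions.
import time
from typing import Optional

class ReceptionSlot:
    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.active = True

    def wait_for_packet(self, receive_fn) -> Optional[bytes]:
        """Poll receive_fn until a packet arrives or the slot times out."""
        deadline = time.monotonic() + self.timeout_s
        while time.monotonic() < deadline:
            packet = receive_fn()          # returns bytes or None
            if packet is not None:
                return packet
            time.sleep(0.001)              # brief idle between polls
        self.active = False                # transition to the inactive state
        return None

if __name__ == "__main__":
    slot = ReceptionSlot(timeout_s=0.05)
    result = slot.wait_for_packet(lambda: None)   # no packet ever arrives
    print(result, slot.active)                    # -> None False
```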

Read More: Click here...

327
Others / Molecular Biocoding of Insulin – Amino Acid Gly
« on: August 17, 2011, 01:37:49 am »
Quote
Author : Lutvo Kurić
International Journal of Scientific & Engineering Research Volume 2, Issue 5, May-2011
ISSN 2229-5518
Download Full Paper : PDF

Abstract - Modern science mainly treats the biochemical basis of sequencing in biomacromolecules and of processes in medicine and biochemistry. One can ask whether the language of biochemistry is an adequate scientific language to explain the phenomena in that science. Is there perhaps some other language, outside biochemistry, that determines how the biochemical processes will function and what the structure and organization of life systems will be? The research results provide some answers to these questions. They reveal that the process of sequencing in biomacromolecules is conditioned and determined not only by biochemical, but also by cybernetic and information principles. Many studies have indicated that analysis of protein sequence codes and various sequence-based prediction approaches, such as predicting drug-target interaction networks (He et al., 2010), predicting functions of proteins (Hu et al., 2011; Kannan et al., 2008), analysis and prediction of the metabolic stability of proteins (Huang et al., 2010), predicting the network of substrate-enzyme-product triads (Chen et al., 2010), membrane protein type prediction (Cai and Chou, 2006; Cai et al., 2003; Cai et al., 2004), protein structural class prediction (Cai et al., 2006; Ding et al., 2007), protein secondary structure prediction (Chen et al., 2009; Ding et al., 2009b), enzyme family class prediction (Cai et al., 2005; Ding et al., 2009a; Wang et al., 2010), identifying cyclin proteins (Mohabatkar, 2010), and protein subcellular location prediction (Chou and Shen, 2010a; Chou and Shen, 2010b; Kandaswamy et al., 2010; Liu et al., 2010), among many others as summarized in a recent review (Chou, 2011), can provide very useful information and insights for both basic research and drug design, and are hence widely welcomed by the scientific community. The present study attempts to develop a novel sequence-based method for studying insulin in the hope that it may become a useful tool in the relevant areas.
Index Terms-Amino Acid Gly, Human Insulin, Insulin Model, Insulin Code.

 
1 INTRODUCTION
The biological role of any given protein in essential life processes (e.g., insulin) depends on the positioning of its component amino acids and is understood through the „positioning of letters forming words“. Each of these words has its biochemical base. If this base is expressed by corresponding discrete numbers, it can be seen that any given base has its own program, along with its own unique cybernetic and information characteristics.

Indeed, the sequencing of the molecule is determined not only by distinct biochemical features, but also by cybernetic and information principles. For this reason, research in this field deals more with the quantitative rather than the qualitative characteristics of genetic information and its biochemical basis. For the purposes of this paper, specific physical and chemical factors have been selected in order to express the genetic information for insulin. Numerical values are then assigned to these factors, enabling them to be measured. In this way it is possible to determine whether a connection really exists between the quantitative ratios in the process of transfer of genetic information and the qualitative appearance of the insulin molecule. To select these factors, preference is given to classical physical and chemical parameters, including the number of atoms in the relevant amino acids, their analog values, the position of these amino acids in the peptide chain, and their frequencies. There is a large number of these parameters, and each of them gives important genetic information. Going through this process, it becomes clear that there is a mathematical relationship between the quantitative ratios and the qualitative appearance of the biochemical „genetic processes“, and that there is a measurement method that can be used to describe the biochemistry of insulin.

2 METHODS
Insulin can be represented in two different forms, i.e., a discrete form and a sequential form. In the discrete form, a molecule of insulin is represented by a set of discrete codes or a multi-dimensional vector. In the sequential form, an insulin molecule is represented by a series of amino acids according to the order of their position in the chains of 1AI0.
Therefore, the sequential form can naturally reflect all the information about the sequence order and length of an insulin molecule. The key issue is whether we can develop a different discrete method of representing an insulin molecule that allows accommodation of partial, if not all, sequence order information. Because a protein sequence is usually represented by a series of amino acid codes, the question becomes which numerical values should be assigned to these codes in order to optimally convert the sequence order information into a series of numbers for the discrete form representation.
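One simple way to picture such a discrete representation is to map each residue of a fragment to a numeric descriptor, for example the atom count of the corresponding free amino acid, together with its position in the chain. The sketch below does this for the first five residues of the human insulin A chain; the choice of descriptor is an illustrative assumption, not the coding scheme actually developed in this work.

```python
# Illustrative sketch only: represent a peptide fragment as a series of
# discrete numbers by mapping each residue to the atom count of the free
# amino acid.  This is one possible numeric descriptor, not the author's
# actual coding scheme.
ATOM_COUNT = {            # atoms in the free amino acid (C, H, N, O summed)
    "G": 10,  # glycine,        C2H5NO2
    "I": 22,  # isoleucine,     C6H13NO2
    "V": 19,  # valine,         C5H11NO2
    "E": 19,  # glutamic acid,  C5H9NO4
    "Q": 20,  # glutamine,      C5H10N2O3
}

def discrete_form(sequence: str) -> list[tuple[int, str, int]]:
    """Return (position, residue, numeric code) triples for the sequence."""
    return [(i + 1, aa, ATOM_COUNT[aa]) for i, aa in enumerate(sequence)]

if __name__ == "__main__":
    # First five residues of the human insulin A chain: Gly-Ile-Val-Glu-Gln
    print(discrete_form("GIVEQ"))
```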
3 Expression of the Insulin Code Matrix - 1AI0

The matrix mechanism of insulin, the evolution of biomacromolecules and, especially, the biochemical evolution of the insulin language have been analyzed by applying cybernetic methods, information theory, and systems theory. The primary structure of an insulin molecule is the exact specification of its atomic composition and the chemical bonds connecting those atoms.

Read More:  Click here...

Pages: 1 ... 20 21 [22]