Messages - ijser.editor

Author : Roxanna Ast, Azade Fadavi Roodsari
International Journal of Scientific & Engineering Research, IJSER - Volume 2, Issue 4, April-2011
ISSN 2229-5518

Abstract— The aim of this research is to study the prevalence of eating disorders among female students of Tonekabon University. The community being studied is the female students of Tonekabon University. 300 students were randomly selected and requested to complete the ‘Eating Attitude Test-26’.

Index Terms— Eating disorders, Female students, Tonekabon University

Eating disorders present a serious problem these days. Every year the number of people suffering from anorexia and bulimia increases, and the consequences of the disease may be health- or even life-threatening.[1]
This disorder presents a significant problem among adolescent and young women in many westernized countries and is associated with nervous, physical and psychiatric problems.[2]
Anorexia nervosa is a psychological and physical condition of semi-starvation in which individuals weigh 85% or less of what would ordinarily be their healthy body weight, resulting in physical impairments and, in the 90% of patients who are female, cessation of menses.
This condition is due to highly restricted food intake, often accompanied by excessive exercise and sometimes purging by self-induced vomiting, laxative use or other means.
These behaviors are usually related to obsessional and perfectionist thinking that focuses on a distorted body image and fear of becoming fat.[3]
Normal fullness after eating is felt as discomfort and experienced as a failure of control, moral weakness and a source of great guilt. These perfectionists see themselves as failing in their major (anorexia) project and redouble their effort by eating nothing for a day or even restricting their water intake. This can precipitate death through cardiac arrest, particularly if they are also vomiting the little intake they do allow.[4]
The association between anorexia nervosa and depression has long been recognized by clinicians.[5]
A study in Sweden using a structured interview for DSM-III-R criteria found that 85% of patients with anorexia nervosa (AN) had a depressive disorder.[6]
About half of sufferers eventually develop binge-eating episodes – that is, periodic loss of control over eating and an incapacity to satiate.[7]

Bulimia nervosa (BN) is a condition in which individuals binge, eating large amounts of food, up to 2,000 calories at a time or more, and then purge themselves of what they have eaten, usually by forcing themselves to vomit and sometimes by means of laxatives, diet pills, diuretic pills or excessive exercising.
These behaviors occur at least several times per week for months on end. The condition is usually related to over-concern with one's weight and shape, and is accompanied by feelings of shame, disgust and being out of control.[8]
Like anorexia nervosa, bulimia was recognized to occur as early as the 17th century.[9]
According to Wilson [10], to be diagnosed with bulimia nervosa individuals must experience episodes of binge-eating ‘at least twice a week’ on average, for three to six months.
In addition to the primary eating disorders, several other conditions occur among individuals with psychiatric disorders that may markedly affect eating behavior and weight. For instance, individuals with severe depression may experience an increase in appetite and food cravings. Patients with psychotic delusions due to schizophrenia or other conditions may think food is poisoned and refuse to eat.[11]
Recent studies have found that EDNOS (Eating Disorder Not Otherwise Specified) is the most common eating disorder diagnosis in both outpatient and inpatient settings. Underweight patients who do not report over-evaluation of shape and weight are a distinctive and scarcely studied subgroup of EDNOS.
Their self-evaluation is largely or exclusively based on their ability to control their eating.[12]
The incidence of eating disorders in females has been extensively studied in both anorexia nervosa and bulimia nervosa.[13]
In a study of 105 patients with eating disorders, Braun [14] found that the lifetime prevalence of any affective disorder was 41.2% in anorectic restrictors, 82% in anorectic bulimics, 64.5% in patients with bulimia nervosa and 78% in patients with bulimia nervosa with a past history of anorexia nervosa.
According to Treasure [15], the present-time prevalence of all eating disorders is about 5%. Cultural, social and interpersonal elements can trigger onset, and changes in networks can sustain the illness.
Although it is clear that anorexia nervosa occurs in men as well as women, and in younger as well as in older people, few studies report incidence rate for males or for people beyond the age of 35.
The majority of male incidence rates reported were below 0.5 per 100,000 population per year.[16]
Studies have reported the female to male ratio to be around 11 to 1.[17]
Of an overall female rate of 15.0 per 100,000 population per year, Lucas [18] reported a rate of 9.5 for 30-39-year-old women, 5.9 for 40-49-year-old women, 1.8 for 50-59-year-olds and 0.0 for women aged 60 and over.
According to research by Casper [19], women who had recovered from anorexia nervosa rated higher on risk avoidance, displayed greater restraint in emotional expression and initiative, and showed greater conformance to authority than age-matched normal women.
Lucas [20] found that the age-adjusted incidence rates of AN in females 15-24 years old showed a highly significant linear increasing trend from 1935 to 1989, with an estimated rate of increase of 1.03 per 100,000 persons per calendar year.


Electronics / Haptic Training Method Using a Nonlinear Joint Control
« on: April 23, 2011, 08:41:04 am »
Author : Jarillo-Silva Alejandro, Domínguez-Ramírez Omar A., Parra-Vega Vicente
International Journal of Scientific & Engineering Research, IJSER - Volume 2, Issue 4, April-2011
ISSN 2229-5518

Abstract— There are many research works on robotic devices to assist in movement training following neurologic injuries such as stroke with effects on upper limbs. Conventional neurorehabilitation appears to have little impact on spontaneous biological recovery; to this end, robotic neurorehabilitation has the potential for a greater impact. Clinical evidence regarding the relative effectiveness of different types of robotic therapy controllers is limited, but there is initial evidence that some control strategies are more effective than others. This paper considers the contribution of a haptic training method based on a kinesthetic guidance scheme with a nonlinear control law (proxy-based second-order sliding mode control) with the human in the loop, whose purpose is to guide a human user's movement to move a tool (a pen in this case) along a predetermined smooth trajectory with finite-time tracking; the task is a real maze. The path planning can compensate for the inertial dynamics of changes in direction, minimizing the consumed energy and increasing the manipulability of the haptic device with the human in the loop. The Phantom haptic device is used as the experimental platform, and the experimental results demonstrate the performance of the proposed method.

Index Terms—Diagnosis and rehabilitation, haptic guidance, sliding mode control, path planning, haptic interface, passivity and control design.

In the last decade the number of patients who have suffered stroke or traumatic brain injuries has increased considerably [1]. Central nervous system damage can lead to impaired movement control of the upper extremities, so these patients face major difficulties in the activities of daily life. Several studies have shown that rehabilitation therapy based on repetitive motion-oriented tasks helps to improve the movement disorders of these patients [2], [3]. Unfortunately, repetitive therapy requires consistency and time from physicians, and therefore money.
Conventional neurorehabilitation appears to have little impact on impairment beyond spontaneous biological recovery. Robotic neurorehabilitation has potential for a greater impact on impairment due to its easy deployment, its applicability across a wide range of motor impairments, its high measurement reliability, and its capacity to deliver high-dosage, high-intensity training protocols. With the purpose of enhancing the relationship between outcome and the costs of rehabilitation, robotic devices are being introduced in clinical rehabilitation [4], [5]. Rehabilitation using robotic devices has not only made an important contribution in this area but has also introduced greater accuracy and repeatability into rehabilitation exercises. Accurate quantitative measurement using robotic instrumentation is a tool that serves the goal of the patient's recovery.

As care is decentralized and moves away from inpatient settings to homes, the availability of technologies that can provide effective treatment outside the acute care of the hospital will be critical to achieving sustainable management of such diseases. In the field of neuromotor rehabilitation, skilled clinical managers and therapists can achieve remarkable results without technological tools or with rudimentary equipment, but such precious human capital is in very short supply and, in any case, is totally insufficient to meet the current demographic demand.
Robots are particularly suitable for both rigorous testing and application of motor learning principles to neurorehabilitation. Computational motor control and learning principles derived from studies in healthy subjects are introduced in the context of robotic neurorehabilitation. There is an increasing interest in using robotic devices to provide rehabilitation therapy following neurologic injuries such as stroke and spinal cord injury. The general paradigm to be explored is to use a robotic device to physically interact with the participant's limbs during movement training, although there is also work that uses robots that do not physically contact the patients [6].

The Problem
Some of the problems presented by motor rehabilitation of patients who have suffered a brain injury are the time rehabilitation takes, the cost that this entails, the poor information the specialist has for determining the diagnosis of a patient in rehabilitation, and the lack of platforms for rehabilitating patients who cannot attend the hospital yet require continued rehabilitation.

Our Proposal
One technique used to solve the problem of generating rehabilitation platforms using robotic systems is haptic guidance. This technique is based on the use of haptic devices, which allow human-machine interaction. These haptic devices are programmed under certain considerations, such as the safety of the patient during the interaction, the physiology of the patient (in order to generate tracking trajectories that allow proper rehabilitation), and so on. In this paper it is proposed to generate a trajectory based on the solution of 2D mazes, with the haptic device guiding the patient under a control law designed with characteristics that allow coupling among the patient, the device and the trajectory. The haptic device is equipped with optical encoders, which allows obtaining data such as position, velocity, acceleration, force and torque; these data, in combination with other data, allow the specialist to generate a clinical diagnosis with therapeutic support based on the patient's movements in each exercise performed on the platform.
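To make the kinesthetic guidance idea concrete, here is a minimal sketch in which the haptic device applies a force pulling the user's hand toward the desired trajectory point. A simple PD-style law with hypothetical gains stands in for the paper's actual proxy-based second-order sliding mode controller; the unit-mass dynamics and all numbers are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of kinesthetic guidance: the device applies a force
# pulling the end effector toward the desired trajectory point.
# A PD law stands in for the paper's proxy-based sliding mode controller.

def guidance_force(pos, vel, desired_pos, kp=80.0, kd=6.0):
    """Per-axis force driving the end effector toward desired_pos."""
    return [kp * (d - p) - kd * v for p, v, d in zip(pos, vel, desired_pos)]

# Simulate a unit-mass end effector being guided to the target (1.0, 0.5).
pos, vel, target, dt = [0.0, 0.0], [0.0, 0.0], [1.0, 0.5], 0.001
for _ in range(5000):                      # 5 seconds of simulated time
    f = guidance_force(pos, vel, target)
    vel = [v + fi * dt for v, fi in zip(vel, f)]   # semi-implicit Euler
    pos = [p + v * dt for p, v in zip(pos, vel)]

print([round(p, 2) for p in pos])
```

In the real system the target would move along the maze trajectory rather than stay fixed, and the control law would guarantee finite-time tracking.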

In section 2, we introduce the human-haptic interaction, including the haptic scheme, the dynamic model of the haptic device and the guidance control law implemented on the experimental platform. The task description and the path planning for guiding the experiments are given in section 3; experimental results are discussed in section 4. Finally, we present the conclusions in section 5.


Author : Suman Khakurel, Ajay Kumar Ojha, Sumeet Shrestha, Rasika N. Dhavse
International Journal of Scientific & Engineering Research, Volume 1, Issue 3, December-2010
ISSN 2229-5518

Abstract— Robotics and automation engrosses the design and implementation of prodigious machines that have the potential to do work too tedious, too precise, or too dangerous for humans to perform. It also pushes the boundary on the level of intelligence and competence for many forms of autonomous, semi-autonomous and tele-operated machines. Intelligent machines have assorted applications in medicine, defence, space, underwater exploration, disaster relief, manufacturing, assembly, home automation and entertainment. The prime motive behind this project is to design and implement in hardware a mobile-controlled robotic system for maneuvering DC motors and remotely controlling electric appliances. The mobile platform is in the form of a robot capable of standard locomotion in all directions. The robot can be operated in two ways: either by a call made to the mobile phone mounted on the robot or by communication with a computer through a wireless module. Signals from the receiver mobile phone are transformed into digital outputs with the assistance of a decoder. The microcontroller processes these decoded outputs and generates the signals required to drive the DC motors. Outputs from the microcontroller are processed by the motor driver to get sufficient current for smooth driving of the DC motors and electronic equipment. Consequently, the motion of the robot can be regulated and the electric appliances can be controlled from a remote locality. Various levels of automation can be attributed to the robot by suitably modifying the programme loaded on the microcontroller chip.

The robot, integrated with a mobile phone, forms the crux of this project. This integrated architecture is controlled by a supplementary mobile phone, which initiates the call. Once the call is connected, any button pressed corresponds to a unique tone at the other end. The tone is termed ‘Dual Tone Multiple Frequency’ (DTMF), which is perceived by the robot with the help of a cellular phone stacked in it. The received tone is fed into the DTMF decoder (CM8870), which decodes the DTMF tone into its equivalent binary. The binary output from the decoder is consequently administered by the microcontroller (P89C61X2). The P89C61X2 is pre-programmed to take the necessary decisions corresponding to a given set of binary inputs. Output from the P89C61X2 is provided to the drivers L293D and ULN2003. The former acts as a regulator to drive the DC motors, while the latter can be used to drive the electrical appliances.
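The decision logic the microcontroller implements can be sketched as a lookup from decoded keypress to action. The key-to-code mapping below follows the standard 4-bit output of DTMF decoders such as the CM8870, but the code-to-action assignments are purely illustrative assumptions, not the paper's actual firmware:

```python
# Sketch of the microcontroller's decision logic: a DTMF keypress is decoded
# into a 4-bit binary code, which is then mapped to a motor/appliance action.
# The action assignments here are illustrative, not from the paper.

DTMF_CODES = {'1': 0b0001, '2': 0b0010, '3': 0b0011, '4': 0b0100,
              '5': 0b0101, '6': 0b0110, '7': 0b0111, '8': 0b1000}

ACTIONS = {0b0010: 'forward', 0b1000: 'backward',
           0b0100: 'left',    0b0110: 'right',
           0b0101: 'stop',    0b0001: 'appliance_on'}

def handle_keypress(key):
    """Decode a DTMF keypress and return the motor/appliance command."""
    code = DTMF_CODES.get(key)
    return ACTIONS.get(code, 'ignore')

print(handle_keypress('2'))  # forward
print(handle_keypress('5'))  # stop
```

On the actual hardware this table would be burned into the P89C61X2's program memory, with the outputs driving the L293D and ULN2003 pins.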

The cellular phone generating the call acts as a remote control, obviating the need for constructing superfluous receiver and transmitter units, and can thus be used for tele-control of electronic appliances. Similar work has been done by Mr. D. Manojkumar and his project group with industrial applications in mind. This project, however, focuses on the tele/remote control of electronic appliances and provides a detailed analysis of the methodology, components, algorithm used and the results.


Networking / Fiber Optics Based Parallel Computer Architecture
« on: January 22, 2011, 06:18:17 am »
International Journal of Scientific & Engineering Research, Volume 1, Issue 2, November-2010
ISSN 2229-5518

This paper describes computer systems that use optical fiber, in particular the Parallel Sysplex architecture from IBM. Other applications do not currently use optical fiber, but they are presented as candidates for optical interconnect in the near future, such as the PowerParallel supercomputers which are part of the Advanced Strategic Computing Initiative (ASCI). Many of the current applications for fiber optics in this area use serial optical links to share data between processors, although this is by no means the only option; other schemes include plastic optics, optical backplanes, and free-space optical interconnects. Towards the end of the paper, we also provide some speculation concerning machines that have not yet been designed or built but which serve to illustrate potential future applications of optical interconnects. Because this is a rapidly developing area, we frequently cite Internet references where the latest specifications and descriptions of various parallel computers may be found.

Computer engineering often presents developers with a choice between designing a computational device with a single powerful processor (with additional special-purpose coprocessors) or designing a parallel processor device with the computation split among multiple processors that may individually be cheaper and slower. There are several reasons why a designer may choose a parallel architecture over the simpler single-processor design. Before each reason, and before other categorizing methods in this paper, we place a letter code (e.g., A), which we will use to categorize the architectures we describe in other sections of the paper.

1. Speed - There are engineering limits to how fast any single processor can compute using current technology. Parallel architectures can exceed these limits by splitting up the computation among multiple processors.
2. Price - It may be possible but prohibitively expensive to design or purchase a single processor machine to perform a task. Often a parallel processor can be constructed out of off-the-shelf components with sufficient capacity to perform a computing task.
3. Reliability - Multiple processors means that a failure of a processor does not prevent computation from continuing. The load from the failed processor can be redistributed among the remaining ones. If the processors are distributed among multiple sites, then even catastrophic failure at one site (due to natural or man-made disaster, for example) would not prevent computation from continuing.
4. Bandwidth - With multiple processors, more bus bandwidth can be utilized by having each processor simultaneously use part of the bus bandwidth.
5. Other - Designers may have other reasons for adding parallel processing not covered above.
Current parallel processor designs were motivated by one or more of these needs. For example, the Parallel Sysplex family was motivated by reliability and speed, the Cray XMP was primarily motivated by speed, the BBN Butterfly was designed with bandwidth considerations in mind, and the transputer family was motivated by price and speed. After a designer has chosen to use multiple processors, he must make several other choices, such as the speed of the processors, the number of processors, and the network topology.
The product of the speed of the processors and the number of processors is the maximal processing power of the machine (for the most part unachievable in real life). The effect of network topology is subtler.

Network topologies control communication between machines. While most multiprocessors are connected with ordinary copper-wired buses, we believe that fiber optics will be the bus technology of the future. Topology controls how many computers may be necessary to relay a message from one processor to another. A poor network topology can result in bottlenecks where all the computation is waiting for messages to pass through a few very important machines. Also, a bottleneck can result in unreliability with failure of one or few processors causing either failure or poor performance of the entire system.
Four kinds of topologies have been popular for multiprocessors. They are
•  Full connectivity using a crossbar or bus. The historic C.mmp processor used a crossbar to connect the processors to memory (which allowed them to communicate). Computers with small numbers of processors (like a typical Parallel Sysplex or Tandem system) can use this topology, but it becomes cumbersome with larger numbers of processors (more than 16) because every processor must be able to communicate directly and simultaneously with every other. This topology requires fan-in and fan-out proportional to the number of processors, making large networks difficult.
• Pipeline where the processors are linked together in a line and information primarily passes in one direction. The CMU Warp processor was a pipelined multiprocessor and many of the first historical multiprocessors, the vector processors, were pipelined multiprocessors. The simplicity of the connections and the many numerical algorithms that are easily pipelined encourage people to design these multiprocessors. This topology requires a constant fan in and fan out, making it easy to lay out large numbers of processors and add new ones.
• Torus and allied topologies, where an N-processor machine requires √N processors to relay messages. The Goodyear MPP machine was laid out as a torus. Such topologies are easy to lay out on silicon, so multiple processors can be placed on a single chip and many such chips can be easily placed on a board. Such technology may be particularly appropriate for computations that are spatially organized. This topology also has constant fan-in and fan-out. Adding new processors is not as easy as in pipelined processors, but laying out this topology is relatively easy. Because of the ease of layout, sometimes this layout is used on chips and then the chips are connected in a hypercube.
• Hypercube and Butterfly topologies have several nice properties that have led to their dominance in large-scale multiprocessor designs. They are symmetric, so no processor is required to relay more messages than any other. Every message need only be relayed through log(N) processors in an N-processor machine, and messages have multiple alternate routes, increasing reliability under processor failure and improving message routing and throughput. Transputer systems and the BBN Butterfly were some of the first multiprocessors that adopted this type of topology. This topology has a logarithmic fan-out, which can complicate layout when the size of the machine may grow over time. There is an alternative topology called cube-connected cycles that has the same efficient message-passing properties as the hypercube but constant fan-out, easing layout considerably.
• Exotic - There are a variety of less popular but still important topologies one can use in a network.
The more efficient and fast the bus technology, the simpler the topology can be. A really fast bus can simply connect all the processors in a machine together by using time multiplexing, giving a slot for every possible connection between any two of the N processors.
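The relay-hop scaling quoted in the list above can be made concrete with a small sketch: worst-case hops grow like N−1 in a pipeline, like √N in a 2-D torus, and like log₂(N) in a hypercube. The figures below are the standard network diameters of each topology, computed for a 64-processor machine:

```python
# Worst-case message-relay distances (network diameters) for the topologies
# discussed above, illustrating their scaling with machine size N.

import math

def diameter(topology, n):
    """Worst-case number of links a message crosses in an n-processor machine."""
    if topology == 'pipeline':
        return n - 1                         # end to end along the line
    if topology == 'torus':                  # 2-D torus with side sqrt(n)
        side = int(math.isqrt(n))
        return 2 * (side // 2)               # halfway around each ring
    if topology == 'hypercube':              # n must be a power of two
        return int(math.log2(n))             # one hop per differing bit
    raise ValueError(topology)

for topo in ('pipeline', 'torus', 'hypercube'):
    print(topo, diameter(topo, 64))
```

For 64 processors this gives 63, 8 and 6 hops respectively, which is why hypercube-like topologies dominate large-scale designs despite their layout cost.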

The primary computing task for the machine under consideration has a major effect on the network topology. Computing tasks fall into three general categories.
• Heavy computational tasks - these tasks require much more computation than network communication. Some examples of this task are pattern recognition (SETI), code breaking, inverse problems, and complex simulations such as weather prediction and hydrodynamics.
• Heavy communication tasks - these tasks involve relatively little computation and massive amounts of communication with other processors and with external devices. Message routing is the classic example of these tasks. Other such tasks are database operations and search.
• Intermediate or mixed tasks - these tasks lie between the above or are mixtures of the above. An example of an intermediate task is structured simulation problems, such as battlefield simulation. These simulations require both computation to represent the behavior and properties of the objects (like tanks) and communication to represent interaction between the objects. Some machines may be designed for a mixture of heavy computation and heavy communication tasks.
Historically, supercomputers focused on heavy computation tasks, particularly scientific programming, and mainframes focused on heavy communication tasks, particularly business and database applications.


Author : V. Vijayakumari, N. Suriyanarayanan
International Journal of Scientific & Engineering Research, Volume 1, Issue 2, November-2010
ISSN 2229-5518

India and China are, and will remain, the leading countries in terms of the number of people with diabetes mellitus in the year 2025. Among the 10 leading countries in this respect, five are in Asia. Although only a moderate increase in the total population in China is expected in the next 25 years, China is estimated to contribute almost 38 million people to the global burden of diabetes in the year 2025. India, due to its immense population size and high diabetes prevalence, will contribute 57 million [1] and [2]. These figures are based on estimated population growth, population ageing, and urbanization, but they do not take into account changes in other diabetes-related risk factors.

So, diabetic screening programmes are necessary to address all of these factors when working to eradicate preventable vision loss in diabetic patients. When performing retinal screening for diabetic retinopathy [3], some of these clinical presentations are expected to be imaged. Diabetic retinopathy is globally a primary cause of blindness not because it has the highest incidence, but because it often remains undetected until severe vision loss occurs. With advances in shape analysis, the development of strategies for the detection and quantitative characterization of blood vessel changes in the retina is of great importance. Automated early detection of the presence of exudates can assist ophthalmologists in preventing the spread of the disease more efficiently.

Direct digital image acquisition using fundus cameras combined with image processing and analysis techniques has the potential to enable automated diabetic retinopathy screening. The normal features of fundus images include the optic disc, fovea and blood vessels. Exudates and haemorrhages are the main abnormal features and are a leading cause of blindness in the working-age population.
The optic disc is the brightest [4] part of a normal fundus image and can be seen as a pale, round or vertically slightly oval disk. Finding the main components in fundus images helps in characterizing detected lesions and in identifying false positives. Abnormality detection in images plays an important role in many real-life applications. [5] suggested a neural network approach for the detection and classification of exudates. A decision support framework for deducing the presence or absence of DR has been developed and tested [6]. The detection rule is based on a binary-hypothesis testing problem, which simplifies the problem to yes/no decisions. The results suggest that by biasing the classifier towards DR detection, it is possible to make the classifier achieve good sensitivity.

2.1 Feature Extraction
Here, in this method we use the fact that in normal retinal images the optic disc is the brightest part, and next to it come the exudates. So, after detecting the optic disc, its centre point is determined for the extraction of various features in the image. The optic disc is then removed from the image, leaving the exudates as the next brightest region. Here again a binary image [7] can be applied: a proper threshold value is set, and the exudates can be easily identified in the test image. The results are shown in figures 1 and 2.
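The brightest-region idea above can be sketched on a toy grayscale image (nested lists of 0-255 intensities): locate the optic disc as the brightest neighborhood, mask it out, then threshold what remains to isolate exudate-like bright regions. All pixel values, window sizes and thresholds here are illustrative assumptions, not values from the paper:

```python
# Toy sketch of the feature-extraction steps: find the optic disc as the
# brightest region, remove it, then threshold to isolate exudates.

image = [
    [10, 10,  12,  10, 10],
    [10, 250, 252, 11, 10],    # 250s: optic disc (brightest region)
    [10, 251, 249, 10, 180],   # 180/175: exudate-like bright spots
    [10, 10,  10,  10, 175],
]

# 1. Optic disc centre = location of the global maximum intensity.
disc = max(((v, r, c) for r, row in enumerate(image)
            for c, v in enumerate(row)))[1:]

# 2. Mask out a small window around the disc centre.
masked = [row[:] for row in image]
for r in range(max(0, disc[0] - 1), min(len(image), disc[0] + 2)):
    for c in range(max(0, disc[1] - 1), min(len(image[0]), disc[1] + 2)):
        masked[r][c] = 0

# 3. Threshold the remainder: surviving bright pixels are exudate candidates.
exudates = [(r, c) for r, row in enumerate(masked)
            for c, v in enumerate(row) if v > 150]
print(exudates)
```

On a real fundus image the disc would be localized over a neighborhood rather than a single pixel, but the order of operations is the same.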


2.2 Template Matching
The concept behind this method is that a normal, healthy retinal image is taken and kept as the reference against which to isolate the abnormalities in the test image. This reference image acts as the template. Both the reference and test images are converted from RGB to gray levels, and then the two images are compared pixel by pixel. During comparison, the additional objects present in the test image get isolated and are clearly visible in the output. If the test image is normal, it cancels out during comparison, as there is no difference in pixel value between the two; in a test image with exudates, the optic disc cancels out and only the exudates are separated in the output. The results are shown in figures 3 to 5.
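A toy sketch of this template-matching step: subtracting a healthy reference image from the test image pixel by pixel cancels shared structures (the optic disc), leaving only the abnormal additions. The grayscale values and the difference threshold below are illustrative assumptions:

```python
# Toy sketch of template matching by pixel-wise subtraction: structures
# shared with the healthy reference cancel; only abnormalities survive.

reference = [[10, 200, 10],
             [10, 10,  10]]      # healthy: optic disc at (0, 1)

test      = [[10, 200, 10],
             [10, 10, 180]]      # same disc, plus an exudate at (1, 2)

difference = [[abs(t - r) for t, r in zip(trow, rrow)]
              for trow, rrow in zip(test, reference)]

abnormal = [(i, j) for i, row in enumerate(difference)
            for j, v in enumerate(row) if v > 50]
print(abnormal)  # only the exudate location remains; the disc cancels out
```

In practice the two images would first need spatial registration so that corresponding pixels actually line up.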


2.3   Minimum Distance Discriminant Classifier
Color information has been shown to be effective for lesion detection under certain conditions. On the basis of color information, the presence of lesions can be preliminarily detected by using an MDD (Minimum Distance Discriminant) classifier based on statistical pattern recognition techniques.
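A minimal sketch of a minimum-distance discriminant classifier on colour features: each pixel is assigned to whichever class centre (lesion vs background) its RGB vector is closest to. The class centres below are hypothetical training-set means, not values from the paper:

```python
# Minimum-distance discriminant (MDD) classification: assign each pixel to
# the class whose centre is nearest in RGB space. Centres are illustrative.

C_LESION = (230, 220, 60)    # yellowish exudate centre (hypothetical)
C_BGND   = (120, 40, 30)     # reddish retinal background centre (hypothetical)

def classify(pixel):
    """Assign an (R, G, B) pixel to the nearest class centre."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return 'lesion' if dist2(pixel, C_LESION) < dist2(pixel, C_BGND) else 'background'

print(classify((225, 210, 70)))   # near the lesion centre
print(classify((110, 50, 35)))    # near the background centre
```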


2.4   Enhanced MDD Classifier
This method works on RGB co-ordinates rather than spherical co-ordinates. In the Minimum Distance Discriminant (MDD) classifier method, the centre of each class is found using a training set and hence remains fixed. But this may cause problems because of differences in image illumination and average intensity. So a method is employed in which the class centres (Cyell and Cbgnd) vary dynamically depending on the image.
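One way to sketch the dynamic-centre idea: re-estimate Cyell and Cbgnd from each image itself so that illumination differences do not bias the classification. Here the brightest fraction of pixels seeds Cyell and the rest seeds Cbgnd; the 10% fraction and the ranking-by-brightness heuristic are assumptions for illustration, not the paper's method:

```python
# Sketch of dynamically re-estimating the class centres from one image,
# rather than keeping fixed training-set centres. Heuristic is illustrative.

def dynamic_centres(pixels, frac=0.10):
    """Estimate (Cyell, Cbgnd) from one image's (R, G, B) pixels."""
    ranked = sorted(pixels, key=sum, reverse=True)   # brightest first
    k = max(1, int(len(ranked) * frac))
    bright, rest = ranked[:k], ranked[k:]
    mean = lambda group: tuple(sum(ch) / len(group) for ch in zip(*group))
    return mean(bright), mean(rest)

# 2 yellowish exudate-like pixels among 18 background pixels.
pixels = [(230, 220, 60), (228, 216, 64)] + [(120, 40, 30)] * 18
cyell, cbgnd = dynamic_centres(pixels)
print(cyell, cbgnd)
```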


Engineering, IT, Algorithms / The Insulin Bio Code - Standard Deviation
« on: January 22, 2011, 06:00:04 am »
Author : Lutvo Kurić
International Journal of Scientific & Engineering Research, Volume 1, Issue 2, November-2010
ISSN 2229-5518

The biologic role of any given protein in essential life processes, e.g., insulin, depends on the positioning of its component amino acids, and is understood by the "positioning of letters forming words". Each of these words has its biochemical base. If this base is expressed by corresponding discrete numbers, it can be seen that any given base has its own program, along with its own unique cybernetic and information characteristics.

Indeed, the sequencing of the molecule is determined not only by distinct biochemical features, but also by cybernetic and information principles. For this reason, research in this field deals more with the quantitative rather than qualitative characteristics of genetic information and its biochemical basis. For the purposes of this paper, specific physical and chemical factors have been selected in order to express the genetic information for insulin. Numerical values are then assigned to these factors, enabling them to be measured. In this way it is possible to determine if a connection really exists between the quantitative ratios in the process of transfer of genetic information and the qualitative appearance of the insulin molecule. To select these factors, preference is given to classical physical and chemical parameters, including the number of atoms in the relevant amino acids, their analog values, the positions of these amino acids in the peptide chain, and their frequencies. There is a large number of these parameters, and each of them gives important genetic information. Going through this process, it becomes clear that there is a mathematical relationship between quantitative ratios and the qualitative appearance of the biochemical "genetic processes", and that there is a measurement method that can be used to describe the biochemistry of insulin.

A sample of insulin can be represented in two different forms: one is the discrete form and the other is the sequential form. In the discrete form, an insulin is represented by a set of discrete codes or a multi-dimensional vector. In the sequential form, an insulin is represented by a series of amino acids according to the order of their positions in the chains of 1AI0. Therefore, the sequential form can naturally reflect all the information about the sequence order and length of an insulin. The crux is: can we develop a different discrete form to represent an insulin that will allow accommodation of partial, if not all, sequence-order information? Since a protein sequence is usually represented by a series of amino acid codes, what kind of numerical values should be assigned to these codes in order to optimally convert the sequence-order information into a series of numbers for the discrete-form representation?
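The conversion from sequential to discrete form can be sketched directly. One classical parameter the text mentions is the number of atoms in each amino acid; the counts below are for the free amino acids glycine (C2H5NO2), alanine (C3H7NO2), serine (C3H7NO3), cysteine (C3H7NO2S) and valine (C5H11NO2), and the short test sequence is purely illustrative:

```python
# Sketch of a discrete-form representation: map each amino-acid code in the
# sequential form to a numerical value (here, atoms per free amino acid).

ATOM_COUNTS = {'G': 10,   # glycine  C2H5NO2
               'A': 13,   # alanine  C3H7NO2
               'S': 14,   # serine   C3H7NO3
               'C': 14,   # cysteine C3H7NO2S
               'V': 19}   # valine   C5H11NO2

def discrete_form(sequence):
    """Map an amino-acid sequence (sequential form) to a numeric vector."""
    return [ATOM_COUNTS[aa] for aa in sequence]

print(discrete_form('GAVSC'))
```

Note that this simple per-residue mapping preserves sequence order but, as the text asks, richer encodings (positions, frequencies, analog values) are needed to capture more of the sequence-order information.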
How the functioning of biochemistry is determined by cybernetic and information principles is discussed further in the next section.

The matrix mechanism of insulin, the evolution of biomacromolecules and, especially, the biochemical evolution of the insulin language have been analyzed by applying cybernetic methods, information theory and system theory. The primary structure of an insulin molecule is the exact specification of its atomic composition and the chemical bonds connecting those atoms.

Read Complete Paper:

Author : S. Ranichandra, T. K. P. Rajagopal
International Journal of Scientific & Engineering Research, Volume 1, Issue 2, November-2010
ISSN 2229-5518

Many business and economic situations are concerned with a problem of planning activity. In each case, there are limited resources at one's disposal, and the problem is to make use of these resources so as to yield the maximum production, to minimize the cost of production, or to give the maximum profit, etc. Such problems are referred to as problems of constrained optimization.

LPP is a technique for determining an optimum schedule of interdependent activities in view of the available resources. The term "programming" means "planning", and refers to the process of determining a particular plan of action from amongst several alternatives. The general form of an LPP is described as follows. Let z be a linear function on Rn defined by
z = c1x1 + c2x2 + ... + cnxn                     … (a)
where the cj's are constants. Let (aij) be an m x n real matrix and let {b1, b2, ..., bm} be a set of constants such that
a11x1 + a12x2 + ... + a1nxn (<=, =, or >=) b1
a21x1 + a22x2 + ... + a2nxn (<=, =, or >=) b2
...
am1x1 + am2x2 + ... + amnxn (<=, =, or >=) bm                 … (b)

and finally let
xj >= 0;  j = 1, 2, 3, ..., n                 … (c)

The problem of determining an n-tuple (x1, x2, ..., xn) which makes z a minimum (or maximum) and which satisfies (b) and (c) is called the general LPP.
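The general form above can be illustrated on a toy instance. The sketch below is for exposition only: it checks feasibility against constraints of type (b) and (c) and minimizes z by brute-force search over an integer grid, whereas real LP solvers use the simplex or interior-point methods.

```python
# Toy LPP:  minimize z = 2*x1 + 3*x2
# subject to x1 + x2 >= 4,  x1 <= 3,  x1 >= 0,  x2 >= 0.

def feasible(x1, x2):
    """Check constraints (b) and the non-negativity conditions (c)."""
    return x1 + x2 >= 4 and x1 <= 3 and x1 >= 0 and x2 >= 0

# Brute-force the integer grid [0, 10] x [0, 10]; min() compares by z first.
best = min(
    (2 * x1 + 3 * x2, x1, x2)
    for x1 in range(0, 11)
    for x2 in range(0, 11)
    if feasible(x1, x2)
)
print(best)  # (9, 3, 1): z = 2*3 + 3*1 = 9 at the vertex x1 = 3, x2 = 1
```

As LP theory predicts, the minimum is attained at a vertex of the feasible region, here where the constraints x1 <= 3 and x1 + x2 >= 4 are both tight.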

The objective of a GA is to find an optimal solution to a problem. Of course, since GAs are heuristic procedures, they are able to find very good solutions for a wide range of problems. GAs work by evolving a population of individuals over a number of generations. A fitness value is assigned to each individual in the population, where the fitness computation depends on the application.

The main advantages of Genetic Algorithms are:
•   They are adaptive
•   They have intrinsic parallelism
•   They are efficient for complex problems

This paper provides a GA-based solution to LPP. GAs have been applied to many diverse areas such as function optimization, VLSI circuit design, job shop scheduling, bin packing, network design and transportation problems.

The Genetic Algorithm was invented by Professor John Holland at the University of Michigan in 1975. It was then made widely popular by Professor David Goldberg at the University of Illinois. The original Genetic Algorithm and its many variants are collectively known as Genetic Algorithms. GAs are general-purpose search techniques based on principles inspired by the genetic and evolution mechanisms observed in natural systems and populations of living beings. Their basic principle is the maintenance of a population of solutions to a problem (genotypes) in the form of encoded information that evolves over time. The evolution is based on the laws of natural selection (survival of the fittest) and recombination of genetic information within the population. The evolving population samples the search space, accumulates knowledge about the good- and bad-quality areas, and recombines this knowledge to form solutions with optimal performance for the specific problem [1].
At first, a population of M solutions is generated at random, encoded in strings of symbols (often binary strings) that resemble natural chromosomes. Each member of the population is then decoded to a real problem solution, and a "fitness" value is assigned to it by a quality function that gives a measure of the solution quality. When the evaluation is completed, individuals from the population are selected in pairs to replicate and form "offspring" individuals (i.e., new problem solutions). The selection is performed probabilistically. When two parents are selected, their symbol strings are combined to produce an "offspring" solution using genetic-like operators. The main operators used are crossover and mutation. Crossover simply combines the parent symbol strings, forming a new chromosome string that inherits solution characteristics from both parents. Crossover alone, however, cannot introduce information absent from the population; mutation covers this need by injecting new information into the produced string. The injection is done by randomly altering symbols of the new chromosome. Generally, mutation is considered a secondary but far from useless operator, since it gives every solution a non-zero probability of being considered and evaluated.
When M new solution strings have been produced, they are considered a new generation, and they totally replace the "parents" in order for the evolution to proceed. Many generations are needed for the population to converge to an optimum or near-optimum solution, the number increasing with the difficulty of the problem.
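The generational loop just described can be condensed into a short sketch. This is a generic illustration, not the paper's GA: binary chromosomes, fitness-proportional (roulette) selection, one-point crossover, bitwise mutation, full generational replacement, maximizing the number of 1-bits (the standard OneMax toy problem).

```python
import random

random.seed(0)
N_BITS, POP, GENS, P_MUT = 20, 30, 40, 0.02

def fitness(chromosome):                 # quality function: count the 1-bits
    return sum(chromosome)

def select(pop):                         # fitness-proportional selection of a pair
    return random.choices(pop, weights=[fitness(c) + 1 for c in pop], k=2)

def crossover(a, b):                     # one-point crossover of two parents
    cut = random.randrange(1, N_BITS)
    return a[:cut] + b[cut:]

def mutate(chromosome):                  # bitwise mutation injects new information
    return [bit ^ (random.random() < P_MUT) for bit in chromosome]

# Random initial population of M = POP binary chromosomes.
pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):                    # each new generation replaces its parents
    pop = [mutate(crossover(*select(pop))) for _ in range(POP)]

print(max(fitness(c) for c in pop))      # converges toward N_BITS = 20
```

Replacing `fitness` with a penalty-based objective over constraints (b) and (c) is the usual route to the GA-based LPP solution the paper describes.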

Read Complete Paper:

Authors : Amandeep Singh, Manu Bansal
International Journal of Scientific & Engineering Research, Volume 1, Issue 1, October-2010
ISSN 2229-5518
Cryptography includes two basic components: encryption algorithms and keys. If sender and recipient use the same key, the scheme is known as symmetric or private key cryptography. It is well suited to long data streams. Such a system is difficult to use in practice because the sender and receiver must both know the key, and it requires sending the key over a secure channel from sender to recipient [4]. The obvious question is: if a secure channel already exists, why not transmit the data over that same channel?

On the other hand, if different keys are used by sender and recipient, the scheme is known as asymmetric or public key cryptography. The key used for encryption is called the public key, and the key used for decryption is called the private key. Such techniques are used for short data streams and require more time to encrypt the data [3]. To encrypt a message, the public key can be used by anyone, but only the owner of the private key can decrypt it. There is no need for a secure communication channel for the transmission of the encryption key. Asymmetric algorithms are slower than symmetric algorithms and cannot be applied to variable-length streams of data. Section 1 gives an introduction to cryptography. Section 2 describes cryptographic techniques. Section 3 covers the analysis and implementation of the DES algorithm using Xilinx software. Conclusions and future work are included in Section 4.

Symmetric Cryptography
Asymmetric Cryptography

Data Encryption Standard (DES) is a cryptographic standard that was proposed as the algorithm for protecting secure and secret items in the early 1970s and was adopted as an American federal standard by the National Bureau of Standards (NBS) in 1977. DES is a block cipher, which means that during the encryption process the plaintext is broken into fixed-length blocks and each block is encrypted one at a time. Basically, it takes a 64-bit input plaintext and a 64-bit key (only 56 bits are used for the encryption itself; the remaining bits are used for parity checking) and produces a 64-bit ciphertext, which can be decrypted again to recover the message using the same key. Additionally, we must highlight that there are four standardized modes of operation of DES: ECB (Electronic Codebook mode), CBC (Cipher Block Chaining mode), CFB (Cipher Feedback mode) and OFB (Output Feedback mode). In the general structure of the DES encryption algorithm, the 64-bit plaintext first undergoes an initial permutation, then goes through 16 rounds, each consisting of permutation and substitution of the text bits and the key bits, and finally passes through an inverse initial permutation to yield the 64-bit ciphertext.
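Two of the structural facts above, the 56 effective key bits out of 64 and the fixed 8-byte blocks, can be shown in a few lines. This is an illustrative sketch, not a DES implementation: real DES selects the 56 bits through the PC-1 permutation table, and zero padding here stands in for a proper padding scheme.

```python
def effective_key_bits(key64):
    """Drop every 8th bit (the parity bits), leaving the 56 key bits."""
    assert len(key64) == 64
    return [b for i, b in enumerate(key64) if (i + 1) % 8 != 0]

def split_blocks(data, block_size=8):
    """Break plaintext into fixed-length 8-byte (64-bit) blocks, zero-padded."""
    data = data + b"\x00" * (-len(data) % block_size)
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

key = [1] * 64
print(len(effective_key_bits(key)))     # 56
print(split_blocks(b"ATTACK AT DAWN"))  # two 8-byte blocks
```

In ECB mode each such block would be enciphered independently; CBC, CFB and OFB chain blocks together to hide repeated plaintext patterns.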

Read Complete Research Article on

Author : Mueen Uddin, Azizah Abdul Rahman
International Journal of Scientific & Engineering Research, Volume 1, Issue 1, October-2010
ISSN 2229-5518

Data centers are the building blocks of any IT business organization, providing capabilities of centralized storage, backup, management, networking and dissemination of data, in which the mechanical, lighting, electrical and computing systems are designed for maximum energy efficiency and minimum environmental impact [1]. Data centers are found in nearly every sector of the economy, ranging from financial services, media and high-tech to universities, government institutions and many others, which use and operate them to aid business processes, information management and communication functions [2]. Due to rapid growth in the size of data centers, there is a continuous increase in the demand for both physical infrastructure and IT equipment, resulting in a continuous increase in energy consumption. Data center IT equipment consists of many individual devices, such as storage devices, servers, chillers, generators and cooling towers. Servers are the main consumers of energy, because they are present in huge numbers and their count grows continuously with the size of the data center. As new servers are added continuously without considering the proper utilization of those already installed, the result is an unwanted and avoidable increase in energy consumption, as well as in physical infrastructure, such as over-sizing of heating and cooling equipment. This increased consumption of energy raises the production of greenhouse gases, which are hazardous to environmental health. Hence it not only consumes space and energy but also undermines environmental stewardship. Virtualization technology is now becoming an important advancement in IT, especially for business organizations, and has brought a top-to-bottom overhaul of the computing industry.
Virtualization combines or divides the computing resources of a server based environment to provide different operating environments using different methodologies and techniques like hardware and software partitioning or aggregation, partial or complete machine simulation, emulation and time sharing [3].

It enables running two or more operating systems simultaneously on a single machine. A virtual machine monitor (VMM), or hypervisor, is software that provides a platform to host multiple operating systems running concurrently and sharing resources among each other in order to provide services to end users according to service levels defined beforehand [4]. Virtualization and server consolidation techniques are proposed to increase the utilization of underutilized servers, thereby decreasing the energy consumed by data centers and reducing their carbon footprint [4]. Section 2 provides a detailed background of the problem and emphasizes the need for implementing virtualization technology to save energy and cost. Section 3 describes the solution to the problem, proposes a methodology for categorizing data center resources into different resource pools, and analyzes the results to demonstrate the benefits of server consolidation. Section 4 describes the process of implementing virtualization technology in a data center. Finally, conclusions and recommendations are given.
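The consolidation idea can be sketched as a bin-packing problem: pack VM loads onto as few physical hosts as possible so idle servers can be powered off. This sketch uses first-fit decreasing with an assumed 80% per-host utilization cap; it illustrates the general technique, not the paper's own methodology.

```python
def consolidate(vm_loads, host_capacity=80):
    """Assign VM CPU loads (percent) to hosts using first-fit decreasing."""
    hosts = []                                   # each host is a list of VM loads
    for load in sorted(vm_loads, reverse=True):  # largest VMs placed first
        for host in hosts:
            if sum(host) + load <= host_capacity:
                host.append(load)                # fits on an existing host
                break
        else:
            hosts.append([load])                 # otherwise power on a new host
    return hosts

# Ten underutilized servers' loads, consolidated onto far fewer hosts:
loads = [10, 15, 5, 30, 20, 10, 25, 5, 15, 10]
hosts = consolidate(loads)
print(len(hosts))  # 2 hosts instead of 10
```

Fewer powered-on hosts directly translates into the energy and cooling savings that motivate server consolidation.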

Read Complete Article :

Author : Paul Jeffery Marshall
International Journal of Scientific & Engineering Research, Volume 1, Issue 1, October-2010
ISSN 2229-5518

Millions of financial data transactions occur online every day of the year, 24 hours a day, 7 days a week, and bank cyber crimes take place every day when bank information is compromised. Skilled criminal hackers can manipulate a financial institution's online information system, spread malicious banking Trojan viruses that allow remote access to a computer, corrupt data, and impede the quality of an information system's performance. If sensitive information regarding commercial and personal banking accounts is not better protected, cyber-thieves will continue to illegally access online financial accounts to steal trillions of dollars plus sensitive customer information globally. Audits of bank information technology systems, ethics and policy requirements for bank information security systems, awareness of risk potential, and continuity of financial institution information systems should all be high on the agendas of federal and state regulators and banking boards of directors. One major real-world cyber crime directed at any specific financial institution can severely take down a domestic and global financial network. Banks and savings & loans are here identified as financial institutions, and both are custodians not only of their customers' money but, even more so, of their customers' personal and legacy data. Examples of information for which financial institutions are the custodians of record for their commercial and personal banking customers include day-to-day transactions such as deposits, withdrawals and balance amounts, social security numbers, birth dates, loan information, partnership agreements related to a loan, year-to-date statements, and a host of other extremely sensitive financial information. All of the above-mentioned records, transactions and sensitive information involve events that occur online, usually more than 50 percent of the time.
Cyber crooks, network hackers, cyber pirates and internet thieves form an emerging category of criminals and a threat to online banking information security systems. According to reports, $268 million was stolen online from financial institutions; in 2009, cyber-robbery of financial institutions escalated to $559 million. The methods used to hijack financial institutions included banking Trojans that piggy-back on legitimate customer bank accounts to steal passwords, fraudulent wire transfers, and hackers working from the inside to compromise the information security system of a financial institution; in other words, an inside job.

In an age where technology has outpaced the law regarding banking cyber crimes, many online pirates make it their full-time work to probe bank information security systems for a point of entry in order to access bank data and steal money. Customers can be clueless about cyber crimes until it is too late and all their money has disappeared from their accounts. When a potential customer walks through the door of a financial institution to open a basic checking or savings account, the customer is asked and required to provide all kinds of sensitive information, such as a social security number and driver's license number, and to sign an affidavit that authorizes the financial institution to obtain a credit report to check the customer's current credit history, and thereafter every six months, before an account is opened. Then, on top of that requirement, the new customer is asked by the financial institution to trust it with all that sensitive information. Illustrated below are four scenarios and consequences of bank cyber crimes.
Complete Article on

My research for this project led me to formulate what I believe to be a needed set of rules, which I will call the CARDINAL RULES of Information Security, applicable to all industries including financial institutions.

The CARDINAL RULES of Information Security are as follows:

1. An Unprotected Information System is a Business Crime
2. Lack of an Information Security Policy is Unacceptable
3. Audit and Check Compliance Routinely to Identify Information Security Shortfalls
4. Risk Management Analysis Strengthens Information and System Security
5. A Strong Virus Protection Policy helps protect against Network Vulnerabilities and Threats

What is at stake when sensitive information is compromised online, and all roads lead back to the custodian of the information? In an age where hackers and online information bandits keep a 24-hour vigil as cyber intruders intent on theft and crime, no information system is a completely safe zone. The best offence against cyber criminals who seek to compromise online system security is defense. Stakeholders who are responsible for online financial data must have a plan, a policy, and protection related to information security. In my opinion, the CARDINAL RULES of Information Security should be adopted into the by-laws of all business models that expect to do online e-Commerce business in the future. Cyber threats and attacks are real; many go undetected, they occur every day, and they will be on the rise in coming years. The facts are clear: the custodian of online information bears the responsibility for the security of the data.
Complete Article on

  • Performance Analysis of a 16-User 2.5 Gigabit Optical-CDMA Using Wavelength/Time Codes
  • Three Layered Hierarchical Fault Tolerance Protocol for Mobile Agent System
  • Analysis of Routing Protocols for Highway Model without Using Roadside Unit and Cluster
  • Use of Frequency Modulation Technique to improve security system
  • Damage-Based Theories of Aging and Future Treatment Schemes
  • Methods used to handle information overload in Usenet
  • Trust Computing Models For Enhancing Mobile Agent Security:A Perspective Study
  • Image Processing For Biomedical Application
  • Performance Improvement in Optical Burst-Switching Networks
  • A Role of Query optimization in Relational Database
  • Vitamin-C Rich Aonla (Emblica officinalis Gaertn.) Based Blended Ready-to-Serve Beverages
  • An Effective Way for Wind Farm Planning
  • Importance of Intrusion Detection System (IDS)
  • Scheduling High Performance Data mining Tasks On Weka4WS Grid Framework
  • An Efficient Modeling Technique for Heart Sounds and Murmurs
  • A Comparative study on Breast Cancer Prediction Using RBF and MLP
  • Artificial Intelligence-Robots, Avatars (current applications) and the fall of Human Mediator
  • A Performance based Multi-relational Data Mining Technique
  • Oscillation Properties of Solutions for Certain Nonlinear Difference Equations of Third Order
  • A Distributed Administration Based Approach for Intrusion Detection in Mobile Ad Hoc Networks
  • Face Modeling using segmentation technique
  • Cloud Computing
  • Media used as an Inspirational Factor for Future Women Candidates in Politics
  • CYBER SOCIAL NETWORKS AND SOCIAL MOVEMENTS                                   
  • An Efficient Decision Tree Algorithm Using Rough Set Theory
  • Search Engine Technique Using New Ranking Algorithm for Web Mining
  • Secure Wireless Network System against Malicious Rogue Threats
  • Design and Implementation of Mobile and Internet Product Access Information and Its Administration System
  • Ten Commandments: Prevention of Data Loss
  • Implementation and Analysis of RSA on GSM Network
  • A novel fast version of particle swarm optimization method applied to the problem of optimal capacitor placement in radial distribution systems
  • Shake-down Satellites  on core-level regions of the  XPS  for europium(III) compounds
  • Review Of Ant Colony Optimization Algorithms On Vehicle Routing Problems And Introduction To Estimation-Based ACO
  • Demographic Analysis of Investment Decisions and Perception of People in India
  • A Comparative Study on Improving the Latency Time of File Access Using Standard Backpropagation Neural Networks
  • Quantized Conductance of One Dimensional Quantum Wires
  • GPS Based Voice Alert System for the Blind
Reference -

Research Paper Published (December 2010)

Mobile Controlled Robots for Regulating DC Motors and their Domestic Applications[Full-Text]
Suman Khakurel, Ajay Kumar Ojha, Sumeet Shrestha, Rasika N. Dhavse

Robotics and automation involve the design and implementation of prodigious machines which have the potential to do work too tedious, too precise, or too dangerous for humans to perform. They also push the boundary on the level of intelligence and competence of many forms of autonomous, semi-autonomous and tele-operated machines. Intelligent machines have assorted applications in medicine, defence, space, underwater exploration, disaster relief, manufacture, assembly, home automation and entertainment. The prime motive behind this project is to design and implement in hardware a mobile-controlled robotic system for maneuvering DC motors and remotely controlling electric appliances. The mobile platform takes the form of a robot capable of standard locomotion in all directions. The robot can be operated in two ways: either by a call made to the mobile phone mounted on the robot, or by communication with a computer through a wireless module. Signals from the receiver mobile phone are transformed into digital outputs with the assistance of a decoder. A microcontroller processes these decoded outputs and generates the signals required to drive the DC motors. Outputs from the microcontroller are processed by the motor driver to provide sufficient current for smooth driving of the DC motors and electronic equipment. Consequently, the motion of the robot can be regulated and the electric appliances can be controlled from a remote locality. Various levels of automation can be attributed to the robot by suitably modifying the program loaded on the microcontroller chip.
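The decode-then-drive chain described above can be sketched in software. The key-to-code and code-to-command tables below are assumptions for illustration (a typical 4-bit DTMF decoder outputs the digit's value in binary), not the authors' actual pin assignments or firmware.

```python
# Hypothetical 4-bit DTMF decoder outputs for a pressed keypad key.
DTMF_CODE = {"2": 0b0010, "4": 0b0100, "5": 0b0101, "6": 0b0110, "8": 0b1000}

# Microcontroller lookup: decoded code -> (left motor, right motor) command.
COMMANDS = {
    0b0010: ("FWD", "FWD"),    # key 2: move forward
    0b0100: ("REV", "FWD"),    # key 4: turn left
    0b0110: ("FWD", "REV"),    # key 6: turn right
    0b1000: ("REV", "REV"),    # key 8: reverse
    0b0101: ("STOP", "STOP"),  # key 5: stop
}

def drive(key):
    """Map a keypad press, via the decoder output, to motor-driver commands."""
    return COMMANDS[DTMF_CODE[key]]

print(drive("2"))  # ('FWD', 'FWD')
```

On the real hardware, these commands become logic levels on the motor-driver inputs, which then supply the current the DC motors need.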


An FPGA Based Digital Controller for AC Loads Utilizing Half and Full Wave Cycle Control[Full-Text]
Shaiyek Md. Buland Taslim, Shaikh Md. Rubayiat Tousif

Voltage control for AC loads can be performed by controlling the phase or the cycles of the AC voltage reaching the load end. Cycle control tends to be more beneficial, as it reduces the amount of harmonic frequency in the circuit compared to a phase control circuit. In cycle control, either an entire half cycle or a complete full cycle is passed, hence eliminating the sharp change in the wave which is typical of phase control and which causes harmonic frequencies in the circuit. Cycle control is the most convenient process for controlling the output voltage while reducing RFI, and this is the approach that has been utilized in designing this project. The project used a voltage comparator to generate a digitized AC voltage and an SCR (Silicon Controlled Rectifier) to supply the required AC cycles to the load. The central controller was a digital controller implemented in a Xilinx Spartan-2 FPGA. The digital controller utilized a Finite State Machine (FSM) that took in the digital AC signal and the desired percentage of load voltage. With this information it passed a certain number of half or full cycles to the load, at the same time ensuring that the number of these AC cycles satisfied the user's percentage requirement. The digital controller was modeled in VHDL (a hardware description language), synthesized with the XST tool, and placed and routed in a Xilinx Spartan-2 FPGA (xc2S50-pq208) in the Xilinx ISE 9.1i WebPack design environment. The placed and routed design was implemented on the Pegasus FPGA board from Digilent, which contains the above-mentioned FPGA. The simulations and the outputs of the implemented hardware accorded with the expected outputs.
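The pass-whole-cycles idea can be modeled behaviorally. The sketch below is a Python model of one plausible FSM decision rule (an error accumulator), written as an assumption for illustration; it is not the authors' VHDL design.

```python
def cycle_control(n_half_cycles, percent):
    """Decide, per AC half cycle, whether to pass (1) or block (0) it,
    so that the fraction passed tracks the requested percentage without
    ever chopping a half cycle mid-wave."""
    passed, decisions = 0, []
    for i in range(1, n_half_cycles + 1):
        if passed * 100 < percent * i:   # behind target: pass this half cycle
            decisions.append(1)
            passed += 1
        else:                            # at or ahead of target: block it
            decisions.append(0)
    return decisions

d = cycle_control(10, 30)
print(d)       # [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]
print(sum(d))  # 3 of 10 half cycles passed = 30%
```

Because only whole half cycles are gated, the output waveform has none of the sharp mid-cycle edges that make phase control harmonically noisy.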


Multilevel Inverters: A Comparative Study of Pulse Width Modulation Techniques[Full-Text]
B.Urmila, D.Subbarayudu

The multilevel inverter topology gives the advantage of usability in high-power and high-voltage applications with reduced harmonic distortion and without a transformer. This paper presents a comparative study of a nine-level diode-clamped inverter for constant-switching-frequency sinusoidal Pulse Width Modulation and sinusoidal Natural Pulse Width Modulation with switching-frequency-optimal modulation.

Index Terms— Multicarrier Pulse Width Modulation, diode clamped inverter, switching frequency optimal Pulse Width Modulation, Sub-Harmonic Pulse Width Modulation, constant switching frequency, Sinusoidal Natural Pulse Width Modulation, Sinusoidal Pulse Width Modulation, multilevel converter, multilevel inverter, total harmonic distortion.


Detecting Malicious Nodes For Secure Routing In MANETS Using Reputation Based Mechanism[Full-Text]
Santhosh Krishna B.V, Mrs.Vallikannu M.E

Mobile ad-hoc networks (MANETs) are prone to a number of security threats. We incorporate our distributed reputation protocol within DSR and perform extensive simulations using a number of scenarios characterized by high node mobility, short pause time and highly sparse networks in order to evaluate each of the design choices of our system. We focus on single and multiple blackhole attacks, but our design principles and results are applicable to a wider range of attacks such as grayhole and flooding attacks. Our implementation of the blackhole comprises active routing misbehavior and forwarding misbehavior. We design and build our prototype over DSR and test it in NS-2 in the presence of a variable number of active blackhole attackers in highly mobile and sparse networks.


A Fault-tolerant Scheme for Routing Path Re-establishment for reliable communication in Heterogeneous Networks[Full-Text]
R.S. Shaji, Dr. R.S. Rajesh, B Ramakrishnan

In heterogeneous environments, devices accessible in different networks may help provide new opportunities for utilizing new services when they are connected efficiently. In mobile ad hoc networks it is very difficult to maintain the links among devices because of the frequently changing density of mobile devices, their various medium-access mechanisms and their mobility. In this paper, we consider various aspects of the occurrence of faults in the routing path by predicting them early. We have tested the path maintenance procedure using three fault-occurrence scenarios. Ours is a novel routing scheme named SFUSP (Self-eliminating Fault-tolerant based Un-interrupted reliable Service switching mobile Protocol), which is basically a proactive scheme with added functionalities such as clustering and self-elimination. It is specially designed for establishing reliable routes in heterogeneous networks and senses path link breaks in advance. Performance evaluation is carried out for routing metrics and fault-tolerance metrics, compared with other protocols such as AODV, DSR and OLSR under various mobility models such as Random Waypoint, Brownian and Manhattan for different MAC layers.


On Common Fixed Point For Compatible mappings in Menger Spaces[Full-Text]
M. L. Joshi, Jay G. Mehta

In this paper the concept of compatible maps in Menger spaces is applied to prove a common fixed point theorem. A fixed point theorem for self-maps is established using the concept of compatibility of a pair of self-maps. In 1942, Menger introduced the theory of probabilistic metric spaces, in which a distribution function is used instead of a non-negative real number as the value of the metric. In 1966, Sehgal initiated the study of contraction mapping theorems in probabilistic metric spaces. Since then, several generalizations of fixed point theory have appeared; Sehgal and Bharucha-Reid, Sherwood, and Istratescu and Roventa have obtained several theorems in probabilistic metric spaces. The study of fixed point theorems in probabilistic metric spaces is useful for studying the existence of solutions of operator equations in probabilistic metric spaces and in probabilistic functional analysis. In 2008, Altun and Turkoglu proved two common fixed point theorems on complete PM-spaces with an implicit relation.


Sintered Properties of Aluminium Alloy for better Nano Tool Products[Full-Text]  Download
Raji.K, S. Alfred Cecil Raj

The most important properties of aluminium are its low specific gravity (2.7), high electrical and thermal conductivities, high ductility and corrosion resistance in various media. Aluminium has a face-centered cubic crystal lattice whose constant depends upon its degree of purity. The mechanical properties of pure aluminium are not very high, though it possesses good ductility. When aluminium is alloyed with copper, nickel, iron or silicon, its mechanical properties are increased. The present investigation explains how the mechanical properties of the Al-Cu composition are increased and how the effect of sintering on this composition dominates. One of the advantages of P/M is that metals and alloys can be mixed together in any proportion to manufacture articles of any desired shape; in this respect the process is not governed by the phase rule [1], which applies to alloys manufactured by melting.


Computer Aided Screening for Early Detection of Breast Cancer using BRCA Gene as an Indicator[Full-Text]
S.Ranichandra, T. K. P. Rajagopal

A cancerous breast tumor is a mass of breast tissue that is developing in an abnormal, uncontrolled way. Early detection of breast cancer is key to survival because of its association with augmented treatment options. Mammography screening and MRI are among the existing breast cancer detection methods. MRI has the problem of producing a larger number of false positives. Mammograms have disadvantages: they are expensive, give false positives for patients with dense breast tissue, detect tumors only when they are bigger than 5 mm, and are painful. Hence there is a need to develop a more convenient and accurate method. In the proposed approach, we analyze gene expression patterns in blood cells for detecting breast cancer at an early stage. The BRCA gene is a tumor suppressor gene which all people have. The BRCA DNA sequences from patients are generated by the PCR method and used as input to a local sequence alignment program, an implementation of the Smith-Waterman algorithm. It compares the patient's gene sequence with the reference BRCA gene sequence to determine the cancer risk at a very early stage.
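The Smith-Waterman algorithm named above is standard and compact enough to sketch. The scoring values and toy sequences below are illustrative, not the paper's data; a real BRCA comparison would also need traceback to recover the alignment itself, omitted here for brevity.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Return the best local alignment score between sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]  # scoring matrix, first row/col = 0
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: scores are clamped at 0 so bad regions reset.
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# A perfect local match scores match * length of the shared substring:
print(smith_waterman("ACGTTGCA", "TTG"))  # 6 (the shared substring "TTG")
```

A low best score against the reference BRCA sequence would flag divergent regions; in practice, mutation calling uses the full alignment, not just the score.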


General Pseudoadditivity of Kapur's Entropy Prescribed by the Existence of Equilibrium[Full-Text]
Priti Gupta, Vijay Kumar

In this paper, the composability of the total channel capacity of a system of channels composed of two independent subsystems of channels is discussed. The capacity of this system of channels is a function of the entropies of the subsystems. Here, we show that the generalized entropies, namely the Tsallis entropy, the normalized Tsallis entropy and Kapur's entropy, are compatible with the composition law, are defined consistently with the condition of equilibrium, and satisfy pseudo-additivity.
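For reference, the pseudo-additivity mentioned above has the following standard form for the Tsallis entropy of two independent subsystems A and B (quoted from the general literature with k = 1; the paper's normalization for the normalized Tsallis and Kapur entropies may differ):

```latex
% Tsallis pseudo-additivity for independent subsystems A and B:
S_q(A+B) = S_q(A) + S_q(B) + (1 - q)\, S_q(A)\, S_q(B)
% As q \to 1 the cross term vanishes and ordinary (Shannon) additivity
% S(A+B) = S(A) + S(B) is recovered.
```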


Research Paper Published (November 2010)

Exudates Detection Methods in Retinal Images Using Image Processing Techniques[Full-Text]
V.Vijayakumari, N. Suriyanarayanan

Exudates are among the most commonly occurring lesions in diabetic retinopathy. They can be identified as areas with hard white or yellowish colors and varying sizes, shapes and locations near the leaking capillaries within the retina. The detection of exudates is the major goal, and the prerequisite stage for it is the detection of the optic disc. Once the optic disc is found, certain algorithms can be used to detect the presence of exudates. In this paper a few methods are used for the detection, and the performance of all the methods is compared.


A Survey of Frequency and Wavelet Domain Digital Watermarking Techniques[ Full-Text ]
Dhruv Arya

Due to improvements in imaging technology and the ease with which digital content can be reproduced and manipulated, there is a strong need for a digital copyright mechanism to be put in place. There is a need for authentication of the content as well as the owner. It has become easier for malicious parties to make scalable copies of copyrighted content without any compensation to the content owner. Digital watermarking is seen as a potential solution to this problem. To date, many different watermarking schemes have been proposed. This paper presents a comprehensive survey of the current techniques that have been developed and of their effectiveness.


FPGA-Based Design of Controller for Sound Fetching from Codec Using Altera DE2 Board[ Full-Text ]
A.R.M. Khan, A.P.Thakare, S.M.Gulhane

The trend in hardware design is towards implementing a complete system, intended for various applications, on a single chip. In order to implement any speech application on the Altera DE2 board, a controller is designed to control the CODEC and acquire digital data from it. This paper presents an experimental design and implementation of the controller, following the Philips specification for the I2C protocol and the DSP mode of operation of the CODEC, on the Cyclone-II EP2C35F72C6 FPGA of the Altera DE2 board. The controller was designed in VHDL and performs two operations: the I2C protocol operation to drive the Wolfson WM8731 codec, and sound fetching from the WM8731 into the FPGA in DSP mode. Altera Quartus II 9.0 SP2 Web Edition is used for synthesis of the VHDL logic on the FPGA, and ModelSim-Altera 6.5b (Quartus II 9.1) Starter Edition is used for simulation of the VHDL logic.


The Insulin Bio Code - Standard Deviation[Full-Text]
Lutvo Kuri

This paper discusses cyberinformation studies of the amino acid composition of insulin, in particular the identification of scientific terminology that could describe this phenomenon, i.e., the study of genetic information, as well as the relationship between the genetic language of proteins, the theoretical aspects of this system, and cybernetics. The results of this research show that there is a matrix code for insulin. They also show that the coding system within the amino acid language gives detailed information not only on the amino acid "record", but also on its structure, configuration and various shapes. The existence of an insulin code and the coding of the individual structural elements of this protein are discussed, and answers to the following questions are sought. Does the matrix mechanism for biosynthesis of this protein function within the laws of the general theory of information systems, and what is the significance of this for understanding the genetic language of insulin? What is the essence of the existence and functioning of this language?


Fiber Optics Based Parallel Computer Architecture[Full-Text]

Computer architecture is the conceptual design and fundamental operational structure of a computer system. It is a blueprint and functional description of requirements and design implementations for the various parts of a computer, focusing largely on the way the central processing unit (CPU) operates internally and accesses addresses in memory. In this paper, we present an overview of parallel computer architectures and discuss the use of fiber optics for clustered or coupled processors. Presently, a number of computer systems take advantage of commercially available fiber optic technology to interconnect multiple processors, thereby achieving improved performance or reliability at a lower cost than if a single large processor had been used. Optical fiber is also used to distribute the elements of a parallel architecture over large distances; these can range from tens of meters, to alleviate packaging problems, to tens of kilometers, for disaster recovery applications.


Resource Allocation using Genetic Algorithms[Full-Text]
S. Ranichandra, T. K. P. Rajagopal

A genetic-based algorithm can be used to allocate resources in production units through linear programming problem (LPP) models. LPP concentrates on resource allocation in a meaningful way so as to maximize production, maximize profit, or minimize the cost of production. Conventional procedures exist for solving LPPs, but in this paper an attempt is made to solve the problem unconventionally, with the help of Genetic Algorithms (GAs), thereby removing the drawbacks of the conventional systems. The benefits of the new approach are independence from auxiliary information, a single uniform algorithm, and simple computerization. The new algorithm is tested on a set of sample problems and the results are found to be satisfactory.
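As a sketch of the idea, a GA can search the feasible region of a small LPP directly, with infeasible candidates penalized rather than excluded. The toy problem below (maximize 3x + 5y subject to x ≤ 4, 2y ≤ 12, 3x + 2y ≤ 18) and the GA parameters are assumptions for illustration, not the paper's test set:

```python
import random

random.seed(1)

# Toy resource-allocation LPP: maximize profit 3x + 5y
# subject to x <= 4, 2y <= 12, 3x + 2y <= 18, x, y >= 0.
def feasible(x, y):
    return 0 <= x <= 4 and 0 <= y and 2 * y <= 12 and 3 * x + 2 * y <= 18

def fitness(ind):
    x, y = ind
    return 3 * x + 5 * y if feasible(x, y) else -1  # penalize infeasibility

def mutate(ind):
    x, y = ind
    return (max(0, x + random.choice([-1, 0, 1])),
            max(0, y + random.choice([-1, 0, 1])))

pop = [(random.randint(0, 6), random.randint(0, 6)) for _ in range(20)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                                 # truncation selection
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(pop, key=fitness)
print(best, fitness(best))
```

Note that the GA needs only the fitness function, not gradients or simplex tableaux, which is the "independence from auxiliary information" the abstract refers to; the known LP optimum here is x = 2, y = 6 with profit 36.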


Promoting e-Governance through Right to Information: A Case-study of India[Full-Text]
Shalini Singh

Reinventing government has been a dominant theme since the 1990s, with governments the world over attempting to improve their systems of public service delivery. Rapid strides in the field of Information and Communication Technology (ICT) have facilitated this reinvention and prepared governments to serve the needs of a diverse society. In other words, the information age has redefined the fundamentals and transformed the institutions and mechanisms of service delivery forever. The vision articulates a desire to transform the way the government functions and the way it relates to its constituents. The concept of electronic governance, popularly called e-governance, is derived from this concern.


Ideals in Group algebra of Heisenberg Group[Full-Text]
M. L. Joshi

Ideals are very important in spectral theory. We derive the relation between noncommutative and commutative algebras by a transformation associated with the semidirect product of groups, and we obtain and classify the ideals in the L1-algebra of the Heisenberg group.
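For background (standard facts, not the paper's specific construction), the continuous Heisenberg group can be realized as unipotent upper triangular matrices, and it is this semidirect-product structure that the transformation exploits:

```latex
H_3(\mathbb{R}) \;=\; \left\{ \begin{pmatrix} 1 & a & c \\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix} : a, b, c \in \mathbb{R} \right\},
\qquad
(a, b, c)\,(a', b', c') \;=\; (a + a',\; b + b',\; c + c' + a b'),
```

so that H_3(ℝ) ≅ ℝ² ⋊ ℝ, a semidirect product in which ℝ acts on ℝ² by the unipotent matrices φ_t = (1, t; 0, 1).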


Aayushi Jangid

In today's world, when hacking and the robbery and theft of computer data are common phenomena, it is very important to protect data and information sent over a network, and that is where the need for cryptography arises. Cryptography is the science of writing data or information in a secret code; it involves encryption and decryption. Data that can be understood without any special effort is called the plaintext. This data can be converted into a secret code, a process called encryption, and the encrypted data is called the ciphertext. The ciphertext can be converted back into the plaintext with a key, a process called decryption. Thus, cryptography comprises both the encryption and the decryption process.
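The plaintext/key/ciphertext cycle described above can be shown with a deliberately simple toy cipher (a repeating-key XOR, chosen only because encryption and decryption are the same operation; it is not secure and is not the scheme discussed in the paper):

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with the repeating key.
    Because XOR is its own inverse, the same function both encrypts
    and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"attack at dawn"
key = b"secret"
ciphertext = xor_cipher(plaintext, key)   # encryption: plaintext -> ciphertext
recovered = xor_cipher(ciphertext, key)   # decryption with the same key
print(recovered == plaintext)  # True
```

Real ciphers replace the XOR step with many rounds of substitution and permutation, but the plaintext → ciphertext → plaintext round trip under a shared key is exactly the cycle the abstract describes.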


Research Titles Submitted for Review (October 2010)

Online Banking: Information Security vs. Hackers Research Paper[ Full-Text ]
Paul Jeffery Marshall, University of Houston

In this paper I discuss four scenarios involving cyber crimes specifically directed at financial institutions and give specific examples. I also discuss the malicious Zeus and URLzone banking Trojans that are currently causing security issues and threats to some financial institutions. The expected results of this study and research are to raise awareness of the increased cyber-attack activity directed at financial institutions, to call for a global alliance against cyber-pirates, and to point out that financial institutions have a responsibility to protect and secure information as custodians of their customers' sensitive/legacy data online.


Server Consolidation: An Approach to Make Data Centers Energy Efficient and Green[ Full-Text ]
Mueen Uddin, Azizah Abdul Rehman

Data centers are the building blocks of IT business organizations, providing the capabilities of a centralized repository for the storage, management, networking and dissemination of data. With the rapid increase in the capacity and size of data centers comes a continuous increase in the demand for energy. Data centers not only consume a tremendous amount of energy but are also riddled with IT inefficiencies. All data centers are populated with thousands of servers as their major components, and these servers consume huge amounts of energy without performing useful work: in an average server environment, 30% of the servers are "dead", consuming energy without being properly utilized, and the overall utilization ratio is only 5 to 10 percent. This paper focuses on the use of an emerging technology, virtualization, to achieve energy-efficient data centers through server consolidation, which raises the utilization ratio up to 50% and saves a huge amount of energy. Server consolidation helps in implementing green data centers, ensuring that the IT infrastructure contributes as little as possible to greenhouse gas emissions, and helps to regain power and cooling capacity, recapture resilience, and dramatically reduce energy costs and the total cost of ownership.
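The scale of the savings follows directly from the figures quoted in the abstract. A back-of-the-envelope sketch using those numbers (the fleet size and the 250 W per-server draw are hypothetical assumptions added for illustration):

```python
# Consolidation estimate using the abstract's figures: 30% dead servers,
# 5-10% average utilization, raised to 50% after virtualization.
total_servers = 1000              # hypothetical fleet size
watts_per_server = 250.0          # hypothetical average power draw
dead_fraction = 0.30              # "dead" servers doing no useful work
avg_utilization = 0.075           # midpoint of the 5-10% range
target_utilization = 0.50         # after consolidation

# Useful work actually performed, in "fully-utilized server" units:
live = total_servers * (1 - dead_fraction)
work = live * avg_utilization

# Servers needed if each runs at the target utilization:
needed = work / target_utilization
saved_kw = (total_servers - needed) * watts_per_server / 1000.0
print(round(needed), "servers needed;", saved_kw, "kW saved")
```

Under these assumptions roughly 1000 physical servers collapse to about a hundred virtualized hosts, which is why the abstract can claim dramatic reductions in both energy cost and cooling load.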


FPGA Implementation of Optimized DES Encryption Algorithm on Spartan 3E[ Full-Text ]
Amandeep Singh, Ms. Manu Bansal

Data security is an important requirement for industry. It can be achieved with encryption algorithms, which are used to prevent unauthorized access to data. Cryptography is the science of keeping data transfer secure, so that eavesdroppers (or attackers) cannot decipher the transmitted message. In this paper the DES algorithm is optimized using Xilinx software and implemented on a Spartan 3E FPGA kit. The paper deals with the various parameters, such as variable key length and the key generation mechanism, used in order to obtain optimized results.
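DES is a 16-round Feistel cipher, and the Feistel structure is what makes a hardware implementation so regular: decryption reuses the encryption datapath with the subkeys in reverse order. A minimal software skeleton of that structure (the round function and subkeys here are toys standing in for DES's expansion, S-boxes, and key schedule):

```python
def round_fn(half, subkey):
    """Toy round function standing in for DES's expansion/S-box/permutation."""
    return (half * 31 + subkey) & 0xFFFFFFFF

def feistel(block, subkeys):
    """Generic Feistel network on a 64-bit block: each round swaps the
    halves and XORs one half with F(other half, subkey)."""
    left, right = block >> 32, block & 0xFFFFFFFF
    for k in subkeys:
        left, right = right, left ^ round_fn(right, k)
    return (right << 32) | left   # undo the final swap, as DES does

key_schedule = [0x0F0F, 0x3C3C, 0x5A5A, 0x9999]   # toy subkeys
pt = 0x0123456789ABCDEF
ct = feistel(pt, key_schedule)
assert feistel(ct, key_schedule[::-1]) == pt      # decrypt = reversed keys
```

The invertibility holds for any round function, which is why the FPGA design can share one pipeline between encryption and decryption and vary only the key generation mechanism.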


Adaptive Fault Tolerant Routing in Interconnection Networks: A Review[ Full-Text ]
Dr. M. Venkata Rao, A. S. K. Ratnam

Multiprocessor and multicomputer systems are connected by a variety of interconnection networks. To enable any non-faulty component (node or link) to communicate with any other non-faulty component in an injured interconnection network, information on component failures must be made available to the non-faulty components, so that messages can be routed around the faulty ones. In this paper we review the adaptive routing schemes proposed by Dally and Aloki and by Glass and Ni, as well as the implementation details of a reliable router. Moreover, these schemes route messages via shortest paths with high probability, and the expected length of a routing path is very close to that of the shortest path.
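The core idea, routing around failed components once their locations are known, can be sketched with a breadth-first search on a 2-D mesh that simply excludes faulty nodes (an illustrative global-knowledge sketch; the reviewed schemes make these decisions locally at each router):

```python
from collections import deque

def route(size, faulty, src, dst):
    """Shortest route in a size x size mesh that avoids faulty nodes."""
    if src in faulty or dst in faulty:
        return None
    prev = {src: None}               # also serves as the visited set
    queue = deque([src])
    while queue:
        x, y = queue.popleft()
        if (x, y) == dst:            # reconstruct the path backwards
            path, node = [], dst
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            n = (nx, ny)
            if 0 <= nx < size and 0 <= ny < size \
                    and n not in faulty and n not in prev:
                prev[n] = (x, y)
                queue.append(n)
    return None                      # destination unreachable

# Faults on the direct path from (0,0) to (3,0) force a detour.
path = route(4, {(1, 0), (1, 1)}, (0, 0), (3, 0))
print(path)
```

Without faults the route would take 3 hops; with nodes (1,0) and (1,1) failed, the message must climb to row 2 to cross, giving a 7-hop detour, which is the kind of near-shortest rerouting the reviewed schemes achieve.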


MUCOS RTOS for Embedded Systems[ Full-Text ]
A Purushotham Reddy, A R Reddy and R Elumalai

A real-time operating system (RTOS) is a very useful tool for developing applications on embedded boards with the least software development effort. Though a number of RTOS products are available on the market, µC/OS-II is a freeware RTOS with a minimal feature set that is popular among hobbyists, researchers and small embedded system developers. µC/OS-II supports preemptive scheduling, which is not efficient with respect to processor utilization; as a result, an assigned task may miss its deadline and cause system failure. In this paper, Rate Monotonic (RM) scheduling, a better scheduling method than the plain preemptive technique, is implemented on µC/OS-II, and its operation in terms of task execution and processor utilization is discussed. The paper presents RM Analysis (RMA) on µC/OS-II on two different hardware platforms: 1) a low-end microcontroller, i.e. an 8051-based system, and 2) a high-end system based on the ARM9. Two tasks, 1) Advanced Encryption Standard (AES) encryption and 2) message display on text and graphic displays, are used to demonstrate the RMA. The software tools Keil IDE, the SDCC compiler and Philips Flash Magic are used to implement the tasks on the 8051 embedded development board; the ARM Developer Suite v1.2 and DNW are used on the ARM9 development board. In addition to the above tasks, a real-time clock interface, a graphical LCD interface and a UART interface for communication with a computer are also implemented. The scaled version of µC/OS-II with multiple tasks uses 4 kB of flash and 512 bytes of RAM on the 8051 board; the entire MUCOS RTOS on the ARM9 with multiple tasks uses 29.23 kB of flash and 596.36 kB of RAM. The results obtained indicate optimum utilization of the processor with the RMA scheduler, enabling low-cost software development for embedded applications with the least effort.
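Rate Monotonic analysis rests on the classic Liu and Layland sufficient test: a set of n periodic tasks is RM-schedulable if its total utilization does not exceed n(2^(1/n) − 1). A small checker (the task parameters below are made-up numbers, not the paper's AES and display tasks):

```python
def rm_schedulable(tasks):
    """Liu & Layland sufficient test for Rate Monotonic scheduling.
    tasks: list of (execution_time, period) pairs with the same time unit.
    The set is schedulable if total utilization <= n * (2**(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)
    return utilization <= bound, utilization, bound

# Hypothetical example: one 10 ms task every 50 ms, one 30 ms task every 100 ms.
ok, u, bound = rm_schedulable([(10, 50), (30, 100)])
print(ok, round(u, 2), round(bound, 3))
```

For two tasks the bound is about 0.828; the example's utilization of 0.5 passes comfortably. The test is sufficient but not necessary, so a set that fails it may still meet all deadlines under RM.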


Appasami.G, Karthikeyan.S

In this paper we present a framework to manage distributed and heterogeneous databases in a grid environment using the Open Grid Services Architecture – Data Access and Integration (OGSA-DAI). Even though database technology has improved considerably, connecting heterogeneous databases within a single application remains a challenging task. Maintaining information for future use is central to database technology: whenever information is needed, the database is consulted, a query is processed, and a result is produced. Databases hold billions of items of information, and users keep their information spread across different databases, so when they need it they must collect it from several databases, which they cannot easily do without database knowledge. Current database interfaces merely collect information from many databases. The Intelligent Knowledge Based Heterogeneous Database using OGSA-DAI Architecture (IKBHDOA) provides a solution to the problem of writing queries and knowing the technical details of databases: it has the intelligence to retrieve information from different sets of databases based on the user's inputs.
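The pattern the abstract describes, one interface gathering a user's data from several sources so the user needs no database knowledge, is a mediator. A minimal sketch with two in-memory SQLite databases standing in for heterogeneous grid data sources (the schemas and the `collect` helper are hypothetical, not part of OGSA-DAI):

```python
import sqlite3

# Two separate databases holding different parts of a user's records.
bank = sqlite3.connect(":memory:")
bank.execute("CREATE TABLE accounts (user TEXT, balance REAL)")
bank.execute("INSERT INTO accounts VALUES ('alice', 120.0)")

shop = sqlite3.connect(":memory:")
shop.execute("CREATE TABLE orders (user TEXT, item TEXT)")
shop.execute("INSERT INTO orders VALUES ('alice', 'book')")

def collect(user):
    """Mediator: one call gathers the user's data from every source,
    hiding the per-database queries from the caller."""
    return {
        "user": user,
        "accounts": bank.execute(
            "SELECT balance FROM accounts WHERE user = ?", (user,)).fetchall(),
        "orders": shop.execute(
            "SELECT item FROM orders WHERE user = ?", (user,)).fetchall(),
    }

print(collect("alice"))
```

IKBHDOA as described goes further by generating the per-source queries themselves from the user's inputs, rather than hard-coding them as this sketch does.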



In this paper we bring together Swarm Intelligence (SI) and the free flight environment. Swarm Intelligence is a type of artificial intelligence based on the collective behavior of decentralized, self-organized systems. SI systems are typically made up of a population of simple agents, or boids, interacting locally with one another and with their environment. The agents follow very simple rules, and although there is no centralized control structure dictating how individual agents should behave, their local and, to a degree, random interactions give rise to coordinated collective behavior. Free flight, on the other hand, is a developing air traffic control method that uses no centralized control (e.g. air traffic controllers); instead, parts of the airspace are reserved dynamically and automatically, in a distributed way, using computer communication to ensure the required separation between aircraft. There are cases where conflicts occur in the free flight environment, leading to chaos. These conflicts are detected and resolved using Swarm Intelligence, which also helps in optimizing the path, finding the shortest path, and controlling air traffic.
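The conflict-detection step that precedes resolution can be sketched as a pairwise separation check over predicted trajectories. The straight-line constant-velocity assumption and the sampling approach below are simplifications for illustration, not the paper's SI-based method:

```python
def conflict(p1, v1, p2, v2, separation, horizon, steps=100):
    """Check whether two aircraft on straight constant-velocity 2-D tracks
    come within `separation` of each other within `horizon` time units,
    by sampling the predicted positions."""
    for i in range(steps + 1):
        t = horizon * i / steps
        dx = (p1[0] + v1[0] * t) - (p2[0] + v2[0] * t)
        dy = (p1[1] + v1[1] * t) - (p2[1] + v2[1] * t)
        if (dx * dx + dy * dy) ** 0.5 < separation:
            return True
    return False

# Head-on tracks conflict; parallel tracks with enough spacing do not.
print(conflict((0, 0), (1, 0), (10, 0), (-1, 0), separation=1.0, horizon=10))
print(conflict((0, 0), (1, 0), (0, 5), (1, 0), separation=1.0, horizon=10))
```

Once a conflict is flagged, the swarm agents negotiate locally, in the decentralized manner described above, to adjust their paths and restore the required separation.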


Alamelumangai.N, Dr. DeviShree.J

Speckle noise is an inherent property of medical ultrasound imaging; it generally reduces image resolution and contrast, thereby reducing the diagnostic value of this imaging modality. The adaptive neuro-fuzzy inference system (ANFIS) is an effective tool for inducing rules from observations. In this paper, the restoration of ultrasound images contaminated with speckle noise is investigated in nonlinear passage dynamics of order 2. Pi-shaped membership functions (MFs) are used, and further parameters, including the number of training epochs, the number of MFs for each input, the optimization method, the type of output MFs, and the over-fitting problem, are investigated. For comparison with the proposed technique, the adaptive mean, weighted mean and median filters, among others, are simulated. It is observed that the quality, in terms of mean square error (MSE), of the proposed method is several times better for speckle noise than that obtained with any of the conventional filtering techniques.
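The baseline the paper compares against can be sketched directly: speckle is multiplicative noise, and a median filter with MSE scoring is one of the conventional references. A pure-Python toy (the flat test image and noise level are assumptions for the demonstration):

```python
import random

random.seed(0)

def add_speckle(img, sigma=0.3):
    """Multiplicative speckle: each pixel is scaled by (1 + Gaussian noise)."""
    return [[p * (1 + random.gauss(0, sigma)) for p in row] for row in img]

def median3(img):
    """3x3 median filter; border pixels are left unchanged for brevity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = [img[i + di][j + dj]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            out[i][j] = sorted(window)[4]   # middle of 9 values
    return out

def mse(a, b):
    """Mean square error between two equal-size images."""
    n = len(a) * len(a[0])
    return sum((x - y) ** 2 for ra, rb in zip(a, b)
               for x, y in zip(ra, rb)) / n

clean = [[100.0] * 8 for _ in range(8)]      # flat test image
noisy = add_speckle(clean)
filtered = median3(noisy)
print(mse(clean, noisy) > mse(clean, filtered))
```

The paper's claim is that the ANFIS-based restorer beats this kind of conventional filter by a multiple on the same MSE measure.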


Debabrata Sarddar, Shovan Maity, Arnab Raha, Ramesh Jana, Utpal Biswas, M.K. Naskar

Mobility management and the integration and interworking of existing wireless systems are important factors in obtaining seamless roaming and service continuity in Next Generation Wireless Systems (NGWS), so it is important to have a handoff scheme that takes the heterogeneity of the network into account. In this work we propose a handoff scheme that makes handoff decisions adaptively, through some predefined rules, based on the type of network in which the terminal presently resides and the type it is attempting handoff with. It also relies on the speed of the mobile terminal to decide the handoff-initiation received signal strength (RSS) threshold. Finally, simulations show the importance of taking these factors into account in handoff decisions, rather than using a fixed handoff threshold for different scenarios.
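The speed-adaptive threshold idea can be illustrated with a simple rule: a faster terminal crosses a cell boundary sooner, so handoff should be triggered at a stronger (less negative) RSS to leave time for the procedure to complete. The function and all its constants below are hypothetical, not the paper's predefined rules:

```python
def rss_threshold(speed, base=-85.0, slope=0.5, cap=-70.0):
    """Hypothetical speed-adaptive rule: raise the handoff-initiation RSS
    threshold (in dBm) linearly with terminal speed (in m/s), capped so a
    very fast terminal does not hand off while the link is still strong."""
    return min(cap, base + slope * speed)

for v in (0, 10, 30, 60):
    print(v, "m/s ->", rss_threshold(v), "dBm")
```

A stationary terminal here waits until -85 dBm before initiating handoff, while a vehicular terminal triggers at -70 dBm, which is the adaptivity a fixed-threshold scheme lacks.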


Agent-Based CBR for Decision Support System[ Full-Text ]
Mythili Devi Nagaiah

The aim of this paper is to describe agent-based Case Based Reasoning (CBR) and its implementation in a Decision Support System (DSS). The introduction gives an overview of data mining and its integration with CBR, and defines the characteristics and process cycle of CBR. The second section describes agents, DSS, and agent-based DSS. The third section describes CBR in decision support systems. The fourth section describes agent-based CBR for a DSS and the interaction between CBR agents and the components of the DSS. The final section concludes the paper.
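The retrieve step of the CBR process cycle, finding the stored case most similar to the new problem so its solution can be reused, can be sketched as nearest-neighbour retrieval. The loan-approval case base below is a hypothetical example, not from the paper:

```python
def retrieve(case_base, query, k=1):
    """Nearest-neighbour retrieval, the first step of the CBR cycle.
    Each case is (feature_vector, solution); return the k cases whose
    features are closest to the query."""
    def distance(features):
        return sum((a - b) ** 2 for a, b in zip(features, query))
    return sorted(case_base, key=lambda case: distance(case[0]))[:k]

# Hypothetical loan-approval cases: (income, debt ratio) -> past decision.
cases = [((30, 0.6), "reject"),
         ((80, 0.2), "approve"),
         ((55, 0.4), "review")]
best = retrieve(cases, query=(75, 0.25))
print(best[0][1])  # decision reused from the most similar past case
```

In the agent-based design the abstract describes, this retrieval runs inside a CBR agent, which then passes the reused or adapted solution to the DSS components for the remaining steps of the cycle (reuse, revise, retain).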


A Comparative Study for Performance Measurement of Selected Security Tools[ Full-Text ]
Mr. B.V.Patil, Dr. Prof. M. J. Joshi, Mr. H. N. Renushe

Today's enterprise networks are distributed across different geographical locations while applications are more centrally located; this technological enhancement offers new, flexible opportunities but also poses major security threats to the networks. These threats can be external or internal; external threats are divided into hacking, virus attacks, Trojans, worms, etc. They can be minimized using a number of network security tools and antivirus software, but not all are equally effective against each type of attack, hence this study was undertaken. This research paper highlights the performance of antivirus software using a number of parameters such as installation time, size, memory utilized, boot time, user interface launch time and full system scan time.
