Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - IJSER Content Writer

16
Engineering, IT, Algorithms / New Approach for Detecting Intrusions
« on: February 18, 2012, 02:22:32 am »
Author : Mohammed Chennoufi, Fatima Bendella
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract— This paper describes how multi-agent systems can help to solve a complex problem such as security, and more precisely intrusion detection. An Intrusion Detection System (I.D.S) is a component of the security infrastructure designed to detect violations of security policy. Most intrusions can be localized either by considering models ("patterns") of user activities (non-behavioral approach) or by considering the audit log (behavioral approach). False positives and false negatives are the major disadvantages of these approaches. We consider that a good I.D.S should exhibit the characteristics of intelligent agents, such as autonomy, distribution and communication.
For this we suggest a new approach based on multi-agent systems (M.A.S) that incorporates the characteristics of intelligent agents (automatic learning of new attacks), so that decisions taken by the system are the result of a group of agents working together, making the IDS more flexible and reliable. This approach is applied to a large data source and requires preliminary work (preprocessing).

Index Terms— Security, attack, I.D.S, K.D.D, M.A.S, MLP, cognitive agent, learning.

1   INTRODUCTION                                                                     
When the Internet was created, the main challenge was to enable data transmission. This objective was achieved, but at the expense of the security of users and of organizations' data. Organizations accept this risk because security is difficult, which leaves their computer systems vulnerable to attacks. Various tools exist to prevent these attacks or reduce their severity, but no solution can be considered satisfactory and complete. The I.D.S is one of the most effective tools to detect intrusions or attempted intrusions, either from user behavior or by recognizing attacks in the stream of network data. Its role is to locate abnormal and suspect activities on the analyzed target (network or host) [1].

Various methods and approaches have been adopted for the design of intrusion detection systems.
Our objective is to design an intelligent tool capable of detecting new intrusions while trying to solve one main problem of IDS, which is the very large amount of data. For this, we suggest a new approach based on multi-agent systems (M.A.S) that incorporates the features of intelligent agents (learning new attacks). Our approach is applied to the KDD 99 (Knowledge Discovery and Data Mining) data source [2].

This article is organized as follows: in the first section, we present intrusion detection systems and their link with M.A.S. In the second section, we discuss previous work with the scenario method. The third section is devoted to the presentation of our M.A.S-based architecture, with a pre-processing module for our comprehensive data and a supervised learning phase for our cognitive agent. A conclusion and outlook are presented in the fourth section.
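Where the paper applies its approach to the KDD 99 data with a pre-processing module and a supervised cognitive agent, a minimal sketch of that kind of pipeline may help fix ideas. This is our illustration, not the authors' code: it assumes scikit-learn's packaged 10% subset of KDD-99, one-hot encodes the three symbolic features, and trains a small MLP (per the index terms) as a binary attack/normal detector.

```python
# Sketch of a KDD-99 preprocessing + supervised MLP step (illustrative only).
from sklearn.datasets import fetch_kddcup99
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = fetch_kddcup99(percent10=True, return_X_y=True)
y = (y != b"normal.").astype(int)           # binary label: attack vs normal

# Columns 1-3 (protocol_type, service, flag) are symbolic; the rest numeric.
pre = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), [1, 2, 3])],
    remainder=StandardScaler(),
)
model = make_pipeline(pre, MLPClassifier(hidden_layer_sizes=(32,), max_iter=50))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```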

2   INTRUSION DETECTION SYSTEM
An intrusion detection system is a tool that identifies abnormal activity on the analyzed target and helps prevent intrusion risks. Such systems are designed to analyze large volumes of data [3]. There are two main approaches to detect intrusions [4], [5], [6].
1)   The behavioural approach (Anomaly Detection).
2)   The non-behavioural approach (scenario).
The first approach is based on the assumption that exploiting a breach in the system requires abnormal use of it, and thus unusual behaviour by the user. The second approach relies on knowledge of the techniques used by attackers to derive typical scenarios. The best known and most easily understood method in this approach is pattern matching. It is based on searching for patterns (strings or byte sequences) in the data stream.
The advantages and disadvantages of each approach are summarized in Table 1.

TABLE 1
 COMPARISON BETWEEN THE TWO APPROACHES.
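To make the scenario (pattern-matching) method concrete, here is a minimal sketch of signature matching over a packet payload. The signatures are hypothetical examples for illustration, not drawn from the paper or any real rule set.

```python
# Minimal illustration of the scenario approach: scan a byte stream for
# known attack signatures. All signatures below are made-up examples.
SIGNATURES = {
    b"/etc/passwd": "path traversal attempt",
    b"' OR '1'='1": "SQL injection probe",
    b"\x90" * 16: "NOP sled (possible shellcode)",
}

def match_signatures(payload: bytes) -> list[str]:
    """Return the labels of all signatures found in the payload."""
    return [label for pattern, label in SIGNATURES.items() if pattern in payload]

if __name__ == "__main__":
    packet = b"GET /../../etc/passwd HTTP/1.1"
    print(match_signatures(packet))  # ['path traversal attempt']
```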

2.1  Different types of IDS
  Intrusion detection systems, or IDS, can be classified into three major categories according to what they monitor:
   - Network IDS or NIDS (Network based IDS).
   - System IDS or HIDS (Host based IDS).
   - Hybrid IDS (NIDS and HIDS).
   NIDS: tools that analyze network traffic. They generally include a sensor that listens on the network segment to be monitored and an engine that performs traffic analysis to detect attack signatures or deviations from a reference model.
   HIDS: their mission is to analyze system logs, control access to system calls and check file integrity. HIDS can rely on auditing features, whether or not built into the operating system, for integrity checking and alert generation. They are unable to detect attacks exploiting the weaknesses of the IP (Internet Protocol) stack, usually denial-of-service attacks such as SYN flooding.
   A hybrid IDS is therefore ideal: it improves the basic detection algorithms, minimizes false positives, and can identify complex attack scenarios. We can also classify IDS according to various criteria, which can be used to select the IDS most appropriate to the needs. Some classifications are based on the behaviour of the IDS, some on their information sources; another classification is based on frequency of use, with active or passive response.
2.2  Related works   
 To adapt to changing security needs due to the evolution of networks, new intrusion detection systems must offer features such as adaptability, flexibility, distribution, autonomy, communication and cooperation. If we compare these characteristics with the properties of intelligent agents (autonomy, adaptability, responsiveness), it is very clear that M.A.S are very appropriate to the problem of intrusion detection [7], [8], [9], [10]. Many attacks are caused by abnormal behavior of network elements, hence the need to distribute the IDS functionalities over several entities.
In [11] the author designed a multi-agent system for intrusion detection. This model is based on several layers following a hierarchical model, extranet and intranet. He worked on a scenario approach; his model is based on reactive agents and does not detect new attacks.
In [12] the author designed an architecture based on 4 well-distributed agents. The approach used is host-based, and its security model uses asymmetric cryptography. The key exchange between hosts can be broken if the attacker has deep knowledge of cryptography.
Detter [13] uses a network-based architecture, placing an agent engine at each location. It is made of layers distributed to operate over a range of distributed agent engines. This architecture takes advantage of the mobile agent paradigm to implement a system capable of an efficient and flexible distribution of the tasks of analysis and monitoring, and of the integration of existing detection techniques.
In [14] the authors suggest extending their system with a model of case-based reasoning for learning new attacks. They propose to integrate into the different agents (with the exception of the Security Policy manager agent) a learning function based on the resemblance and similarities between past attacks and new attacks. Their model did not produce a result.
Brahimi [15] developed an IDS based on mobile agents and on data mining, where the signature table is updated by data mining.
In [16] the author uses the NIDS approach. His architecture is based on a simulation over KDD. He used a reinforcement learning algorithm to detect new attacks, but his model did not give good results; there is a risk of convergence problems on the unbalanced K.D.D data.
Raoui [17] developed an IDS on a distributed platform based on M.A.S. He used two types of agents: reactive agents to detect known attacks and cognitive agents to detect unknown attacks (one agent detects viruses, another Trojans, ...).

Read More: Click here...

17
Author : Prof. Mohammed Yunus, Dr.J. Fazlur Rahman
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract— Ceramic coated components have the advantageous features of both metals and ceramics, i.e. good toughness, high hardness and wear resistance. Despite their outstanding characteristics, ceramic materials are not used in many cases due to the high cost of machining. A major drawback to engineering applications of ceramics is their brittleness and low fracture toughness, which make them difficult and costly to machine.
 In order to study precision machining processes for ceramics, such as grinding and lapping, experiments were conducted to find out the influence of various cutting parameters on surface quality as well as on grinding and lapping performance.
 Surface grinding of ceramic coating materials was done on sample specimens coated with Alumina (Al2O3), Alumina-Titania (Al2O3-TiO2) and Partially Stabilized Zirconia (PSZ) using diamond and CBN (cubic boron nitride) grinding wheels. Based on the experimental results, the influence of the cutting parameters on cutting force, surface roughness and bearing area characteristics was evaluated, and optimum machining conditions have been suggested for better performance in precision machining of ceramic coating materials.

Index Terms— Bearing Area Characteristics, Cutting force, Diamond and CBN grinding wheel, Grinding and Lapping, Optimization of Cutting parameters, Precision Machining Processes, Surface roughness.

1   INTRODUCTION                                                                      
WITH the projected widespread applications of ceramic coating materials, it is necessary to develop an appropriate technology for their efficient and cost-effective machining [5], [7], [9] and [11]. Grinding of ceramics is a difficult task, as it is generally associated with cracking, splintering and delamination of surfaces. Conventional processes and tools are not generally suited to the machining of ceramics. Standard machining tools can be used with optimization of machining parameters under operating conditions [13], [14]. Various precision machining techniques that could be adopted for efficient machining of ceramic materials, namely grinding and lapping processes, are studied in detail for the influence of various cutting parameters on precision machining of ceramic coating materials.
     Ceramic coated components used in industrial applications generally require post treatments like heat treatment and surface finishing by precision machining [1-8]. Good surface finish and the high machining efficiency needed to meet the demands of tight tolerances are generally achieved by precision machining processes such as grinding, lapping and polishing [3].

Grinding of ceramics is a difficult task, as it is generally associated with cracking of the surface. In order to study the effect of precision machining processes [15], experiments were conducted to examine machining responses like surface quality and grinding forces on ceramic coated components for different machining conditions.
   The main objective of this study is to evaluate the behavior of A, AT, PSZ, Super-Z alloy and ZTA ceramic coating materials subjected to different grinding conditions. The performance was evaluated by measuring grinding force [20-22], surface finish [17], bearing area characteristics [18] and oil film retainability characteristics [15].

2 EXPERIMENTAL PROCEDURE
Three different commercially available ceramic coating powder materials, namely Alumina (Al2O3), Alumina-Titania (Al2O3-TiO2) and Partially Stabilized Zirconia (PSZ), were used for the preparation of coatings [10-14]. A 40 kW Sulzer Metco plasma spray system with a 7MB gun was used for plasma spraying of the coatings. Mild steel plates of 50x50x6 mm were used as substrates to spray the ceramic oxides. They were grit blasted, degreased and spray coated with a 50 to 100 micron NiCrAl bond coat. The above ceramic materials were then plasma sprayed using optimum spray parameters.
2.1 Precision Machining of Ceramic Coating
Grinding: Surface grinding trials were conducted using diamond and CBN grinding wheels [20] under the grinding conditions mentioned below in Table 1. Machining trials were conducted on different ceramic coated specimens (A, AT and PSZ ceramic coatings).

The main objective of this study is to evaluate the behavior of A, AT and PSZ ceramic coatings subjected to different grinding conditions. The performance [18] was evaluated by measuring:
1.   Grinding force (Normal and Tangential force)
2.   Surface finish produced which also includes the bearing area characteristics
3.   Oil retainability characteristics.

2.2  Force Measurement
The normal grinding force (Fn) and the tangential force (Ft) were measured using a grinding dynamometer [17], [21]. The ground samples were measured for different surface finish parameters such as Ra, Rt and tp using a Taylor Hobson stylus tracing profilometer.
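For readers unfamiliar with these parameters, the following sketch shows how Ra, Rt and the bearing ratio tp could be computed from a sampled profile trace. The profile here is synthetic; in the experiments the trace comes from the stylus profilometer, and this is our illustration rather than the authors' procedure.

```python
# Illustrative computation of Ra, Rt and the bearing ratio tp from a
# synthetic surface profile (heights in micrometres).
import numpy as np

z = np.sin(np.linspace(0.0, 20.0, 2000)) + 0.05 * np.random.randn(2000)

zc = z - z.mean()                 # heights about the mean line
Ra = np.abs(zc).mean()            # arithmetic mean roughness
Rt = z.max() - z.min()            # maximum peak-to-valley height

def tp(profile: np.ndarray, depth: float) -> float:
    """Bearing ratio: fraction of material at 'depth' below the highest peak."""
    return float((profile >= profile.max() - depth).mean())

print(f"Ra = {Ra:.3f} um, Rt = {Rt:.3f} um, tp(50% Rt) = {tp(z, 0.5 * Rt):.2f}")
```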
2.3 Lapping
     A circular disc of 200 mm diameter made of bright steel and a pin of 6 mm diameter were coated with a NiCrAl bond coat of thickness 75 µm and subsequently coated with different coating materials, namely Alumina (A), Alumina-Titania (AT), Partially Stabilized Zirconia (PSZ), Super-Z alloy and ZTA [19-20].
     Samples were initially ground to achieve a pre-lapping finish and then further lapped under the conditions mentioned in Table 2. The process variables were lapping time (ranging from 5 to 25 minutes) and the size of the diamond abrasives used in lapping [18].
     The lapped discs were thoroughly cleaned and surface finish measurements were made using a Taylor Hobson surface finish profilometer.

2.4 Oil retainability test
The oil retainability of ceramic coated surfaces was estimated using SAE 120 lubricating oil. In order to evaluate the oil retainability of ceramic coated surfaces, the coated plates (specimens) were ground and lapped over half the area of the plate, with the other half left as it is. Oil droplets were then put on these two parts of the specimen and left untouched for 3 hours. The oil droplets were then observed for any further increase in diameter using a travelling microscope, and the oil spreadability on the ground and lapped surfaces was studied.

3 RESULTS AND DISCUSSION
3.1   Results of Grinding
It has been noticed that, during the grinding of ceramic coatings, grinding forces vary considerably with increasing grinding speed. Also, it is observed that with CBN wheels the grinding force components (Ft and Fn) were higher when compared to the diamond grinding wheel, as shown in figures 1-12 and 30-33. Besides, it is noticed that an increase in the depth of grinding generally improved the surface finish. During the grinding trials on ceramic coatings, a grinding velocity in the range of 10-15 m/sec and a depth of grinding of 30 µm were assessed to be the most critical. It is also observed that grinding of Alumina-Titania (AT) and Partially Stabilized Zirconia (PSZ) ceramic coatings with the diamond wheel gave better surface finish.

3.2   Results of Lapping
It is concluded that the surface finish of lapped ceramic coatings improved with lapping time and remains constant after 15 minutes of lapping time in the case of AT and PSZ, whereas in the case of Alumina (A) it attains saturation after 20 minutes of lapping time. It is also seen among the coatings that AT could be lapped better than the other two.
     Bearing area characteristics of sprayed and ground ceramic coatings with diamond wheels are shown in figures 36-45. It is observed that there is not much variation in the bearing area characteristics of coated ceramic surfaces subjected to diamond wheel grinding, but AT exhibits a faster tendency to attain 100 percent tp area. With CBN grinding, a rapid improvement in the bearing area characteristics of Alumina is observed due to improved grinding of brittle materials (like Alumina). This is attributed to the lower grinding forces associated with CBN grinding wheels: CBN grains cut the ceramic relatively cooler (CBN is thermally more conductive) compared to the diamond wheel. It is also seen that the diamond wheel is more sensitive to grinding conditions, and with the CBN wheel it is possible to go for a higher depth of grinding because of the better thermal properties of CBN.

Read More: Click here...

18
Author : Salau, T.A.O., Ajide, O.O
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract— This study utilized a combination of phase plots, time step distributions and adaptive time step fourth and fifth order Runge-Kutta algorithms to investigate a harmonically excited Duffing oscillator. The objective is to visually compare the performance of fourth and fifth order Runge-Kutta algorithms as tools for seeking the chaotic solutions of a harmonically excited Duffing oscillator. Although the fifth order algorithm favours larger time steps, and is thus faster to execute than the fourth order for all studied cases, the reliability of the results obtained with the fourth order algorithm is worth its higher recorded total computation time.

Keywords— Algorithms, Chaotic Solutions, Duffing Oscillator, Harmonically Excited, Phase Plots, Runge-Kutta and Time Steps

1   INTRODUCTION                                                                     
Extensive literature study shows that numerical techniques are very important in obtaining solutions of the differential equations of nonlinear systems. The most common universally accepted numerical techniques are backward differentiation formulae, Runge-Kutta and Adams-Bashforth-Moulton methods. According to Julyan and Oreste (1992), the Runge-Kutta family of algorithms remains the most popular and widely used method for integration. In numerical analysis, the Runge-Kutta methods can be classified as an important family of implicit and explicit iterative methods for the approximation of solutions of ordinary differential equations. Historically, the Runge-Kutta techniques were developed by the German mathematicians C. Runge and M. W. Kutta, whose combined names formed the basis of the method's nomenclature. The relevance of Runge-Kutta algorithms in finding solutions to problems in nonlinear dynamics cannot be overemphasized, and quite a number of research efforts have been made in the numerical solution of nonlinear dynamic problems. It is usual, when investigating the dynamics of a continuous-time system described by an ordinary differential equation, to first integrate it in order to obtain trajectories. Julyan and Oreste (1992) were able to elucidate the dynamics of the most commonly used family of numerical integration schemes (Runge-Kutta methods). Their study showed that Runge-Kutta integration should be applied to nonlinear systems with knowledge of the caveats involved, and provided a detailed explanation of the interaction between stiffness and chaos. The findings of this research revealed that explicit Runge-Kutta schemes should not be used for stiff problems, mainly because of their inefficiency. According to the authors, the best alternative is to employ backward differentiation formulae methods or possibly implicit Runge-Kutta methods.

The conclusions drawn from the paper elucidated the fact that dynamics is not only interested in problems with fixed point solutions, but also in periodic and chaotic behaviour.
 The application of bifurcation diagrams in the chaotic study of nonlinear electrical circuits has been demonstrated (Ajide and Salau, 2011). The relevant second order differential equations were solved for ranges of appropriate parameters using the Runge-Kutta method, and the solutions obtained were employed to produce bifurcation diagrams. That paper showed that the bifurcation diagram is a useful tool for exploring the dynamics of a nonlinear resonant circuit over a range of control parameters. Ponalagusamy's (2009) research paper focused on providing numerical solutions for the system of second order equations of the robot arm problem using a Runge-Kutta sixth order algorithm. The precise solution of the system of equations representing the arm model of a robot was compared with the corresponding approximate solutions at different intervals. The results and comparison showed the efficiency of the numerical integration algorithm based on the absolute error between the exact and approximate solutions. The implication of this finding is that the STWS algorithm is not based on Taylor's series and is an A-stable method. The dynamics of a torsional system with harmonically varying dry friction torque was investigated by Duan and Singh (2008). The nonlinear dynamics of a single degree of freedom torsional system with dry friction was chosen as a case study. A nonlinear system with a periodically varying normal load was first formulated, followed by the re-formulation of a multi-term harmonic balance method (MHBM) in order to directly solve the nonlinear time-varying problem in the frequency domain. The feasibility of the MHBM was demonstrated with a periodically varying friction, and its accuracy was validated by numerical integration using a fourth order Runge-Kutta scheme. A set of explicit third order new improved Runge-Kutta (NIRK) methods that employ just two function evaluations per step has been developed (Mohamed et al, 2011). Due to the lower number of function evaluations, the proposed scheme has a lower computational cost than the classical third order Runge-Kutta method while maintaining the same order of local accuracy. Bernardo and Chi-Wang (2011) carried out a critical review of the development of Runge-Kutta discontinuous Galerkin (RKDG) methods for nonlinear convection dominated problems. The authors combined a special class of Runge-Kutta time discretizations that allows the method to be nonlinearly stable regardless of its accuracy with a finite element space discretization by discontinuous approximations that incorporates the ideas of numerical fluxes and slope limiters coined during the remarkable development of high resolution finite difference and finite volume schemes. This review revealed that RKDG methods are stable, high-order accurate and highly parallelizable schemes that can easily handle complicated geometries and boundary conditions, and showed their immense applications in Navier-Stokes equations and Hamilton-Jacobi equations. This study has no doubt brought relief in computational fluid dynamics. The Runge-Kutta technique has been mostly employed in analyzing Duffing oscillator dynamics, where the Duffing oscillator is described as a set of two simple coupled ordinary differential equations to solve; the Runge-Kutta method has been extensively used for numerical solutions of Duffing oscillator dynamics. Salau and Ajide (2011) investigated the dynamical behaviour of a Duffing oscillator using bifurcation diagrams.
The authors employed the fourth order Runge-Kutta method in solving the relevant second order differential equations. While the bifurcation diagrams obtained revealed the dynamics of the Duffing oscillator, they also show that the dynamics depend strongly on the initial conditions. Salau and Oke (2010) showed how the Duffing equation can be applied in predicting the emission characteristics of sawdust particles. The paper explains the modeling of sawdust particle motion as a two dimensional transformation system of continuous time series. The authors employed a Runge-Kutta algorithm in providing the solution to the Duffing model equation for the sawdust particles, based on displacement and velocity perspectives. The findings showed a high feasibility of modeling sawdust dynamics as emissions from band saws; this finding no doubt provides an advancement in the knowledge of sawdust emission studies.
     Despite this wide application of the Runge-Kutta method as a numerical tool in nonlinear dynamics, there is no doubt that a research gap exists. The available literature shows that research comparing the performance of different orders (second, third, fifth, sixth, etc.) of Runge-Kutta methods has not been carried out. The objective of this paper is to visually compare the performance of fourth and fifth order Runge-Kutta algorithms as tools for seeking the chaotic solutions of a harmonically excited Duffing oscillator.

2 METHODOLOGY
2.1 Duffing Oscillator
The studied normalized governing equation for the dynamic behaviour of harmonically excited Duffing system is given by equation (1) 

                    x'' + γx' − x(1 − x²)/2 = P₀ cos(ωt)                 (1)
In equation (1), x, x' and x'' represent respectively the displacement, velocity and acceleration of the Duffing oscillator about a set datum. The damping coefficient is γ. The amplitude strength of the harmonic excitation, the excitation frequency and time are respectively P₀, ω and t. Francis (1987), Dowell (1988) and Narayanan and Jayaraman (1989b) proposed that the parameter combinations γ = 0.168, P₀ = 0.21 and ω = 1.0, or γ = 0.0168, P₀ = 0.09 and ω = 1.0, lead to chaotic behaviour of the harmonically excited Duffing oscillator. This study utilized adaptive time step Runge-Kutta algorithms to investigate equation (1) over one hundred and fifty excitation periods, starting with a time step of (Excitation Period/1000). The phase plot was made with the stable solutions from the last fifty (50) excitation period calculations.
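As a concrete illustration of the integration procedure just described, here is a minimal fixed-step RK4 sketch. It assumes the equation form and first chaotic parameter set quoted above, uses a constant step of (Excitation Period/1000), and omits the adaptive step-size control used in the paper.

```python
# Fixed-step RK4 integration of the harmonically excited Duffing oscillator
# (illustrative sketch; parameters assume the first chaotic set above).
import math

GAMMA, P0, OMEGA = 0.168, 0.21, 1.0

def f(t, x, v):
    """Duffing oscillator: returns (dx/dt, dv/dt)."""
    return v, -GAMMA * v + 0.5 * x * (1.0 - x * x) + P0 * math.cos(OMEGA * t)

def rk4_step(t, x, v, h):
    k1x, k1v = f(t, x, v)
    k2x, k2v = f(t + h / 2, x + h / 2 * k1x, v + h / 2 * k1v)
    k3x, k3v = f(t + h / 2, x + h / 2 * k2x, v + h / 2 * k2v)
    k4x, k4v = f(t + h, x + h * k3x, v + h * k3v)
    return (x + h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            v + h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

T = 2 * math.pi / OMEGA                  # excitation period
h, x, v, t = T / 1000, 0.0, 0.0, 0.0     # step = Period/1000, as in the text
phase = []
for period in range(150):                # 150 excitation periods in total
    for _ in range(1000):
        x, v = rk4_step(t, x, v, h)
        t += h
        if period >= 100:                # keep only the last 50 periods
            phase.append((x, v))         # (displacement, velocity) for phase plot
print(len(phase), "phase points; last:", phase[-1])
```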

Read More: Click here...

19
Author : Sangu Ravindra , Dr.V.C.Veera Reddy, Dr.S.Sivanagaraju, Devineni Gireesh Kumar
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract— The shunt active power filter has proved to be a useful device to eliminate harmonic currents and to compensate reactive power for linear/nonlinear loads. This paper presents a novel approach to determine the reference compensation currents of the three-phase shunt active power filter (APF) under distorted and/or imbalanced source voltages in steady state. The proposed approach is compared with three reviewed shunt APF reference compensation strategies. Results obtained by simulations with Matlab and Simulink show that the proposed approach is more effective than the reviewed approaches at compensating the reactive power and harmonic/neutral currents of the load, even if the source voltages are severely distorted and imbalanced. In addition, the proposed approach yields a simpler design of the shunt APF controller.

Index Terms— Shunt active power filter, Voltage source converters, Linear and nonlinear loads, PI Controllers.
 
1   INTRODUCTION                                                                      
THE use of shunt active power filters (APF) to eliminate harmonic currents and to compensate reactive power for linear/nonlinear loads has attracted much attention since the late 1970s. Fig. 1 shows the schematic diagram of a three-phase four-wire shunt APF, where the APF senses the source voltages and load currents to determine the desired compensation currents. Akagi proposed the instantaneous reactive power theory (i.e., p-q theory) for calculating the reference compensation currents required to be injected into the network at the connection point of the nonlinear load. Since then, the theory has inspired many works dealing with active power filter compensation strategies. One of the peculiar features of a shunt APF is that it can be designed without active energy source units, such as batteries, in its compensation mechanism. In other words, an ideal APF does not consume any average real power supplied by the source. To accomplish this function, it requires an effective reference compensation strategy for both reactive power and harmonic/neutral current compensation of the load. To date, most reference compensation current strategies of the APF are determined either with or without reference-frame transformations. For instance, the original p-q theory requires transformation of both source voltages and load currents from the a-b-c reference frame to the alpha-beta reference frame to determine the APF reference compensation currents in the three-phase three-wire system. For applications of the APF in a three-phase four-wire system, later work extended the theory to handle zero-sequence power compensation with a more complicated controller design, and a generalized instantaneous reactive power theory was proposed for harmonic and reactive power compensation. The advantages of the proposed approach are that no reference-frame transformation is required and a simpler APF controller design can be achieved.
      A synchronous reference frame method has been used for obtaining the load currents at the fundamental frequency, which are the desired source currents; the APF reference compensation currents are then determined by subtracting these fundamental components from the load currents. An algorithm has also been proposed for maintaining ideal three-phase source currents when the source voltages are amplitude-imbalanced. In theory, the aforementioned approaches work very well for harmonic and/or reactive power compensation of nonlinear loads under ideal source voltages. However, if the source voltages are imbalanced and/or distorted, the generated APF reference compensation currents are discrepant and the desired balanced/sinusoidal source currents cannot be maintained. Among the many approaches for determining the APF reference compensation currents, one of the mainstreams is to maintain sinusoidal source currents supplying average real power to the load. With the use of the sinusoidal source current strategy, it has been shown that the APF can achieve better performance than with other strategies. To achieve full compensation of both the reactive power and harmonic/neutral currents of the load, this paper presents a novel approach to determine the shunt APF reference compensation currents, even if the source voltages and load currents are both imbalanced and distorted. The proposed approach is similar to those reviewed; it is an a-b-c reference-frame-based method and is categorized as a sinusoidal source current strategy. In the paper, a brief review of the three previously proposed approaches is first given. Next, the theory of the proposed strategy is presented. Matlab/Simulink simulations then follow to compare the usefulness of the proposed method and the reviewed approaches.

II. SHUNT ACTIVE POWER FILTER
 
Fig.1. Schematic diagram of three phase four wire shunt active power filter with linear & nonlinear loads
     Active filters are implemented using a combination of passive and active (amplifying) components, and require an outside power source. Operational amplifiers are frequently used in active filter designs. These can have high Q, and can achieve resonance without the use of inductors. However, their upper frequency limit is set by the bandwidth of the amplifiers used. Multiple element filters are usually constructed as ladder networks, which can be seen as a continuation of the L, T and π filter designs. More elements are needed when it is desired to improve some parameter of the filter, such as stop-band rejection or the slope of the transition from pass-band to stop-band.
 
Fig.2. Circuit of shunt active power filter with IGBTs

    A three-phase system feeding an inverter load has been selected to study the performance of the APF system. It has been observed that, due to the non-linear characteristics of power electronics loads, the THDs of the source current and terminal voltage fail to meet the IEEE-519 standard; in principle, the APF system injects a current equal in magnitude but in phase opposition to the harmonic current, to achieve a purely sinusoidal current wave in phase with the supply voltage. Figure 1 shows the single-line diagram of a simple power system with the APF system ON. The heart of the APF system is the IGBT based voltage source inverter (VSI). A dc capacitor is used to deliver power to the VSI. For the successful operation of the APF, the capacitor voltage should be at least 150% of the maximum line-line supply voltage. Since the PWM VSI is assumed to be instantaneous and infinitely fast in tracking the compensation currents, it is modeled as a current amplifier with unity gain.

A.   DETERMINATION OF APF REFERENCE COMPENSATION CURRENTS
 
     The proposed compensation strategy of the active power filter is based on the requirement that the source currents need to be balanced, undistorted, and in phase with the positive-sequence source voltages. The goals of the shunt APF control are: 1) unity source power factor at positive-sequence fundamental frequency; 2) minimum average real power consumed or supplied by the APF; 3) harmonic current compensation; and 4) neutral current compensation. Therefore, the active power filter must provide full compensation (i.e., harmonic/neutral currents and reactive power) for the nonlinear load. To achieve these goals, the desired three-phase source currents must be in phase with the positive-sequence fundamental source voltage components.
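A minimal single-phase sketch of the sinusoidal source current idea behind these goals may be useful (our illustration, not the paper's three-phase controller): the desired source current is sinusoidal, in phase with the source voltage, and carries only the load's average real power; the APF reference is the remainder. All waveforms and values below are assumed for the example.

```python
# Single-phase sketch of the sinusoidal-source-current strategy.
import numpy as np

f1, fs, T = 50.0, 10_000.0, 0.02                # fundamental, sampling, one cycle
t = np.arange(0.0, T, 1.0 / fs)

v_s = 311.0 * np.sin(2 * np.pi * f1 * t)        # source voltage (ideal here)
i_load = (10.0 * np.sin(2 * np.pi * f1 * t - 0.5)
          + 3.0 * np.sin(2 * np.pi * 5 * f1 * t))   # lagging, distorted load current

P_avg = np.mean(v_s * i_load)                   # average real power of the load
V_rms = np.sqrt(np.mean(v_s ** 2))
i_source_ref = (P_avg / V_rms ** 2) * v_s       # sinusoidal, in phase with v_s

i_apf_ref = i_load - i_source_ref               # reference compensation current
print("APF average real power ~", np.mean(v_s * i_apf_ref))  # ~0 by construction
```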

  Fig: 3 Method of generating pulses to IGBTs

III. VOLTAGE SOURCE CONVERTERS
A.   VSC BASED TRANSMISSION
  The fundamentals of VSC transmission operation may be explained by considering each terminal as a voltage source connected to the AC transmission network via a three-phase reactor. The two terminals are interconnected by a DC link, as schematically shown in Fig. 4.
 
  Fig: 4 Basic VSC transmission systems
    Fig. 5 shows a phasor diagram for the VSC converter connected to an AC network via a transformer inductance. The fundamental voltage on the valve side of the converter transformer, i.e. UV(1), is proportional to the DC voltage, as expressed in Eq. (1).
           UV(1) = ku Ud                           ---------- (1)
      The quantity ku can be controlled by applying an additional number of commutations per cycle, i.e. by applying pulse width modulation (PWM). The active and reactive power will in the following be defined as positive if the powers flow from the AC network to the converter; the phase displacement angle δ will then be positive if the converter output voltage lags behind the AC voltage in phase. Using the definition of the apparent power and neglecting the resistance of the transformer results in the following equations for the active and reactive power.
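The equations themselves did not survive in this excerpt. The standard textbook expressions under this sign convention (our reconstruction, assuming a lossless coupling reactance X between the AC network voltage U and the converter fundamental voltage UV(1)) are:

```latex
P = \frac{U \, U_{V(1)}}{X} \sin\delta , \qquad
Q = \frac{U \left( U - U_{V(1)} \cos\delta \right)}{X}
```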

Fig:5 Phasor diagram of VSC and direction of power flows

B.   OUTER ACTIVE AND REACTIVE POWER AND VOL-TAGE LOOP
 
Fig: 6 Overview diagram of the VSC control system
 
     The active power or the DC voltage is controlled by the control of δ, and the reactive power is controlled by the control of the modulation index (m). The instantaneous real and imaginary power of the inverter on the valve side can be expressed in terms of the dq components.

Read More: Click here...

20
Author : R. Sampath Kumar, S. Sumithra and R. Radhakrishnan
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract - In this paper, a procedure for the construction and selection of the independent mixed sampling plan using MAPD and MAAOQ as quality standards, with a continuous sampling plan of type CSP-1 (c=2) as the attribute plan, is presented. Tables are constructed for the selection of the parameters of the plan when MAPD and MAAOQ are given. Practical applications of the sampling plan are also discussed with a suitable example.

Key words and Phrases: Maximum allowable percent defective, maximum allowable average outgoing quality, Operating characteristic.
AMS (2000) Subject classification Number: Primary: 62P30, Secondary: 62D05.

1. Introduction
 A variety of plans and procedures have been developed for special sampling situations involving both measurements and attributes. Each is tailored to do a specific job under prescribed circumstances. They range from a simplified variables approach to a more technically complicated combination of variables and attribute sampling called mixed sampling plans.
       Mixed sampling plans are of two types, namely independent and dependent plans. Independent mixed plans do not incorporate first sample results in the assessment of the second sample. Dependent mixed plans combine the results of the first and second samples in making a decision if a second sample is necessary.
 The Mixed Sampling Plan (MSP) was first developed by Schilling (1967) for the case of single sided specifications, standard deviation known, assuming an underlying normal distribution for the measurements. Dodge (1943) provided the concept of continuous sampling inspection and introduced the first continuous sampling plan. Dodge (1947) outlined several sampling plans for continuous production, originally referred to as random-order plans and later designated as the CSP-1 plan by Dodge and Torrey (1951). The desirability of developing a set of sampling plans indexed with p* has been explained by Soundararajan (1975). Kandasamy (1993) studied the design of various types of continuous sampling plans. Suresh and Ramkumar (1996) discussed the use of MAAOQ for the selection of sampling plans. Radhakrishnan (2002) constructed various continuous sampling plans indexed through MAAOQ and mentioned their advantage over AOQL. Devaarul (2003), Sampath Kumar (2007), Radhakrishnan and Sampath Kumar (2006, 2007, 2009) and Radhakrishnan et al. (2010) have made contributions to mixed sampling plans for the independent case. Radhakrishnan et al. (2009) studied the mixed sampling plan for the dependent case.
 In this paper, using the operating procedure of the mixed sampling plan (independent case) with CSP-1 (c=2) as the attribute plan, tables are constructed for the mixed sampling plan indexed through MAPD and MAAOQ. Suitable suggestions are also provided for the future.

2. Glossary of Symbols
    The symbols used in this paper are as follows:
       p      :  submitted quality of lot or process
       p*     :  maximum allowable percent defective (MAPD)
       Pa(pj) :  probability of acceptance for lot quality pj
       P1(pj) :  probability of acceptance assigned to the first (variables) stage for percent defective pj
       P2(pj) :  probability of acceptance assigned to the second (attributes) stage for percent defective pj
       k      :  variable factor such that a lot is accepted if the sample average satisfies x̄ ≥ L + kσ
       f      :  the rate of inspection (= 1/n)
       i      :  number of consecutive units found conforming
       n1     :  sample size for the variables sampling plan
       n2     :  sample size for the attribute sampling plan (= 1/f units)
3. Formulation of Mixed Sampling Plan with CSP-1 (c=2) as Attribute Plan
        The development of mixed sampling plans and the subsequent discussion are limited to the lower specification limit 'L' only. By symmetry, a parallel discussion can be given for upper specification limits. It is suggested that the mixed sampling plan with CSP-1 (c=2), in the case of a single sided specification (L) with known standard deviation (σ), can be formulated by the parameters i, n1, n2 and k.

      The mixed sampling procedure suggested by Schilling (1967) is slightly modified and presented in this paper; this procedure is to be adopted separately for each time period fixed by the manufacturer. Given the values of the parameters, an independent plan for a single sided specification, σ known, would be carried out as follows:
   Determine the parameters with reference to the ASN and OC curves.
   Take a random sample of size n1 from the lot, assumed to be large, during the time period 't' (may be an hour / a shift / a day / a week ...). [This is the modification suggested in this paper over Schilling (1967)]
   If the sample average x̄ ≥ L + kσ, accept the lot.
   If the sample average x̄ < L + kσ, apply the operating procedure of CSP-1 (c=2).
Operating Procedure of CSP-1(c=2) plan
Step 1: At the outset, inspect 100% of the units consecutively in the order of production until i successive conforming units are found.
Step 2: When i units in succession are found conforming, discontinue 100% inspection and inspect a fraction f (= 1/n) of the units until a total of (c+1) sampled units are found nonconforming.
Step 3: When the number of nonconforming sampled units reaches (c+1), revert to 100% inspection as in step 1.
Step 4: Correct or replace all nonconforming units found with conforming units.   
Step 5: At the end of the time t switch back to variable sampling plan for the units produced.
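To see how the attribute stage behaves, here is a minimal simulation sketch of the CSP-1 (c=2) procedure in Steps 1-5. It is our illustration: the values of p, i and f and the unit stream are hypothetical, and the function simply reports the long-run fraction of units inspected.

```python
# Simulation sketch of the CSP-1 (c = 2) attribute stage (illustrative only).
import random

def csp1_inspected_fraction(p=0.02, i=50, f=0.1, c=2, n_units=200_000, seed=1):
    """Return the fraction of units inspected under CSP-1 (c = 2)."""
    rng = random.Random(seed)
    screening, run, nonconf, inspected = True, 0, 0, 0
    for _ in range(n_units):
        defective = rng.random() < p
        if screening:                       # Step 1: 100% inspection
            inspected += 1
            run = 0 if defective else run + 1
            if run >= i:                    # Step 2: i conforming in a row
                screening, nonconf = False, 0
        elif rng.random() < f:              # sampling at rate f
            inspected += 1
            if defective:
                nonconf += 1
                if nonconf == c + 1:        # Step 3: revert to 100% inspection
                    screening, run = True, 0
    return inspected / n_units

print(csp1_inspected_fraction())
```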

4. Construction and Selection of Mixed Sampling Plan having CSP-1 (c=2) as attribute plan indexed through MAPD and MAAOQ
              Maximum allowable percent defective (MAPD) is the quality level that corresponds to the point of inflection of the OC curve. It is the quality level at which the second order derivative of the OC function Pa(p) with respect to p is zero and the third order derivative is not equal to zero. When some specific value for a characteristic or group of characteristics is designated, the continuous sampling plan will have a tendency to accept the product during periods of sampling if the submitted quality is up to MAPD; if the submitted quality is beyond MAPD, the sampling plan will have a tendency to submit the product for screening. The inflection point (p*) is obtained by using d2Pa(p)/dp2 = 0 and d3Pa(p)/dp3 ≠ 0.

Read More: Click here...

21
Author : Ali, M. A. M.
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract- Experiments were carried out during 2009 and 2010 in two honey bee apiaries belonging to the Agricultural Extension Department, Ministry of Agriculture, Riyadh, Kingdom of Saudi Arabia. The results showed that three species of bee-eaters belonging to the family Meropidae were present in the considered location: the European bee-eater (Merops apiaster Linnaeus, 1758), the Olive bee-eater (Merops superciliosus Linnaeus, 1766) and the Green bee-eater (Merops orientalis Latham, 1802). The three species recorded were migratory, and they were found in the apiaries during two seasons (spring and autumn). In spring, they first appeared in the bird-trapping nets on March 28, 2009 and April 02, 2010, and were last found in the nets on May 01, 2009 and April 20, 2010. In the autumn season, they appeared in the considered apiaries on September 23, 2009 and October 11, 2010, and last appeared in the apiaries on November 04, 2009 and November 01, 2010. The results suggest that the direction of the bird-trapping nets significantly affected the number of trapped bee-eaters: placing the nets on the east side of the apiary and above the apiary trapped significantly more bee-eaters than placing them in the west, north and south directions. In addition, no significant difference was found in trapped bee-eaters between the two inspection periods, in the morning (9 am) and in the evening (5 pm).

Key words- Honey bees, Apis mellifera, bee-eating birds, Merops, Meropidae, European bee-eater (Merops apiaster), Olive bee-eater (Merops superciliosus), Green bee-eater (Merops orientalis), definition, survey, monitoring, bird-trapping nets.
 
1 INTRODUCTION
Bee-eating birds belonging to the genus Merops constitute a characteristic part of the bird fauna. Most species are found in the savanna biotope and are approximately equally distributed through the tropical part of the continent. The staple diet consists of hymenopterans (Order: Hymenoptera), principally honey bees, which are captured in flight. The morphological and ecological differences within the group are remarkably small considering the large number of species (18 African). [1] stated that the family Meropidae includes 24 species, which are divided among seven genera; however, [2] has proposed a reduction from seven to only three genera. They are widely distributed and listed as a problem to beekeepers and beekeeping endeavors in many parts of the world, particularly in Africa and Asia, where they prey on bee yards and interfere with honey bee queen-rearing operations [3], [4], [5], [6], [7], [8], [9]. [10] reported that the bee-eaters are particularly dangerous to beekeeping operations because of the tendency of some of the species to attack bees in an apiary in flocks of up to 250 birds. They are found throughout the temperate and tropical areas of the old world; most species are migratory, at least on a local basis [11]. [12] observed 47 species of birds near apiaries in Thailand; of the 47 species observed, nine consumed honey bees, with Merops leschenaulti and M. orientalis eating appreciable numbers. [13] stated that the birds Nectarina asiatica and Merops orientalis were major predators of A. mellifera in India. [14] studied bee-eaters at sites in southern and central Slovakia; samples of pellets and food remains revealed the presence of 1786 prey objects from over 160 insect species. Although diet diversity was high, honey bees (28.2-42.4%) and bumble bees, Bombus spp. (16.1-39.5%), constituted the main part of the diet at all sites. He also concluded that of the honey bees (Apis mellifera) caught, 53.5% were drones and 46.5% were workers. [15] found that the birds preyed upon drones extremely sporadically and not in a specific way; hence, their findings had decisive consequences for apiculture, especially for the evolution of drone accumulation in congregation areas.
European bee-eaters (Merops apiaster) are migratory, diurnal birds that spend most of their time foraging for food. They have a broad distribution covering much of Europe and Africa, with range estimates of up to 11,000,000 square km. These migratory birds can be found as far north as Finland and range as far south as South Africa, extending east into some Asiatic countries as well. Most commonly, European bee-eaters breed and nest in southern Europe, then migrate south during autumn and winter [16], [17]. They may cause significant damage to a hive if they prey upon the queen [18]; meanwhile, [19] listed the European bee-eater as a species of least concern by the IUCN. Although their numbers have been declining over the past decade, the population (480,000 to 1,000,000 breeding individuals) is still well above any level of threat. [20], [21] found that the European bee-eater (M. apiaster) has been documented to live up to 5.9 years in the wild, and [22], [23] stated that mixed colonies of European bee-eaters and blue-cheeked bee-eaters can be found foraging together without competition because of minimal diet overlap.
Green bee-eaters (Merops orientalis) feed on flying insects and can sometimes be a nuisance to beekeepers [24]. Their preferred prey is mostly beetles, followed by hymenopterans, while orthopterans appear to be avoided [25]; they are sometimes known to take crabs [26]. Like most other birds, they regurgitate the hard parts of their prey as pellets [27]. An endoparasitic nematode (Torquatoides balanocephala) that lives in their gizzard has been found [28].
Different methods have been used to protect honey bee (A. mellifera) colonies against predation by Merops sp. These methods included scaring the birds with drum beating, stone pelting, loud noises and scarecrows, including various sound-producing devices and recorded, amplified distress calls made by an injured bee-eater; shooting the birds; poisoning and killing some of them; and using nets made from nylon [8], [12], [29], [30]. [31] applied three control measures to protect honey bee (Apis mellifera) colonies against predation by Merops orientalis. These included (A) scaring the birds by drum beating and stone pelting, (B) killing some of them and (C) keeping the colonies in poplar (Populus deltoides) plantations. The latter practice was found to be the most effective, while measures A and B were completely ineffective.
The aim of the present study was to identify the species of bee-eating birds that attack honey bee colonies in the Central Region of Saudi Arabia. It also aimed to monitor the bee-eaters by recording their appearance in the apiaries in the considered locations and the time they spend in the area. It was also planned to evaluate the effect of the direction of the bird-trapping nets on trapping the bee-eaters, in order to protect honey bees from their attack.

2 MATERIALS AND METHODS
The experiment was carried out during 2009 and 2010 in two apiaries belonging to the Agricultural Extension Department, Ministry of Agriculture, Kingdom of Saudi Arabia. The honey bee colonies in each apiary were placed under a permanent shelter (tent) made from steel columns, with a ceiling made from insulated sheets to protect the honey bee colonies from very high air temperatures during the summer season. Each apiary was about 75 meters in length and 7 meters in width. Each apiary held 120 colonies of indigenous bees (A. mellifera jemenitica Ruttner). Each colony had about seven frames covered with adult bees and about three frames of brood. The distance between the two apiaries was about 600 m.

2.1 Preparation and Setting the Bird-Trapping Nets In The Apiaries
Sixteen bird-trapping nets made from black nylon, each 15 meters in length and 2 meters in width, were used for surveying, monitoring and trapping the bee-eating birds, and for evaluating the effect of net direction on trapping them. Eight bird-trapping nets were used in each apiary; they were placed and distributed in each apiary in five directions as follows:
1.   Two were placed in front of the apiary (East direction).
2.   Two were placed behind the apiary (West direction).
3.   One was placed in left side of the apiary (North direction).
4.   One was placed in right side of the apiary (South direction).
5.   Two were placed above the apiary (above the chute).
The bird-trapping nets were left in the apiary for trapping bee-eating birds for two years, and were renewed when damaged. Birds caught were removed from the nets continually as soon as they were trapped; they were collected, counted and tabulated twice a day, in the morning (9 am) and in the evening (5 pm), during the presence of the bee-eating birds in the apiaries.

2.2 Species of Bee-Eating Birds Attack Honey Bee Colonies In The Considered Area
The bee-eating birds caught in the bird-trapping nets were described and identified based on the following characteristics: the shape of the body; the length and weight of the bird; the shape of the beak; the colours of the feathers, face, chin, throat, chest, flanks and belly; the shape and colour of the eyes, mandible, crown, nape, tail and legs; and the shape of the feet and length of the wing [32], [33], [21], [34], [35], [36].

2.3 Survey And Monitoring The Bee-Eating Birds In The Considered Area
Survey and monitoring of the bee-eating birds during the four seasons (spring, summer, autumn and winter) over two years were carried out by recording their first appearance and the numbers found in the bird-trapping nets, as well as the last date on which they were found in the nets.

Read More: Click here...

22
Electronics / Designing Solar Three-Wheeler for Disable People
« on: February 18, 2012, 02:13:52 am »
Author : Md. Shahidul Islam, Zaheed Bin Rahman, Nafis Ahmad
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract—Mobility of physically disabled or crippled people is a great concern of society. It is really difficult to realize the problems and sorrows of a physically disabled/crippled person who is partially or fully dependent on others or confined to a wheel chair with limited mobility. This paper reviews the currently available three-wheelers for disabled people and proposes a new, improved design of a solar powered three-wheeler suitable for countries like Bangladesh, an underdeveloped country with a huge number of people disabled/crippled by war, accidents and diseases. This three-wheeler is operated by solar power and is suitable for outdoor use. The solar power option enables disabled people to use it anywhere, even in remote areas where there is no electricity. A general survey was conducted on disabled people using wheel chairs and manual three-wheelers, and the opinions of experts working with disabled people were also taken into consideration to identify the needs and requirements for designing the solar three-wheeler. The proposed solar three-wheeler is meant to match and exceed the conventional three-wheeler's facilities with a more intelligent and efficient design. A solar panel to produce solar electricity, a battery system for storing electric power, an efficient motor, a cushion seat and all-terrain tires are used for this solar three-wheeler. Due consideration and attention are given to better maneuverability, effective use of solar energy, biomechanics and comfort, improved suspension, all-terrain trafficability, ease of use etc. while designing this solar three-wheeler for the physically disabled people of the country.

Index Terms— Design, Solar three-wheeler, Disabled people

1    INTRODUCTION                                                                      
A lot of difficulties and hassles are involved with the mobility of physically disabled people in society. It has been observed that physically disabled people basically use some assistive devices like crutches, artificial limbs etc. and manual wheel chairs or three-wheelers for their day-to-day movements. But these wheel chairs or three-wheelers are crude or inefficient in design, and not very suitable for outdoor use or common terrain in a country like Bangladesh. Undoubtedly these manual wheel chairs or three-wheelers are blessings for crippled people, but from the standpoint of humanity, it is just "to add insult to injury", because the commonly found manual wheel chair or three-wheeler has a basic problem: the occupant must use physical force to turn the wheels. This action is physically stressful and can result in muscle and joint pain and degradation, torn rotator cuffs, repetitive stress injury and carpal tunnel syndrome, which cause secondary injury or further disability [3].

Again, the use of conventional energy sources and the rapid development of the present world have some bad impacts on the surroundings and environment, like depletion of limited energy resources, damage to ecosystems, environmental pollution, global warming and so on. As a result, Design for Environment (DFE) is the crying need of the time, and it is very necessary to develop environment friendly equipment or transport for better living, for a better world.

Considering the overall prevailing situation, the development of a solar three-wheeler for disabled people is a vital effort in which solar energy and its advantages are taken into account. A solar three-wheeler could be a stand-alone system; it will be self-operated and independent in nature, using unending solar energy from the sun. It is powered by solar energy from a solar panel attached at the top, exposed to sunlight. It can take the user off the grid and can be used in places where there is no electricity.

The transport idea concerned here is a solar power operated three-wheeler with a light structure of moderate height, width and weight, which suits Bangladeshi terrain. Due emphasis is also given to biomechanics, comfort, safety etc. while designing the seat for the solar three-wheeler. These features give greater stability, better maneuverability, better mobility and more comfort than the available manual three-wheelers. In a sense, a solar three-wheeler can be the complete solution for the transportation of the physically disabled people of the country. The use of available resources (components such as pipes for the chassis/body, wheels, bearings etc. from the local market) and simplicity in design result in cost economy. The battery, motor and solar panels used are also readily available in the markets. All these features make the solar three-wheeler a very cost effective and environment friendly transport for the daily use of disabled people.

2 PRESENT SCENARIO
Since the liberation war of Bangladesh in 1971, a large number of people in the country have become disabled and vulnerable. The prevalence of disability is believed to be high for basic reasons relating to over population, extreme poverty, illiteracy, lack of social security, lack of awareness of traffic rules and, above all, lack of medical/health care and services. Lately, the number of disabled people has been increasing rapidly due to the increasing rate of road accidents and other relevant diseases [1]. In the present social hardship, other family members often fail to pay proper attention and care to disabled people. So, improving their lifestyle and independence is very important and urgent too.

Table 1: Basic dimensions of the presently available wheel chairs and three-wheelers

                     Wheel chair     Three-wheeler
Length               97 cm           145 cm
Width                56 cm           97 cm
Height               71 cm           86 cm
Rear wheel dia.      51 cm           68 cm
Front wheel dia.     15 to 20 cm     68 cm
Although disability is a major social and economic phenomenon in Bangladesh, there is very little reliable data available on this issue, especially in the absence of a comprehensive national survey on disabled persons. Based on some data collected by the prevailing NGOs and international organizations, it is commonly believed that almost 10% of the total population of Bangladesh is disabled, 2.5% of whom are crippled or physically disabled [2]. The presently available wheel chairs are basically for indoor use or short-distance movement, and the manual three-wheelers are for outdoor use. But these are not very suitable for use and have a lot of technical drawbacks.

The number of crippled/disabled people is quite alarming, but the number of wheel chair or three-wheeler users is not large, and they are mostly found in hospitals and residences, especially in urban areas. It is revealed from the survey that the number of present wheel chair/three-wheeler users is negligible across the country. This happens because the presently available wheel chairs and three-wheelers are manual and not very suitable for outdoor use or for the roads around the country. The roads, even in the cities, are not very smooth, and there is no lane for wheel chairs/three-wheelers. In most cases the roads are very rough and narrow, with some other limitations. Bangladesh commonly has plain land, and the roads in the villages are in rough condition, where people move with their bicycles, rickshaws, etc. City roads are better, where people move by rickshaws, auto-rickshaws, buses, taxi-cabs, cars, etc.

There are some NGOs, like the Centre for Disability Development (CDD) and the Centre for Rehabilitation of the Paralyzed (CRP), working for physically disabled people and producing some manual wheel chairs and three-wheelers somewhat suitable for outdoor use. But again the problem remains, as the user must apply physical force to drive them. Photographs of the presently available wheelchairs and three-wheelers in Bangladesh are shown in Figure 1.

Figure 1. Different types of available wheelchairs and three-wheelers: (a) wheelchair, (b) three-wheeler, (c) three-wheeler.

The common size and basic dimensions of the presently available wheel chairs and three-wheelers are shown in Table 1 above.

The main problem with the presently available three-wheelers is that they are manual and need physical force to drive, which causes secondary injuries such as upper-extremity repetitive strain injuries, vibration exposure injuries, pressure ulcers, accidental injuries, etc. There are also many technical drawbacks of the manual three-wheeler. The main drawbacks can be listed as: crude or inefficient design, biomechanics not well considered, lack of safety measures, excessive weight and size, unsuitability for outdoor use, and no shade to protect the user against adverse weather like sunshine, rain, etc.

3 PROSPECT OF SOLAR THREE-WHEELER

Scarcity of energy is a common problem all over the globe due to the limits of conventional energy sources. So eco-friendly renewable energy like solar power can be the alternative and can solve the power problem to some extent. Availability of solar radiation is the most vital consideration for the design and development of a solar system or solar equipment at any location on earth. The rated solar radiation power received at the earth's surface (global radiation flux) is 1000 W/m² (AM 1.5, sun at about 48° from the overhead position). The availability of solar radiation all over the country is very encouraging for developing a solar three-wheeler for disabled people in Bangladesh. The geo-location of Bangladesh favours receiving a high amount of solar radiation round the year: it is situated between 20.30 and 26.38 degrees north latitude and 88.04 and 92.44 degrees east longitude, which is an ideal location for solar energy utilization. Solar radiation mapping shows that the daily average solar radiation varies between 4 and 6.5 kWh/m². The maximum amount of radiation is available in March-April and the minimum in December-January [5].
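As a rough, back-of-the-envelope illustration (not from the paper), the sketch below converts the insolation range quoted above into a daily energy yield for a canopy-mounted panel; the panel area, module efficiency and derating factor are assumed values.

# Rough daily energy-yield estimate for a roof-mounted panel on the
# three-wheeler. Panel area, efficiency and derating are assumptions
# made for illustration; the insolation figures come from the text.

PANEL_AREA_M2 = 0.5        # assumed panel area on the canopy
MODULE_EFFICIENCY = 0.15   # assumed module efficiency
SYSTEM_DERATING = 0.8      # assumed losses (wiring, charge controller)

for insolation in (4.0, 6.5):  # kWh/m2/day, range quoted for Bangladesh
    yield_wh = insolation * 1000 * PANEL_AREA_M2 * MODULE_EFFICIENCY * SYSTEM_DERATING
    print(f"{insolation} kWh/m2/day -> about {yield_wh:.0f} Wh/day")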

Read More: Click here...

23
Quote
Author : Raji T.O, Oyewola O.M, Salau T.A.O
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract— Fuel flexibility and the capacity to burn a broad spectrum of fuels at high combustion efficiency with minimum emissions of greenhouse gases are a few of the key advantages fluidized bed combustion technology has over other existing combustion technologies. This report examines the design, development and testing of an experimental model bubbling fluidized bed combustor. Three unique features to enhance the performance of this system were suggested and comprehensively discussed: an inert bed temperature regulating unit; an integrated unit that enables fluidizing air pre-heating as well as cooling of the biomass feeding pipe; and segmentation of the combustor body into modules partitioned into a lower and an upper section. The results of test runs with palm kernel shell and coconut shell show that the system performance is enhanced and that the temperature is well regulated, as observed in the thermal distribution. It is therefore proposed that the present bubbling fluidized bed combustor could be beneficial to the development of commercial sizes for power generation in Nigeria and the African sub-region.

Index Terms:- Bubbling, fluidized bed, biomass, combustion, design analysis, experimental model, enhancement, performance, renewable energy.
 
1. INTRODUCTION
A bubbling fluidized bed combustor (BFBC) has different components functioning in unison to burn a wide variety of fuels in an efficient and environmentally friendly manner. It employs a strong stream of fluidizing air with approach velocity Vo such that Vo is greater than the minimum fluidizing velocity Umf and less than the full fluidization velocity Uff (Umf < Vo < Uff). At this stage the fluidization regime is characterized by bubble formation and vigorous mass turbulence; the bed particles exhibit the properties of a fluid and assume the appearance of a boiling liquid, and the bed at this point is said to be in the bubbling fluidized stage. These fluidization characteristics and the selected feed rate are essentially the basic criteria that determine the dimensions of any BFBC and the capacity of its auxiliary equipment, e.g. the blower, the biomass feeder, the cyclone separator, etc. When Vo < Umf the bed material remains a fixed (packed) bed; at the other extreme, when Vo ≥ Ut (the terminal velocity) the bed mobilizes and the transition to a circulating fluidized bed (CFB) occurs; see Fig. 1.
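As a minimal illustration of the velocity thresholds just described, the following sketch classifies the fluidization regime from the approach velocity. The threshold names follow the text; the intermediate "turbulent" label between Uff and Ut and the numeric values are our assumptions.

def fluidization_regime(v0, u_mf, u_ff, u_t):
    # Classify the bed state from the approach velocity v0, using the
    # thresholds described in the text: Umf (minimum fluidization),
    # Uff (full fluidization) and Ut (terminal velocity).
    if v0 < u_mf:
        return "fixed (packed) bed"
    if v0 < u_ff:
        return "bubbling fluidized bed"
    if v0 < u_t:
        return "turbulent regime (assumed label)"
    return "circulating fluidized bed (entrainment)"

print(fluidization_regime(v0=0.8, u_mf=0.3, u_ff=1.2, u_t=4.0))  # hypothetical m/s values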

Raji, T.O. is currently pursuing a PhD degree in the Department of Mechanical Engineering, University of Ibadan, Ibadan, Nigeria. Email: rhadtrust2@gmail.com

Oyewola, O.M. is a Reader and the current Acting Head, Department of Mechanical Engineering, University of Ibadan, Ibadan, Nigeria. Email: ooyewola@yahoo.com

Salau, T.A.O. is a senior lecturer in the same department. Email: tao.salau@mail.ui.edu.ng
 
Fixed, bubbling and fast fluidized beds: as the velocity of a gas flowing through a bed of particles increases, a value is reached at which the bed fluidizes and bubbles form as in a boiling liquid; at higher velocities the bubbles disappear, and the solids are rapidly blown out of the bed and must be recycled to maintain a stable system.
Fig. 1: A schematic drawing showing the transition from packed bed to circulating bed [15].

Fluidized Bed Combustion technology (FBC) has been shown to be a versatile technology capable of burning practically any waste combination with low emissions [1], [4]. It has gone beyond being a mere idea to become a proven technology for efficient combustion of difficult-to-burn wastes and biomass.
Fig. 3: Schematic drawing of the developed BFBC (overall dimension 2900 and diameter Ф150, as dimensioned in the original drawing). The drawing labels the gas cyclone, flue gas analyzer, biomass feeder, distributor plate and hinges; G marks the fluidizing air pre-heater/biomass feeding pipe cooling attachment, through which the fluidizing air enters. Nine thermocouples (T1-T9) are arranged axially along the combustor body. The lower section comprises modules 1 and 2; the upper section comprises modules 3, 4 and 5.
Biomass resources like woods, grasses, and plant and animal wastes are the leading sources of energy generation in Nigeria, contributing about 37% of energy demand. With an annual turnover of 144 million tonnes/year [3], biomass is particularly popular among rural dwellers and a small section of the urban populace, who generally employ open-air burning of the biomass, which limits the thermal efficiency of the combustion to the lowest possible. Apart from firewood, which is used for domestic cooking, other agricultural and silvicultural wastes like coconut shell, oil palm solid wastes, cassava sticks, maize stems, etc. are generally left wasted on the farm.

One of the key agricultural crops in Nigeria is the palm tree. It is found predominantly in southern Nigeria, especially in the wet rain forests and savannah belt. It also exists in the wet parts of North Central Nigeria, in areas like southern Kaduna, Kogi, Kwara, Benue, Niger, Plateau, Taraba and Nasarawa States as well as the Federal Capital Territory (FCT) [17]. Solid waste from the palm tree comprises empty fruit branches (EFB), palm press fibres (PPF) and palm kernel shell (PKS); this waste collectively accounts for 48% of the original palm fruit branches, with PKS alone accounting for 8% [4]. In Nigeria virtually every part of this wonder tree is traditionally useful for one thing or another; however, PKS is not maximally utilized. Only an insignificant portion of it is used for cooking or domestic processing, while the vast majority is left unused on the farm, creating an environmental nuisance, since it does not rot and is useless for agricultural cultivation. It is worthy of note that even the use of EFB and PPF as local brooms and domestic cooking fuel is fast declining with modernization, as plastic brooms and modern ways of cooking now take the predominant share. Considering the roughly 2.5 million hectares of palm trees cultivated yearly [17], a huge quantity of PKS and other palm waste components that could otherwise be used for energy generation is wasted, a huge loss considering the aggregate energy generation possible if such biomass could be fired with appropriate technology.
The potential of agricultural waste as fuel for energy generation has been investigated by many researchers. Srinivasa Rao et al. [1] investigated the effect of secondary air injection on the combustion efficiency of sawdust in a BFBC with an enlarged disengagement section; a maximum combustion efficiency of 99.2% was observed at 65% excess air. Suthum P. [4] examined the characteristics of palm waste combusted in a BFBC with a modularly constructed combustor body of diameter 150 mm. The study showed that oil palm waste could be burnt successfully in a BFBC; it was discovered that the relationship between excess air (EA) and combustion efficiency (CE) is such that CE increases with EA, reaches a maximum value for a particular feed rate, then starts to fall. This was explained by the fact that beyond the maximum point the EA promotes higher elutriation of unburnt fuel particles. A maximum CE of 92.47% was achieved at 50% excess air. Rosyida P. et al. [2] reported that the use of air staging is beneficial for the reduction of CO emission when palm waste is combusted in a BFBC; a maximum combustion efficiency of 89% was achieved for palm fiber. Achieving high CE when biomass is used as fuel is not always the norm; for instance, an investigation conducted by [16] achieved less than 32% thermal efficiency in several experiments using an inclined grate burner to combust PKS.
The foregoing results confirm that biomass can be combusted at higher efficiency and with lower emissions of NOx and CO in a BFBC than in conventional combustion technologies such as grate burners.
Furthermore, it can be seen from the above examples that each study employed a BFBC with a different modification; for instance, following the suggestion that increased residence time promotes volatile combustion, [1] employed a BFBC with an enlarged freeboard section to achieve more than 99% CE for sawdust. Clearly, in all the examples cited, modifications to the BFBC were employed to optimize its performance, a confirmation that further modifications and features may be imperative to making FBC a more efficient and more environmentally friendly method for combusting fuels.
Three performance enhancing features targeted at addressing potential problems of BFBC were examined in this work.

2.  PERFORMANCE ENHANCING FEATURES
The features discussed here are added as alternative solutions to key issues normally associated with a BFBC, especially when biomass is used as fuel. The features suggested and examined in this work include the following:
i. Inert Bed Temperature Regulating Unit (ITRU): In a BFBC the temperature is generally and deliberately kept below 950 °C, with the bed temperature always lower (often 650-800 °C); this is to limit the formation of atmospheric NOx and to prevent ash fusion, a condition that is detrimental to the fluidization of the inert particles. The conventional approach employs a water-cooled coil to limit the bed temperature to an acceptable level; a water-cooled coil immersed in the bed, apart from being costly, imposes additional technical complication and could potentially affect the fluidization characteristics of the inert bed. In the present work an electronic feedback system was employed: it senses the temperature of the inert bed and, via an electro-mechanical mechanism, controls the biomass feeder and the fluidizing air supply as necessary. The ITRU comprises a temperature controller, a type-K thermocouple, and two 40 A contactors. The circuit is constructed in such a way that the biomass feeder motor is de-activated and activated as necessary to ensure the inert bed temperature is maintained at the preset value. See Fig. 6b.
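The on/off feedback behaviour of the ITRU can be pictured with a small control sketch. This is not the paper's circuit, just a hysteresis (deadband) version of the described logic; the setpoint and deadband values are assumptions.

# On/off (hysteresis) control of the biomass feeder, mimicking the
# described ITRU behaviour. Setpoint and deadband are assumed values;
# the deadband is the usual software guard against contactor chatter.

SETPOINT_C = 750.0   # assumed preset bed temperature
DEADBAND_C = 15.0    # assumed hysteresis width

def feeder_command(bed_temp_c, feeder_on):
    """Return the new feeder state given the measured bed temperature."""
    if bed_temp_c > SETPOINT_C + DEADBAND_C:
        return False   # too hot: de-energize the feeder contactor
    if bed_temp_c < SETPOINT_C - DEADBAND_C:
        return True    # too cool: re-energize the feeder contactor
    return feeder_on   # inside the deadband: hold the current state

state = False
for t in (700.0, 770.0, 752.0, 730.0):   # sample bed readings in deg C
    state = feeder_command(t, state)
    print(t, "->", "feeder ON" if state else "feeder OFF")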

Read More: Click here...

24
Electronics / Solving Blasius Problem by Adomian Decomposition Method
« on: February 18, 2012, 02:11:07 am »
Quote
Author : V. Adanhounme, F.P. Codo
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract - Using the Adomian decomposition method we solved the Blasius problem for boundary-layer flows of pure fluids (non-porous domains) over a flat plate. We obtained the velocity components as sums of convergent series. Furthermore we constructed the interval of admissible values of the shear-stress on the plate surface.

Index Terms - Convergent series, Decomposition technique, Fluid flow, Shear-stress.

Nomenclature
1.  u    velocity in the x-direction
2.  U∞   velocity of the free stream
3.  v    velocity in the y-direction
4.  x    horizontal coordinate
5.  y    vertical coordinate
6.  μ    viscosity coefficient
7.  ρ    density
8.  ν    kinematic viscosity of the fluid

1  INTRODUCTION
The problem of flow past a flat plate is one of the interesting problems in fluid mechanics; it was first solved by Blasius [5] by assuming a series solution. Later, numerical methods were used in [7] to obtain the solution of the boundary layer equation. In [2] the first derivative with respect to y of the velocity component in the x direction at the wall, for the Blasius problem, is computed numerically for the estimation of the shear-stress on the plate surface. Later, in [9], the above problem was solved by assuming a finite power series, where the objective is to determine the power series coefficients.

The purpose of this study is to obtain the solutions of the Blasius problem for the two-dimensional boundary layer using the Adomian decomposition technique, and to compute the admissible values of the shear-stress on the wall by imposing a constraint on the first derivative, with respect to y, of the velocity component in the x direction at the wall.

2   MATHEMATICAL MODEL
The physical model considered here consists of a flat plate parallel to the x-axis with its leading edge at x = 0, infinitely long downstream, with constant free-stream velocity U∞. For the mathematical analysis we assume the properties of the fluid, such as viscosity and conductivity, to be, to a first approximation, constant. Under these assumptions the basic equations required for the analysis of the physical phenomenon are the equations of continuity and motion. According to the Boussinesq approximation these equations take the following form [2]:

∂u/∂x + ∂v/∂y = 0                (1)
u ∂u/∂x + v ∂u/∂y = ν ∂²u/∂y²                (2)
with the boundary conditions imposed on the flow in [2]
u = 0, v = 0 at y = 0;  u → U∞ as y → ∞                (3)
where ψ is the stream function related to the velocity components as:
u = ∂ψ/∂y,  v = −∂ψ/∂x                (4)

3  ANALYTICAL SOLUTION AND CONVERGENCE RESULTS
In this section we provide the analytical solutions, i.e. the fluid velocity components as sums of convergent series, using the Adomian decomposition technique, and compute the admissible values of the shear-stress on the plate surface.
Consider the stream function ψ defined by
ψ(x, y) = √(ν U∞ x) f(η),  η = y √(U∞/(ν x)),                (5)
where f is a function three times continuously differentiable on the interval [0, +∞) and U∞ a positive real constant. Then (1) and (2) with (3) are transformed into
f''' + (1/2) f f'' = 0,  f(0) = f'(0) = 0,  f'(+∞) = 1,                (6)
where ' stands for d/dη.

Definition 3.1
The problem (6) is called the Blasius problem for boundary-layer flows of pure fluids (non-porous domains) over a flat plate.
Let us transform (6) into a nonlinear integral equation. For this purpose, setting g = f'' we can write the equation in (6) as
g'/g = −(1/2) f.                (7)
Multiplying by dη and integrating the result from 0 to η we reduce (7) to
f''(η) = α exp( −(1/2) ∫0^η f(t) dt ),  where α = f''(0).                (8)
Integrating (8) three times from 0 to η and taking into account the boundary conditions in (6) we reduce (8) to the nonlinear integral equation
f(η) = (α/2) η² − (1/4) ∫0^η (η − t)² f(t) f''(t) dt,                (9)
which is a functional equation
f = N(f),                (10)
where N is a nonlinear operator from a Hilbert space H into H. In [4] Adomian developed a decomposition technique for solving nonlinear functional equations such as (10). We assume that (10) has a unique solution. The Adomian technique allows us to find the solution of (10) as the infinite series f = Σ (n ≥ 0) f_n using the following scheme: f_0 collects the terms independent of the nonlinearity, and each subsequent term f_(n+1) is computed from the Adomian polynomials A_n of the nonlinear term.
The proofs of convergence of the series Σ f_n and Σ A_n are given below. Without loss of generality, one shows by induction that each term of the series is majorized by the general term of a convergent numerical series with real coefficients c_n. Then we obtain the convergence of the decomposition series.
We arrive at the following result.
Lemma 4.1
The admissible values of the shear-stress on the plate surface obtained in (20) belong to the open interval given in (22), for each given value of the parameters and for the given approximation precision.
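Because the quantity at stake throughout this section is the wall shear parameter α = f''(0), a quick numerical cross-check may be useful. The sketch below is not the Adomian scheme of the paper; it is a classical shooting method (RK4 integration plus bisection) that recovers the well-known value f''(0) ≈ 0.332 for the Blasius problem f''' + (1/2) f f'' = 0, f(0) = f'(0) = 0, f'(∞) = 1. The integration length and step count are assumed values.

# Shooting-method cross-check for the Blasius wall shear f''(0).
# This is NOT the paper's Adomian scheme; it is a standard alternative
# used here only to sanity-check the target quantity.

def blasius_fprime_at_infinity(s, eta_max=10.0, n=2000):
    """Integrate f''' = -0.5*f*f'' with f(0)=f'(0)=0, f''(0)=s by RK4
    and return f'(eta_max), which should approach 1 for the right s."""
    h = eta_max / n
    y = [0.0, 0.0, s]  # y = (f, f', f'')

    def deriv(y):
        return [y[1], y[2], -0.5 * y[0] * y[2]]

    for _ in range(n):
        k1 = deriv(y)
        k2 = deriv([y[i] + 0.5 * h * k1[i] for i in range(3)])
        k3 = deriv([y[i] + 0.5 * h * k2[i] for i in range(3)])
        k4 = deriv([y[i] + h * k3[i] for i in range(3)])
        y = [y[i] + h * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6 for i in range(3)]
    return y[1]

# Bisection on s so that f'(infinity) = 1 (f' grows monotonically in s).
lo, hi = 0.1, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if blasius_fprime_at_infinity(mid) < 1.0:
        lo = mid
    else:
        hi = mid
print(f"f''(0) ~= {0.5 * (lo + hi):.5f}")  # ~0.33206

Any series-based value of α obtained from the decomposition can be compared against this reference number.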

5  CONCLUSION
In this paper we have investigated the analytical solutions of the Blasius problem, which are the sums of convergent series, using the Adomian decomposition technique. Then we estimated the error by comparing the exact values of the shear-stress on the plate surface obtained in this paper with the approximate values of the shear-stress obtained in [2]. In doing so, we constructed the interval of admissible values of the shear-stress on the plate surface.

Read More: Click here...

25
Quote
Author : Arka Ghosh
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract— Financial forecasting is an example of a signal processing problem which is challenging due to small sample sizes, high noise, non-stationarity and non-linearity, yet fast forecasting of stock market prices is very important for strategic business planning. The present study aims to develop a comparative predictive model, with a feedforward multilayer artificial neural network and a recurrent time-delay neural network, for financial time-series prediction. The study is developed with the help of a historical stock-price dataset made available by Google Finance. To develop this prediction model, the backpropagation method with gradient descent learning has been implemented. Finally, the neural net trained with the said algorithm is found to be a skillful predictor for non-stationary, noisy financial time series.

Key Words— Financial forecasting, financial time series, feedforward multilayer artificial neural network, recurrent time-delay neural network, backpropagation, gradient descent.
 
I.   INTRODUCTION
Over the past fifteen years, a view has emerged that computing based on models inspired by our understanding of the structure and function of biological neural networks may hold the key to the success of solving intelligent tasks by machines, like noisy time series prediction and more [1]. A neural network is a massively parallel distributed processor that has a natural propensity for storing experiential knowledge and making it available for use. It resembles the brain in two respects: knowledge is acquired by the network through a learning process, and interneuron connection strengths known as synaptic weights are used to store the knowledge [2]. Moreover, the markets have recently become a more accessible investment tool, not only for strategic investors but for common people as well. Consequently they are not only related to macroeconomic parameters; they influence everyday life in a more direct way and therefore constitute a mechanism with important and direct social impacts. The characteristic all stock markets have in common is the uncertainty related to their short- and long-term future state. This feature is undesirable for the investor but is also unavoidable whenever the stock market is selected as the investment tool. The best one can do is try to reduce this uncertainty, and Stock Market Prediction (or Forecasting) is one of the instruments in this process. We cannot exactly predict what will happen tomorrow, but from previous experience we can roughly predict it. In this paper this knowledge-based approach is taken.

The accuracy of a predictive system made with an ANN can be tuned with the help of different network architectures. The network consists of an input layer, hidden layer(s) and an output layer of neurons; the number of neurons per layer can be configured according to the required accuracy and throughput, and there is no cut-and-dried rule for that. The network can be trained using a sample training data set; this neural network model is very useful for mapping unknown functional dependencies between different input and output tuples. In this paper two types of neural network architecture, a feedforward multilayer network and a time-delay recurrent network, are used for prediction of the NASDAQ stock price. A comparative error study for both network architectures is introduced in this paper.

In this paper the gradient descent backpropagation learning algorithm is used for supervised training of both network architectures. The backpropagation algorithm was developed by Paul Werbos in 1974 and was rediscovered independently by Rumelhart and Parker. In backpropagation learning, the network weights are first set to small random values; then the network output is calculated and compared with the desired output, the difference between them being the error. The goal of efficient network training is to minimize this error by iteratively tuning the network weights using the gradient descent method; computing the gradient of the error surface is an iterative mathematical process.
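For concreteness, here is a minimal numpy sketch of one-hidden-layer backpropagation with gradient descent on a toy regression task. It is not the paper's network: the layer sizes, learning rate and synthetic data are assumptions made only to show the forward pass, the error, and the weight updates described above.

import numpy as np

# One-hidden-layer feedforward net trained by gradient descent
# backpropagation on synthetic data. Sizes and learning rate are
# illustrative assumptions, not the paper's configuration.

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 4))      # 4 toy predictors
y = X.sum(axis=1, keepdims=True)      # toy target standing in for "next open"

W1 = rng.normal(0, 0.1, (4, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(0, 0.1, (8, 1)); b2 = np.zeros((1, 1))
lr = 0.05

for epoch in range(500):
    h = np.tanh(X @ W1 + b1)          # forward pass: hidden activations
    out = h @ W2 + b2                 # linear output unit
    err = out - y                     # error versus desired output
    # backward pass: gradients of the mean squared error
    g_out = 2 * err / len(X)
    g_W2 = h.T @ g_out; g_b2 = g_out.sum(0, keepdims=True)
    g_h = (g_out @ W2.T) * (1 - h ** 2)   # tanh derivative
    g_W1 = X.T @ g_h; g_b1 = g_h.sum(0, keepdims=True)
    W1 -= lr * g_W1; b1 -= lr * g_b1      # gradient descent updates
    W2 -= lr * g_W2; b2 -= lr * g_b2

print("final MSE:", float((err ** 2).mean()))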

ANN is a powerful tool widely used in soft-computing techniques for forecasting stock prices. The first stock forecasting approach was taken by White (1988), who used IBM daily stock prices to predict future stock values [3]. When developing a predictive model for forecasting the Tokyo stock market, Kimoto, Asakawa, Yoda, and Takeoka (1990) reported on the effectiveness of alternative learning algorithms and prediction methods using ANN [4]. Chiang, Urban, and Baldridge (1996) used ANN to forecast the end-of-year net asset value of mutual funds [5]. Trafalis (1999) used a feedforward ANN to forecast the change in the S&P 500 index; in that model, the input values were univariate data consisting of weekly changes in 14 indicators [6]. Forecasting of the daily direction of change in the S&P 500 index was done by Choi, Lee, and Rhee (1995) [7]. Despite the widespread use of ANN in this domain, there are significant problems to be addressed. ANNs are data-driven models (White, 1989 [8]; Ripley, 1993 [9]; Cheng & Titterington, 1994 [10]), and consequently the underlying rules in the data are not always apparent (Zhang, Patuwo, & Hu, 1998 [11]). Also, the buried noise and complex dimensionality of stock market data make it difficult to learn or re-estimate the ANN parameters (Kim & Han, 2000 [12]). It is also difficult to come up with an ANN architecture that can be used for all domains. In addition, ANN occasionally suffers from the overfitting problem (Romahi & Shen, 2000 [13]) [14].

II.   DATA ANALYSIS AND PROBLEM DESCRIPTION

This paper develops two comparative ANN models step-by-step to predict the stock price over a financial time series, using data available at the website http://www.google.com/finance. The problem described in this paper is a predictive problem. Five predictors have been used with one predictand. The five predictors are listed below:

• Stock open price
• Stock price high
• Stock price low
• Stock close price
• Total trading volume

The predictand is the next stock opening price.

All these five predictors of year X are used for prediction of the stock opening price of year (X+1). The whole dataset comprises 1460 days of NASDAQ stock data. The first subset contains the early 730 days of data (open, high, low, close, volume), which is the input series to the neural network predictor. The second subset has the later 730 days of data (only open), which is the target series for the neural network predictor. The network then learns the dynamic relationship between those previous five parameters (open, high, low, close, volume) and the one final parameter (open), which it will predict in future.

A.   Data Preprocessing

Once the historical stock prices are gathered, it is time to select data for training, testing and simulating the network. In this project we took 4 years of historical prices of a stock, i.e. a total of 1460 working days of data. We performed R/S analysis over these data for predictability (Hurst exponent analysis). The Hurst exponent (H) is a statistical measure used to classify time series: (1) H = 0.5 indicates a random series; (2) 0 < H < 0.5 indicates an anti-persistent series; (3) 0.5 < H < 1 indicates a persistent series; the larger the H value, the stronger the trend. An anti-persistent series has a "mean-reverting" characteristic, which means an up value is more likely to be followed by a down value, and vice versa; the strength of mean reversion increases as H approaches 0.0. A persistent series is trend reinforcing, which means the direction of the next value (up or down compared to the last value) is more likely to be the same as that of the current value; the strength of the trend increases as H approaches 1.0. Most economic and financial time series are persistent, with H > 0.5. We therefore took the dataset time series with Hurst exponent > 0.5, indicating persistence and good predictability.
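The rescaled-range (R/S) estimate of the Hurst exponent described above can be sketched compactly; the window sizes and the toy input series are assumptions for illustration.

import numpy as np

# Rescaled-range (R/S) estimate of the Hurst exponent H.
# H is the slope of log(R/S) versus log(window size).

def hurst_rs(series, window_sizes=(8, 16, 32, 64, 128)):
    x = np.asarray(series, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())   # cumulative deviation profile
            r = dev.max() - dev.min()       # range of the profile
            s = w.std()                     # standard deviation
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n)); log_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(log_n, log_rs, 1)[0]  # fitted slope = H

# Toy input: returns of a random walk, so H should come out near 0.5.
prices = np.cumsum(np.random.default_rng(1).normal(size=1460))
print("estimated H:", round(hurst_rs(np.diff(prices)), 2))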

Read More: Click here...

26
Quote
Author : Saima Munawar, Mariam Nosheen and Dr.Haroon Atique Babri
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract— Intrusion detection systems are a vital part of computer security, commonly used for precaution and detection. They are built as classifier, descriptive or predictive models for proficient classification of normal behavior versus abnormal behavior of IP packets. This paper presents a solution regarding proper handling of data transformation methods and the importance of data analysis of the complete data set, applied to hybrid neural network approaches used to cluster and classify normal and abnormal behavior, in order to improve the accuracy of a network-based anomaly detection classifier. Neural networks require data in numerical form, but IP connections and network packets have some symbolic features which are difficult to handle without proper data transformation analysis. For this reason, the new non-redundant NSL-KDD CUP data set was used. The experimental results show that the indicator variable method is more effective than both the conditional probabilities and the arbitrary assignment methods in terms of accuracy and balanced error rate.
 
Index Terms — ANN, Anomaly Detection, Self Organizing Map, Backpropagation network, Indicator variables, Conditional probability

1   INTRODUCTION                                       
In computer security, network administrators always suggest preventive action as the better cure for any system. Intrusion Detection Systems (IDS) are classified into three categories: host-based, network-based and vulnerability-assessment [1]. Signature-based detection and anomaly detection are the two basic models of intrusion detection. Signature-based detection detects attacks only through known intrusions and cannot detect novel behavior; it is especially used in commercial tools and has to keep a database updated with new attacks. Anomaly intrusion detection can resolve these limitations of the signature-based model and is used to detect new attacks by searching for abnormality [2], [3]. Anomaly detection issues have numerous possibilities that are yet unexplored [4]. Network and computer security is a significant issue for every security-demanding organization. Prevention, detection and response are the three basic foundations of network security; for this purpose many researchers emphasize preventive action over detection and response [5]. With the increasing demand for network security, many devices like firewalls and intrusion detection systems are used to control abnormal packet accessibility. Basically, abnormal packets violate the internet protocol standards, and such packets are used to crash systems [6]. For this reason, better intrusion detection devices are being built for prevention and accurate detection of normal and abnormal packets, and to reduce the false alarm rate; IDS are basically devoted to fulfilling this purpose by monitoring the system intelligently. As far as access control points are concerned, a firewall is good, but it is not designed to act against intrusions; that is why most security experts recommend an IDS located before and after the firewall [7], [8]. Many researchers have been improving intrusion detection systems through different research areas such as statistics, machine learning, data mining, information theory and spectral theory [2], [3], [4]. The purpose of this research is to provide a hybrid artificial-neural-network design approach for an anomaly intrusion detection classifier system. Since the symbolic features of the IP data set cannot be handled directly, two data transformation methods, indicator variables and conditional probabilities, are considered, which are effective in improving classifier performance; the data are processed through a hybrid technique of the self-organizing map and backpropagation neural networks. The data transformation is applied to nine selective features of the IP NSL data set, prepared for an anomaly detection classifier to be used for LAN security.

Five sections are presented in this research. Section 2 is the background literature of the related research. Section 3 provides a detailed analysis of the proposed research methodology; the algorithms of SOM and BPN and their training and testing results are discussed. Section 4 provides a detailed analysis of the experimental results and a comparison of how the two methods affect the performance of the classifier. Finally, section 5 presents the conclusion and discusses future directions in this domain.

2   RELATED STUDY
2.1 Hybrid learning in misuse and anomaly detection

Hybrid approaches have been used to resolve anomaly intrusion detection problems. Hamdan et al. [9] compared four techniques: support vector machine and neural network from supervised learning, and self-organizing map and fuzzy logic from unsupervised learning. They only proposed descriptions of these techniques and did not include the methodology and numerical analysis of all the applied techniques. In another approach, an artificial immune system is used for detection and a self-organizing map is used for classification; it emphasizes higher-level information output, rather than low-level output, as more beneficial to security analysts analyzing reports. The KDD CUP 1999 data set is used as input, with special focus on two types of attacks: denial-of-service and user-to-root [10]. M. Bahrololum et al. [11] presented a design approach to be elaborated in future enhancements. They described an introduction to the SOM and backpropagation algorithms, the KDD CUP data set features, training and testing data, and an experiment table view. But besides all of this, they did not mention how the data set was arranged and in which software it was used, how the experiment was implemented, how the techniques were applied to the data set, or what methods were used to evaluate the results; they only provided the proposal and discussed some design issues with flow diagrams. Hayoung et al. [12] proposed new labeling methods for this domain; but in a real-time detection system, if no correlation is built, how are normal and anomalous records to be detected? Labeling is supervised learning, and a huge analysis is again required for the correlation between the features. Moreover, they provided only the detection time, not the labeling time, whereas in a real-time system the total time for the completion of all processes is what matters.

2.2 Analysis and Data Transformation Processes

Data analysis and preprocessing are a core part of the artificial neural network architecture for processing and producing accurate results. Anomaly detection has been attracting the attention of many researchers during the last decade. For this reason many researchers consider not only new algorithms but also the analysis of the data sets used for training and testing classifiers. The KDD CUP 99 data set is the one mostly used for intrusion detection problems. It has 41 features in three basic groups, namely features of individual TCP connections, content features, and traffic features, comprising 7 symbolic and 34 continuous attributes [13]. Tavallaee et al. presented a detailed and critical review of the KDD CUP 99 data set; they discussed its problems and resolved two issues that affect performance and cause poor evaluation in anomaly detection approaches. They proposed a new data set, NSL-KDD, which includes selected records and removes the redundancy of records of KDD CUP 99. The format of this data set is ARFF (attribute-relation file format). The authors claimed that this data set will help researchers in solving anomaly detection problems [14]. Preprocessing is applied before processing by the neural network algorithms, because these algorithms require quantitative data instead of qualitative information. The most commonly used conversion method is arbitrary assignment; in response to criticism of this method, three other approaches are used for machine learning algorithms. E. Hernandez et al. presented three methods for symbolic feature conversion applied to the KDD CUP data set. They described all these techniques in detail and also compared them as applied to different feedforward neural networks and support vector machines, claiming that these three conversion methods improve the prediction ability of the classifier. These preprocessing methods (converting symbolic attributes into numeric form) are indicator variables, conditional probabilities and the SSV (separability split value) criterion-based method [15].

3. PROPOSED EXPERIMENTAL METHODOLOGY
This section is divided into these main processes: data analysis, preprocessing, modeling of clustering and classification, and performance evaluation.

3.1 Data Analysis
The NSL-KDD CUP data set is reasonable and improves evaluation. This data set is offline and is provided for anomaly detection classification research for better evaluation of classifiers. It also gives consistent and more comparable results [14], [17], [18].

3.2 Feature selection
It is difficult to select the important features for detecting and classifying normal packets versus attacks, and much research work is being done on feature selection for anomaly detection problems. The basic question is how many types of features should be selected to improve the classification rate, and to which types of attacks they relate. In this research, the first basic 9 attributes of individual TCP connections are used. They consist of duration, protocol type, service, source bytes, destination bytes, flag, land, wrong fragment and urgent; these features include 3 symbolic and 5 continuous attributes. The protocol and service are the most important features for detecting attacks [13], [14]. These features were selected mainly because they contain the maximum number of symbolic features, for exercising the symbolic-feature preprocessing.

3.3 Preprocessing
The given input data set has symbolic and continuous attributes. These need to be converted into numerical form for processing by neural network algorithms. Researchers seek the best data transformation techniques to apply to the selected features to improve classifier performance. The main purpose is to show how different preprocessing methods affect the accuracy of different machine learning tasks. Besides modification of the algorithm, it is also important to choose data transformation and feature selection methods according to the demands of the machine learning task and its training. The details of the data transformation methods used in this research are given below.
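As an illustration of the indicator-variable transformation named above, the following sketch one-hot encodes the symbolic TCP-connection attributes; the tiny frame is made-up data standing in for NSL-KDD records, and pandas is assumed to be available.

import pandas as pd

# Indicator-variable (one-hot) encoding of the symbolic features of the
# basic TCP-connection attributes. The frame below is made-up data
# standing in for NSL-KDD records.

records = pd.DataFrame({
    "duration": [0, 12, 3],
    "protocol_type": ["tcp", "udp", "icmp"],   # symbolic
    "service": ["http", "domain_u", "ftp"],    # symbolic
    "flag": ["SF", "S0", "REJ"],               # symbolic
    "src_bytes": [181, 239, 0],
})

# Each symbolic value becomes its own 0/1 indicator column, so the
# network never sees an artificial ordering between symbols.
encoded = pd.get_dummies(records, columns=["protocol_type", "service", "flag"])
print(encoded.columns.tolist())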

Read More: Click here...

27
Quote
Author : A.Raimy, K.Konate, NM Bykov
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract— In this paper we present a technique for improving the efficacy of a speech signal compression algorithm without loss of the individual features of speech production. Compression in this case means deleting, from the digital signal, those quantization steps (samples) that can be predicted. We propose to decrease the number of those samples using a modified linear prediction algorithm with variable order. That allows us to decrease compression time and save computer resources.

Index Terms— speech signal compression, quantization steps, linear prediction algorithm, computer resources.

1   INTRODUCTION                                                                     
THE task of efficient representation of the speech signal is one of the vital tasks in speaker identification problems. For example, an automatic speaker recognition system is installed on a LAN or WAN server, which authorizes a terminal to access the network according to the voice of the subscriber. There are two ways of processing information in this case:
1) get the identity features of the speaker from the speech signal on the subscriber's terminal and transfer them to the server for a decision regarding the possibility of admission;
2) compress the speech signal, without losing the information about the speaker's identity, in the form of a password wav-file, and transfer it across the network to the server, where the identification procedure is carried out.
One of the advantages of the first approach is the reduction of the transmission time over the network. Its main drawbacks are that it reduces the confidentiality of the speaker identification procedures, and that there is a need to install on the terminals a system for primary analysis and description of the speaker signal features. Thus, the second approach is more effective for information processing regarding the number of computations required for the compression, and the use of ASP technologies for the selection of informative features and for decision-making.

Analysis of known works
According to the well-known methods of signal compression, and given the statistical characteristics of the speech signal, the parameters of the analog-to-digital converters (ADC) are chosen according to the rules presented in [1], [2]: the discretization frequency is determined by the upper limit frequency of the signal, the quantification range by the dispersion, and the quantification step by the signal-to-noise ratio and the required precision. Since the speech signal is not stationary, the parameters of the ADC are chosen approximately, assuming the most catastrophic situation, which is rarely encountered. As a result, the inherent redundancy of the speech signal is compounded by the redundancy of the discrete transformation, and a new problem arises: eliminating the ADC's redundancy. In the numerous variants of pulse modulation and adaptive coding, which are used today to eliminate encoding redundancy, the sample rate remains constant and equals the Nyquist frequency, and redundancy is eliminated by analyzing the values of neighboring signal samples.

The aim of the research
The aim of the research is to increase the efficiency of the algorithm of speech signal compression, without losing the information related to the personal peculiarities of the speaker, by removing those samples that can be predicted.

2 THEORETICAL FOUNDATIONS OF THE PROPOSED METHOD

In this work we propose to reduce the number of signal samples by using a modified method of variable-order linear prediction. The peculiarity of the proposed method consists in a two-step processing of the speech signal, which allows reducing the time necessary for wav-file compression. The process is carried out in two steps:
1. Preliminary compression;
2. Final compression.
At the first stage the wav-file is processed using an original technique, which consists in approximating the speech signal by a polyline, with the possibility of setting the degree of its deviation from the original signal. At the second stage the wav-file areas which were not affected during the initial compression procedure are approximated using a polynomial, whose order is determined according to the accuracy required to restore the speech signal from the archive file.
Since the speech signal is a continuous function x(t) whose spectrum is limited by the upper frequency F, it is defined by the succession of its samples, whose time interval is calculated using the following formula:

Δt = 1/(2F).

Thus the signal x(t) can be described as follows:

x(t) = Σ_k x(kΔt) · sin(2πF(t − kΔt)) / (2πF(t − kΔt)),

where sin(z)/z is the sample function and t assumes the discrete values kΔt at the sampling instants.

For a limited duration T of the speech signal the number of signal samples is defined by the expression:

N = T/Δt = 2FT.
Taking into account the quasi-stationarity of the signal, and also the fact that the data collection systems are not critical with respect to real-time processing, a method of reducing the encoding redundancy of the speech signal using the ADC has been developed.
Minimization of the error of the restored signal consists in finding those fixed values of the argument that ensure convergence of the broken line through the chosen vertices towards the function, so that over the entire range of the argument the absolute error does not exceed the permissible values.
The function between two retained samples is represented by the segment of the straight line joining the corresponding vertices: for t in [t_i, t_(i+1)],

x̃(t) = x(t_i) + (x(t_(i+1)) − x(t_i)) · (t − t_i)/(t_(i+1) − t_i).

In general, the restored signal is the broken line (polyline) made up of these segments, whose vertices are the retained samples.
The approximation error is determined by the remainder term of the interpolation formula. In this case, the segment of line within the time interval [t_i, t_(i+1)] is defined by the linear interpolation expression above, and the remainder term of the function expansion on the same interval is

R(t) = (1/2) x''(ξ) (t − t_i)(t − t_(i+1)),

where x''(ξ) is the second derivative of the given function within the interval. If the maximal value M2 of |x''| on the interval is known, then

|R| ≤ M2 (Δt)² / 8.

Letting |R| equal the allowable error ε, we get the formula for the sampling interval

Δt = √(8ε / M2).
Assuming the upper frequency of the signal bandwidth is defined, we can determine the deviation of the real signal value from the predicted one. Based on the above, an algorithm implementing the pre-compression procedure for voice information was created (a software sketch follows the list). It includes the following steps:
1. Set the level of allowable absolute error of the recovered signal;
2. Set the minimum size of the compression buffer;
3. For the current point the prediction coefficient is determined;
4. If the deviation satisfies the allowable-error condition, incorporate the current sample into the compression buffer, increase the buffer counter by 1 and go to step 3; if the inequality is not fulfilled, check the buffer counter: if it has not reached the minimum buffer size, reset it and go to step 3; otherwise perform the compression of the buffered segment;
5. If the end of the wav-file has not been reached, go to step 3.
Linear prediction is used for the realization of the second step of compression [3], [4]. The signal is presented in digital form x(n), n = 1, …, N, where N is the number of signal samples, obtained by sampling it at a certain frequency F. This signal x(n) can be presented as a linear combination of the preceding values of the signal and some excitation u(n):

x(n) = Σ (k = 1..p) a_k x(n − k) + G u(n),

where G is the amplification coefficient and p is the order of prediction. Then, knowing the values of the signal x(n), the problem reduces to searching for the coefficients a_k and G. For the estimate we will use the least squares method, assuming the signal x(n) to be deterministic. The values of the signal x(n) are expressed through the estimated values x̃(n) by the following formula:

x̃(n) = Σ (k = 1..p) a_k x(n − k).

Then the prediction error can be described as follows:

e(n) = x(n) − x̃(n) = x(n) − Σ (k = 1..p) a_k x(n − k).

Using the least squares method, the parameters a_k are selected so as to minimize the average or the sum of squares of the prediction error. In order to find the coefficients a_k, the matrix method [5], [7] known as the Durbin method is used.
The calculation of the linear prediction coefficients and of the prediction error is performed by the following algorithm (a sketch of the Durbin recursion follows the list):
1. The speech signal is segmented into stationary intervals;
2. For the separated intervals, a system of linear equations is formed, which is solved by the matrix method or by the Durbin method using the auto-correlation function (the method is selected by the user);
3. The prediction error is calculated.
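For step 2, a compact sketch of the Durbin (Levinson-Durbin) recursion over the autocorrelation sequence may help; the test signal and prediction order are illustrative assumptions.

import numpy as np

# Levinson-Durbin recursion: LPC coefficients a_k from the
# autocorrelation sequence r[0..p].

def levinson_durbin(r, order):
    a = np.zeros(order + 1)   # A(z) polynomial coefficients, a[0] = 1
    a[0] = 1.0
    err = r[0]                # prediction error power
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                    # reflection coefficient
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= (1.0 - k * k)              # error power shrinks each order
    # predictor form: x~(n) = sum_k coeffs[k-1] * x(n - k)
    return -a[1:], err

# Toy quasi-stationary frame: a sinusoid with a little noise.
x = np.sin(0.3 * np.arange(200)) + 0.001 * np.random.default_rng(0).normal(size=200)
r = np.array([np.dot(x[:200 - lag], x[lag:]) for lag in range(5)])
coeffs, err = levinson_durbin(r, 4)
print("LPC coefficients:", np.round(coeffs, 3), " residual power:", round(err, 4))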

Read More: Click here...

28
Quote
Author : Er. Manoj Arora, Er. R S Chauhan, Er.Lalit Bagga
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract- In 1959 J. E. Volder presented a new algorithm for the real-time solution of the equations arising in navigation systems. This algorithm was the best replacement of analog navigation systems by digital ones. The CORDIC algorithm is used for fast calculation of elementary functions like multiplication, division, trigonometric functions and logarithmic functions, and for various conversions like conversion of rectangular to polar coordinates and conversion between BCD and binary coded information. At the present time the CORDIC algorithm has a number of applications in the fields of communication, 3-D graphics, signal processing and more. This review paper presents a prototype hardware implementation of the CORDIC algorithm using a Spartan-II series FPGA, with constraints of area efficiency and throughput architecture.
 
Index Terms : CORDIC; FPGA; Discrete Fourier Transform (DFT); Discrete Cosine Transform (DCT); Iterative CORDIC; Pipelined CORDIC; SVD.
 
1 INTRODUCTION
Co-ordinate Rotation Digital Computer is abbreviated as CORDIC. The main concept of this algorithm is based on the very simple and long-lasting fundamentals of two-dimensional geometry. The first description of the iterative approach of this algorithm was provided by Jack E. Volder in 1959 [1]. The CORDIC algorithm provides an efficient way of rotating vectors in a plane by simple shift-add operations to estimate basic elementary functions like trigonometric operations, multiplication and division, and some other operations like logarithmic functions, square roots and exponential functions. Most applications, whether in wireless communication or in digital signal processing, are based on microprocessors which make use of a single instruction set and a bunch of addressing modes for their working. These processors are cost efficient and offer extreme flexibility, yet they are not suited for some of these applications. For most of these applications the CORDIC algorithm is a well-suited alternative to architectures that rely on simple multiply-and-add hardware. Pocket calculators and some DSP objects like FFT, DCT and demodulators are common fields where the CORDIC algorithm is found.
In 1971 CORDIC-based computing received attention, when John Walther showed that, by varying a few simple parameters, it could be used as a single algorithm for the implementation of most mathematical functions. During this period Cochran invented various algorithms and showed that CORDIC is a much better approach for scientific calculator applications. The popularity of CORDIC was enhanced thereafter, mainly due to its potential for efficient and low-cost implementation of a large class of applications, which include the generation of trigonometric, logarithmic and transcendental elementary functions; complex number multiplication; eigenvalue computation; matrix inversion; solution of linear systems; and singular value decomposition (SVD) for signal processing, image processing, and general scientific computation. Some other popular and upcoming applications are:
1) Direct frequency synthesis, digital modulation and coding for   speech/music synthesis and communication;
2) Direct and inverse kinematics computation for robot manipulation;
3) Planar and three-dimensional vector rotation for graphics and animation.
Although the CORDIC algorithm is not a very fast algorithm, it is used because of its very simple implementation, and because the same architecture, based on simple shift-add operations, can be used for all of these applications.

2 CORDIC ALGORITHM
CORDIC is an acronym for COordinate Rotation DIgital Computer. The CORDIC algorithm is used for real-time calculation of the exponential and logarithmic functions using iterative rotation of the input vector. This rotation of a given vector (xi, yi) is realized by means of a sequence of rotations with fixed angles, which results in an overall rotation through a given angle or in a final angular argument of zero. Fig. 1 shows the computing steps involved in the CORDIC algorithm.

In Fig. 1 [1] the angle αi is the amount of rotation for each iteration, and this rotation angle is defined by the following equation:

αi = tan⁻¹(2⁻ⁱ)                (1)

Fig. 1: CORDIC computing step
So this angular movement of the vector can easily be achieved by the simple process of shifting and adding. Now consider the iterative equations

x_(i+1) = x_i cos αi − y_i sin αi
y_(i+1) = x_i sin αi + y_i cos αi                (2)

From equation (1) we can write

x_(i+1) = cos αi (x_i − y_i tan αi)
y_(i+1) = cos αi (y_i + x_i tan αi)

Now we define the scale factor Ki as

Ki = cos αi = 1/√(1 + 2^(−2i))

In polar form, writing (x_i, y_i) = (R_i cos θ, R_i sin θ), equations (2) simply rotate the vector through αi:

x_(i+1) = R_i cos(θ + αi)
y_(i+1) = R_i sin(θ + αi)                (3)

Equivalently, substituting tan αi = 2^(−i):

x_(i+1) = Ki (x_i − 2^(−i) y_i)
y_(i+1) = Ki (y_i + 2^(−i) x_i)

Now, the direction of rotation may be clockwise or anticlockwise, i.e. unpredictable across iterations, so we define a binary variable di to identify the direction; it can equal either +1 or −1. Putting di in the above equations we get

x_(i+1) = Ki (x_i − di 2^(−i) y_i)
y_(i+1) = Ki (y_i + di 2^(−i) x_i)                (4)

The value of di depends on the direction of rotation: if we move clockwise then di is +1, otherwise −1. These iterations are basically combinations of elementary operations (addition, subtraction, shifting and table look-up); no multiplication or division is required in the CORDIC operation.
In the CORDIC algorithm, a number of microrotations are combined in different ways to realize different functions. This is achieved by properly controlling the direction of the successive microrotations. On the basis of controlling these microrotations we can divide CORDIC into two modes, and this control of the successive microrotations can be achieved in the following two ways (a software sketch of the rotation mode follows the two descriptions):

Vectoring mode: In this mode the y-component of the input vector is forced to zero. This yields the computation of the magnitude and phase of the input vector.
Rotation mode: In the rotation mode the residual angle component is forced to zero, and this mode yields the computation of a plane rotation of the input vector by a given input phase θ0.
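To make the rotation mode concrete, here is a small software sketch of equations (1)-(4) driving the residual angle to zero; the iteration count and the unit start vector are assumed choices, and floating-point arithmetic stands in for the hardware shift-add data path.

import math

# Rotation-mode CORDIC sketch: +/-1 microrotations steer the residual
# angle z toward zero; the final vector gives (cos(theta), sin(theta))
# after one aggregate scale correction.

def cordic_sin_cos(theta, iterations=16):
    # precomputed elementary angles alpha_i = atan(2^-i), as in eq. (1)
    alphas = [math.atan(2.0 ** -i) for i in range(iterations)]
    # aggregate scale factor: product of cos(alpha_i) over all iterations
    k = 1.0
    for i in range(iterations):
        k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta          # start on the unit x-axis
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0    # microrotation direction, eq. (4)
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * alphas[i]             # drive the residual angle to zero
    return x * k, y * k                # apply the scale correction once

c, s = cordic_sin_cos(math.pi / 6)
print(round(c, 4), round(s, 4))        # ~0.8660, ~0.5000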

2.1 Vectoring mode
As written earlier, in the vectoring mode of the CORDIC algorithm the magnitude and the phase of the input vector are calculated. The y-component is forced to zero, which means the input vector (x0, y0) is rotated towards the x-axis. The CORDIC iteration in vectoring mode is therefore controlled by the sign of the y-component as well as of the x-component; that is, in the vectoring mode the rotator rotates the input vector through whatever angle aligns the result with the x-axis direction.
So in the vectoring mode the CORDIC equations are:

Read More: Click here...

29
Quote
Author : Savita R. Bhosale, Dr. S. L. Nalbalwar and Dr. S. B. Deosarkar
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract: In an optical code division multiple access (OCDMA) system, many users share the same transmission medium, each user being assigned a unique pseudo-random optical code (OC). OCDMA is attractive for next-generation broadband access networks due to its features of allowing fully asynchronous transmission with low-latency access, soft capacity on demand, protocol transparency, simplified network management, as well as increased flexibility of QoS control and enhanced confidentiality in the network. Hence, in this paper, we propose a technique using spectral phase encoding in the time domain for eight users. This technique proves to be very effective in handling eight users at a 4 Gb/s bit rate for a Metropolitan Area Network (MAN). Results indicate significant improvement in terms of Bit Error Rate (BER) and a very high quality factor in terms of Quality of Service (QoS). In our analysis, we have used Pseudo-Orthogonal (PSO) codes. The simulations are carried out using OptSim (RSOFT).

Keywords: MAI, OCDMA, OOC, PSO, QoS, BER, PON, ISD, CD.
 
1  INTRODUCTION
OPTICAL code division multiple access (OCDMA), where users share the same transmission medium by being assigned unique pseudo-random optical codes (OC), is attractive for next-generation broadband access networks due to its features of allowing fully asynchronous transmission with low-latency access, soft capacity on demand, protocol transparency and simplified network management, as well as increased flexibility of QoS control [1~3]. In addition, since the data are encoded into pseudo-random OCs during transmission, it also has the potential to enhance the confidentiality in the network [4~6]. Figure 1 illustrates the basic architecture and working principle of an OCDMA passive optical network (PON). In the OCDMA-PON network, the data are encoded into pseudo-random OCs by the OCDMA encoder at the transmitter, and multiple users share the same transmission medium by being assigned different OCs.

At the receiver, the OCDMA decoder recognizes the OCs by performing matched filtering, where the auto-correlation for the target OC produces a high-level output while the cross-correlation for an undesired OC produces a low-level output. Finally, the original data can be recovered after electrical thresholding. Recently, the coherent OCDMA technique with ultra-short optical pulses has been receiving much attention for its overall superior performance over incoherent OCDMA and for the development of compact and reliable en/decoders (E/D) [7~12]. In coherent OCDMA, encoding and decoding are performed either in the time domain or in the spectral domain, based on the phase and amplitude of the optical field instead of its intensity.
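The matched-filtering and thresholding step can be illustrated with a small sketch. The bipolar (+1/-1 phase) codes below are hypothetical examples, not the paper's PSO codes; the point is that correlating the received chips against a user's own code gives a high peak only for the intended user.

import numpy as np

# Hypothetical bipolar phase codes for two users (illustrative only).
code_a = np.array([+1, -1, +1, +1, -1, +1, -1, -1])
code_b = np.array([+1, +1, -1, +1, +1, -1, -1, +1])

def encode(bit, code):
    # Spread one data bit over the user's code chips.
    return bit * code

def decode(received, code, threshold=4.0):
    # Matched filter: correlate, then apply electrical thresholding.
    correlation = np.dot(received, code)      # auto- vs cross-correlation
    return 1 if correlation > threshold else 0

# User A sends bit 1; user B's signal acts as multiple-access interference.
channel = encode(1, code_a) + encode(1, code_b)
print(decode(channel, code_a))                # strong auto-correlation -> 1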
 
Fig.1. Working principle of an OCDMA network

In coherent time-spreading (TS) OCDMA, the encoding and decoding are performed in the time domain. In such a system, the encoding spreads a short optical pulse in time with a phase-shift pattern representing a specific OC. The decoding performs a convolution on the incoming OC using a decoder that has the inverse phase-shift pattern of the encoder, generating a high-level auto-correlation and low-level cross-correlations.

2 SIMULATION SET-UP
The encoders use delay-line arrays providing delays in integer multiples of the chip time. The placement of the delay-line arrays and the amount of each delay and phase shift are dictated by the specific signatures. PSO matrix codes are constructed using a spanning ruler, or optimum Golomb ruler: a (0,1) pulse sequence in which the distance between any pair of pulses is a non-repeating integer, so that the distances between nearest neighbors, next-nearest neighbors, etc., can be depicted as a difference triangle with unique integer entries. The ruler-to-matrix transformation increases the cardinality (code-set size) from one (1) to four (4) and the ISD (= cardinality/CD) from 1/26 to 4/32 = 1/8. The ISD translates to bit/s/Hz when the codes are associated with a data rate and the code dimension is translated into the bandwidth expansion associated with the codes, as follows:


ISD = throughput / bandwidth required
    = (cardinality * data rate) / ((1/Tb) * bandwidth expansion)
    = (n * r * R) / (R * CD)
    = (n * r) / CD

The enhanced cardinality and ISD, while preserving the OOC property, are general results of the ruler-to-matrix transformation.
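As a quick numerical check of the figures quoted above: with cardinality n*r = 4 and code dimension CD = 32, the formula gives ISD = 4/32 = 1/8, while the original ruler alone (cardinality 1, CD = 26) gives only 1/26.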

We can convert the PSO matrices to wavelength/time (W/T) codes by associating the rows of the PSO matrices with wavelengths (or frequencies) and the columns with time slots, as shown in Table I. The matrices M1…M32 are numbered 1…32 in the table, with the corresponding assignment of wavelengths and time slots. For example, code M1 is (λ1; λ1; λ3; λ1) and M9 is (λ1,λ4; 0; λ7,λ8; 0); here the semicolons separate the time slots in the code. (The codes M1 and M9 are shown in bold numerals in the table.)
We focus on codes like M1 because it shows extensive wavelength reuse, and on codes like M9 because it shows extensive time-slot reuse. It is this extensive wavelength and time-slot reuse that gives these matrix codes their high cardinality and high potential ISD. Four mode-locked lasers are used to create a dense WDM multi-frequency light source. Pseudo-orthogonal (PSO) matrix codes [3] are popular for OCDMA applications primarily because they retain the correlation advantages of PSO linear sequences while reducing the need for bandwidth expansion. PSO matrix codes also generate a larger code set. An interesting variation is described in [1], where some of the wavelength/time (W/T) matrix codes permit extensive wavelength reuse and some allow extensive time-slot reuse. In this example, an extensive time-slot-reuse sequence is used for User 1: (λ1,λ3; 0; λ2,λ4; 0). There are four time slots used without any guard band, giving a chip period of 100 ps. The code set for time spreading is mapped as C1:{0; λ2; 0; λ4}, C2:{λ1; 0; λ3; 0} … C8:{λ1; λ2; 0; 0}. The code set applying the binary phase shift is mapped as M1:{0;1;0;1}, M2:{1;0;1;0} … M8:{0;0;1;1} (1 represents a π phase shift, 0 represents no phase shift).
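A small sketch of how such W/T codes can be represented and checked for collisions; the slot assignments follow the C1 and C2 examples just quoted, while the helper function and names are ours:

# Each W/T code maps its four 100-ps time slots to a set of wavelengths;
# "lam2" stands for λ2, and empty sets are the 0 entries.
C1 = [set(), {"lam2"}, set(), {"lam4"}]
C2 = [{"lam1"}, set(), {"lam3"}, set()]

def correlation(code_x, code_y):
    # Count wavelength hits in aligned time slots (zero-shift case).
    return sum(len(a & b) for a, b in zip(code_x, code_y))

print(correlation(C1, C2))   # 0: the two users never collide in a slot
print(correlation(C1, C1))   # 2: auto-correlation peak for the owner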

TABLE 3
SPE O-CDMA system parameters used for simulation
 
3 PROPOSED SPE O-CDMA SCHEME
The proposed scheme consists of the following components (a parameter sketch follows the list):
1) Lasers (mode-locked lasers required to produce the 4-wavelength signal)
2) Encoders consisting of the required components such as fiber delay lines, PRBS, external modulator and multiplexers
3) Multiplexers
4) Optical fiber of 60 km length
5) Demultiplexers
6) Decoders corresponding to each encoder
7) Receivers, etc.
8) BER analyzer
9) Eye-diagram analyzer
10) Signal analyzer
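A minimal parameter sketch of this setup, gathering the values mentioned in this excerpt; the field names are ours, not OptSim settings:

# Parameters of the proposed SPE O-CDMA simulation (illustrative only).
spe_ocdma_setup = {
    "users": 8,
    "bit_rate_gbps": 4,            # per-user rate from the abstract
    "wavelengths": 4,              # mode-locked laser lines
    "time_slots": 4,               # no guard band
    "chip_period_ps": 100,
    "fiber_length_km": 60,
    "analyzers": ["BER", "eye diagram", "signal"],
}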

Read More: Click here...

30
Quote
Author : Yash Pal Singh, Rakesh Rathi, Jyoti Gajrani, Vinesh Jain
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract— P2P networks play an important role in the current scenario of unstructured networks. P2P networks support various applications and have advantages over centralized search systems, which suffer from the problems of a single point of failure, low availability and denial-of-service attacks. Searching for the required data is a vital issue in a P2P network. Many methods have been implemented for searching in P2P networks, such as Flooding, Random Walk, Expanding Ring (or Iterative Deepening), K-Walker Random Walk and Two-Level K-Walker Random Walk. These methods rely on the property of randomness in the network; some generate large traffic while others take a long search time. A probabilistic approach with Two-Level K-Walker Random Walk has been implemented in this paper, and a comparative study has been done with other algorithms.

Index Terms— peer to peer, APS, random walk, K-walker, dynamic search, peersim, probabilistic, flooding. 

1   INTRODUCTION                                                                     
A P2P network is a collection of distributed, heterogeneous, autonomous and highly dynamic peers. Each participant peer shares a part of its own resources, such as processing power, storage capacity, software and files. P2P networks are dynamic in nature. The main types of P2P network are purely decentralized systems, partially centralized systems and hybrid decentralized systems. According to network structure, P2P networks are classified as unstructured, structured and loosely structured, based on the location of data with respect to the overlay topology. In the first case, data content is totally unrelated to the overlay topology. In structured P2P networks, data content is precisely placed with respect to the overlay topology; each node knows where data content resides, so searching can be done easily. In loosely structured networks, searching is done on the basis of routing hints. Structured P2P networks are not suitable for highly transient node populations, where nodes can leave and join at any time.

In this paper, a probabilistic approach with two-level TTL has been implemented for searching in an unstructured P2P network. We maintain a database in which each node initially assigns the same probability to all of its neighbors. The first time, we perform a K-Walker Random Walk, since every neighbor has the same probability. After the search terminates, we increase the probability of each neighbor on a successful path by some amount and decrease it on an unsuccessful path. The next time a node wants to issue a query, it chooses the node with the highest probability from the database and sends the query message to that node; this continues until the desired content is found or the TTL expires. In the Two-Level TTL case, at those nodes where the search is unsuccessful a new TTL1 is initialized, smaller than the previous TTL, and at every unsuccessful node the query message is exploded into K threads. This process continues like the K-Walker Random Walk until the desired content is found or TTL1 expires. We propose a new search algorithm that takes advantage of these two searching techniques [1]. The implementation of this algorithm and a comparative study with other algorithms are presented in this paper.

2 RELATED WORK
Various search protocols have been implemented for unstructured P2P networks. The basic searching techniques are blind search and knowledge-based search: the Flooding and Random Walk protocols use blind search, while Adaptive Probabilistic Search uses knowledge-based search. Most protocols target file-sharing applications on the Internet; common examples are Napster, Gnutella, Kazaa and BitTorrent. Gnutella is based on flooding and is used for file sharing, as is BitTorrent.

Flooding [5]: Each querying node sends the query message to its entire neighborhood, and these neighbors forward the query message to their own neighbors until the search succeeds or the TTL expires. If the desired content is far from the querying node, the number of messages generated for the query becomes very large.

Iterative Deepening [5]: The idea of iterative deepening is taken from artificial intelligence and applied to P2P searching: the querying node issues BFS searches in sequence with increasing depth limits. It terminates the query either when the maximum depth limit D has been reached or when the result is obtained. The same sequence of depth limits is used by all nodes, with the same time period W between consecutive BFS searches.

Local Indices [5]: A system-wide policy specifies the depths at which the query should be processed. All nodes at depths not listed in the policy simply forward the query to the next depth.

Routing Indices [5]: Routing indices guide the entire search process, as in intelligent search. Intelligent search uses information about past queries answered by neighbors, while routing indices store information about the document topics and the number of documents stored at each neighbor. The technique concentrates on content queries: queries based on file content rather than file name or file identifier.

Dynamic Search [10]: It maintains a user-defined threshold value. If the hop count is less than the threshold, flooding is performed; otherwise a k-walker random walk is performed. This generates less traffic than flooding.

K-Walker Random Walk and related schemes [5]: In the standard random walk algorithm, the query message (also called a walker) is forwarded to one randomly selected neighbor, which again randomly chooses one of its neighbors and forwards the query message to it; the procedure continues until the data is found. One walker is used in the standard random walk algorithm, which greatly reduces the message overhead but causes a longer search delay. In the k-walker random walk algorithm, the querying (source) node deploys k walkers, i.e. forwards k copies of the original message to k randomly selected neighbors; each intermediate node then forwards a single copy of the message to one randomly selected neighbor until the search succeeds or the TTL expires. Each walker takes its own random walk and talks with the querying node periodically to decide whether it should terminate. Soft states are used to forward different walkers for the same query to different neighbors. This algorithm tries to reduce the routing delay: on average, the total number of nodes reached by k random walkers in H hops equals the number reached by one walker in kH hops, making the routing delay about k times smaller. A similar scheme is the two-level random walk [7].

In this scheme, at level 1 the querying node deploys k1 random walkers with TTL1, which perform a random walk through intermediate nodes. At the nodes where TTL1 expires and the search is unsuccessful, level 2 starts: each walker forges k2 walkers with TTL2. The query is processed by all walkers along the path. For the same number of walkers (k = k1 + k2), this scheme has a longer search delay than the k-walker random walk but generates fewer duplicate messages. Another similar approach is modified random BFS, where the query is forwarded only to a randomly selected subset of neighbors; on receiving the query, each neighbor forwards it to a randomly selected subset of its own neighbors (excluding the querying node), and the same method continues until the query stop condition is satisfied. This approach may visit more nodes and has a higher query success rate than the k-walker random walk.

Adaptive Probabilistic Search [5][6]: It assumes that objects and their copies in the network follow a replication distribution for storage, and that the number of query requests for each object follows a query distribution. The process affects neither object placement nor the P2P overlay topology. APS is based on the k-walker random walk with probabilistic forwarding. The querying node deploys k walkers simultaneously and, on receiving the query, each node looks up its local storage for the desired object. A walker stops successfully once the object is found; otherwise it continues, and the query is forwarded to the best neighbor, the one with the highest probability value. The probability values are computed from the results of past queries and are updated based on the result of the current query. Query processing continues until all k walkers terminate, either in success or in failure (the TTL limit is reached).
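For concreteness, here is a minimal Python sketch of the k-walker random walk on an adjacency-list overlay; it is our simplified model (no periodic walker check-ins or soft states), not code from the paper:

import random

def k_walker_search(adjacency, objects, source, target, k=4, ttl=6):
    # adjacency: node -> list of neighbors; objects: node -> stored items.
    # Each of the k walkers forwards one query copy to a random neighbor
    # until the object is found or its TTL expires.
    for _ in range(k):
        node, hops = source, 0
        while hops < ttl:
            if target in objects[node]:
                return node, hops            # search succeeded
            node = random.choice(adjacency[node])
            hops += 1
    return None                              # all k walkers failed

# Tiny example: a ring of 6 peers with object "f" stored at node 3.
# (The walk is random, so the search can also fail and return None.)
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
stored = {i: set() for i in range(6)}
stored[3].add("f")
print(k_walker_search(ring, stored, source=0, target="f"))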

3 TWO-LEVEL TTL FOR UNSTRUCTURED P2P NETWORKS USING ADAPTIVE PROBABILISTIC SEARCH [1]

The performance of the APS algorithm can be increased by using a Two-Level Random Walk instead of a One-Level Random Walk (K-Walker Random Walk). In the Two-Level Random Walk we generate K1 threads, fewer than the K used in the K-Walker Random Walk. At the edge nodes where TTL1 expires and the search is unsuccessful, the second level starts: the K1 threads are exploded into K2 threads each, and a new TTL2 is initialized, smaller than TTL1. Since fewer threads are generated, the chance of collision decreases compared with other searching algorithms. The algorithm for the proposed technique is as follows:
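Since the listing itself is not reproduced in this excerpt, the following Python sketch is our reading of the description above: two TTL levels with APS-style probability bookkeeping. The update amount delta and the probability floor are our assumptions, not values from the paper.

import random

def pick_neighbor(prob, node):
    # Sample a neighbor weighted by its current APS probability value.
    neighbors = list(prob[node])
    return random.choices(neighbors, [prob[node][n] for n in neighbors])[0]

def update_path(prob, path, success, delta=0.1):
    # Raise probabilities along a successful walk, lower them otherwise.
    for node, neighbor in path:
        prob[node][neighbor] = max(0.05, prob[node][neighbor] +
                                   (delta if success else -delta))

def two_level_aps(adjacency, objects, prob, source, target,
                  k1=2, ttl1=4, k2=2, ttl2=2):
    for _ in range(k1):                      # first-level walkers
        path, node = [], source
        for _ in range(ttl1):
            if target in objects[node]:
                update_path(prob, path, True)
                return node
            nxt = pick_neighbor(prob, node)
            path.append((node, nxt))
            node = nxt
        for _ in range(k2):                  # TTL1 expired: fork level two
            n2, path2 = node, list(path)
            for _ in range(ttl2):
                if target in objects[n2]:
                    update_path(prob, path2, True)
                    return n2
                nxt = pick_neighbor(prob, n2)
                path2.append((n2, nxt))
                n2 = nxt
            update_path(prob, path2, False)  # this walk failed
    return None

# Probabilities start uniform and are learned across successive queries:
# prob = {n: {m: 1.0 for m in adjacency[n]} for n in adjacency}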

Read More: Click here...

Pages: 1 [2] 3 4 ... 22