### Recent Posts

91
##### Electronics / Solving Blasius Problem by Adomian Decomposition Method
« Last post by IJSER Content Writer on February 18, 2012, 02:11:07 am »
Author : V. Adanhounme, F.P. Codo
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518

Abstract - Using the Adomian decomposition method, we solved the Blasius problem for boundary-layer flows of pure fluids (non-porous domains) over a flat plate. We obtained the velocity components as sums of convergent series. Furthermore, we constructed the interval of admissible values of the shear stress on the plate surface.

Index Terms - Convergent series, Decomposition technique, Fluid flow, Shear-stress.

Nomenclature
1.  u    velocity in the x-direction
2.  U∞   velocity of the free stream
3.  v    velocity in the y-direction
4.  x    horizontal coordinate
5.  y    vertical coordinate
6.  μ    viscosity coefficient
7.  ρ    density
8.  ν    kinematic viscosity of the fluid

1  INTRODUCTION
The problem of flow past a flat plate is one of the classical problems in fluid mechanics; it was first solved by Blasius [5] by assuming a series solution. Later, numerical methods were used in [7] to obtain the solution of the boundary-layer equation. In [2], the first derivative with respect to y of the velocity component in the x direction at the point y = 0 is computed numerically for the estimation of the shear stress on the plate surface. Later, in [9], the problem above was solved by assuming a finite power series, where the objective is to determine the power series coefficients.

The purpose of this study is to obtain solutions of the Blasius problem for the two-dimensional boundary layer using the Adomian decomposition technique, and to compute the admissible values of the shear stress on the wall by imposing a constraint on the first derivative with respect to y of the velocity component in the x direction at the point y = 0.

2   MATHEMATICAL MODEL
The physical model considered here consists of a flat plate parallel to the x-axis, with its leading edge at x = 0, infinitely long downstream, and with constant free-stream velocity U∞. For the mathematical analysis we assume that the properties of the fluid, such as viscosity and conductivity, are, to a first approximation, constant. Under these assumptions the basic equations required for the analysis of the physical phenomenon are the equations of continuity and motion. According to the Boussinesq approximation, these equations take the following form [2]:

∂u/∂x + ∂v/∂y = 0        (1)
u ∂u/∂x + v ∂u/∂y = ν ∂²u/∂y²        (2)
with the boundary conditions imposed on the flow in [2]:
u = 0,  v = 0  at  y = 0;   u → U∞  as  y → ∞,        (3)
where ψ is a stream function related to the velocity components as:
u = ∂ψ/∂y,   v = −∂ψ/∂x.        (4)

3  ANALYTICAL SOLUTION AND CONVERGENCE RESULTS
In this section we provide the analytical solutions, i.e. the fluid velocity components as sums of convergent series, using the Adomian decomposition technique, and compute the admissible values of the shear stress on the plate surface.
Consider the stream function ψ defined by
ψ = √(ν U∞ x) f(η),   η = y √(U∞ / (ν x)),        (5)
where f is a function three times continuously differentiable on the interval [0, ∞) and U∞ a positive real constant. Then (1) and (2) with (3) are transformed into
f''' + (1/2) f f'' = 0,   f(0) = 0,  f'(0) = 0,  f'(∞) = 1,        (6)
where a prime stands for differentiation with respect to η.

Definition 3.1
The problem (6) is called the Blasius problem for boundary-layer flows of pure fluids (non-porous domains) over a flat plate.
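For orientation, the classical Blasius wall value f''(0) ≈ 0.332, which fixes the shear stress on the plate surface, can be recovered numerically by a standard shooting method. This is an independent cross-check, not the paper's Adomian computation; the step size, integration range and bracketing interval below are illustrative assumptions.

```python
# Numerical cross-check on the Blasius problem (not the paper's Adomian
# method): solve f''' + (1/2) f f'' = 0, f(0) = f'(0) = 0, f'(inf) = 1
# by shooting on the unknown wall value alpha = f''(0).

def fprime_at_infinity(alpha, eta_max=10.0, h=0.01):
    """Integrate the Blasius ODE with f''(0) = alpha by classical RK4
    and return f'(eta_max), which should approach 1 for the true alpha."""
    def rhs(s):
        f, fp, fpp = s
        return (fp, fpp, -0.5 * f * fpp)

    s = (0.0, 0.0, alpha)
    for _ in range(int(eta_max / h)):
        k1 = rhs(s)
        k2 = rhs(tuple(s[i] + 0.5 * h * k1[i] for i in range(3)))
        k3 = rhs(tuple(s[i] + 0.5 * h * k2[i] for i in range(3)))
        k4 = rhs(tuple(s[i] + h * k3[i] for i in range(3)))
        s = tuple(s[i] + (h / 6.0) * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                  for i in range(3))
    return s[1]

def blasius_wall_shear(lo=0.1, hi=1.0, iters=50):
    """Bisection on alpha so that f'(eta_max) = 1 (f' grows with alpha)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if fprime_at_infinity(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

alpha = blasius_wall_shear()  # classical value: f''(0) ~ 0.332
```

The wall shear stress then follows from f''(0) through the viscosity and the free-stream parameters.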
Let us transform (6) into a nonlinear integral equation. For this purpose we can write the equation in (6) as
(7)
Multiplying by an integrating factor and integrating the result from 0 to η, we reduce (7) to
where        (8)
Integrating (8) three times from 0 to η and taking into account the boundary conditions in (6), we reduce (8) to the nonlinear integral equation
(9)

which is a functional equation
where            (10)

Here N is a nonlinear operator from a Hilbert space H into H. In [4], Adomian developed a decomposition technique for solving nonlinear functional equations such as (10). We assume that (10) has a unique solution. The Adomian technique allows us to find the solution of (10) as an infinite series, using the following scheme:

, where

The proofs of convergence of the series are given below. Without loss of generality we have the following scheme:

By induction, we have

i.e.     ,

where the coefficients are real numbers. Then we obtain

We arrive at the following result
Lemma 4.1
The admissible values of the shear-stress  on the plate surface obtained in (20) belong to the open interval
(22)
for each given value of the parameters and for the given approximation precision.
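The decomposition scheme used in this section can be illustrated on a textbook initial-value problem (an illustrative example, not the Blasius computation itself): for u' = u², u(0) = 1, the equivalent integral equation u(t) = 1 + ∫₀ᵗ u² ds is solved with Adomian polynomials Aₙ = Σₖ uₖ u₍ₙ₋ₖ₎, which gives uₙ(t) = tⁿ and the exact sum 1/(1 − t) for |t| < 1.

```python
# Adomian decomposition sketch for u' = u^2, u(0) = 1 (illustrative example,
# not the Blasius equation itself). Each term u_n is stored as a list of
# polynomial coefficients in t: [c0, c1, c2, ...].

def poly_mul(p, q):
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_int(p):
    # integral from 0 to t: shifts coefficients up one degree
    return [0.0] + [c / (k + 1) for k, c in enumerate(p)]

def adomian_terms(n_terms):
    u = [[1.0]]                      # u_0(t) = 1 from the initial condition
    for n in range(n_terms - 1):
        # Adomian polynomial for N(u) = u^2: A_n = sum_k u_k * u_{n-k}
        A = [0.0]
        for k in range(n + 1):
            prod = poly_mul(u[k], u[n - k])
            A = [a + b for a, b in
                 zip(A + [0.0] * len(prod), prod + [0.0] * len(A))]
        u.append(poly_int(A))        # u_{n+1}(t) = integral_0^t A_n ds
    return u

def partial_sum(u, t):
    return sum(sum(c * t ** k for k, c in enumerate(term)) for term in u)

u = adomian_terms(10)
# each u_n equals t^n, so the series converges to 1/(1 - t) for |t| < 1
```

The same recursion, with the triple integral operator of (9) in place of the single integral, produces the series terms for the Blasius problem.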

5  CONCLUSION
In this paper, we have investigated the analytical solutions for the Blasius problem, which are the sums of convergent series, using the Adomian decomposition technique. Then we estimated the error by comparing the exact values of the shear stress on the plate surface obtained in this paper with the approximate values of the shear stress obtained in [2]. Doing so, we constructed the interval of admissible values of the shear stress on the plate surface.

92
##### Networking / Comparative Study of Financial Time Series Prediction by Artificial Neural Network
« Last post by IJSER Content Writer on February 18, 2012, 02:09:18 am »
Author : Arka Ghosh
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518

Abstract— Financial forecasting is an example of a signal processing problem that is challenging due to small sample sizes, high noise, non-stationarity and non-linearity, yet fast forecasting of stock market prices is very important for strategic business planning. The present study aims to develop a comparative predictive model, using a feedforward multilayer artificial neural network and a recurrent time-delay neural network, for financial time series prediction. The study is developed with the help of a historical stock price dataset made available by Google Finance. To develop the prediction model, the backpropagation method with gradient descent learning has been implemented. The neural network trained with this algorithm is found to be a skillful predictor for non-stationary, noisy financial time series.

Key Words— Financial forecasting, financial time series, feedforward multilayer artificial neural network, recurrent time-delay neural network, backpropagation, gradient descent.

I.   INTRODUCTION
Over the past fifteen years, a view has emerged that computing based on models inspired by our understanding of the structure and function of biological neural networks may hold the key to solving intelligent tasks by machines, such as noisy time series prediction [1]. A neural network is a massively parallel distributed processor that has a natural propensity for storing experiential knowledge and making it available for use. It resembles the brain in two respects: knowledge is acquired by the network through a learning process, and interneuron connection strengths known as synaptic weights are used to store the knowledge [2]. Moreover, the markets have recently become a more accessible investment tool, not only for strategic investors but for common people as well. Consequently they are not only related to macroeconomic parameters, but influence everyday life in a more direct way; they therefore constitute a mechanism with important and direct social impacts. The characteristic that all stock markets have in common is uncertainty about their short- and long-term future state. This feature is undesirable for the investor, but it is also unavoidable whenever the stock market is selected as the investment tool. The best one can do is to try to reduce this uncertainty, and stock market prediction (or forecasting) is one of the instruments in this process. We cannot exactly predict what will happen tomorrow, but from previous experience we can roughly predict it. This paper takes such a knowledge-based approach.

The accuracy of the ANN-based predictive system can be tuned with the help of different network architectures. The network consists of an input layer, a hidden layer and an output layer of neurons; the number of neurons per layer can be configured according to the required accuracy and throughput, and there is no hard-and-fast rule for this. The network can be trained using a sample training data set; such a neural network model is very useful for mapping unknown functional dependencies between input and output tuples. In this paper two types of neural network architecture, a feedforward multilayer network and a time-delay recurrent network, are used for prediction of the NASDAQ stock price. A comparative error study for both network architectures is presented in this paper.

In this paper the gradient descent backpropagation learning algorithm is used for supervised training of both network architectures. The backpropagation algorithm was developed by Paul Werbos in 1974 and was rediscovered independently by Rumelhart and Parker. In backpropagation learning, the network weights are first set to small random values; the network output is then calculated and compared with the desired output, and the difference between them defines the error. The goal of efficient network training is to minimize this error by iteratively tuning the network weights using the gradient descent method, which requires computing the gradient of the error surface in an iterative process.
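The training loop described above (random initial weights, forward pass, error computation, gradient-descent updates) can be sketched on a toy problem. The 2-2-1 architecture, XOR data, learning rate and epoch count below are illustrative assumptions, not the paper's stock-price network:

```python
import math, random

# Minimal gradient-descent backpropagation on a 2-2-1 sigmoid network
# (illustrative sketch; not the paper's NASDAQ network).
random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# toy training set: XOR, a classic non-linearly-separable problem
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

# weights start as small random values, as described in the text
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # hidden: 2 inputs + bias
W2 = [random.uniform(-1, 1) for _ in range(3)]                      # output: 2 hidden + bias

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in W1]
    y = sigmoid(W2[0] * h[0] + W2[1] * h[1] + W2[2])
    return h, y

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

def train(epochs=4000, lr=0.5):
    for _ in range(epochs):
        g1 = [[0.0] * 3 for _ in range(2)]
        g2 = [0.0] * 3
        for x, t in data:
            h, y = forward(x)
            d_out = (y - t) * y * (1.0 - y)   # output delta (gradient up to a constant)
            d_hid = [d_out * W2[j] * h[j] * (1.0 - h[j]) for j in range(2)]
            for j in range(2):
                g2[j] += d_out * h[j]
                g1[j][0] += d_hid[j] * x[0]
                g1[j][1] += d_hid[j] * x[1]
                g1[j][2] += d_hid[j]
            g2[2] += d_out
        for j in range(2):                     # batch gradient-descent update
            W2[j] -= lr * g2[j]
            for k in range(3):
                W1[j][k] -= lr * g1[j][k]
        W2[2] -= lr * g2[2]

loss_before = total_loss()
train()
loss_after = total_loss()   # the error decreases as weights follow the gradient
```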

ANN is a powerful tool widely used in soft computing for forecasting stock prices. The first stock forecasting approach was taken by White (1988), who used IBM daily stock prices to predict future stock values [3]. In developing a predictive model for forecasting the Tokyo stock market, Kimoto, Asakawa, Yoda, and Takeoka (1990) reported on the effectiveness of alternative learning algorithms and prediction methods using ANN [4]. Chiang, Urban, and Baldridge (1996) used ANN to forecast the end-of-year net asset value of mutual funds [5]. Trafalis (1999) used a feed-forward ANN to forecast the change in the S&P 500 index; in that model, the input values were univariate data consisting of weekly changes in 14 indicators [6]. Forecasting of the daily direction of change in the S&P 500 index was done by Choi, Lee, and Rhee (1995) [7]. Despite the widespread use of ANN in this domain, there are significant problems to be addressed. ANNs are data-driven models (White, 1989 [8]; Ripley, 1993 [9]; Cheng & Titterington, 1994 [10]), and consequently the underlying rules in the data are not always apparent (Zhang, Patuwo, & Hu, 1998 [11]). Also, the buried noise and complex dimensionality of stock market data make it difficult to learn or re-estimate the ANN parameters (Kim & Han, 2000 [12]). It is also difficult to come up with an ANN architecture that can be used for all domains. In addition, ANN occasionally suffers from the overfitting problem (Romahi & Shen, 2000 [13]) [14].

II.   DATA ANALYSIS AND PROBLEM DESCRIPTION

This paper develops two comparative ANN models step-by-step to predict the stock price over a financial time series, using data available at the website http://www.google.com/finance. The problem described in this paper is a predictive problem. Four predictors have been used with one predictand. The four predictors are listed below:

• Stock open price
• Stock price high
• Stock price low
• Stock close price

The predictand is next stock opening price.

All four predictors from year X are used to predict the stock opening price of year (X+1). The whole dataset comprises 1460 days of NASDAQ stock data. The first subset contains the early 730 days of data (open, high, low, close, volume), which is the input series to the neural network predictor. The second subset contains the later 730 days of data (open only), which is the target series. The network thus learns the dynamic relationship between the five previous parameters (open, high, low, close, volume) and the one final parameter (open), which it will predict in future.
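A minimal sketch of assembling predictor/predictand pairs from daily records; the day-to-next-day pairing and field names here are simplified assumptions (the paper pairs the year-X series with the year-(X+1) series):

```python
# Build (predictors, predictand) pairs from daily OHLC records:
# the features of day t predict the opening price of day t+1
# (a simplified sketch of the input/target series construction).

def make_pairs(days):
    """days: list of dicts with 'open', 'high', 'low', 'close' keys."""
    pairs = []
    for today, tomorrow in zip(days, days[1:]):
        features = [today["open"], today["high"], today["low"], today["close"]]
        target = tomorrow["open"]          # predictand: next opening price
        pairs.append((features, target))
    return pairs

# tiny synthetic example (hypothetical prices, not real NASDAQ data)
days = [
    {"open": 10.0, "high": 10.5, "low": 9.8, "close": 10.2},
    {"open": 10.2, "high": 10.8, "low": 10.0, "close": 10.6},
    {"open": 10.6, "high": 11.0, "low": 10.4, "close": 10.9},
]
pairs = make_pairs(days)   # 2 pairs from 3 days
```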

A.   Data Preprocessing

Once the historical stock prices are gathered, the data are selected for training, testing and simulating the network. In this project we took 4 years of historical prices of a stock, i.e. a total of 1460 working days of data. We performed R/S analysis (Hurst exponent analysis) over these data to assess predictability. The Hurst exponent (H) is a statistical measure used to classify time series: (1) H = 0.5 indicates a random series; (2) 0 < H < 0.5 indicates an anti-persistent series; (3) 0.5 < H < 1 indicates a persistent series, and the larger the H value, the stronger the trend. An anti-persistent series has a "mean-reverting" characteristic, meaning an up value is more likely to be followed by a down value, and vice versa; the strength of mean reversion increases as H approaches 0.0. A persistent series is trend reinforcing, meaning the direction (up or down compared to the last value) of the next value is more likely to be the same as that of the current value; the strength of the trend increases as H approaches 1.0. Most economic and financial time series are persistent, with H > 0.5. We therefore took the time series dataset with Hurst exponent > 0.5, for persistence and good predictability.
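The R/S (Hurst exponent) screening described above can be sketched as follows; the window sizes, fitting procedure and synthetic test series are illustrative assumptions:

```python
import math, random

# Rescaled-range (R/S) estimate of the Hurst exponent, used above to
# screen series for persistence (window sizes are illustrative).

def hurst_rs(series, window_sizes=(8, 16, 32, 64)):
    logs_n, logs_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(series) - n + 1, n):
            chunk = series[start:start + n]
            mean = sum(chunk) / n
            dev = [x - mean for x in chunk]
            # cumulative deviations from the chunk mean
            z, cum = [], 0.0
            for d in dev:
                cum += d
                z.append(cum)
            r = max(z) - min(z)                          # rescaled range R
            s = math.sqrt(sum(d * d for d in dev) / n)   # standard deviation S
            if s > 0:
                rs_vals.append(r / s)
        logs_n.append(math.log(n))
        logs_rs.append(math.log(sum(rs_vals) / len(rs_vals)))
    # least-squares slope of log(R/S) against log(n) estimates H
    mx = sum(logs_n) / len(logs_n)
    my = sum(logs_rs) / len(logs_rs)
    num = sum((x - mx) * (y - my) for x, y in zip(logs_n, logs_rs))
    den = sum((x - mx) ** 2 for x in logs_n)
    return num / den

random.seed(1)
noise = [random.gauss(0, 1) for _ in range(512)]               # H near 0.5
trend = [0.05 * i + random.gauss(0, 0.2) for i in range(512)]  # H near 1
h_noise = hurst_rs(noise)
h_trend = hurst_rs(trend)
```

As expected, the trending series scores much higher than the white-noise series, which is the property used above to accept a dataset as predictable.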

93
##### Engineering, IT, Algorithms / Anomaly Detection through NN Hybrid Learning with Data Transformation Analysis
« Last post by IJSER Content Writer on February 18, 2012, 02:08:02 am »
Author : Saima Munawar, Mariam Nosheen and Dr.Haroon Atique Babri
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518

Abstract— An intrusion detection system is a vital part of a computer security system, commonly used for precaution and detection. It is built as a classifier, or a descriptive or predictive model, for proficient classification of normal versus abnormal behavior of IP packets. This paper presents a solution regarding the proper handling of data transformation methods and the importance of analyzing the complete data set, applied to hybrid neural network approaches used to cluster and classify normal and abnormal behavior, in order to improve the accuracy of a network-based anomaly detection classifier. Neural networks require data in numerical form, but IP connections or network packets have symbolic features that are difficult to handle without proper data transformation analysis. For this reason, the non-redundant NSL-KDD CUP data set was used. The experimental results show that the indicator variable method is more effective than both the conditional probabilities and the arbitrary assignment methods, as measured by accuracy and balanced error rate.

Index Terms — ANN, Anomaly Detection, Self Organizing Map, Backpropagation network, Indicator variables, Conditional probability

1   INTRODUCTION
In computer security, network administrators always recommend preventive action as the better cure for any system. Intrusion Detection Systems (IDS) are classified into three categories: host-based, network-based and vulnerability-assessment [1]. Signature-based detection and anomaly detection are the two basic models of intrusion detection. Signature-based detection can only detect attacks through known intrusions and cannot detect novel behavior; it is mainly used in commercial tools and requires updating the database with new attacks. Anomaly intrusion detection can resolve these limitations of signature-based detection and is used to detect new attacks by searching for abnormality [2], [3]. Anomaly detection issues have numerous possibilities that are yet unexplored [4]. Network and computer security is a significant issue for every organization demanding security. Prevention, detection and response are the three basic foundations of network security, and many researchers emphasize preventive action over detection and response [5]. With the increasing demand for network security, devices like firewalls and intrusion detection systems are used to control abnormal packet accessibility. Abnormal packets basically violate Internet protocol standards and are used to crash systems [6]. For this reason, better intrusion detection devices are being built for prevention and accurate detection of normal and abnormal packets, and to reduce the false alarm rate. IDS are basically devoted to fulfilling this purpose by monitoring the system intelligently.
As far as access control points are concerned, a firewall is good, but it is not designed to take action against intrusions; that is why most security experts emphasize IDS located before and after the firewall [7], [8]. Many researchers have been improving intrusion detection systems through different research areas such as statistics, machine learning, data mining, information theory and spectral theory [2], [3], [4]. The purpose of this research is to provide a hybrid artificial neural network learning design approach for an anomaly intrusion detection classifier system. Since the symbolic features of the IP data set cannot be handled directly, two data transformation methods, indicator variables and conditional probabilities, are considered, which are effective in improving classifier performance; the data are processed through a hybrid technique combining the self-organizing map and backpropagation neural networks. The data transformation is applied to nine selected features of the NSL IP data set, prepared for an anomaly detection classifier used for LAN security.

This paper is organized in five sections. Section 2 reviews the background literature of the related research. Section 3 provides a detailed analysis of the proposed research methodology and the algorithms of SOM and BPN, and discusses their training and testing results. Section 4 provides a detailed analysis of the experimental results and a comparison of how the two transformation methods affect the performance of the classifier. Finally, Section 5 presents the conclusion and discusses future directions in this domain.

2   RELATED STUDY
2.1 Hybrid learning use in misuse and anomaly detection

Hybrid approaches have been used to solve anomaly intrusion detection problems. Hamdan et al. [9] compared four techniques: support vector machines and neural networks as supervised learning, and self-organizing maps and fuzzy logic as unsupervised learning. They only proposed descriptions of these techniques and did not include the methodology or numerical analysis of the applied techniques. In another approach, an artificial immune system is used for detection and a self-organizing map for classification; it emphasizes higher-level information output rather than low-level output, which is more beneficial to security analysts when analyzing reports. There, the KDD CUP 1999 data set is used as input, with a special focus on two types of attacks: denial-of-service and user-to-root [10]. M. Bahrololum et al. [11] presented a design approach to be explained further in future enhancements. They described an introduction to SOM and the backpropagation algorithm, the KDD CUP data set features, training and testing data, and an experiment table view; but they did not mention how the data set was arranged, which software was used, how the experiment was implemented, how the techniques were applied to the data set, or what methods were used to evaluate the results. They only provided a proposal and discussed some design issues with flow diagrams. Hayoung et al. [12] proposed new labeling methods for this domain; but labeling is supervised learning, and a large analysis is required for the correlation between features. Moreover, they reported only the detection time and not the labeling time, whereas in a real-time system the total time for completing all processes is what matters.

2.2 Analysis and Data Transformation Processes

Data analysis and preprocessing are a core part of an artificial neural network architecture for producing accurate results. Anomaly detection has attracted the attention of many researchers during the last decade. For this reason, many researchers have considered not only new algorithms but also analysis of the data sets used for training and testing classifiers. The KDD CUP 99 data set is most often used for intrusion detection problems. It has 41 features. There are three basic feature groups: individual TCP connection features, content features, and traffic features, comprising 7 symbolic and 34 continuous attributes [13]. Tavallaee et al. presented a detailed and critical review of the KDD CUP 99 data set. They discussed its problems and resolved two issues that affect performance and lead to poor evaluation of anomaly detection approaches. They proposed a new data set, NSL-KDD, which includes selected records of KDD CUP 99 with redundant records removed. The data set is provided in ARFF (attribute relation file format). The authors claimed that this data set will help researchers solve anomaly detection problems [14]. Preprocessing is applied before neural network algorithms are run, because these algorithms require quantitative data instead of qualitative information. The most commonly used conversion method is arbitrary assignment; in response to criticism of this method, three other approaches are used in machine learning. E. Hernandez et al. presented three methods for symbolic feature conversion applied to the KDD CUP data set. They described these techniques in detail and compared them as applied to different feedforward neural networks and support vector machines. They claimed that these three conversion methods improve the prediction ability of the classifier.
These methods, used for preprocessing (converting symbolic attributes into numeric form), are indicator variables, conditional probabilities, and the SSV (separability split value) criterion-based method [15].

3. PROPOSED EXPERIMENTAL METHODOLOGY
This section is divided into these main processes: data analysis, preprocessing, modeling of clustering and classification, and performance evaluation.

3.1 Data Analysis
The NSL-KDD CUP data set is reasonable and improves evaluation. This data set is offline and is provided for anomaly detection classification research, for better evaluation of classifiers. It also gives consistent and more comparable results [14], [17], [18].

3.2 Feature selection
It is difficult to select the important features for detecting and classifying normal packets versus attacks, and much research is being done on feature selection for anomaly detection problems. The basic question is which features should be selected to improve the classification rate and how they relate to particular types of attacks. In this research, the 9 basic attributes of individual TCP connections are used first. They consist of duration, protocol type, service, source bytes, destination bytes, flag, land, wrong fragment and urgent. These features include 3 symbolic and 5 continuous attributes. The protocol and service are the most important features for detecting attacks [13], [14]. These features were mainly selected because this subset has the maximum number of symbolic features, which suits the study of symbolic feature preprocessing.

3.3 Preprocessing
The given input data set has symbolic and continuous attributes. These data need to be converted into numerical form for processing by neural network algorithms. Researchers seek the best data transformation techniques to apply to selected features to improve classifier performance. The main purpose is to show how different preprocessing methods affect the accuracy of different machine learning tasks. Besides modification of the algorithm, it is also important to consider data transformation and feature selection methods according to the demands of the machine learning task and training. The details of the data transformation methods used in this research are given below.
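Of the transformation methods named above, the indicator-variable method can be sketched as a one-hot encoding; the protocol values shown are examples of KDD-style symbolic attributes, and the exact vocabulary is an assumption:

```python
# Indicator-variable (one-hot) transformation of a symbolic feature:
# each symbolic value becomes its own 0/1 column, so neural network
# inputs stay numeric without imposing an artificial ordering
# (contrast with arbitrary assignment, e.g. tcp=1, udp=2, icmp=3).

def indicator_encode(values, vocabulary=None):
    if vocabulary is None:
        vocabulary = sorted(set(values))
    index = {v: i for i, v in enumerate(vocabulary)}
    encoded = []
    for v in values:
        row = [0] * len(vocabulary)
        row[index[v]] = 1
        encoded.append(row)
    return vocabulary, encoded

protocols = ["tcp", "udp", "tcp", "icmp"]   # example protocol_type values
vocab, rows = indicator_encode(protocols)
# vocab == ['icmp', 'tcp', 'udp'], and 'tcp' maps to [0, 1, 0]
```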

94
##### Engineering, IT, Algorithms / Method of Speech Signal Compression in Speaker Identification Systems
« Last post by IJSER Content Writer on February 18, 2012, 02:06:47 am »
Author : A.Raimy, K.Konate, NM Bykov
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518

Abstract— In this paper we present a technique for improving the efficacy of a speech signal compression algorithm without loss of the individual features of speech production. Compression in this case means deleting from the digital signal those quantization steps that can be predicted. We propose to decrease the number of those quantization steps using a modified linear prediction algorithm with variable order. This allows us to decrease compression time and save computing resources.

Index Terms— speech signal compression, quantization steps, linear prediction algorithm, computer resources.

1   INTRODUCTION
THE task of efficient representation of the speech signal is one of the vital tasks in speaker identification. For example, an automatic speaker recognition system may be installed on a LAN or WAN server, which authorizes a terminal to access the network according to the voice of the subscriber. There are two ways of processing information in this case:
1) get the identity features of the speaker from the speech signal on the subscriber’s terminal and transfer them to the server for a decision regarding the possibility of admission;
2) compress the speech signal, without losing the information about the speaker's identity, in the form of a password wav-file, and transfer it across the network to the server, where the identification procedure is carried out.
One of the advantages of the first approach is the reduction of the transmission time over the network. Its main drawbacks are that it reduces the confidentiality of the speaker identification procedures, and that a system for primary analysis and description of the speaker's signal features must be installed on the terminals. Thus, the second approach is more effective for information processing with regard to the number of computations required for the compression, and allows the use of ASP technologies for the selection of informative features and for decision-making.

Analysis of known works
According to well-known methods of signal compression, and given the statistical characteristics of the speech signal, the parameters of the analog-to-digital converter (ADC) are chosen according to the rules presented in [1, 2]: the sampling frequency is determined by the upper limit frequency of the signal, the quantization range by the dispersion, and the quantization step by the signal-to-noise ratio and the required precision. Since the speech signal is not stationary, the parameters of the ADC are chosen conservatively for the worst-case situation, which is rarely encountered. As a result, the inherent redundancy of the speech signal is compounded by the redundancy of the discrete transformation, and a new problem arises: eliminating the ADC's redundancy. In the numerous variants of pulse modulation and adaptive coding that are used today to eliminate encoding redundancy, the sample rate remains constant and equals the Nyquist frequency, and redundancy is eliminated by analyzing the values of neighboring signal samples.

The aim of the research
The aim of the research is to increase the efficiency of the speech signal compression algorithm, without losing the information related to the personal peculiarities of the speaker, by removing those samples that can be predicted.

2 THEORETICAL FOUNDATIONS OF THE PROPOSED METHOD

In this work we propose to reduce the number of signal samples by using a modified method of variable-order linear prediction. The peculiarity of the proposed method consists in a two-step processing of the speech signal, which reduces the time necessary for wav-file compression. The process is carried out in two steps:
1. Preliminary compression;
2. Final compression.
At the first stage the wav-file is processed using an original technique, which consists in approximating the speech signal by a polyline, with the possibility of setting the degree of its deviation from the original signal. At the second stage, the wav-file areas that were not affected during the initial compression procedure are approximated by a polynomial whose order is determined by the accuracy required to restore the speech signal from the archive file.
Since the speech signal is a continuous function s(t) whose spectrum is limited by the upper frequency F, it is defined by the succession of its samples, whose time interval is calculated using the following formula:

Δt = 1/(2F).

Thus the signal s(t) can be described as follows:

s(t) = Σₙ s(nΔt) · sin(2πF(t − nΔt)) / (2πF(t − nΔt)),

where sin(x)/x is the sample function and nΔt assumes discrete values.

For a limited duration T of the speech signal, the number of signal samples is defined by the expression N = 2FT.

Taking into account the quasi-stationarity of the signal, and also the fact that data collection systems are not critical with respect to real-time processing, a method of reducing the encoding redundancy of the speech signal at the ADC has been developed.
Minimization of the error of the restored signal consists in finding those fixed values of the argument that ensure convergence of the broken line through the given vertices towards the function, so that over the entire range of the argument the absolute error does not exceed permissible values.
The function   in these points can be presented as follows:

for  ,
for  ,
,
for ,

where  can be defined as follows :

,

In general:
,
where

The approximation error is determined by the remainder term of the interpolation formula. In this case, the segment of the line within the time interval is defined by the expression:

and the remainder term of the function expansion on the same interval will be:

where the second derivative of the given function is taken within the interval.
If it is known that  and   are maximal, then
.
Letting , we get the formula for the sampling interval
.
Assuming the upper frequency of the signal bandwidth is defined, we can determine the deviation of the real signal value from the predicted one. Based on the above, an algorithm implementing the procedure for pre-compression of voice information was created. It includes the following steps:
1. Set the level of allowable absolute error of the recovered signal;
2. Set the minimum size of the compression buffer;
3. For the current point, the prediction coefficient is determined;
4. If the deviation of the coefficient satisfies the inequality, we include the current sample in the compression buffer, increase the buffer counter by 1 and go to Step 3; if the inequality is not fulfilled, then we check the buffer counter: if it is below the minimum size, we reset it and go to Step 3; otherwise the compression is fulfilled;
5. If the end of the wav-file has not been reached, go to Step 3.
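A rough sketch of the pre-compression stage: keep only those samples needed so that a polyline through the kept samples stays within an allowable absolute error ε. The greedy segmentation below simplifies the buffer logic of steps 1-5, and all parameter names are illustrative:

```python
import math

# Polyline pre-compression sketch: drop samples that a straight line
# between kept samples predicts to within eps (simplified version of
# the buffer-based procedure described in the text).

def compress(s, eps):
    """Return indices of samples to keep; intermediate samples are
    reproducible by linear interpolation with absolute error <= eps."""
    keep = [0]
    anchor = 0
    for i in range(2, len(s)):
        # would a chord from the last kept sample to i stay within eps?
        ok = all(abs(s[anchor] + (s[i] - s[anchor]) * (k - anchor) / (i - anchor)
                     - s[k]) <= eps
                 for k in range(anchor + 1, i))
        if not ok:
            keep.append(i - 1)   # i-1 was the last index that still fit
            anchor = i - 1
    keep.append(len(s) - 1)
    return keep

def decompress(keep, s):
    """Rebuild the full signal from the kept samples by interpolation
    (only the (index, value) pairs at `keep` would be stored)."""
    out = []
    for a, b in zip(keep, keep[1:]):
        for k in range(a, b):
            out.append(s[a] + (s[b] - s[a]) * (k - a) / (b - a))
    out.append(s[keep[-1]])
    return out

# smooth test signal: a slowly varying sine is highly predictable
s = [math.sin(0.05 * i) for i in range(200)]
kept = compress(s, 0.01)
rec = decompress(kept, s)
```

By construction the reconstruction error never exceeds ε, while the number of stored samples drops sharply for smooth, predictable stretches of signal.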
Linear prediction is used for the realization of the second step of compression [3, 4]. The signal is presented in digital form s(n), n = 1, ..., N, where N is the number of signal samples, obtained by sampling at a certain frequency F. This signal can be presented as a linear combination of the preceding values of the signal and some input u(n):

s(n) = Σ_{k=1..p} a_k s(n − k) + G u(n),

where G is the amplification coefficient and p is the order of prediction.
Then, knowing the values of signal , the problem reduces to searching the coefficients   and . Concerning the estimate, we will use the least square method assuming the signal   as deterministic.
The values of signal   will be expressed through his estimating values   by the following formula :
.
Then the prediction error can be described as follows:

Using the least-squares method, the parameters  are selected so as to minimize the average or the sum of squares of the prediction error. In order to find the coefficients , let us use the matrix method [5,7] known as the Durbin method.
Calculation of the linear prediction coefficients and the prediction error is performed by the following algorithm:
1. The speech signal  is segmented into stationary intervals;
2. For each interval, a system of linear equations is formed and solved by a matrix method or by the Durbin method using the autocorrelation function (the method is selected by the user);
3. The prediction error is calculated.

95
##### Engineering, IT, Algorithms / FPGA Prototyping of Hardware Implementation of CORDIC Algorithm
« Last post by IJSER Content Writer on February 18, 2012, 02:05:34 am »
Quote
Author : Er. Manoj Arora, Er. R S Chauhan, Er.Lalit Bagga
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518

Abstract- In 1959 J. E. Volder presented a new algorithm for the real-time solution of the equations arising in navigation systems. This algorithm enabled the replacement of analog navigation systems by digital ones. The CORDIC algorithm is used for the fast calculation of elementary functions such as multiplication, division, trigonometric and logarithmic functions, and for conversions such as rectangular-to-polar coordinates and between BCD and binary-coded information. At present the CORDIC algorithm has a number of applications in communication, 3-D graphics, signal processing and more. This review paper presents a prototype hardware implementation of the CORDIC algorithm using a Spartan-II series FPGA, with constraints on area efficiency and throughput architecture.

Index Terms : CORDIC; FPGA; Discrete Fourier Transform (DFT); Discrete Cosine Transform (DCT); Iterative CORDIC; Pipelined CORDIC; SVD.

1 INTRODUCTION
Co-ordinate Rotation Digital Computer is abbreviated as CORDIC. The main concept of this algorithm is based on the simple and long-standing fundamentals of two-dimensional geometry. The first description of the iterative approach of this algorithm was provided by Jack E. Volder in 1959 [1]. The CORDIC algorithm provides an efficient way of rotating vectors in a plane by simple shift-add operations to estimate basic elementary functions such as trigonometric operations, multiplication and division, as well as logarithmic functions, square roots and exponential functions. Most applications, whether in wireless communication or in digital signal processing, are based on microprocessors, which make use of a single instruction set and a range of addressing modes. Although these processors are cost efficient and offer extreme flexibility, they are not suited to some of these applications. For most of them the CORDIC algorithm is a well-suited alternative, relying on simple shift-and-add hardware. Pocket calculators and DSP blocks such as the FFT, DCT and demodulators are common fields where the CORDIC algorithm is found.
In 1971 CORDIC-based computing received renewed attention when John Walther showed that, by varying a few simple parameters, it could be used as a single algorithm for the implementation of most mathematical functions. During this period Cochran developed various algorithms and showed that CORDIC is a much better approach for scientific calculator applications. The popularity of CORDIC was enhanced thereafter, mainly due to its potential for efficient and low-cost implementation of a large class of applications, including the generation of trigonometric, logarithmic and transcendental elementary functions; complex number multiplication; eigenvalue computation; matrix inversion; solution of linear systems; and singular value decomposition (SVD) for signal processing, image processing and general scientific computation. Some other popular and upcoming applications are:
1) Direct frequency synthesis, digital modulation and coding for   speech/music synthesis and communication;
2) Direct and inverse kinematics computation for robot manipulation;
3) Planar and three-dimensional vector rotation for graphics and animation.
Although the CORDIC algorithm is not a very fast algorithm, it is widely used because of its very simple implementation; moreover, the same architecture, based on simple shift-add operations, can be used for all of these applications.

2 CORDIC ALGORITHM
CORDIC is an acronym for COordinate Rotation DIgital Computer. The CORDIC algorithm is used for the real-time evaluation of exponential and logarithmic functions using iterative rotation of the input vector. The rotation of a given vector (xi, yi) is realized by means of a sequence of rotations with fixed angles, which results in an overall rotation through a given angle or in a final angular argument of zero. Fig. 1 shows the computing steps involved in the CORDIC algorithm.

In Fig. 1 [1] the angle αi is the rotation angle for each iteration, defined by the following equation:

αi = tan^-1(2^-i)                                                                (1)

Fig.1 CORDIC computing step
So this angular movement of the vector can easily be achieved by the simple process of shifting and adding. Now consider the iterative equations shown below:

xi+1 = xi cos αi – yi sin αi
yi+1 = xi sin αi + yi  cosαi                                                                    (2)
Using tan αi = 2^-i from equation (1), equation (2) can be rewritten as

xi+1 = cos αi (xi - yi tan αi)
yi+1 = cos αi (yi + xi tan αi)
Here we define the scale factor Ki as
Ki = cos αi = 1/√(1+2^-2i)
Writing the input vector in polar form as (xi, yi) = (Ri cos θ, Ri sin θ), the two equations above become

xi+1 = Ri cos( θ + αi )
yi+1 = Ri sin( θ + αi )                        (3)

or, equivalently,
xi+1 = Ki (xi - 2^-i yi)
yi+1 = Ki (yi + 2^-i xi)
As shown in the above equations, the direction of rotation may be clockwise or anticlockwise and may differ between iterations; for convenience we define a binary variable di to identify the direction. It can equal either +1 or -1. Putting di into the above equations we get:

xi+1 = ki (xi  - di 2-i   yi)
yi+1 = ki (yi + di 2-i  xi)                                                                                (4)

The value of di depends on the direction of rotation: if we move clockwise then di is +1, otherwise -1. These iterations are combinations of elementary operations: addition, subtraction, shifting and table look-up. No multiplication or division functions are required in the CORDIC operation.
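As a hedged illustration, the iterations of equation (4) can be sketched in software. Deferring the scale factor Ki to a single correction at the end is a common implementation choice, and the function names here are assumptions, not taken from the paper:

```python
# Minimal sketch of CORDIC in rotation mode, following equation (4):
#   x_{i+1} = x_i - d_i 2^-i y_i,   y_{i+1} = y_i + d_i 2^-i x_i,
# with the accumulated scale factor applied once after the loop.

import math

def cordic_rotate(angle, iterations=32):
    """Rotate the unit vector (1, 0) by `angle` radians; returns
    approximately (cos(angle), sin(angle))."""
    x, y, z = 1.0, 0.0, angle
    K = 1.0                                    # accumulated scale factor
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0            # drive residual angle z to zero
        x, y = x - d * y * 2.0**-i, y + d * x * 2.0**-i
        z -= d * math.atan(2.0**-i)            # alpha_i = tan^-1(2^-i)
        K *= 1.0 / math.sqrt(1.0 + 2.0**(-2 * i))
    return x * K, y * K                        # undo the accumulated gain

c, s = cordic_rotate(math.pi / 6)
print(c, s)   # close to cos(30 deg) and sin(30 deg)
```

In hardware the per-iteration multiplications by 2^-i become wire shifts, and the constant K is usually folded into the initial value instead of a final multiply.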
In the CORDIC algorithm, a number of microrotations are combined in different ways to realize different functions. This is achieved by properly controlling the direction of the successive microrotations. On the basis of this control, CORDIC can be divided into two modes, and the control of the successive microrotations can be achieved in the following two ways:

Vectoring mode: in this mode the y-component of the input vector is forced to zero; this yields the computation of the magnitude and phase of the input vector.
Rotation mode: in this mode the θ-component is forced to zero; this yields the computation of a plane rotation of the input vector by a given input phase θ0.

2.1 Vectoring mode
As written earlier, in the vectoring mode of the CORDIC algorithm the magnitude and the phase of the input vector are calculated. The y-component is forced to zero, which means the input vector (x0, y0) is rotated towards the x-axis. The CORDIC iteration in vectoring mode is therefore controlled by the sign of the y-component (together with the x-component): the rotator rotates the input vector through whatever angle is required to align the result with the x-axis direction.
So in the vectoring mode the CORDIC equations are:
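A hedged software sketch of the vectoring mode follows; the loop mirrors equation (4) with the sign di chosen from the y-component, and the names and structure are illustrative assumptions:

```python
# Sketch of CORDIC in vectoring mode: y is driven to zero, so the final
# x (after scale correction) gives the magnitude and the accumulated z
# gives the phase of the input vector.

import math

def cordic_vector(x0, y0, iterations=32):
    """Return (magnitude, phase) of the vector (x0, y0), for x0 > 0."""
    x, y, z = x0, y0, 0.0
    K = 1.0
    for i in range(iterations):
        d = -1.0 if y >= 0 else 1.0            # rotate toward the x-axis
        x, y = x - d * y * 2.0**-i, y + d * x * 2.0**-i
        z -= d * math.atan(2.0**-i)            # accumulate the rotation angle
        K *= 1.0 / math.sqrt(1.0 + 2.0**(-2 * i))
    return x * K, z

mag, phase = cordic_vector(3.0, 4.0)
print(mag, phase)   # close to 5.0 and atan(4/3)
```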

96
##### Networking / Eight User, 4Gb/s, Spectral Phase-Encoded OCDMA System in time domain for Metrop
« Last post by IJSER Content Writer on February 18, 2012, 02:04:22 am »
Quote
Author : Savita R.Bhosale Dr. S. L. Nalbalwar and Dr. S.B.Deosarkar
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518

Abstract: In an optical code division multiple access (OCDMA) system, many users share the same transmission medium by assigning a unique pseudo-random optical code (OC) to each user. OCDMA is attractive for next-generation broadband access networks due to its fully asynchronous transmission with low-latency access, soft capacity on demand, protocol transparency and simplified network management, as well as increased flexibility of QoS control and enhanced confidentiality in the network. In this paper we propose a technique using spectral phase encoding in the time domain for eight users. The technique proves effective in handling eight users at a 4 Gb/s bit rate for a Metropolitan Area Network (MAN). Results indicate a significant improvement in Bit Error Rate (BER) and a very high quality factor, reflecting the Quality of Service (QoS). In our analysis we have used Pseudo-Orthogonal (PSO) codes. The simulations are carried out using OptSim (RSOFT).

Keywords: MAI, OCDMA, OOC, PSO, QoS, BER, PON, ISD, CD.

1  INTRODUCTION
OPTICAL code division multiple access (OCDMA), where users share the same transmission medium by assigning a unique pseudo-random optical code (OC) to each user, is attractive for next-generation broadband access networks due to its fully asynchronous transmission with low-latency access, soft capacity on demand, protocol transparency, simplified network management and increased flexibility of QoS control [1~3]. In addition, since the data are encoded into pseudo-random OCs during transmission, it also has the potential to enhance confidentiality in the network [4~6]. Figure 1 illustrates the basic architecture and working principle of an OCDMA passive optical network (PON). In an OCDMA-PON, the data are encoded into a pseudo-random OC by the OCDMA encoder at the transmitter, and multiple users share the same transmission medium by assigning different OCs to different users.

At the receiver, the OCDMA decoder recognizes the OCs by performing matched filtering, where the auto-correlation for target OC produces high level output, while the cross-correlation for undesired OC produces low level output. Finally, the original data can be recovered after electrical thresholding. Recently, coherent OCDMA technique with ultra-short optical pulses is receiving much attention for the overall superior performance over incoherent OCDMA and the development of compact and reliable en/decoders (E/D) [7~12]. In coherent OCDMA, encoding and decoding are performed either in time domain or in spectral domain based on the phase and amplitude of optical field instead of its intensity.

Fig.1. Working principle of an OCDMA network

In coherent time-spreading (TS) OCDMA, the encoding and decoding are performed in the time domain. In such a system, encoding spreads a short optical pulse in time with a phase-shift pattern representing a specific OC. Decoding performs a convolution on the incoming OC using a decoder that has the inverse phase-shift pattern of the encoder, generating a high-level auto-correlation and low-level cross-correlations.

2 SIMULATION SET-UP
The encoders use delay-line arrays providing delays in integer multiples of the chip time. The placement of the delay-line arrays and the amount of each delay and phase shift are dictated by the specific signatures. PSO matrix codes are constructed using a spanning ruler (optimum Golomb ruler): a (0,1) pulse sequence in which the distances between any pair of pulses are non-repeating integers, so the distances between nearest neighbors, next-nearest neighbors, etc. can be depicted as a difference triangle with unique integer entries. The ruler-to-matrix transformation increases the cardinality (code-set size) from one (1) to four (4) and the ISD (= cardinality/CD) from 1/26 to 4/32 = 1/8. The ISD translates to bit/s/Hz when the codes are associated with a data rate and the code dimension is translated into the bandwidth expansion associated with the codes, as follows:

ISD = throughput / bandwidth required

    = (cardinality × data rate) / ((1/Tb) × bandwidth expansion)

    = (n × r × R) / (R × CD)

    = (n × r) / CD

The enhanced cardinality and ISD, while preserving the OOC property, are general results of the ruler-to-matrix transformation.
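As a quick sanity check of the figures quoted above, the final expression ISD = n·r/CD can be evaluated directly; the assumption r = 1 (one bit per code word per bit period) and the function name are illustrative:

```python
# Tiny check of the quoted ISD values: 1/26 before the ruler-to-matrix
# transformation (n = 1, CD = 26) and 4/32 = 1/8 after it (n = 4, CD = 32).

from fractions import Fraction

def isd(cardinality, rate_per_code, code_dimension):
    """ISD = n * r / CD; translates to bit/s/Hz once a data rate is attached."""
    return Fraction(cardinality * rate_per_code, code_dimension)

print(isd(1, 1, 26))   # 1/26
print(isd(4, 1, 32))   # 1/8
```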

We can convert the PSO matrices to wavelength/time (W/T) codes by associating the rows of the PSO matrices with wavelengths (or frequencies) and the columns with time slots, as shown in Table I. The matrices M1…M32 are numbered 1…32 in the table, with the corresponding assignment of wavelengths and time slots. For example, code M1 is (λ1; λ1; λ3; λ1) and M9 is (λ1,λ4; 0; λ7,λ8; 0); here the semicolons separate the time slots in the code. (The codes M1 and M9 are shown in bold numerals.)
We focus on codes like M1 because it shows extensive wavelength reuse, and on codes like M9 because it shows extensive time-slot reuse. It is this extensive wavelength and time-slot reuse that gives these matrix codes their high cardinality and high potential ISD. Four mode-locked lasers are used to create a dense WDM multi-frequency light source. Pseudo-orthogonal (PSO) matrix codes [3] are popular for OCDMA applications primarily because they retain the correlation advantages of PSO linear sequences while reducing the need for bandwidth expansion. PSO matrix codes also generate a larger code set. An interesting variation is described in [1], where some of the wavelength/time (W/T) matrix codes permit extensive wavelength reuse and some allow extensive time-slot reuse. In this example, an extensive time-slot-reuse sequence is used for User 1: (λ1λ3; 0; λ2λ4; 0). Four time slots are used without any guard band, giving a chip period of 100 ps. The code set for time spreading is mapped as C1:{0; λ2; 0; λ4}, C2:{λ1; 0; λ3; 0} … C8:{λ1; λ2; 0; 0}. The code set applying the binary phase shift is mapped as M1:{0;1;0;1}, M2:{1;0;1;0} … M8:{0;0;1;1} (1 represents a π phase shift, 0 represents no phase shift).

TABLE 3
SPE O-CDMA system parameters used for simulation

3 PROPOSED SPE O-CDMA SCHEME
1) Lasers (mode-locked lasers required to produce the 4-wavelength signal)
2) Encoders consisting of the required components such as fiber delay lines, PRBS, external modulator and multiplexers
3) Multiplexers
4) Optical fiber of 60 km length
5) Demultiplexers
6) Decoders corresponding to each encoder
BER analyzer
9) Eye diagram analyzer
10) Signal analyzer

97
##### Networking / Two Levels TTL for Unstructured P2P Network using Adaptive Probabilistic Search
« Last post by IJSER Content Writer on February 18, 2012, 02:03:09 am »
Quote
Author : Yash Pal Singh, Rakesh Rathi, Jyoti Gajrani, Vinesh Jain
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518

Abstract— P2P networks play an important role in the current scenario of unstructured networks. P2P networks support various applications and offer advantages over centralized search systems, which suffer from single points of failure, low availability and denial-of-service attacks. Searching for the required data is a vital issue in a P2P network. Many search methods have been implemented for P2P networks, such as Flooding, Random Walk, Expanding Ring (Iterative Deepening), K-Walker Random Walk and Two-Level K-Walker Random Walk. These methods are based on the property of randomness in the network; some generate large traffic while others take a long searching time. In this paper a probabilistic approach with a Two-Level K-Walker Random Walk for searching has been implemented, and a comparative study with other algorithms has been done.

Index Terms— peer to peer, APS, random walk, K-walker, dynamic search, peersim, probabilistic, flooding.

1   INTRODUCTION
A P2P network is a collection of distributed, heterogeneous, autonomous and highly dynamic peers. Each participating peer shares a part of its own resources, such as processing power, storage capacity, software and files. P2P networks are dynamic in nature. The main types of P2P networks are purely decentralized, partially centralized, and hybrid decentralized systems. According to network structure, P2P networks are classified as unstructured, structured and loosely structured, based on how data location relates to the overlay topology. In the first case data content is totally unrelated to the overlay topology. In structured P2P networks data content is precisely placed with respect to the overlay topology; each node has an idea of where data content resides, so searching can be done easily. In the last case searching can be done on the basis of routing hints. Structured P2P networks are not suitable for highly transient node populations where nodes can leave and join at any time. In this paper a probabilistic approach with two-level TTL has been implemented for searching in unstructured P2P networks. We maintain a database in which each node initially assigns the same probability to each of its neighbors. The first search is a k-walker random walk, since every neighbor has the same probability. After a search terminates, we increase the probability of each neighbor on a successful path by some amount and decrease it on an unsuccessful path. The next time a node issues a query, it chooses the node with the highest probability from the database and sends the query message to it. This continues until the desired content is found or the TTL expires. With two-level TTL, at the nodes where the first-level search is unsuccessful, a new TTL1 is initialized, smaller than the previous TTL, and at every unsuccessful node the query message is split into K threads.
This process continues like a k-walker random walk until the desired content is found or TTL1 expires. We propose a new search algorithm that takes advantage of these two searching techniques [1]. The implementation of this algorithm and a comparative study with other algorithms are presented in this paper.

2 RELATED WORK
Various search protocols have been implemented for unstructured P2P networks. The basic searching techniques are blind search and knowledge-based search. Flooding and random walk protocols use blind search, while adaptive probabilistic search uses knowledge-based search. Most protocols target file-sharing applications on the Internet; common examples are Napster, Gnutella, Kazaa and BitTorrent. Gnutella is based on flooding and is used for file sharing; BitTorrent is also used for file sharing. Flooding [5]: each querying node sends the query message to its entire neighborhood, and these neighbors forward it to their own neighbors until the search is successful or the TTL expires. If the desired content is very far from the querying node, the number of messages generated for the query becomes very large. Iterative deepening [5]: the idea of iterative deepening is taken from artificial intelligence; the querying node issues BFS searches in sequence with increasing depth limits, terminating the query either when the maximum depth limit (D) is reached or when a result is obtained. The same sequence of depth limits is used by all nodes, with the same time period W between consecutive BFS searches. Local indices [5]: a system-wide policy specifies the depths at which the query should be processed; all nodes at depths not listed in the policy simply forward the query to the next depth. Routing indices [5]: routing indices guide the entire search process, like intelligent search. Intelligent search uses information about past queries answered by neighbors, while routing indices store information about document topics and the number of documents stored at each neighbor. It concentrates on content queries: queries based on file content rather than file name or identifier. Dynamic search [10]: it maintains a user-defined threshold value.
If the hop count is less than the threshold, flooding is performed; otherwise a k-walker random walk is performed. This generates less traffic than flooding. K-walker random walk and related schemes [5]: in the standard random walk algorithm, the query message (also called a walker) is forwarded to one randomly selected neighbor, which again randomly chooses one of its neighbors and forwards the query, and the procedure continues until the data is found. One walker is used in the standard random walk algorithm; this greatly reduces the message overhead but causes a longer searching delay. In the k-walker random walk algorithm, the querying (source) node deploys k walkers by forwarding k copies of the original message to k randomly selected neighbors; each intermediate node forwards a single copy of the message to one randomly selected neighbor until the search is successful or the TTL expires. Each walker takes its own random walk and talks with the querying node periodically to decide whether it should terminate. Soft states are used to forward different walkers for the same query to different neighbors. This algorithm tries to reduce the routing delay: on average, the total number of nodes reached by k random walkers in H hops is the same as the number reached by one walker in kH hops, making the routing delay about k times smaller. A similar scheme is the two-level random walk [7]. In level 1, the querying node deploys k1 random walkers with TTL1, which perform random walks at intermediate nodes. At the nodes where TTL1 expires and the search is unsuccessful, level 2 starts: each walker forges k2 walkers with TTL2. The query is processed by all walkers along the path. For the same total number of walkers (k = k1 + k2), this scheme has longer searching delays than the k-walker random walk but generates fewer duplicate messages.
Another similar approach is the modified random BFS, where the query is forwarded only to a randomly selected subset of neighbors; on receiving the query, each neighbor forwards it to a randomly selected subset of its own neighbors (excluding the querying node), and the same method continues until the query stop condition is satisfied. This approach may visit more nodes and has a higher query success rate than the k-walker random walk. Adaptive probabilistic search [5][6]: it assumes that objects and their copies in the network follow a replication distribution for storage, and that the number of query requests for each object follows a query distribution. The process does not affect object placement or the P2P overlay topology. APS is based on the k-walker random walk with probabilistic forwarding. The querying node deploys k walkers simultaneously; on receiving the query, each node looks up its local storage for the desired object. A walker stops successfully once the object is found; otherwise it continues, and the query is forwarded to the neighbor with the highest probability value. The probability values are computed from the results of past queries and are updated based on the result of the current query. Query processing continues until all k walkers terminate, either with success or with failure (the TTL limit is reached).

3 TWO-LEVEL TTL FOR UNSTRUCTURED P2P NETWORK USING ADAPTIVE PROBABILISTIC SEARCH [1]

Performance of the APS algorithm can be increased by using a Two-Level Random Walk instead of a One-Level Random Walk (K-Walker Random Walk). In the Two-Level Random Walk we generate K1 threads, fewer than the K used in the K-Walker Random Walk. At the edge nodes where TTL1 expires and the search is unsuccessful, the second level starts: each of the K1 threads is split into K2 threads, and a new TTL2 is initialized that is smaller than TTL1. Since fewer threads are generated, the chance of collision decreases compared with other searching algorithms. The algorithm for the proposed technique is as follows:
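A hedged sketch of the proposed two-level search with APS-style probability learning is given below; the adjacency-list graph, the update amounts (+0.1 reward, -0.05 penalty) and all names are illustrative assumptions, not the paper's implementation:

```python
# Two-level k-walker random walk with APS-style learned forwarding
# probabilities: k1 walkers with budget ttl1; each unsuccessful walker
# forks into k2 walkers with budget ttl2 at the edge node.

import random

def two_level_search(graph, start, has_object, k1, ttl1, k2, ttl2, probs=None):
    """Return the set of nodes where the object was found.
    probs maps an edge (u, v) to its learned forwarding weight."""
    if probs is None:
        probs = {}
    found = set()

    def weight(u, v):
        return probs.get((u, v), 1.0)          # neutral weight until learned

    def walk(node, ttl):
        path = [node]
        for _ in range(ttl):
            if has_object(node):
                break
            nbrs = graph[node]
            # forward with probability proportional to learned weights
            node = random.choices(nbrs, weights=[weight(path[-1], v) for v in nbrs])[0]
            path.append(node)
        success = has_object(path[-1])
        delta = 0.1 if success else -0.05      # reward or penalize the path
        for u, v in zip(path, path[1:]):
            probs[(u, v)] = max(0.05, weight(u, v) + delta)
        if success:
            found.add(path[-1])
        return path

    for _ in range(k1):                        # level 1: K1 walkers, TTL1
        path = walk(start, ttl1)
        if path[-1] not in found:              # level 2: fork K2 walkers, TTL2
            for _ in range(k2):
                walk(path[-1], ttl2)
    return found

# Toy ring network of 8 nodes; node 4 holds the object.
ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
random.seed(1)
print(two_level_search(ring, 0, lambda n: n == 4, k1=2, ttl1=3, k2=2, ttl2=4))
```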

98
##### Electronics / FPGA Based Embedded Multiprocessor Architecture
« Last post by IJSER Content Writer on February 18, 2012, 02:01:48 am »
Quote
Author : Mr.Sumedh.S.Jadhav, Prof.C.N.Bhoyar
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518

Abstract— Embedded multiprocessor design presents challenges and opportunities that stem from task coarse granularity and the large number of inputs and outputs for each task. We have therefore designed a new architecture called embedded concurrent computing (ECC), implemented on an FPGA chip using VHDL. The design methodology is expected to allow scalable embedded multiprocessors for system expansion. In recent decades, two forces have driven the increase of processor performance: advances in very large-scale integration (VLSI) technology and microarchitectural enhancements. We therefore aim to design the full architecture of a realistic embedded processor performing arithmetic, logical, shifting and branching operations. The embedded system is synthesized and evaluated in the Xilinx environment. Processor performance is improved through clock speed increases and the exploitation of instruction-level parallelism. The embedded multiprocessor is designed in the Xilinx or ModelSim environment.

Index Terms— FPGA-based embedded system design, multiprocessor architecture, pipelining system, real-time processor, system memory, MicroBlaze architecture, VHDL environment.

1  INTRODUCTION
IN recent decades, two forces have driven the increase of processor performance: firstly, advances in very large-scale integration (VLSI) technology and, secondly, microarchitectural enhancements [1].
Processor performance has been improved through clock speed increases and the exploitation of instruction-level parallelism. While transistor counts continue to increase, recent attempts to achieve even more significant increases in single-core performance have brought diminishing returns [2, 3]. In response, architects are building chips with multiple energy-efficient processing cores instead of investing the whole transistor count into a single, complex, and power-inefficient core [3, 4]. Modern embedded systems are designed as systems-on-a-chip (SoC) that incorporate multiple programmable cores on a single chip, ranging from processors to custom-designed accelerators.
This paradigm allows the reuse of pre-designed cores, simplifying the design of billion-transistor chips and amortizing costs. In the past few years, parallel-programmable SoCs (PPSoC) have emerged; successful PPSoCs are high-performance embedded multiprocessors such as the STI Cell [3]. They are dubbed single-chip heterogeneous multiprocessors (SCHMs) because they have a dedicated processor that coordinates the rest of the processing units. A multiprocessor design with SoC-like integration of less-efficient, general-purpose processor cores and more efficient special-purpose helper engines is projected to be the next step in computer evolution [5].
First, we aim to design the full architecture of an embedded processor for realistic throughput. We used FPGA technology not only for architectural exploration but also as our target deployment platform because we believe that this approach is best for validating the feasibility of an efficient hardware implementation.
This architecture of the embedded processor resembles a superscalar pipeline, including the fetch, decode, rename, and dispatch units as parts of the in-order front-end. The out-of-order execution core contains the task queue, dynamic scheduler, execute unit, and physical register file. The in-order back-end comprises only the retire unit. The embedded architecture will be implemented with the help of RTL descriptions in VHDL.
We will integrate the embedded processor with a shared memory system, synthesize this system on an FPGA, and perform several experiments using realistic benchmarks. The methodology to design and implement a microprocessor or multiprocessor is presented; to illustrate it in detail and in a useful way, the design of the most complex practical session is shown. In most cases, computer architecture has been taught with software simulators [1], [2]. These simulators are useful for showing internal values in registers, memory accesses, cache misses, etc. However, the structure of the microprocessor is not visible.
In this work, a methodology for the easy design and real implementation of microprocessors is proposed, in order to provide students with a user-friendly tool. Simple microprocessor designs are shown to the students at the beginning, raising the complexity gradually toward a final design with two processors integrated on an FPGA, each of which has an independent memory system, intercommunicating through a unidirectional serial channel [10].

2 MULTIPROCESSOR

A multiprocessor system consists of two or more connected processors that are capable of communicating. This can be done on a single chip where the processors are typically connected by a bus. Alternatively, the multiprocessor system can span more than one chip, typically connected by some type of bus, and each chip can itself be a multiprocessor system. A third option is a multiprocessor system built from more than one computer connected by a network, in which each computer can contain more than one chip, and each chip can contain more than one processor.
A multiprocessor offers several advantages:
1. Faster calculations are made possible.
2. A more responsive system is created.
3. Different processors can be utilized for different tasks.
In the future, we expect thread and process parallelism to become widespread for two reasons: the nature of the applications and the nature of the operating system. Researchers have therefore proposed two alternative microarchitectures that exploit multiple threads of control: simultaneous multithreading (SMT) and chip multiprocessors (CMP). Chip multiprocessors (CMPs) use relatively simple single-thread processor cores that exploit only moderate amounts of parallelism within any one thread, while executing multiple threads in parallel across multiple processor cores. Wide-issue superscalar processors exploit instruction-level parallelism (ILP) by executing multiple instructions from a single program in a single cycle. Multiprocessors (MP) exploit thread-level parallelism (TLP) by executing different threads in parallel on different processors.

3 SOFTWARE TOOL

The Xilinx Platform Studio (XPS) is used to design MicroBlaze processors. XPS is a graphical IDE for developing and debugging hardware and software. XPS simplifies the procedure for users, allowing them to select, interconnect, and configure components of the final system. Through this activity, the student learns to add processors and peripherals, to connect them through buses, to determine processor memory extension and allocation, to define and connect internal and external ports, and to customize the configuration parameters of the components. Once the hardware platform is built, the students learn many concepts about the software layer, such as assigning drivers to peripherals, including libraries, selecting the operating system (OS), defining processor and driver parameters, assigning interrupt handlers, and establishing OS and library parameters.
An embedded system built with XPS can be summarized as the conjunction of a Hardware Platform (HWP) and a Software Platform (SWP), each defined separately.

99
##### Electronics / Harnessing of wind power in the present era system
« Last post by IJSER Content Writer on February 18, 2012, 01:59:23 am »
Quote
Author : Raghunadha Sastry R, Deepthy N
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518

Abstract— This paper deals with the harnessing of wind power in the present era with the introduction of the DFIG. The system studied here is a variable-speed wind generation system based on a DFIG, which uses a rotor-side converter and a grid-side converter that keeps the DC-link voltage constant. Both converters can be overloaded temporarily, so that the DFIG provides a considerable contribution to the grid voltage during short-circuit conditions. This report covers the DFIG, AC/DC/AC converter control, and finally the SIMULINK/MATLAB simulation of an isolated induction generator as well as of a grid-connected Doubly Fed Induction Generator; the corresponding results and waveforms are displayed.

Index Terms— DFIG, GSC, PWM firing Scheme, RSC, Simulink, Tracking Characteristic, Tolerance band control.

1   INTRODUCTION
High penetration of wind power in recent years has made it necessary to introduce new practices. For example, grid codes are being revised to ensure that wind turbines contribute to the control of voltage and frequency and stay connected to the host network following a disturbance.
In response to the new grid code requirements, several DFIG models have been suggested recently, including the full model, which is a 5th-order model. These models use quadrature and direct components of the rotor voltage in an appropriate reference frame to provide fast regulation of voltage. The 3rd-order model of the DFIG, which uses rotor current rather than rotor voltage as the control parameter, can also be applied to provide very fast regulation of instantaneous currents, with the penalty of losing accuracy. Apart from that, the 3rd-order model can be obtained by neglecting the rate of change of the stator flux linkage (transient stability model), with rotor voltage as the control parameter. Additionally, to model back-to-back PWM converters in the simplest scenario, it is assumed that the converters are ideal and the DC-link voltage between them is constant. Consequently, depending on the converter control, a controllable voltage (current) source can be implemented to represent the operation of the rotor side of the converter in the model. However, in reality the DC-link voltage does not remain constant but starts to increase during fault conditions. Therefore, based on the above assumption it would not be possible to determine whether or not the DFIG will actually trip following a fault.
In a more detailed approach, an actual converter representation with a PWM-averaged model has been proposed, where the switch network is replaced by an average circuit model in which all the switching elements are separated from the remainder of the network and incorporated into a switch network. However, the proposed model neglects the high-frequency effects of the PWM firing scheme, and therefore it is not possible to accurately determine the DC-link voltage in the event of a fault. A switch-by-switch representation of the back-to-back PWM converters with their associated modulators for both rotor- and stator-side converters has also been proposed, deploying tolerance-band (hysteresis) control. However, a hysteresis controller has two main disadvantages: firstly, the switching frequency does not remain constant but varies along the AC current waveform; secondly, due to the roughness and randomness of its operation, protection of the converter is difficult. The latter is of more significance when assessing the performance of the system under fault conditions.
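The variable-switching-frequency drawback of tolerance-band control can be made concrete with a small simulation. This is only an illustrative sketch: the reference amplitude, band width, and current slope are made-up numbers, not parameters of the system modelled here. The switch toggles whenever the current error leaves the tolerance band, and because the reference slope changes along the sine wave while the converter-imposed slope is fixed, the intervals between toggles are irregular:

```python
import math

def hysteresis_switches(band=0.2, steps=2000, dt=1e-4):
    """Track i_ref = sin(2*pi*50*t) with a tolerance-band controller
    and record the switching instants."""
    i, state, toggles = 0.0, True, []
    slope = 800.0  # |di/dt| imposed by the DC link (made-up value)
    for k in range(steps):
        t = k * dt
        i_ref = math.sin(2 * math.pi * 50 * t)
        i += (slope if state else -slope) * dt
        if state and i > i_ref + band:
            state = False      # current hit the upper band edge
            toggles.append(t)
        elif not state and i < i_ref - band:
            state = True       # current hit the lower band edge
            toggles.append(t)
    # Irregular spacing between toggles => variable switching frequency.
    gaps = [b - a for a, b in zip(toggles, toggles[1:])]
    return toggles, gaps

toggles, gaps = hysteresis_switches()
print(len(toggles), min(gaps), max(gaps))
```

The spread between the shortest and longest gap is exactly the "switching frequency varies along the AC current waveform" problem cited above.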

Power quality is an important aspect of integrating wind power plants into grids. This is even more relevant since grids now deal with a continuous increase of non-linear loads, such as switching power supplies and large AC drives directly connected to the network. So far only very few researchers have addressed the issue of making use of the built-in converters to compensate harmonics from non-linear loads and enhance grid power quality. In one proposed scheme, the current of a non-linear load connected to the network is measured, and the rotor-side converter is used to cancel the harmonics injected into the grid. Compensating harmonic currents are injected into the generator by the rotor-side converter, as well as extra reactive power to support the grid. It is not clear what the long-term consequences of using the DFIG for harmonic and reactive power compensation are. Some researchers believe that the DFIG should be used only for the purpose for which it has been installed, i.e., supplying active power only. This paper extends the concept of the grid-connected doubly fed induction generator.
The actual implementation of the DFIG using converters raises additional issues of harmonics; a filter is used to eliminate these harmonics.
The above literature does not deal with the modelling of the DFIG system using Simulink. In this work, an attempt is made to model and simulate the DFIG system using Simulink.

Fig 1: Schematic Diagram of DFIG

2  PROBLEM FORMULATION
The stator is directly connected to the AC mains, whilst the wound rotor is fed from the power electronics converter via slip rings to allow the DFIG to operate at a variety of speeds in response to changing wind speed. Indeed, the basic concept is to interpose a frequency converter between the variable-frequency induction generator and the fixed-frequency grid. The DC capacitor linking the stator- and rotor-side converters allows the storage of power from the induction generator for further generation. To achieve full control of the grid current, the DC-link voltage must be boosted to a level higher than the amplitude of the grid line-to-line voltage. The slip power can flow in both directions, i.e. from the supply to the rotor and from the rotor to the supply, and hence the speed of the machine can be controlled from either the rotor- or the stator-side converter in both super- and sub-synchronous speed ranges. As a result, the machine can be controlled as a generator or a motor in both super- and sub-synchronous operating modes, realizing four operating modes.
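As a back-of-the-envelope check of this boost requirement, the standard linear-modulation limits for a two-level PWM converter can be used. This is a generic converter relation with an assumed 575 V (rms, line-to-line) stator voltage for illustration, not a value taken from the studied system:

```python
import math

def min_dc_link_voltage(v_ll_rms, svpwm=False):
    """Minimum DC-link voltage for linear (non-overmodulated) PWM.

    Sinusoidal PWM: Vdc >= 2*sqrt(2)/sqrt(3) * V_LL_rms (~1.633x).
    Space-vector PWM extends the linear range: Vdc >= sqrt(2) * V_LL_rms.
    """
    factor = math.sqrt(2) if svpwm else 2 * math.sqrt(2) / math.sqrt(3)
    return factor * v_ll_rms

# Example: a 575 V stator, a common wind-turbine rating (assumption).
print(round(min_dc_link_voltage(575), 1))
print(round(min_dc_link_voltage(575, svpwm=True), 1))
```

Either way, the required DC-link voltage comfortably exceeds the peak of the grid line-to-line voltage, which is exactly why the boost stage is needed.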
The mechanical power and the stator electric power output are computed as follows:

P_m = T_m · ω_r    and    P_s = T_em · ω_s

For a lossless generator the mechanical equation is:

J · (dω_r/dt) = T_m − T_em

In steady state at fixed speed for a lossless generator, T_m = T_em and P_m = P_s + P_r. It follows that:

P_r = P_m − P_s = T_m·ω_r − T_em·ω_s = −T_m·(ω_s − ω_r) = −s·P_s

Where

s = (ω_s − ω_r) / ω_s

is defined as the slip of the generator (here T_m is the mechanical torque, T_em the electromagnetic torque, ω_r the rotor speed, ω_s the synchronous speed, and J the rotor inertia).
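These relations are easy to check numerically. The sketch below uses the generic lossless-DFIG formulas, with made-up example speeds (roughly 10% super-synchronous operation):

```python
def slip(omega_s, omega_r):
    # s = (omega_s - omega_r) / omega_s
    return (omega_s - omega_r) / omega_s

def rotor_power(p_s, s):
    # For a lossless DFIG, P_r = -s * P_s.
    return -s * p_s

# Super-synchronous operation: the rotor runs ~10% above synchronous
# speed, so the slip is negative and the rotor circuit also delivers
# power with the same sign as the stator power.
s = slip(omega_s=314.16, omega_r=345.576)
print(round(s, 3))            # slip is about -0.1
print(rotor_power(1.0e6, s))  # rotor power for P_s = 1 MW
```

This is the four-quadrant behaviour described next: the sign of the slip decides whether slip power flows into or out of the rotor circuit.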
Below the synchronous speed in the motoring mode and above the synchronous speed in the generating mode, the rotor-side converter operates as a rectifier and the stator-side converter as an inverter, where slip power is returned to the stator. Below the synchronous speed in the generating mode and above the synchronous speed in the motoring mode, the rotor-side converter operates as an inverter and the stator-side converter as a rectifier, where slip power is supplied to the rotor. At the synchronous speed, slip power is taken from the supply to excite the rotor windings, and in this case the machine behaves as a synchronous machine.

Fig 2: Back to Back AC/DC/AC Converter modeling

A functional model describes the relationship between the input and output signals of the system in the form of mathematical functions; hence the constituent elements of the system are not modeled separately. Simplicity and fast time-domain simulation are the main advantages of this kind of modeling, with the penalty of losing accuracy. This has been a popular approach to DFIG modeling, where simulation of the converters has been done based on the expected response of controllers rather than actual modeling of the power electronics devices. In fact, it is assumed that the converters are ideal and the DC-link voltage between them is constant. Consequently, depending on the converter control, a controllable voltage (current) source can be implemented to represent the operation of the rotor side of the converter in the model. A physical model, on the other hand, models the constituent elements of the system separately and also considers the interrelationships among different elements within the system, where the type and structure of the model are normally dictated by the particular requirements of the analysis, e.g. steady-state or fault studies. Indeed, given the importance of a more realistic reproduction of the behavior of the DFIG, it is intended to adopt a physical model rather than a functional model, in order to accurately assess the performance of the DFIG in the event of a fault, particularly in determining whether or not the generator will trip following a fault.

100
##### Engineering, IT, Algorithms / Applied Software Project Management Software Project Planning Estimation Techni
« Last post by IJSER Content Writer on February 18, 2012, 01:57:44 am »
Quote
Author : T.Rajani Devi
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518

Abstract - Project planning is one of the most critical activities in the modern software development process: without a realistic and objective software project plan, the software development process cannot be managed in an effective way. The purpose of project planning is to identify the scope of the project, estimate the work involved, and create a project schedule. Project planning begins with requirements that define the software to be developed. The project plan is then developed to describe the tasks that will lead to completion. Each of the software and project estimation techniques existing in industry and the literature has strengths and weaknesses. The usage, popularity, and applicability of such techniques are elaborated. In order to improve estimation accuracy, such knowledge is essential. Many estimation techniques, models, and methodologies exist and are applicable to different categories of projects. None of them gives 100% accuracy, but proper use of them makes the estimation process smoother and easier. Organizations should automate estimation procedures, customize available tools, and calibrate estimation approaches as per their requirements.

Key Words- Black art, business domain, fair estimate, granularity, magnitude, magnitude estimate, quibble, rough estimate, starved, weighing factors.

1  Introduction

SOFTWARE project management begins with a set of activities that are collectively called project planning. Before the project can begin, the manager and the software team must estimate the work to be done, the resources that will be required, and the time that will elapse from start to finish. Whenever estimates are made, we look into the future and accept some degree of uncertainty as a matter of course. Software project planning encompasses several activities. Planning involves estimation: the attempt to determine how much money, how much effort, how many resources, and how much time it will take to build a specific software-based system or product. The appropriate approach is for everyone in the project to understand and agree on both why and how that software will be built before the work begins; that is the purpose of project planning, without which the process cannot be managed in an effective way. Project planning is an aspect of project management that focuses a lot on project integration. The project plan reflects the current status of all project activities and is used to monitor and control the project. The project planning tasks ensure that the various elements of the project are coordinated and therefore guide the project execution.
Project planning helps in:
- facilitating communication,
- monitoring/measuring the project progress, and
- providing overall documentation of assumptions and planning decisions.
The project planning phases can be broadly classified as:
- development of the project plan,
- execution of the project plan, and
- change control.
Planning is an ongoing effort throughout the project lifecycle.

Fig 1: project life cycle

2  Objectives
The objective of software project planning is to provide a framework that enables the manager to make reasonable estimates of resources, cost, and schedule.
These estimates are made within a limited time frame at the beginning of a software project and should be updated regularly as the project progresses.
In addition, estimates should attempt to define best case and worst case scenarios so that project outcomes can be bounded.
The planning objective is achieved through a process of information discovery that leads to reasonable estimates.

3 Useful Estimation Techniques for Software Projects

3.1 The Importance of Good Estimation
Software projects are typically controlled by four major variables: time, requirements, resources (people, infrastructure/materials, and money), and risks. Unexpected changes in any of these variables will have an impact on a project. Hence, making good estimates of the time and resources required for a project is crucial. Underestimating project needs can cause major problems because there may not be enough time, money, infrastructure/materials, or people to complete the project. Overestimating needs can be very expensive for the organization, because a decision may be made to defer the project as too expensive, or the project is approved but other projects are "starved" because there is less to go around.
In my experience, making estimates of the time and resources required for a project is usually a challenge for most project teams and project managers. It could be because they do not have experience doing estimates, they are unfamiliar with the technology being used or the business domain, requirements are unclear, there are dependencies on work being done by others, and so on. These can result in a situation akin to analysis paralysis, as the team delays providing any estimates while it tries to get a good handle on the requirements, dependencies, and issues. Alternatively, the team produces estimates that are highly optimistic because items that need to be dealt with have been ignored. How does one handle situations such as these?

3.2 Providing Reliable Estimates
Programmers often consider estimating to be a black art—one of the most difficult things they must do. Many programmers find that they consistently estimate too low. To counter this problem, they pad their estimates (multiplying by three is a common approach) but sometimes even these rough guesses are too low.
Are good estimates possible? Of course! You just need to focus on your strengths.

3.3 What Works (and Doesn't) in Estimating
Part of the reason estimating is so difficult is that programmers can rarely predict how they will spend their time. A task that requires eight hours of uninterrupted concentration can take two or three days if the programmer must deal with constant interruptions. It can take even longer if the programmer works on another task at the same time.
Part of the secret to good estimates is to predict the effort, not the calendar time that a project will take. Make your estimates in terms of ideal engineering days (often called story points): the number of days a task would take if you focused entirely on it and experienced no interruptions.
Ideal time alone won't lead to accurate estimates. I've asked some of the teams I've worked with to measure exactly how long each task takes them. One team gave me 18 months of data, and even though we estimated in ideal time, the estimates were never accurate.
Still, they were consistent. For example, one team always estimated their stories at about 60% of the time they actually needed. This may not sound very promising: how useful can inaccurate estimates be, especially if they don't correlate to calendar time? Velocity holds the key.
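The way velocity rescues consistent-but-inaccurate estimates can be sketched numerically. The numbers below are illustrative only; the 0.6 matches the 60% example above:

```python
def velocity(completed_ideal_days, elapsed_calendar_days):
    # Fraction of a calendar day that turns into ideal (focused) time,
    # measured from past iterations.
    return completed_ideal_days / elapsed_calendar_days

def calendar_estimate(ideal_days, v):
    # Convert an ideal-time estimate into calendar time by dividing
    # through the team's measured velocity.
    return ideal_days / v

# A team whose estimates consistently come out at 60% of the time
# actually needed has a velocity of 0.6, so a story estimated at
# 12 ideal days should be planned as 20 calendar days.
v = velocity(completed_ideal_days=30, elapsed_calendar_days=50)
print(calendar_estimate(12, v))
```

The point is that velocity does not require accurate estimates, only consistent ones: a stable ratio between estimated and actual time is enough to convert ideal-day estimates into dependable calendar predictions.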