Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - IJSER Content Writer

31
Electronics / FPGA Based Embedded Multiprocessor Architecture
« on: February 18, 2012, 02:01:48 am »
Author : Mr. Sumedh S. Jadhav, Prof. C. N. Bhoyar
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract— Embedded multiprocessor design presents challenges and opportunities that stem from coarse task granularity and the large number of inputs and outputs for each task. We have therefore designed a new architecture called embedded concurrent computing (ECC), which is implemented on an FPGA chip using VHDL. The design methodology is expected to allow scalable embedded multiprocessors for system expansion. In recent decades, two forces have driven the increase in processor performance: advances in very large-scale integration (VLSI) technology and microarchitectural enhancements. We therefore aim to design the full architecture of an embedded processor capable of performing realistic arithmetic, logical, shifting and branching operations. The embedded system will be synthesized and evaluated in the Xilinx environment. Processor performance has been improved through clock speed increases and the exploitation of instruction-level parallelism. We will design the embedded multiprocessor in the Xilinx or ModelSim environment.
 
Index Terms— FPGA-based embedded system design, multiprocessor architecture, pipelining system, real-time processor, system memory, MicroBlaze architecture, VHDL environment.
                         
1  INTRODUCTION
IN recent decades, two forces have driven the increase in processor performance: first, advances in very large-scale integration (VLSI) technology and, second, microarchitectural enhancements [1].
    Processor performance has been improved through clock speed increases and the exploitation of instruction-level parallelism. While transistor counts continue to increase, recent attempts to achieve even more significant increases in single-core performance have brought diminishing returns [2, 3]. In response, architects are building chips with multiple energy-efficient processing cores instead of investing the whole transistor count into a single, complex, and power-inefficient core [3, 4]. Modern embedded systems are designed as systems-on-a-chip (SoC) that incorporate multiple programmable cores on a single chip, ranging from processors to custom-designed accelerators.
This paradigm allows the reuse of pre-designed cores, simplifying the design of billion-transistor chips and amortizing costs. In the past few years, parallel-programmable SoCs (PPSoCs) have emerged. Successful PPSoCs are high-performance embedded multiprocessors such as the STI Cell [3]. They are dubbed single-chip heterogeneous multiprocessors (SCHMs) because they have a dedicated processor that coordinates the rest of the processing units. A multiprocessor design with SoC-like integration of less-efficient, general-purpose processor cores with more efficient special-purpose helper engines is projected to be the next step in computer evolution [5].
    First, we aim to design the full architecture of an embedded processor for realistic throughput. We used FPGA technology not only for architectural exploration but also as our target deployment platform because we believe that this approach is best for validating the feasibility of an efficient hardware implementation.
   This architecture of the embedded processor resembles a superscalar pipeline, including the fetch, decode, rename, and dispatch units as parts of the in-order front-end. The out-of-order execution core contains the task queue, dynamic scheduler, execute unit, and physical register file. The in-order back-end comprises only the retire unit. The embedded architecture will be implemented with the help of RTL descriptions in VHDL.
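As a toy sketch (not the authors' RTL, which is written in VHDL), the following Python fragment only illustrates the stage ordering described above: instructions pass through the in-order front-end, execute out of order once their operands are ready, and retire in program order.

    from collections import deque

    # In-order front-end stages named in the text; the out-of-order core is reduced
    # to a task queue plus a naive readiness-based scheduler.
    FRONT_END = ("fetch", "decode", "rename", "dispatch")

    def run(program):
        task_queue = deque()
        for pc, (op, deps) in enumerate(program):
            for stage in FRONT_END:
                pass                              # per-stage bookkeeping elided
            task_queue.append((pc, op, deps))     # dispatched to the OoO core

        done = set()
        while len(done) < len(program):           # dynamic scheduler / execute unit
            for pc, op, deps in task_queue:
                if pc not in done and all(d in done for d in deps):
                    print("execute", pc, op)      # runs as soon as operands are ready
                    done.add(pc)

        for pc, op, _ in sorted(task_queue):      # in-order retire unit
            print("retire ", pc, op)

    # Instruction 1 depends on 2, so execution order is 0, 2, 1 while all three
    # still retire in program order 0, 1, 2.
    run([("load", ()), ("add", (2,)), ("shift", ())])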
    We will integrate the embedded processor with a shared memory system, synthesize this system in an FPGA environment, and perform several experiments using realistic benchmarks. The methodology to design and implement a microprocessor or multiprocessor is presented. To illustrate it in detail and in a useful way, the design of the most complex practical session is shown. In most cases, computer architecture has been taught with software simulators [1], [2]. These simulators are useful to show internal values in registers, memory accesses, cache misses, etc. However, the structure of the microprocessor is not visible.
    In this work, a methodology for the easy design and real implementation of microprocessors is proposed, in order to provide students with a user-friendly tool. Simple designs of microprocessors are exposed to the students at the beginning, raising the complexity gradually toward a final design with two processors integrated in an FPGA, each of which has an independent memory system, and which are interconnected by a unidirectional serial channel [10].
                       
2 MULTIPROCESSOR

     A multiprocessor system consists of two or more connected processors that are capable of communicating. This can be done on a single chip where the processors are connected, typically by a bus. Alternatively, the multiprocessor system can span more than one chip, typically connected by some type of bus, and each chip can then itself be a multiprocessor system. A third option is a multiprocessor system working with more than one computer connected by a network, in which each computer can contain more than one chip, and each chip can contain more than one processor.
     A parallel system is presented with more than one task, known as threads. It is important to spread the workload over all of the processors, keeping the difference in idle time as low as possible. To do this, it is important to coordinate the work and the workload between the processors. Here, it is especially crucial to consider whether or not some processors are special-purpose IP cores. To keep a system with N processors effective, it has to work with N or more threads so that each processor constantly has something to do. Furthermore, it is necessary for the processors to be able to communicate with each other, usually via a shared memory, where values that other processors can use are stored. This introduces the problem of thread safety: a thread-safety problem arises when two processors (working threads) access the same value at the same time, so some method of restricting access to shared resources is necessary. These methods are known as thread safety or synchronization. Moreover, it is necessary for each processor to have some private memory, which it can use without worrying about thread safety, in order to speed up the processor. As an example, each processor needs to have a private stack (a minimal synchronization sketch is given at the end of this section). The benefits of having a multiprocessor are as follows:
1. Faster calculations are made possible.
2. A more responsive system is created.
3. Different processors can be utilized for different tasks.
     In the future, we expect thread and process parallelism to become widespread for two reasons: the nature of the applications and the nature of the operating system. Researchers have therefore proposed two alternative microarchitectures that exploit multiple threads of control: simultaneous multithreading (SMT) and chip multiprocessors (CMP). Chip multiprocessors (CMPs) use relatively simple single-thread processor cores that exploit only moderate amounts of parallelism within any one thread, while executing multiple threads in parallel across multiple processor cores. Wide-issue superscalar processors exploit instruction-level parallelism (ILP) by executing multiple instructions from a single program in a single cycle. Multiprocessors (MPs) exploit thread-level parallelism (TLP) by executing different threads in parallel on different processors.
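As a minimal illustration of the shared-memory thread-safety point made above (not part of the paper), the sketch below has two worker threads update a shared counter; the lock is one common synchronization method, while the purely local variable needs no protection.

    import threading

    counter = 0                     # shared value visible to both workers
    lock = threading.Lock()         # synchronization primitive guarding the shared value

    def worker(iterations):
        global counter
        for _ in range(iterations):
            local = 0               # private (stack-like) data needs no protection
            local += 1
            with lock:              # only one thread may touch the shared value at a time
                counter += local

    threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Without the lock, the two read-modify-write sequences can interleave and
    # updates are lost; with it, the total is always 200000.
    print(counter)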

   3 SOFTWARE TOOL         
     
The Xilinx Platform Studio (XPS) is used to design MicroBlaze processors. XPS is a graphical IDE for developing and debugging hardware and software. XPS simplifies the procedure for its users, allowing them to select, interconnect, and configure components of the final system. In the course of this activity, the student learns to add processors and peripherals, to connect them through buses, to determine the processor memory extent and allocation, to define and connect internal and external ports, and to customize the configuration parameters of the components. Once the hardware platform is built, the students learn many concepts about the software layer, such as assigning drivers to peripherals, including libraries, selecting the operating system (OS), defining processor and driver parameters, assigning interrupt handlers, and establishing OS and library parameters.
     An embedded system developed with XPS can be summarized as the conjunction of a Hardware Platform (HWP) and a Software Platform (SWP), each defined separately.

Read More: Click here...

32
Electronics / Harnessing of wind power in the present era system
« on: February 18, 2012, 01:59:23 am »
Author : Raghunadha Sastry R, Deepthy N
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract— This paper deals with the harnessing of wind power in the present-era system with the introduction of the DFIG. The system studied here is a variable-speed wind generation system based on a DFIG, which uses a rotor-side converter and a grid-side converter that keeps the dc-link voltage constant. Both converters can be temporarily overloaded, so the DFIG provides a considerable contribution to the grid voltage during short-circuit conditions. This report covers the DFIG, AC/DC/AC converter control and, finally, SIMULINK/MATLAB simulations of an isolated induction generator as well as of a grid-connected Doubly Fed Induction Generator; the corresponding results and waveforms are displayed.

Index Terms— DFIG, GSC, PWM firing Scheme, RSC, Simulink, Tracking Characteristic, Tolerance band control.

1   INTRODUCTION                                                                      
The high penetration of wind power in recent years has made it necessary to introduce new practices. For example, grid codes are being revised to ensure that wind turbines contribute to the control of voltage and frequency and also stay connected to the host network following a disturbance.
In response to the new grid code requirements, several DFIG models have been suggested recently, including the full model, which is a 5th-order model. These models use the quadrature and direct components of the rotor voltage in an appropriate reference frame to provide fast regulation of voltage. The 3rd-order model of the DFIG, which uses rotor current rather than rotor voltage as the control parameter, can also be applied to provide very fast regulation of the instantaneous currents, with the penalty of losing accuracy. Apart from that, the 3rd-order model can be obtained by neglecting the rate of change of the stator flux linkage (transient stability model), given rotor voltage as the control parameter. Additionally, in order to model the back-to-back PWM converters, in the simplest scenario it is assumed that the converters are ideal and the DC-link voltage between the converters is constant. Consequently, depending on the converter control, a controllable voltage (current) source can be implemented to represent the operation of the rotor side of the converter in the model. However, in reality the DC-link voltage does not remain constant but starts increasing during fault conditions. Therefore, based on the above assumption, it would not be possible to determine whether or not the DFIG will actually trip following a fault.
In a more detailed approach, an actual converter representation with a PWM-averaged model has been proposed, where the switch network is replaced by an average circuit model in which all the switching elements are separated from the remainder of the network and incorporated into a switch network containing all the switching elements. However, the proposed model neglects the high-frequency effects of the PWM firing scheme, and therefore it is not possible to accurately determine the DC-link voltage in the event of a fault. A switch-by-switch representation of the back-to-back PWM converters with their associated modulators for both rotor- and stator-side converters has also been proposed, with tolerance-band (hysteresis) control deployed. However, the hysteresis controller has two main disadvantages: first, the switching frequency does not remain constant but varies along the AC current waveform, and second, due to the roughness and randomness of its operation, protection of the converter is difficult. The latter is of more significance when assessing the performance of the system under fault conditions.
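As a small aside (not from the paper), the sketch below simulates tolerance-band (hysteresis) current control of a single R-L load fed by one two-level leg, with made-up parameter values; it only illustrates the first disadvantage noted above, namely that the switching instants are irregularly spaced, so the switching frequency varies along the AC current waveform.

    import math

    # Series R-L load fed by one two-level leg switching between +Vdc and -Vdc
    # (assumed, illustrative values).
    R, L, Vdc = 1.0, 10e-3, 300.0       # ohm, henry, volt
    band = 0.5                          # hysteresis half-band in amperes
    dt, t_end = 1e-6, 0.02              # 1 us step, one 50 Hz fundamental cycle

    i, v = 0.0, +Vdc                    # load current and applied leg voltage
    switch_times = []

    t = 0.0
    while t < t_end:
        i_ref = 10.0 * math.sin(2 * math.pi * 50 * t)   # sinusoidal current reference
        err = i_ref - i
        # Toggle the leg only when the error leaves the tolerance band.
        if err > band and v < 0:
            v = +Vdc
            switch_times.append(t)
        elif err < -band and v > 0:
            v = -Vdc
            switch_times.append(t)
        i += dt * (v - R * i) / L       # Euler step of L di/dt = v - R i
        t += dt

    # Irregular spacing of the switching instants = non-constant switching frequency.
    gaps = [b - a for a, b in zip(switch_times, switch_times[1:])]
    print(len(switch_times), "switchings; shortest/longest gap in microseconds:",
          round(min(gaps) * 1e6, 1), round(max(gaps) * 1e6, 1))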

Power quality is an important aspect of integrating wind power plants into grids. This is even more relevant since grids now have to deal with a continuous increase of non-linear loads, such as switching power supplies and large AC drives connected directly to the network. So far only very few researchers have addressed the issue of making use of the built-in converters to compensate harmonics from non-linear loads and enhance grid power quality. In one such approach, the current of a non-linear load connected to the network is measured, and the rotor-side converter is used to cancel the harmonics injected into the grid. Compensating harmonic currents are injected into the generator by the rotor-side converter, as well as extra reactive power to support the grid. It is not clear what the long-term consequences of using the DFIG for harmonic and reactive power compensation are. Some researchers believe that the DFIG should be used only for the purpose for which it has been installed, i.e., supplying active power only. This paper extends the concept of the grid-connected doubly fed induction generator.
The actual implementation of the DFIG using converters raises the additional issue of harmonics; a filter is used to eliminate these harmonics.
The above literature does not deal with the modelling of the DFIG system using Simulink. In this work, an attempt is made to model and simulate the DFIG system using Simulink.
 
Fig 1: Schematic Diagram of DFIG

2  PROBLEM FORMULATION
The stator is directly connected to the AC mains, whilst the wound rotor is fed from the power electronics converter via slip rings to allow the DFIG to operate at a variety of speeds in response to changing wind speed. Indeed, the basic concept is to interpose a frequency converter between the variable-frequency induction generator and the fixed-frequency grid. The DC capacitor linking the stator- and rotor-side converters allows the storage of power from the induction generator for further generation. To achieve full control of the grid current, the DC-link voltage must be boosted to a level higher than the amplitude of the grid line-to-line voltage. The slip power can flow in both directions, i.e. from the supply to the rotor and from the rotor to the supply, and hence the speed of the machine can be controlled from either the rotor- or the stator-side converter in both the super- and sub-synchronous speed ranges. As a result, the machine can be controlled as a generator or a motor in both super- and sub-synchronous operating modes, realizing four operating modes.
The mechanical power and the stator electric power output are computed as follows:

P_m = T_m ω_r
P_s = T_em ω_s

For a lossless generator the mechanical equation is:

J dω_r/dt = T_m − T_em

In steady state at fixed speed for a lossless generator:

T_m = T_em and P_m = P_s + P_r

and it follows that:

P_r = P_m − P_s = T_m ω_r − T_em ω_s = −s P_s

where s = (ω_s − ω_r)/ω_s is defined as the slip of the generator.
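As a short numerical illustration (not from the paper) of the relations above: if the rotor runs 20% above synchronous speed, ω_r = 1.2 ω_s, then s = (ω_s − ω_r)/ω_s = −0.2 and P_r = −s P_s = 0.2 P_s, i.e. about one fifth of the stator power flows out through the rotor circuit and the back-to-back converters. At 20% below synchronous speed, s = +0.2 and the same fraction of power flows into the rotor from the supply, consistent with the operating modes described next.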
Below the synchronous speed in the motoring mode and above the synchronous speed in the generating mode, the rotor-side converter operates as a rectifier and the stator-side converter as an inverter, where slip power is returned to the stator. Below the synchronous speed in the generating mode and above the synchronous speed in the motoring mode, the rotor-side converter operates as an inverter and the stator-side converter as a rectifier, where slip power is supplied to the rotor. At the synchronous speed, slip power is taken from the supply to excite the rotor windings, and in this case the machine behaves as a synchronous machine.

Fig 2: Back to Back AC/DC/AC Converter modeling

The functional model describes the relationship between the input and output signals of the system in the form of mathematical functions, and hence the constituent elements of the system are not modeled separately. Simplicity and fast time-domain simulation are the main advantages of this kind of modeling, with the penalty of losing accuracy. This has been a popular approach to DFIG modeling, where the simulation of the converters is based on the expected response of the controllers rather than actual modeling of the power electronic devices. In fact, it is assumed that the converters are ideal and the DC-link voltage between them is constant. Consequently, depending on the converter control, a controllable voltage (current) source can be implemented to represent the operation of the rotor side of the converter in the model.
The physical model, on the other hand, models the constituent elements of the system separately and also considers the interrelationships among the different elements within the system, where the type and structure of the model are normally dictated by the particular requirements of the analysis, e.g. steady-state or fault studies. Indeed, because of the importance of a more realistic reproduction of the behavior of the DFIG, it is intended to adopt the physical model rather than the functional model, in order to accurately assess the performance of the DFIG in the event of a fault, particularly in determining whether or not the generator will trip following a fault.

Read More: Click here...

33
Author : T.Rajani Devi
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract - Project planning is one of the most critical activities in the modern software development process: without a realistic and objective software project plan, the software development process cannot be managed in an effective way. The purpose of project planning is to identify the scope of the project, estimate the work involved, and create a project schedule. Project planning begins with requirements that define the software to be developed. The project plan is then developed to describe the tasks that will lead to completion. Many software and project estimation techniques exist in industry and in the literature, each with its own strengths and weaknesses. The usage, popularity and applicability of such techniques are elaborated here; in order to improve estimation accuracy, such knowledge is essential. Many estimation techniques, models and methodologies exist and are applicable to different categories of projects. None of them gives 100% accuracy, but their proper use makes the estimation process smoother and easier. Organizations should automate estimation procedures, customize available tools and calibrate estimation approaches as per their requirements.

Key Words- Black art, business domain, fair estimate, granularity, magnitude, magnitude estimate, quibble, rough estimate, starved, weighing factors.

1  Introduction
 
SOFTWARE project management begins with a set of activities that are collectively called project planning. Before the project can begin, the manager and the software team must estimate the work to be done, the resources that will be required, and the time that will elapse from start to finish. Whenever estimates are made, we look into the future and accept some degree of uncertainty as a matter of course. Software project planning encompasses several activities; chief among them is estimation—the attempt to determine how much money, how much effort, how many resources, and how much time it will take to build a specific software-based system or product. The appropriate approach is for everyone in the project to understand and agree on both why and how that software will be built before the work begins; that is the purpose of project planning. Project planning is an aspect of project management that focuses heavily on project integration. The project plan reflects the current status of all project activities and is used to monitor and control the project. The project planning tasks ensure that the various elements of the project are coordinated and therefore guide the project execution.
Project planning helps in facilitating communication, monitoring/measuring the project progress, and providing overall documentation of assumptions and planning decisions.
The project planning phases can be broadly classified as follows: development of the project plan, execution of the project plan, and change planning. Planning is an ongoing effort throughout the project lifecycle.
 
Fig 1: project life cycle

2  Objectives
The objective of software project planning is to provide a framework that enables the manager to make reasonable estimates of resources, cost, and schedule.
These estimates are made within a limited time frame at the beginning of a software project and should be updated regularly as the project progresses.
In addition, estimates should attempt to define best case and worst case scenarios so that project outcomes can be bounded.
The planning objective is achieved through a process of information discovery that leads to reasonable estimates.

3 Useful Estimation Techniques for Software Projects

3.1 The Importance of Good Estimation
Software projects are typically controlled by four major variables; time, requirements, resources (people, infrastructure/materials, and money), and risks. Unexpected changes in any of these variables will have an impact on a project. Hence, making good estimates of time and resources required for a project is crucial. Underestimating project needs can cause major problems because there may not be enough time, money, infrastructure/materials, or people to complete the project. Overestimating needs can be very expensive for the organization because a decision may be made to defer the project because it is too expensive or the project is approved but other projects are "starved" because there is less to go around.
In my experience, making estimates of the time and resources required for a project is usually a challenge for most project teams and project managers. This could be because they do not have experience doing estimates, they are unfamiliar with the technology being used or the business domain, requirements are unclear, there are dependencies on work being done by others, and so on. These factors can result in a situation akin to analysis paralysis, with the team delaying any estimates while it tries to get a good handle on the requirements, dependencies, and issues. Alternatively, the team produces estimates that are usually highly optimistic because items that need to be dealt with have been ignored. How does one handle situations such as these?

3.2 We provide reliable estimates
Programmers often consider estimating to be a black art—one of the most difficult things they must do. Many programmers find that they consistently estimate too low. To counter this problem, they pad their estimates (multiplying by three is a common approach) but sometimes even these rough guesses are too low.
Are good estimates possible? Of course! You just need to focus on your strengths.

3.3 What Works (and Doesn't) in Estimating
Part of the reason estimating is so difficult is that programmers can rarely predict how they will spend their time. A task that requires eight hours of uninterrupted concentration can take two or three days if the programmer must deal with constant interruptions. It can take even longer if the programmer works on another task at the same time.
Part of the secret to good estimates is to predict the effort, not the calendar time that a project will take. Make your estimates in terms of ideal engineering days (often called story points): the number of days a task would take if you focused entirely on it and experienced no interruptions.
Ideal time alone won't lead to accurate estimates. I've asked some of the teams I've worked with to measure exactly how long each task takes them. One team gave me 18 months of data, and even though we estimated in ideal time, the estimates were never accurate.
Still, they were consistent. For example, one team always estimated their stories at about 60% of the time they actually needed. This may not sound very promising. How useful can inaccurate estimates be, especially if they don't correlate to calendar time? Velocity holds the key.
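As a small, made-up illustration (not from the article) of how velocity turns consistent-but-inaccurate ideal-time estimates into a calendar forecast, the sketch below assumes a team whose recent iterations and remaining backlog are invented numbers:

    # Velocity = estimated ideal days (story points) the team actually completes per
    # iteration.  It folds the constant bias (e.g. "estimates are ~60% of real time")
    # and all interruptions into one measured number, so raw estimates only need to
    # be consistent, not accurate.
    points_per_iteration = [26, 24, 25]       # completed points in the last 3 iterations
    velocity = sum(points_per_iteration) / len(points_per_iteration)

    remaining_points = 180                    # sum of the estimates for remaining stories
    iteration_length_weeks = 2
    iterations_left = remaining_points / velocity

    print(f"velocity ~ {velocity:.1f} points per iteration")
    print(f"forecast: about {iterations_left:.1f} iterations "
          f"({iterations_left * iteration_length_weeks:.0f} weeks) of work remain")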

Read More: Click here...

34
Author : Abubakar Agil Emhmed, Kalaivani Chellapan
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract—Location information is a set of data describing an individual geographical location. This information can be used in many ways to provide information and entertainment services to a user who has the required devices and resources to operate such services. However, different challenges are reported in terms of availability and homogeneity. These challenges mainly concern the integration, standardization, security and privacy of the various GIS resources, which must present essential but exhaustive information in a simple, clear and possibly user-friendly interface. Therefore, a simple but complete interface should provide all GIS functionalities, identified by graphically clear and easily understandable symbols, icons or text. This paper applies the Service Oriented Architecture (SOA) to re-manage GIS resources and provide a distributed, dynamic, and reliable service system that can meet the information and service requirements of many different users over the Internet.

Index Terms— GIS, SOA, map locator, mobile services, web services, JSP, GIS mapping

1   INTRODUCTION                                                                      
THE deployment of new technologies in the various communication sectors has indicated the need to enhance and develop new techniques that fit specific purposes. It has become quite familiar that the use of mobile devices in our daily life has extended enormously, helping users to save and retrieve information through wireless services [1]. The prime reason for using mobile devices is that they are small handheld devices, such as mobile phones, palmtop computers and other devices that come with an operating system. On the other hand, mobile services provide a wide range of accessibility for different clients located in different places, including the most competitive technology such as Personal Digital Assistants (PDAs), with or without networking capabilities, that can access Web services [2].

Even though the tourism industry is now undergoing tremendous change due to tourism development, this does not stop commercial and residential growth. The level of innovation is high, and new technologies, devices, applications and services emerge rapidly. The applications of GIS are numerous, and one of them is tourism. Tourism is an information-intensive and information-sensitive industry in which GIS is expected to play an important role. Tourists need well-organized information anywhere and anytime to ensure pleasant traveling. Paper maps are usually provided, but they have many limitations, such as the limited amount of detail and changes in the environment. This paper models an effective tourism application architecture based on a mixture of mobile computing and GIS technology along with SOA.

SOA enables the transition from an application-centric view of the world to a process-centric one. Mobile applications now have the freedom to combine business services from multiple applications to deliver true end-to-end support for business processes. SOA has a variety of definitions regarding what it consists of, sometimes driven by vendor-specific solutions instead of a general architecture [3]. It is important to note that SOA is about more agile business processes and not purely a technology-driven solution. SOA is an architectural concept, and hence to realize an SOA solution one must map the architecture to a logical construct followed by an implementation using a specific set of technologies, platforms and products [4].

On the other hand, providing a homogeneous mobile service for locating places is recognized as a challenging procedure for processing, presenting, and organizing content based on functions that calculate the user's destination and map the user's current location. This procedure is then often measured based on the performance and efficiency of the system.

Tourists mostly use maps before, during and after place visits. Before a visit, tourists usually use maps to assist them in planning their activities and also to learn about the general layout and social zones of cities. These pre-visit activities are increasingly supported through web sites, which may even offer interactive maps and associated services to help with the planning of a visit [5].

2 ISSUES & CHALLENGES
Nowadays, it has become hard for individual standalone services to meet all the service requirements of many users. Such service requests could be met by dynamically chaining multiple services provided by single or multiple service providers. These challenges were clearly defined by Dogru and Toz (2008), who described the flexibility of using SOA to address them and to construct a distributed, dynamic, flexible, and re-configurable service system over the Internet that can meet the information and service requirements of many different users [6].

Furthermore, the process of organizing GIS information into classified sets is found to be a problematic task, especially for hand-held devices, and the request-response delay increases depending on the hardware and software provided. In addition, various drawbacks have been demonstrated involving connections, delays and high response times. These issues are expected to be eliminated by the development of a suitable GIS architecture. This was also indicated for stand-alone applications that do not require any communication with a remote source during execution.

A GIS application server is employed to update the GIS information and user profile details frequently. This allows the GIS system administrator or the manager to add, delete, and update the relevant GIS information very quickly, as well as to work with the most recently updated information.

GIS functions vary depending on the use that a mobile GIS has been designed for. Basic functionalities include those of a map viewer, such as zoom in, zoom out, queries (feature queries and attribute-based queries), and editing information, while other functionalities, such as place search and service registration controls, are added to characterize the service. Because of the physical constraints of hand-held devices, such as small screens, small memory storage, and low CPU frequency, some critical issues must be taken into account.

Fangxiong and Zhiyong in [7] addressed the difficulties for displaying the GIS contents over the client devices of WAP-based Mobile GIS. They acknowledged that such devices need a suitable mechanism to be installed on the server to determine the client device and generate corresponding presentation logic for the client.

Consequently, a mobile GIS is required to present essential but exhaustive information in a simple, clear and possibly user-friendly interface. This aspect becomes even more important when the mobile GIS is oriented to users such as travelers. Therefore, a simple but complete interface should provide all GIS functionalities, identified by graphically clear and easily understandable symbols, icons or text. To make the displayed geographic content essential and exhaustive, mobile GIS has to face the challenges of map generalization, i.e. how to vary maps depending on the scale without loss of information for a given scale, and map schematization, which can be considered an effective means of generalization at large scales, to make content user-friendly for interpretation [8]. Several operations, such as generalization, assortment, displacement and combination of features, are necessary to create maps with features simplified in form or shape [9].

Another reason for developing a new SOA architecture for GIS applications is that hand-held devices have a low memory capacity, mostly within 64 MB against at least 512 MB for a desktop PC, and a low CPU frequency of around 400 MHz against 4 GHz for desktop PCs. This affects the development of GIS functionalities for mobile devices, because such limitations do not allow operations that put a high stress on the CPU. Therefore, many of the data-analysis functions available on desktop PCs cannot be moved to a mobile GIS. Thus, WMS (Web Map Service) and WPS (Web Processing Service) can be considered a potential solution, because they allow GIS operations to be performed on the server side, while the client only has to call the service and manage the response. This paper therefore aims to design a new GIS architecture for tourism mapping needs based on the classification of SOA.
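As a small illustration of that thin-client pattern (not part of the paper), the sketch below builds a standard OGC WMS GetMap request; the server URL, layer name and bounding box are hypothetical examples.

    from urllib.parse import urlencode

    # Hypothetical OGC WMS endpoint and layer name; any WMS-compliant server would do.
    WMS_URL = "http://example.org/geoserver/wms"

    params = {
        "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
        "LAYERS": "tourism:places",           # hypothetical layer of tourist places
        "STYLES": "",
        "CRS": "EPSG:4326",
        "BBOX": "2.85,101.60,3.05,101.80",    # lat/lon box of the area of interest
        "WIDTH": "240", "HEIGHT": "320",      # sized for a small hand-held screen
        "FORMAT": "image/png",
    }

    # The hand-held client only builds the request and displays the image it gets
    # back; all map rendering and data analysis stay on the server side.
    request_url = WMS_URL + "?" + urlencode(params)
    print(request_url)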

3 COMPARISON STUDY
Today, most of the handheld devices have the necessary built-in hardware [10]. Location Based Services (LBS) are information services that provide users with customized contents such as the nearest restaurants, hotels, clinics, petrol pump from the dedicated spatial database based on the user’s current location [11]. Figure 1 presents the initialization of GIS into mobile application for retrieving and displaying the location details remotely.
 
Fig 1. GIS Services through Mobile [11]
Service Oriented Architecture is considered a collection of different services, such as those converting business applications into individual business functions and processes, transferred securely between enterprises via the Web [12].

Nevertheless, a different number of layers can be adopted when using SOA in system development, as shown in Figure 2.
 
Fig 2. SOA Components [4]
Nie et al. in [13] proposed the design and implementation of a tourist route planning and navigation system based on LBS. This system, called TRPNS, helps tourists to arrange a travel route and navigates them through the chosen route. With the help of LBS, portable devices are aware of their location and can query the back-end system for updated information about local sites of interest anytime and anywhere. The proposed system allows a tourist to plan a specific day, find places, shopping, hotels, the airport and interesting beaches, and provides value-added services according to his or her preferences.

Neteler and Raghavan in [14] conducted research on digital cities, mobile applications and Geographic Information Systems (GIS) tourism applications. They proposed a prototype location-aware outdoor application to help people navigate through a city with a mobile handheld device. The research embraces the above areas as well as research on building recognition through image processing. Their system helps people locate their positions and identify buildings from user-supplied Personal Digital Assistant (PDA) images together with GPS and orientation data.

The work in [15] implemented a system framework and designed efficient algorithms for a GIS system called the Intelligent Map Agent (IMA). The main focus of the IMA is to provide mobile users with access to services that require the usage of spatial data and information, regardless of their location. The IMA can accommodate a large number of mobile users and services distributed across a wide geographical area. The system is represented by agents using JADE/LEAP technologies.

4 PROPOSED ARCHITECTURE
A number of stages were involved in this research to design and develop an efficient GIS architecture.

A number of solutions have been developed and executed for mobile GIS in two groups, client-server architectures and stand-alone applications, each carrying different advantages and disadvantages. Mobile GIS client-server solutions are usually considered a derivative of Web GIS, as acknowledged by [16]. In such solutions, the mobile GIS client side is constantly linked to a wireless network that connects it with the server. This process provides a reliable schema for sending request details and receiving responses through Internet services. Frequently a "four-tiered architecture" is implemented, consisting of: the client tier, which includes the desktop GIS, web processing services, and mobile GIS applications; the application server tier, which includes web map services (WMS), the GIS application server, the Java platform, and the geospatial model language; the service tier, which includes web map services, the client-server application, LBS, and JavaScript; and the geo-database tier, which includes places information, the map viewer, MySQL, PostGIS, and geo-data repositories. Consequently, a client-server application allows establishing a real-time connection with the server, which provides data and services, and consequently allows getting up-to-date information, depending on the request execution. LBS services were integrated to allow the user or the manager to edit their details on the server in real time and to keep the database updated. MapfileManager.java was also used to generate contrast-language-based MapFiles as shown in the typical procedure below:

Read More: Click here...

35
Author : Prof.Paresh K. Kasundra, Dr. Ashish V. Gohil
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract- Bio-diesel is produced by mixing vegetable or animal oil with a small quantity of methanol in a process known as esterification. Bio-diesel is a promising nontoxic and biodegradable renewable alternative fuel compared to petroleum diesel, in the light of the limited nature of fossil fuels and of environmental concerns.
Biodiesel is reported to be sulfur-free, nontoxic, biodegradable, oxygenated and renewable, and its characteristics are very close to those of diesel fuel; some are better than diesel, such as a higher cetane number, no aromatics, almost no sulfur, and more than 10% oxygen by weight, which reduces the emission of carbon monoxide (CO), hydrocarbons (HC), particulates, sulfur oxides (SOx) and volatile organic compounds (VOCs). Although there are advantages to using biodiesel instead of petroleum-based diesel, biodiesel blends up to a maximum of 5% should not cause engine and fuel-system problems. It can thus be seen that the properties of biodiesel can affect engine performance and emissions. In this paper we therefore discuss the anticipated emissions, and suggestions for reducing exhaust emissions, with CNSL as a fuel in a CI engine.

Index Terms: - biodiesel, vegetable oil, esterification, diesel engine performance, emissions.
 
1.1 (1) Anticipating Emissions with CNSL as Fuel
As not much literature is found regarding CNSL, the kind of emissions that will be encountered while using CNSL as a fuel in a diesel engine has to be anticipated. From the structure of CNSL it can readily be said that the fuel contains higher levels of aromatics and that the cetane number will consequently be low, so the literature was studied for the effects of cetane number and aromatics on the emissions of diesel engines. Spreen et al. conducted experiments on a prototype 1994 Navistar DTA-466 heavy-duty engine. This engine was an in-line 6-cylinder configuration of 7.6-liter displacement with a compression ratio of 17.5:1 and had a DI combustion chamber. They then constructed statistical models, which were used to estimate the independent effects of cetane number, aromatics and oxygen content of the fuel on the emissions. A similar experiment was done by Ullman et al. on a 1994 prototype DDC Series 60 heavy-duty engine. This was a DI engine with an in-line 6-cylinder configuration of 11.1-liter displacement and a compression ratio of 16:1 [1]. Cetane number was determined to be the most important fuel variable associated with the emissions of HC, CO, and NOX, whereas fuel aromatics significantly affected PM emissions and were also observed to change the emissions of HC and NOX. Oxygen content in the fuel was important for estimating PM emissions [1,2]. The statistical models are not included in this study, but their results are presented below and their implications for CNSL as a fuel are stated.

►   Unburnt Hydrocarbons
They estimated using statistical models that an independent increase of cetane number by 10 reduces composite HC levels by 0.037 g/hp-hr. Decreasing aromatics by 10% was estimated to reduce HC levels by 0.014 g/hp-hr. Adding 2% by weight of oxygen to the fuel using monoglyme was estimated to increase HC emissions by 0.051 g/hp-hr.


►   Carbon Monoxide
Only cetane number seemed to have a definite effect on CO emissions. An increase of 10 in cetane number reduced CO levels by 0.28 g/hp-hr.

►   Nitrogen Oxide
NOX emissions were significantly related to cetane number and aromatics. An increase of 10 in cetane number reduces NOX emissions by 0.131 g/hp-hr. It was also predicted that a 10% decrease in aromatics would reduce NOX levels by 0.052 g/hp-hr.

►   Particulate Matter
PM emissions were highly affected by aromatics in the fuel. A decrease of 10% in aromatics gave a 0.004 g/hp-hr reduction in PM levels. Increasing the oxygen in the diesel fuel to 2% reduced PM by 0.009 g/hp-hr.
Tamanouchi et al. conducted experiments on similar lines and found a relationship between fuel properties and exhaust emissions. Their findings are shown in fig 2.9, which shows effect of cetane number on exhaust emissions [3].

Fig. 1.1 (C)  Effect of Density on Exhaust Emissions [3]

As CNSL is a fuel with a high aromatic content and a high density, a lower cetane number can be expected. The following conclusions can be drawn from the above results (a small worked illustration follows the list):
•   High HC emissions are expected from CNSL.
•   Very high NOX emissions are expected from this fuel.
•   Nothing can be said for sure about CO emissions.
•   PM emissions are expected to be very high.
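As a rough, hypothetical illustration of how the sensitivities reported above combine (the fuel deltas below are invented for illustration, not measured CNSL data), assume a fuel whose cetane number is 20 lower and whose aromatic content is 20% higher than the reference diesel:

    # Sensitivities quoted above, in g/hp-hr per step of +10 cetane number or
    # +10% aromatics (signs follow the text: higher cetane lowers emissions,
    # higher aromatics raises them).
    per_10_cetane  = {"HC": -0.037, "CO": -0.28, "NOx": -0.131, "PM": 0.0}
    per_10pct_arom = {"HC": +0.014, "CO":  0.0,  "NOx": +0.052, "PM": +0.004}

    # Hypothetical CNSL-like fuel relative to the reference diesel (assumed values).
    d_cetane = -20       # cetane number 20 lower
    d_arom   = +20       # aromatics 20 percentage points higher

    for species in ("HC", "CO", "NOx", "PM"):
        delta = (per_10_cetane[species] * d_cetane / 10
                 + per_10pct_arom[species] * d_arom / 10)
        print(f"{species}: expected change of {delta:+.3f} g/hp-hr")
    # The magnitudes depend entirely on the assumed deltas; the upward direction
    # for HC, NOx and PM matches the conclusions listed above.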

1.1 (2) Suggestions for Reducing Exhaust Emissions
In the last section it became evident that if CNSL is to be used as an alternative to diesel fuel, some techniques for reducing emissions will have to be used. Driven by the tightening of regulations on diesel emissions around the world, engine and fuel technology for reducing emissions is also progressing. The following are some suggestions found in the literature for reducing emissions and improving the working of the engine.


1)   Fuel Additives
In the petroleum industry, it is common practice to use cetane improvers to enhance the cetane number of commercial diesel fuel. The addition of cetane improvers does not change the other fuel properties much, since their concentration is usually low. Cetane improvers enhance the ignition quality by generating a radical pool at a lower temperature than the components in the base fuel. Compared with other processing methods, the addition of a cetane improver is a cost-effective way for refineries to produce diesel fuel with feedstock of a low cetane rating [4]. A study investigating the cetane response of such cetane improvers was conducted by Sobotowski. Cetane response is defined as the relationship between the cetane number of the fuel and the concentration of the cetane improver, but no correlation was found to accurately characterize the response of the fuels used in that study [5]. Li et al. conducted experiments on a single-cylinder DI engine in which two kinds of cetane improvers were used, a nitrate-type additive and a peroxide-type additive. The objective of the study was to compare the emission impact of both types of cetane improvers. Nandi et al. conducted similar experiments on these cetane improvers and came out with similar results. They found that HC, CO, NOX and PM emissions were reduced significantly by treating fuels with either cetane improver. Similar reductions in NOX emissions were observed, indicating that the nitrogen introduced into the fuel by the nitrate-type cetane improver does not contribute to NOX formation. They also found that the commercially used 2-ethylhexyl nitrate and di-t-butyl peroxide are mutually compatible, i.e. mixing fuels containing them will not have a negative effect on cetane number and engine emissions [4,6].

2)   Oxidation Catalyst
It has been found in the literature that the use of an oxidation catalyst enables HC, CO, and PM levels to be reduced. In addition, the effect is not sensitive to the fuel used. It was also observed that blending an oxygen-containing fuel enhances the effect of the oxidation catalyst [3]. Results have shown that the use of an oxidation catalyst is more effective in reducing exhaust emissions than fuel modification.

3)   High Pressure Injection
It has been found by Tamanouchi et al. that use of high-pressure injection is very effective in reducing PM emissions and NOX emissions.
4)   Particulate Traps
Particulate traps have evolved as a novel means of reducing PM in the exhaust gas. A great variety of filter materials are being investigated, such as wire mesh, tubes of layered ceramic fibers, ceramic foam, cross-flow ceramic filters, honeycomb ceramic filters and others. The biggest problem with particulate traps is the regeneration of the filters [7].

Read More: Click here...

36
Author : Rohithbalaji Jonnala
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract— This paper presents the implementation of a Direct Torque Control (DTC) with Sector Advancement Technique (SAT) algorithm for the control of a hybrid cascaded H-bridge multilevel-inverter induction motor drive. The algorithm keeps the motor torque, the stator flux and the inverter's neutral-point potential within given hysteresis bounds while reducing the average switching frequency of the inverter and the overall computation time in comparison with conventional direct torque control (CDTC). This method also improves overall efficiency by reducing torque and flux ripple. In addition, the multilevel inverter can generate a high and fixed switching-frequency output voltage with fewer switching losses, since only the small power cells of the inverter operate at a high switching rate. Therefore, high-performance and efficient torque and flux controllers are obtained, enabling a DTC solution for multilevel-inverter-powered motor drives.

Index Terms— Direct Torque Control (DTC), Multi Level Inverter, Induction Motor Drives, Sector Advancement Technique (SAT). 

1  INTRODUCTION
Since its introduction, direct torque control (DTC) has become a powerful control scheme for the control of induction motor (IM) drives. The standard DTC scheme uses hysteresis comparators for the control of both the stator flux magnitude and the electromagnetic torque. This control structure ideally keeps both controlled parameters within the hysteresis bands and results in a non-constant switching frequency. One of the methods that has been used by one major manufacturer in multilevel inverters is direct torque control (DTC), which is recognized today as a high-performance control strategy for ac drives. Several authors have addressed the problem of improving the behaviour of DTC ac motors, particularly by reducing the torque ripple. However, when the DTC scheme is used in a discrete implementation, both torque and flux exceed the bands imposed by the hysteresis comparators, due to the fixed sampling frequency. It is possible for the discrete scheme to operate like an analog one if the hysteresis bounds are chosen to be sufficiently large. On the contrary, when the width of the bands is comparable to the maximum torque and flux variations during one sampling period, the excursions will be relatively large, partly due to the time delay caused by the data processing. Therefore, the sampling period is an important factor determining the control performance and the switching frequency.

   To improve the performance of the control operation, different approaches have been proposed: improving the lookup table; varying the hysteresis bandwidth of the torque controller; and using flux, torque, and speed observers. Although these approaches are well suited to the classical two-level inverter, their extension to a greater number of levels is not easy. Throughout this paper, a theoretical background is used to design a simple and practical strategy that is compatible with the hybrid cascaded H-bridge multilevel inverter. It allows not only controlling the electromagnetic state of the motor with improved performance (minimization of the torque ripple) but also reducing the flux and current distortion caused by flux-position sector changes.
To improve the flux waveform, provide ripple-free torque, improve the dynamics and efficiency of the drive, and enhance the quality of the stator currents in the motor, the Sector Advancement Technique (SAT) is employed, reducing the response time of the drive to a given torque command.

2  CASCADED H-BRIDGE STRUCTURE AND OPERATION
   The hybrid cascaded H-bridge inverter is composed of three legs, each of which is a series connection of two H-bridge inverters fed by independent dc sources that are not equal (V1 < V2); these may be obtained from batteries or fuel cells. The use of asymmetric input voltages can reduce, or when properly chosen eliminate, redundant output levels, maximizing the number of different levels generated by the inverter. Therefore, this topology can achieve the same output-voltage quality with a smaller number of semiconductors.


  Figure 2: Voltage Vector formation in Multilevel Inverter

 The maximum number of redundancies is equal to (3^k − 2k − 1) and is obtained when the partial dc voltages are equal to E/(N − 1). If there are k connected cells per multilevel-inverter phase leg, 3^k switching configurations are possible, where N is the number of levels.

Figure 3: Basic H-Bridge Structure
The multilevel-inverter output voltage depends on the partial voltage feeding each partial cell. The possible number of redundant switching states can be reduced if the cells are fed by unequal dc-voltage sources. This also reduces volume and costs and offers inherently low switching losses and high conversion efficiency. When cascading two-level inverters like H-bridges [Fig. 3], the optimal asymmetry is obtained by using voltage sources proportionally scaled in powers of two or three. A particular cell i can generate three levels (+Vi, 0, −Vi). The total inverter output voltage for a particular phase j is defined by

v_jN = ∑_(i=1)^m v_ij = ∑_(i=1)^m V_i (S_i1 − S_i2),    j ∈ {a, b, c}    ………(1)

where v_ij is the output voltage of cell i, m is the number of cells per phase, and (S_i1, S_i2) is the switching state associated with cell i. Equation (1) shows explicitly how the output voltage of one cell is determined by one of the four binary combinations of the switching state, with "1" and "0" representing the "ON" and "OFF" states of the corresponding switch, respectively. The optimal asymmetry is obtained with dc links scaled in powers of two or three, generating seven or nine different output levels, respectively. Nine different output levels can be generated using only two cells (eight switches), while four cells (16 switches) are necessary to achieve the same number of levels with a symmetrically fed inverter.
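As a small sketch (not from the paper) that enumerates Eq. (1) for one phase leg with two cells, the fragment below shows how powers-of-three scaling of the dc links (V1 = E, V2 = 3E) yields the nine distinct output levels mentioned above, while equal sources leave redundant states.

    from itertools import product

    def leg_levels(cell_voltages):
        # Each H-bridge cell i contributes V_i * (S_i1 - S_i2), and the factor
        # (S_i1 - S_i2) can only take the values -1, 0, +1.
        return [sum(V * s for V, s in zip(cell_voltages, states))
                for states in product((-1, 0, +1), repeat=len(cell_voltages))]

    E = 1.0
    for name, volts in [("symmetric  V1 = V2 = E   ", [E, E]),
                        ("asymmetric V1 = E, V2 = 3E", [E, 3 * E])]:
        lv = leg_levels(volts)
        print(f"{name}: {len(lv)} switching configurations (3^k), "
              f"{len(set(lv))} distinct levels: {sorted(set(lv))}")
    # Symmetric sources: 9 configurations collapse to 5 levels, i.e. 3^k - 2k - 1 = 4
    # redundancies for k = 2.  Powers-of-three scaling: all 9 levels are distinct.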

Read More: Click here...

37
Author : Kiran Sahu, Mrs. Susmita Ghosh Mazumdar
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract - Monitoring and control of the greenhouse environment play an important role in greenhouse production and management. To monitor the greenhouse environment parameters effectively, it is necessary to design a measurement and control system. The objective of this project is to design a simple, easy-to-install, microcontroller-based circuit to monitor and record the values of temperature, humidity, soil moisture and sunlight of the natural environment, which are continuously modified and controlled in order to optimize them and achieve maximum plant growth and yield. The controller used is a low-power, cost-efficient chip manufactured by ATMEL with 8K bytes of on-chip flash memory. It communicates with the various sensor modules in real time in order to control the light, aeration and drainage processes efficiently inside the greenhouse by actuating a cooler, fogger, dripper and lights according to the conditions required by the crops. An integrated liquid crystal display (LCD) is also used for real-time display of the data acquired from the various sensors and of the status of the various devices. The use of easily available components also reduces manufacturing and maintenance costs. The design is quite flexible, as the software can be changed at any time; it can thus be tailor-made to the specific requirements of the user. This makes the proposed system an economical, portable and low-maintenance solution for greenhouse applications, especially in rural areas and for small-scale agriculturists.

Index Terms – Wireless sensor network, Digital Agriculture, Environment Monitoring; Greenhouse Monitoring, Environment Parameter

1   INTRODUCTION  
 
THE proposed system is an embedded system that closely monitors and controls the microclimatic parameters of a greenhouse on a regular basis round the clock for the cultivation of crops or specific plant species, in order to maximize production over the whole crop growth season and to eliminate the difficulties involved by reducing human intervention to the best possible extent. The system comprises sensors, an analog-to-digital converter, a microcontroller and actuators [1]. When any of the above-mentioned climatic parameters crosses a safety threshold that has to be maintained to protect the crops, the sensors sense the change and the microcontroller reads the data at its input ports after it has been converted to digital form by the ADC [10]. The microcontroller then performs the needed actions by energizing relays until the strayed-out parameter has been brought back to its optimum level. Since a microcontroller is the heart of the system, the set-up is low-cost and nevertheless effective. As the system also employs an LCD display for continuously alerting the user about the conditions inside the greenhouse, the entire set-up is user-friendly. Thus, this system eliminates the drawbacks of existing set-ups and is designed as an easy-to-maintain, flexible and low-cost solution.

2 SYSTEM MODEL
2.1 Basic Model of the System

Fig 1. Block Diagram of the System

2.2 Parts of the System
•   Sensors (Data acquisition system)
i.   Temperature sensor
ii.   Humidity sensor
iii.   Light sensor (LDR)

•   Analog to Digital Converter

•   Microcontroller (AT89C51)

•   Liquid Crystal Display

•   Actuators – Relays

•   Devices controlled
i.   Water Pump (simulated as a bulb)
ii.   Sprayer (simulated as a bulb)
iii.   Cooler (simulated as a fan)
iv.   Artificial Lights (simulated as 2 bulbs)

2.3 Steps Followed in Designing the System
   Three general steps can be followed to appropriately select the control system:

Step # 1: Identify measurable variables important to production. It is very important to correctly identify the parameters that are going to be measured by the controller’s data acquisition interface, and how they are to be measured.

Step # 2: Investigate the control strategies. An important element in considering a control system is the control strategy that is to be followed. The simplest strategy is to use threshold sensors that directly affect the actuation of devices.

Step # 3: Identify the software and the hardware to be used. Hardware must always follow the selection of software, with the hardware required being supported by the software selected. In addition to functional capabilities, the selection of  the  control  hardware  should  include  factors  such  as  reliability,  support,  previous experiences with the equipment (successes and failures), and cost [2].

3 HARDWARE DESCRIPTION
3.1 Transducers
      A transducer is a device which measures a physical quantity and converts it into a signal which can be read by an observer [9] or by an instrument [3]. The sensors used in this system are:
1.   Light Sensor (LDR (Light Dependent Resistor))
2.   Humidity Sensor
3.   Temperature Sensor

3.2 Analog to Digital Converter
             In the physical world, parameters such as temperature, pressure, humidity, and velocity are analog quantities. A physical quantity is first converted into an electrical signal. We then need an analog-to-digital converter (ADC), an electronic circuit that converts continuous signals into discrete form so that the microcontroller can read the data. Analog-to-digital converters are the most widely used devices for data acquisition [7].

Fig. 2 Getting data from the analog world
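For illustration only, the scaling from a raw ADC count back to a physical value might look like the following Python sketch; the resolution, reference voltage and sensor sensitivity used here are assumptions, not values taken from the paper.

def adc_to_voltage(count, vref=5.0, bits=8):
    # map a raw ADC count (0 .. 2**bits - 1) onto the 0 .. vref range
    return count * vref / (2**bits - 1)

def voltage_to_temperature_C(volts, mv_per_degree=10.0):
    # assumes a linear sensor outputting 10 mV per degree C (an assumption for the example)
    return volts * 1000.0 / mv_per_degree

# example: a count of 15 with a 5 V reference gives about 0.29 V, i.e. roughly 29 degrees C for the assumed sensor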

3.3 Microcontroller (AT89S51)
                  The microcontroller is the heart of the proposed embedded system [4]. It constantly monitors the digitized parameters of the various sensors and verifies them against the predefined threshold values [5]. It checks whether any corrective action is to be taken for the condition at that instant of time. In case such a situation arises, it activates the actuators to perform a controlled operation [6].

3.4 Liquid Crystal Display
                 A liquid crystal display (LCD) is a thin, flat display device made up of any number of color or monochrome pixels arrayed in front of a light source or reflector [4]. Each pixel consists of a column of liquid crystal molecules suspended between two transparent electrodes, and two polarizing filters, the axes of polarity of which are perpendicular to each other [6].

3.5 Relays
                A relay is an electrical switch that opens and closes under the control of another electrical circuit. In the original form, the switch is operated by an electromagnet to open or close one or many sets of contacts. Because a relay is able to control an output circuit of higher power than the input circuit, it can be considered to be, in a broad sense, a form of an electrical amplifier.

3.6 Power Supply Connection
                The power supply section consists of step down transformers of 230V primary to 9V and 12V secondary voltages for the +5V and +12V power supplies respectively.

4 SOFTWARE
4.1 Keil Software
                Keil Micro Vision is an integrated development environment used to create software to be run on embedded systems (like a microcontroller). It allows for such software to be written either in assembly or C programming languages and for that software to be simulated on a computer before being loaded onto the microcontroller.

4.1.1   Device Database: A unique feature of the Keil µVision3 IDE is the Device Database, which contains information about more than 400 supported microcontrollers. 

4.1.2   Peripheral Simulation: The µVision3 Debugger provides complete simulation for the CPU and on-chip peripherals of most embedded devices.

4.2 Programmer
                The programmer used is a powerful programmer for the Atmel 89 series of microcontrollers, which includes the 89C51/52/55, 89S51/52/55 and many more. The major parts of this programmer are the serial port, the power supply and the firmware microcontroller. Serial data is sent and received through a 9-pin connector and converted to/from TTL logic/RS232 signal levels by a MAX232 chip [8]. A male-to-female serial port cable connects to the 9-pin connector of the hardware on one side and to the back of the computer on the other.

4.3 Proload Programming Software
               ProLoad is a software package that works as a user-friendly interface for programmer boards from Sunrom Technologies. The programmer connects to the computer's serial port (COM 1, 2, 3 or 4) with a standard DB9 male to DB9 female cable. The baud rate is 57600, the COM port is selected automatically by the Windows software, and no PC add-on card is required [5].

Read More: Click here...

38
Electronics / Association Rule Mining on Distributed Data
« on: February 18, 2012, 01:50:32 am »
Quote
Author : Pallavi Dubey
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract - Applications requiring large-scale data processing face two major problems: first, huge storage and its management, and second, processing time, which grows with the amount of data. Distributed databases solve the first problem to a great extent, but the second problem increases. Since the current era is one of networking and communication, and people are interested in keeping large data sets on networks, researchers are proposing various algorithms to increase the throughput of output data over distributed databases. In this research, I propose a new algorithm that processes large amounts of data at the various servers and collects on the client machine only as much of the processed data as the user requires. The data is kept in XML format, which allows it to be processed further if needed.

A local copy of searched data is provided to users who require it again; this allows a proxy server to be built where frequently searched items can be kept together with the frequency of their access. This not only provides fast access to the data but also maintains a list of frequently accessed data.

For accessing the data from the various servers, there are several methods, such as mobile agents, direct networked access, client-server techniques, etc. I have used a multithreaded environment to map the various distributed servers and collect data. For processing the data at the server end, the Apriori algorithm has been applied to obtain the outputs, which are then sent to the client. At the client, data from the various servers is collected and a list of uncommon data is created, which is then converted into XML format. If the search is successful, the user is allowed to store the result locally or at the proxy server, which reduces the future processing time of the same search. In this paper, an optimized distributed association rule mining algorithm for geographically distributed data is used in a parallel and distributed environment so that it reduces communication costs. The response time is calculated in this environment using XML data.

Keywords - Association rules, Apriori algorithm, parallel and distributed data mining, Multiprocessing Environment, XML data, response time.

1.    INTRODUCTION
Association rule mining (ARM) has become one of the core data mining tasks and has attracted tremendous interest among data mining researchers. ARM is an undirected, unsupervised data mining technique which works on variable-length data and produces clear and understandable results. Two dominant approaches for utilizing multiple processors have emerged: distributed memory, in which each processor has a private memory, and shared memory, in which all processors access common memory [5]. The shared memory architecture has many desirable properties: each processor has direct and equal access to all memory in the system, and parallel programs are easy to implement on such a system. In the distributed memory architecture, each processor has its own local memory that can only be accessed directly by that processor [10]. For a processor to access data in the local memory of another processor, a copy of the desired data element must be sent from one processor to the other through message passing. XML data are used with the optimized distributed association rule mining algorithm.
A parallel application can be divided into a number of tasks and executed concurrently on different processors in the system [9]. However, the performance of a parallel application on a distributed system depends mainly on the allocation of the tasks comprising the application onto the available processors in the system. In different kinds of information databases, such as scientific, medical, financial and marketing transaction data, analyzing and finding critical hidden information has been a focus area for data mining researchers. For effectively analyzing these data and extracting the critical hidden information, data mining has been the most widely discussed and frequently applied technique in recent decades. Although data mining has been successfully applied in scientific analysis, business applications and medical research, and its computational efficiency and accuracy keep improving, manual work is still required to complete the extraction process. Among the several data mining models, including association rules, clustering and classification, the association rule mining model is the most widely applied method. The Apriori algorithm is the most representative algorithm for association rule mining, and many modified algorithms derived from it focus on improving its efficiency and accuracy. For the purpose of simulation, I have employed an industrial database to assess the proposed algorithm. The rest of this study is organized as follows. Section 2 briefly presents the general background, while the proposed method is explained in Section 3. Sections 4 and 5 illustrate the computational results on the industry database. The concluding remarks are finally made in Section 6.

2. LITERATURE REVIEW
Association Rule Mining: In data mining, association rule learning is a popular and well-researched method for discovering interesting relations between variables in large databases. It analyzes and presents strong rules discovered in databases using different measures of interestingness. Based on the concept of strong rules, Agrawal et al. introduced association rules for discovering regularities between products in large-scale transaction data recorded by point-of-sale (POS) systems in supermarkets.

For example, a rule found in the sales data of a supermarket might indicate that if a customer buys onions and potatoes together, he or she is likely to also buy burgers. Such information can be used as the basis for decisions about marketing activities such as promotional pricing or product placement. In addition to the above example from market basket analysis, association rules are employed today in many application areas including Web usage mining, intrusion detection and bioinformatics. Three parallel algorithms for mining association rules, an important data mining problem, are formulated in [3]. These algorithms have been designed to investigate and understand the performance implications of a spectrum of trade-offs between computation, communication, memory usage, synchronization, and the use of problem-specific information in parallel data mining [11]. Fast Distributed Mining of association rules generates a small number of candidate sets and substantially reduces the number of messages to be passed while mining association rules [4].

Algorithms for mining association rules from relational data have been well developed. Several query languages have been proposed to assist association rule mining, such as [12], [13]. The topic of mining XML data has received little attention, as the data mining community has focused on the development of techniques for extracting common structure from heterogeneous XML data. For instance, [14] has proposed an algorithm to construct a frequent tree by finding common subtrees embedded in heterogeneous XML data. On the other hand, some researchers focus on developing a standard model to represent the knowledge extracted from the data using XML. JAM [15] has been developed to gather information from sparse data sources and induce a global classification model. The PADMA system [16] is a document analysis tool working in a distributed environment, based on cooperative agents. It works without any relational database underneath; instead, PADMA agents perform several relational operations with the information extracted from the documents.
 
ASSOCIATION RULE MINING ALGORITHMS
An association rule is a rule which implies certain association relationships among a set of objects (such as ``occur together'' or ``one implies the other'') in a database. Given a set of transactions, where each transaction is a set of literals (called items), an association rule is an expression of the form X => Y, where X and Y are sets of items. The intuitive meaning of such a rule is that transactions of the database which contain X tend to also contain Y. Association rule mining (ARM) is a data mining technique used to extract hidden knowledge from datasets, which can be used by an organization's decision makers to improve overall profit [2].

2.1 Apriori Algorithm
An association rule mining algorithm, Apriori, has been developed for rule mining in large transaction databases by IBM's Quest project team [4]. An item set is a non-empty set of items.
They have decomposed the problem of mining association rules into two parts:
1.   Find all combinations of items that have transaction support above minimum support. Call those combinations frequent item sets.
2.   Use the frequent item sets to generate the desired rules. The general idea is that if, say, ABCD and AB are frequent item sets, then we can determine whether the rule AB => CD holds by computing the ratio
                     r = support (ABCD)/support (AB).
The rule holds only if r >= minimum confidence. Note that the rule will have minimum support because ABCD is frequent. The algorithm is highly scalable [8].
The Apriori algorithm used in Quest for finding all frequent item sets
is given below.
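The Quest listing itself is in the full paper; purely as an illustration of the two phases described above, a minimal Python sketch (our own naming, not the Quest code) could look like this:

from itertools import combinations

def apriori(transactions, min_support, min_confidence):
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    # Phase 1: grow frequent item sets level by level
    singletons = {frozenset([i]) for t in transactions for i in t}
    frequent = {}
    level = {s for s in singletons if support(s) >= min_support}
    k = 1
    while level:
        frequent.update({s: support(s) for s in level})
        k += 1
        candidates = {a | b for a in level for b in level if len(a | b) == k}
        level = {c for c in candidates if support(c) >= min_support}

    # Phase 2: generate rules X => Y with confidence = support(X u Y) / support(X)
    rules = []
    for itemset in (s for s in frequent if len(s) > 1):
        for r in range(1, len(itemset)):
            for lhs in map(frozenset, combinations(itemset, r)):
                confidence = frequent[itemset] / support(lhs)
                if confidence >= min_confidence:
                    rules.append((set(lhs), set(itemset - lhs), confidence))
    return frequent, rules

# e.g. apriori([{"onion", "potato", "burger"}, {"onion", "potato"}, {"milk"}], 0.5, 0.8)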

Read More: Click here...

39
Quote
Author : Shachi Awasthi, Anupam Dubey, Dr. J.M.Kellar, Dr. P.Mor
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

1   INTRODUCTION                                                                     
WITH the expansion of urbanization and population, the energy requirement is increasing day by day, which leads to the exploitation of renewable energy resources such as the sun and wind; among these, solar energy has a vast potential to fulfill the energy needs.
The terrestrial solar radiation is very important data for evaluating the performance of a solar energy conversion system. One can use an electronic integrator for total radiation measurement. The principle of the electronic integrator is based on the use of the voltage and current output of the solar cell. A typical solar radiation measuring station usually installs the pyranometer quite far from the integrator. Since the EMF output signal from the pyranometer is very small, at the microvolt level, more noise coupling results. The insolation value is also printed out locally. To build an insolation database for a wide area, many stations have to be installed, which makes it difficult to collect the data. The proposed work describes an alternative method: connecting the solar cell to a high-resolution analog-to-digital converter and using computer software along with a memory card for computing the insolation, including the use of an internet server for sending the daily data to the receiver.
From the latest research it would appear that solar, wind or biomass would be sufficient to supply all of our energy needs; however, the increased use of biomass has had a negative effect on global warming and has dramatically increased food prices by diverting forests and crops into biofuel production. As intermittent resources, solar and wind raise other issues.

Development of a suitable solar irradiance measurement system with additional features such as remote monitoring, real-time capture and a facility to back up and store the data is thus essential.

2 SYSTEM  MODEL
2.1 Overview
As seen from the figure above, our measurement unit consists of a PIN photodiode as the sensor, whose readings are digitized using the 10-bit analog-to-digital converter built into the AVR ATmega16 microcontroller. The unit takes the sample, converts the digital reading into energy per unit area using the formula described in Section 2.3.3, and the value is then passed to the TCP/IP stack through the ENC28J60 Ethernet module. The microcontroller unit here acts as a data acquisition device, converting the analog reading into meaningful data to be transferred to the Ethernet gateway.

2.2  Microcontroller unit as a DAQ (Data acquisition System)
Transducers, a common component of any data acquisition system, convert physical phenomena, such as strain or pressure, into electrical signals that can be acquired by a data acquisition (DAQ) device. Common examples of transducers include microphones, thermometers, thermocouples, and strain gauges. When selecting a transducer for use with a DAQ device, it is important to consider the input and output range of the transducer and whether it outputs voltage or current. Often, the sensor and DAQ device require signal conditioning components to be added to the system to acquire a signal from the sensor or to take full advantage of the resolution of the DAQ device. However, the transducer's output impedance is commonly overlooked as a vital consideration when building a DAQ system.
Impedance is a combination of resistance, inductance, and capacitance across the input or output terminals of a circuit. Figure 1 models the resistive output impedance of a transducer and the resistive input impedance of a DAQ device. Realistically, capacitance and inductance are also present in all DAQ systems. It is important that the input impedance of the DAQ device is much higher than the output impedance of the selected transducer. In general, the higher the input impedance of the DAQ device, the less the measured signal will be disturbed by the DAQ device. It is also important to select a transducer with as low an output impedance as possible to achieve the most accurate analog input (AI) readings by the DAQ device. The following sections address how a high output (source) impedance affects a measurement system and how to use a unity gain buffer, or voltage follower, to decrease the output impedance of a sensor.
 
Figure 1. Model of a Typical Transducer and DAQ Device

2.3 Using a Unity Gain Buffer to Decrease Source Impedance
When you can neither use a transducer with a low output impedance nor reduce the sampling rate of the DAQ device, you must use a voltage follower that employs operational amplifiers (op-amps) with unity gain (gain = 1) for each high-impedance source before connecting to the DAQ device. This configuration is commonly referred to as a unity gain buffer, and it decreases the impedance of the source connected to the DAQ device. A power supply is required to provide +/- 5 V to the op-amp, and the power supply should be referenced to the analog input ground (AIGND) of the DAQ device.
 
Figure 2. Unity gain buffer ahead of the ADC to decrease the source impedance. Here an LM358 is used as the unity gain buffer, a BPW34 PIN photodiode as the solar irradiance sensor, and R1 as the shunt resistor.

3   MEASUREMENT AND READINGS
A silicon PIN photodiode is used as the irradiance transducer. The photodiode used has a photosensitive area of 7.5 mm². Below is the characteristics table of the BPW34 photodiode. Readings taken with a standard ammeter show a photodiode current of 3.337 +/- 0.116 mA at an ambient temperature of 25 °C when the device is exposed to a solar irradiance of 1000 W/m². In order to keep the readings in a voltage range, a shunt resistance is used to calibrate the circuit. Our goal is to make full sunlight give a 100 mV reading on the digital meter (full sunlight is about 1000 W per square meter), so our meter will read 1 mV per 10 W/m².
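Assuming that calibration (1 mV of shunt voltage per 10 W/m²), the conversion from a raw ADC reading to irradiance reduces to a simple scaling; the sketch below is illustrative Python, not the AVR firmware, and the ADC reference voltage and resolution are assumptions.

def adc_to_irradiance(adc_count, vref_volts=2.56, bits=10):
    # shunt voltage recovered from the raw ADC count (reference and resolution assumed)
    v_shunt_mV = adc_count * vref_volts * 1000.0 / (2**bits - 1)
    # 1 mV of shunt voltage corresponds to 10 W/m2 by the calibration above
    return v_shunt_mV * 10.0

# e.g. a count of 40 with a 2.56 V reference gives about 100 mV, i.e. roughly 1000 W/m2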

Read More: Click here...

40
Quote
Author : T. Nirmal Raj, S. Saranya, S. Arul Murugan, G. Bhuvaneswari
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract - We propose a novel method of message security using trust-based multi-path routing. Simulation results, coupled with theoretical justification, affirm that the proposed solution is much more secure than traditional multi-path routing algorithms. We propose a method to securely route messages in an ad-hoc network using multi-path routing and the trustworthiness of the nodes. Hence, we aim at addressing the issues underlying message confidentiality, message integrity and access control. We combine multi-path routing and trust with soft encryption technology to propose a scheme which is much more secure than traditional multi-path algorithms. By soft encryption, we mean methods that still provide encryption but are more efficient in terms of performance and require fewer resources.

Keywords: Trust, Misbehaving nodes, soft encryption, Dynamic Source Routing, Multi-path routing.
 
1. INTRODUCTION

A mobile ad hoc network (MANET) is a kind of wireless ad hoc network: a self-configuring network of mobile routers (and associated hosts) connected by wireless links, the union of which forms an arbitrary topology. The routers are free to move randomly and organize themselves arbitrarily; thus, the network's wireless topology may change rapidly and unpredictably. Mobile ad hoc networks are composed of a set of stations (nodes) communicating through wireless channels, without any fixed backbone support. With the advancement in radio technologies like Bluetooth, IEEE 802.11 or HiperLAN, a new concept of networking has emerged which makes wireless networks increasingly popular in the computer industry. This is particularly true within the past decade, which has seen wireless networking being adapted to enable mobility.

Subsequently, a number of different protocols have been proposed as routing solutions for mobile ad hoc networks. These routing techniques are classified as proactive, reactive and hybrid routing protocols. Reactive routing protocols have been found to be user friendly and efficient when compared with the other routing protocols. Their main advantage over proactive and hybrid routing protocols lies in their relatively low storage requirements, support for higher mobility, and the availability of routes when needed. There are a variety of reactive routing protocols such as AODV, DSR, LAR1, LMR, ABR, SSI, TORA, RDMAR, MSR, AOMDV, MRAODV and ARA. Most of the multipath routing protocols, like AOMDV, MP-OLSR and MP-DSR, are extensions of unipath protocols like AODV, OLSR and DSR. Among these protocols, we use DSR in this paper. DSR is a next-generation pure reactive routing protocol for MANETs. It was proposed for the first time by Johnson and Maltz [5] in order to provide routing with minimum overhead while adapting to the network dynamics.

DSR is undergoing fast evolution thanks to the many optimizations integrated into it. DSR is based on a pure reactive approach and operates using two simple and complementary mechanisms: route discovery and route maintenance. In this paper we propose a novel method of message security using trust-based multi-path routing: a method to securely route messages in an ad-hoc network using multi-path routing and the trustworthiness of the nodes. We aim at addressing the issues underlying message confidentiality, message integrity and access control.

2. RELATED WORKS

Security in MANETs has been a topic of much discussion in the last few years. There are plenty of works available in the literature that discuss security in MANETs, but efficiently providing complete message security in such networks still remains an open issue.

Much research work has been done to make the route discovered by Dynamic Source Routing (DSR) secure. A trust-based multi-path DSR protocol is proposed by Poonam et al. [11], which uses a multi-path forwarding approach. In this approach, each node forwards a RREQ if it is received from a different path. This method detects and avoids misbehaving nodes which were previously included due to a vulnerability in DSR route discovery. In the traditional DSR protocol [5], when a node receives a RREQ packet it checks whether it has previously processed it and, if so, drops the packet. A misbehaving node takes advantage of this vulnerability and forwards the RREQ fast, so that the RREQs from other nodes are dropped and the discovered path includes itself. In their protocol, each node broadcasts the packet embedding trust information about the node from which the packet was received. At the source node, a secure and efficient route to the destination is calculated as a weighted average of the number of nodes in the route and their trust values.

All the existing models have one or more of the following limitations. Most of the methods use the traditional DSR request discovery model, in which a node drops a RREQ packet if it has previously processed it. A misbehaving node takes advantage of this and forwards the RREQ packet fast, so that the RREQs received from other nodes, which arrive later, are dropped and the discovered path includes itself. Most of the trust-based routing protocols have used a forward trust model to find the path from source to destination. In this model, trust is embedded only in the RREQ packet when it is forwarded, so each node evaluates only its previous node and the source node evaluates all the nodes involved in the path. But we believe that trust is asymmetric, so mutual trust information should be used. In the watchdog and pathrater approach, the trust values are not updated based on node behavior; rather, they are updated periodically. Such periodic updates are not able to identify the misbehaving nodes, and therefore the discovered path includes misbehaving nodes. All of these possible vulnerabilities have been taken care of in [11]. The authors have designed a secure routing protocol, called the trust-based multi-path DSR protocol, which depends on a two-way effort of the nodes by embedding trust to find an end-to-end secure route free of misbehaving nodes. This protocol has a drawback: the routing overhead is very high compared to traditional DSR due to the broadcasting of RREQ packets. The other drawback is that all the one-hop neighbors of the destination, after receiving the first RREQ, propagate it to the destination and also among themselves. This results in the RREQ packets from most of the other paths to the destination node being discarded.

3. GLOMOSIM

Glomosim is a library-based sequential and parallel simulator for wireless networks.  This has been developed using PARSEC, a C-based parallel simulation language.  Glomosim can be modified to add new protocols and applications to the library.  Therefore Glomosim is a good choice for implementing the different traffic sources.

4. TRUST BASED MULTI-PATH ROUTING WITH SOFT ENCRYPTION TECHNOLOGY

We propose a method to securely route messages in an ad-hoc network using multi-path routing and the trustworthiness of the nodes. Hence, we aim at addressing the issues underlying message confidentiality, message integrity and access control. We divide the message into different parts and encrypt these parts using one another. We then route these parts separately using different paths between a pair of source-destination nodes. An intermediate node can access different parts on the basis of its trustworthiness. That is, a more trusted node is allowed to feature in more paths than a less trusted node and hence gains access to more message parts. This feature allows the routing algorithm to avoid nodes that are more likely to attempt 'breaking' the encryption. In addition, suspected nodes which have high computation power, and are hence likely to be more successful in cryptanalysis, can be given fewer parts to stymie their plans. Since the establishment of trust also requires cryptographic key exchange, we use a soft approach to trust. Trust levels of peer nodes of the network are found using an effort-return based trust model. We use a variation of the model, which uses a combination of derived trust and reputation to estimate the trust value of a node.
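A toy Python sketch of the two core ideas, splitting the message into mutually 'encrypting' parts and handing more parts to more trusted paths, is given below; the XOR chaining, the trust metric and all names are illustrative assumptions, not the scheme's exact construction.

def split_and_soft_encrypt(data: bytes, n_parts: int):
    """Split the message into n_parts and XOR each part with the previous one,
    so that no single part reveals plaintext on its own (illustrative only)."""
    size = -(-len(data) // n_parts)                       # ceiling division
    parts = [bytearray(data[i*size:(i+1)*size].ljust(size, b"\0")) for i in range(n_parts)]
    for i in range(1, n_parts):
        parts[i] = bytearray(a ^ b for a, b in zip(parts[i], parts[i-1]))
    return parts

def assign_parts_to_paths(parts, paths, trust):
    """Order paths by the trust of their least trusted node and deal the parts out,
    so the most trusted paths are used first (a real scheme would weight by trust)."""
    ranked = sorted(paths, key=lambda p: min(trust.get(node, 0.0) for node in p), reverse=True)
    assignment = {i: [] for i in range(len(ranked))}
    for idx, part in enumerate(parts):
        assignment[idx % len(ranked)].append(part)
    return ranked, assignment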

We combine multi-path routing and trust with soft encryption technology to propose a scheme which is much more secure than traditional multi-path algorithms. Networks using the DSR protocol have been connected to the Internet. DSR can interoperate with Mobile IP, and nodes using Mobile IP and DSR have seamlessly migrated between WLANs, cellular data services, and DSR mobile ad hoc networks. The DSR protocol offers easily guaranteed loop-free routing, support for use in networks containing unidirectional links, use of only "soft state" in routing, and very rapid recovery when routes in the network change. This is the reason for preferring the DSR protocol.

Read More: Click here...

41
Quote
Author : Jožica Bezjak A. Professor
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract—The aim of this research was to analyse the inhibition of internal passivation by changing the chemical composition of silver alloys and to estimate the concentration boundaries of the selected microalloying element at which passivation is still inhibited. Since the ability of inoculation or modification is mostly based on the large free energy of formation of the oxides of microalloying elements and on their crystallographic similarity, Mg in quantities of 0.001 to 0.5 mass% was chosen as the microalloying element for the Ag-based alloys in addition to the main alloying element (Zn). For the Ag-5.8%Zn alloy, the boundary conditions of the inhibition of passivation were established. For MgO concentrations below 0.005 vol.%, not enough nuclei are formed. For too high a concentration of MgO (above 1.2 vol.%), defect microstructures are formed which are characteristic of MgO precipitation. This produces local or general passivation. By changing the microalloying composition, the internal passivation of the Ag-5.8%Zn alloy could not only be inhibited, but the internal oxidation could even be enhanced; under some microalloying conditions much greater depths of internal oxidation were reached.

Keywords – Ag-based alloys, Passivation, oxidation kinetics, alloys, microalloying, internal oxidation.

1   INTRODUCTION
Passivation is a phenomenon in which an oxide barrier is formed at a certain depth below the surface of the investigated material. The barrier formed is thick enough that further progress of oxidation through the material is no longer possible. In internally oxidised alloys, passivation has been observed for certain conditions of main-element concentration, partial pressure and temperature of oxidation [1 to 8]. We attempt to study the inhibition of passivation by microalloying. With the addition of microalloying elements (0.001 to 0.5 mass%) with a very high free formation energy of their oxides, relative to the oxide of the main alloying element, we tried to inhibit this phenomenon and to enable an undisturbed process of oxidation, as well as growth of the internal oxidation zone.
Our previous investigations [6 to 8] established that Mg, Si, Ti, Al, Zr and Be are important microalloying elements for Ag-Zn, Ag-In and Ag-Sn alloys. These microalloying elements have a large affinity for oxygen and therefore oxidise at lower partial pressures (concentrations) of oxygen in silver than the main alloying elements (Zn, In, Sn, etc.). Their oxides form the nuclei for the growth of the oxide of the main alloying element. Therefore, their distribution in the metal matrix also determines the distribution of the main alloying element oxide (Fig. 1) [1, 2].

 
Fig. 1  Scheme of the concentration profile of oxygen and alloying elements at the beginning of the internal oxidation zone.

This heterogeneous nucleation is very effective within a defined concentration interval of the microalloying element.
The aim of our research was to analyse the mechanism by which the formation of passivation in Ag-Zn alloys is inhibited, by changing the fraction of the main alloying element as well as the fraction of the microalloying element (Mg), and to estimate the range of concentrations over which the microalloying element Mg is still effective as a modifier.

2   EXPERIMENTAL
The different alloys (Table 1) were prepared in evacuated (to 10⁻² bar) quartz tubes in an induction furnace. After 30 hours of homogenisation and annealing at 800 °C, the alloys were hot forged and recrystallization annealed. Finally, they were cold rolled into 3 mm thick strips, which were then ground and etched in a 10 % solution of HNO3.


Zn, mass% (at%)   | ZnO, mass% (vol.%) | Mg, mass% (at%)  | MgO, mass% (vol.%) | Alloy designation
6.3 (9.97)        | 7.74 (13.73)       | -                | -                  | 2S1
-                 | -                  | 0.045 (0.20)     | 0.08 (0.217)       | 2S0
5.8 (9.18)        | 7.12 (12.64)       | 0.51 (2.29)      | 0.85 (2.46)        | 3S1
5.8 (9.18)        | 7.12 (12.64)       | 0.29 (1.31)      | 0.48 (1.39)        | 3S2
5.0 (7.91)        | 6.14 (10.89)       | 0.25 (1.18)      | 0.43 (1.20)        | 1S4
5.8 (9.18)        | 7.12 (12.64)       | 0.21 (0.95)      | 0.35 (1.01)        | 2S2
5.8 (9.18)        | 7.12 (12.64)       | 0.065 (0.29)     | 0.108 (0.31)       | 2S3
5.8 (9.18)        | 7.12 (12.64)       | 0.012 (0.05)     | 0.02 (0.06)        | 3S3
5.0 (7.91)        | 6.14 (10.89)       | 0.001 (0.005)    | 0.002 (0.005)      | 1S6
5.8 (9.18)        | 7.12 (12.64)       | <0.002 (0.009)   | 0.003 (0.009)      | 2S4
5.8 (9.18)        | 7.12 (12.64)       | 0.001 (0.005)    | 0.002 (0.005)      | 3S4

Table 1 Chemical composition of the investigated Ag-based alloys

The tests of internal oxidation kinetics were made at different temperatures (in the range between 750 °C and 850 °C) and for different times (from 5 to 140 hours) in a tube furnace in which air flow circulation was assured.
In most cases the oxidising atmosphere was air; however, some tests in an oxidising atmosphere of oxygen (with a partial pressure of 1.01 × 10⁵ Pa) were also performed.
Microstructural changes in the zone of internal oxidation were observed by optical microscopy (Leitz Wetzlar, Germany and Nikon Microphot–FXA, Japan) and scanning electron microscopy (SEM/WDX analyser Jeol JSM 840 A). Volume and energy changes, as well as the progress of internal oxidation, were measured by thermogravimetric analysis (TGA, Mettler TG 50, Germany). Applying Wagner's theory [1 to 3], the changes in the system kinetics were calculated.
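For reference, a commonly quoted form of Wagner's expression for the depth ξ of the internal oxidation zone, under the usual assumptions (oxygen diffusion control, dilute solute, no external scale), which is the kind of relation applied here, is

\xi(t) = \sqrt{\frac{2\, N_O^{(s)}\, D_O\, t}{\nu\, N_{Zn}^{(0)}}}

where N_O^{(s)} is the oxygen mole fraction at the surface, D_O the diffusivity of oxygen in the silver matrix, N_{Zn}^{(0)} the initial mole fraction of the solute (Zn), and ν the number of oxygen atoms per solute atom in the oxide.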

Read More: Click here...

42
Quote
Author : Akhila S, Jayanthi K Murthy, Arathi R Shankar, Suthikshn Kumar
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract- Wireless communication of the future will comprise several heterogeneous networks whose access technologies vary to a large extent in network capacity, data rates, bandwidth, power consumption, received signal strength and coverage areas. With their complementary characteristics, the integration of these networks to offer overlapping coverage to mobile users poses many interesting research challenges in bringing about anytime, anywhere connectivity. The best of these networks, with their varying characteristics, can be exploited through a process called vertical handoff. Vertical handoff is the seamless transfer of an ongoing user session between these networks and requires accurate and precise decisions about the availability of the networks and their resources for connection. A good handoff decision should avoid unwanted handoffs, which lead to an increased computational load, and should not miss making a handoff, which leads to an ongoing service being dropped and to packet loss. Many techniques for vertical handoff have been proposed in the literature based on several parameters, but there still exists some ambiguity as to which of these parameters gives optimum performance. This paper aims at providing an account of the various policies developed in the decision phase of the vertical handoff.

Index Terms-  Heterogeneous Networks, Mobility Management, Vertical handoff, handoff decision.
 
I.   INTRODUCTION
The main attraction of wireless communication lies in the ability to communicate and exchange information on the move. The demand for the available services anytime, anywhere is accelerating at a very high rate, which calls for an integration of the various wireless access technologies. With the current technologies varying widely in their bandwidths, latencies, frequencies and access methods, the next-generation systems will allow global roaming among a range of mobile access networks.
This calls for a seamless transfer of the Mobile Terminal (MT) to the best access link among all available candidates with no perceivable interruption to an ongoing conversation [1]. It should also provide end-to-end optimization that takes into account variables such as throughput optimization, routing optimization, delay profiles and economic profitability. The actual trend is to integrate complementary wireless technologies with overlapping coverage, to provide the expected ubiquitous coverage and to achieve the Always Best Connected (ABC) advantage [2]. The Always Best Connected concept should enable a user to choose among a host of networks the one that best suits his or her needs and to change when something better becomes available. It requires a framework that supports mobility management, access discovery and selection, authentication, security and a profile server. This calls for an efficient vertical handoff (VHO) decision scheme, which involves a trade-off among several handoff parameters such as network conditions, system performance, application types, power requirements, mobile node conditions, user preferences, security, cost and the Quality of Service (QoS). These parameters may have varying levels of importance in the decision process [3]. Also, the handoff solution should be network-layer-transparent and infrastructure-modification-free, so that existing Internet server and client applications can painlessly survive the rapid pace of wireless technology evolution [4].

       Fig. 1: Horizontal and vertical handoff
The handoffs are classified into two main streams, Horizontal Handoff (HHO) and Vertical Handoff (VHO). Figure 1 illustrates horizontal and vertical handoff. The main distinction between vertical handoff and horizontal handoff is symmetry.

                     | VHO                      | HHO
Access Technology    | Changed                  | Not changed
QoS Parameters       | May be changed           | Not changed
IP Address           | Changed                  | Changed
Network Interface    | May be changed           | Not changed
Network Connection   | More than one connection | Single connection
Table 1: Differences between Vertical and Horizontal Handoff.
While HHO is symmetric or an intra-technology based process, VHO is an asymmetric or an inter-technology based process in which the MT moves between two different networks with different characteristics [5]. The vertical handoff process involves three main phases [6] [7], namely system discovery phase, decision phase and execution phase.
During the system discovery phase, the MT scans for available candidate networks for connection; this may include gathering several parameters such as the supported data rates and QoS parameters. This phase needs to be invoked periodically, since the users are mobile.
In the decision phase, the mobile terminal determines whether the connections should continue using the existing network or be switched to another network, depending on various parameters such as the type of application (e.g., conversational, streaming), minimum bandwidth available, delay constraints, cost, transmit power and the user's preferences.
In the execution phase, the connections of the mobile terminal are handed over to the new network in a seamless manner. Authentication, authorization, and transfer of the user's information are done during this phase.
The handover discovery and decision phases can sometimes overlap, since some situations may require additional probing of the network condition. The delay in the handoff process can be divided into three main components [10].
Discovery Time (td): During this period, the mobile terminal perceives its new wireless network range, either through trigger-based router solicitation or by waiting to receive a router advertisement (RA) from an access router in the visited network.
Address Configuration Interval (tc): During this period, the mobile device receives the router advertisement and updates its routing table. A new care-of address (CoA) will be formed based on the prefix of the new router, obtained from the RA.
Network Registration Period (tr): This is the period during which the binding update (i.e., the association of the home address with a care-of address) is sent to the home agent as well as to the correspondent node, and the first packet from the correspondent node is received. Since the binding acknowledgment from the correspondent node is optional, optimizing the IP-level vertical handoff delay involves minimizing the discovery time and the network registration period. The decision phase is the most important phase in VHO since it determines how beneficial the handoff is to the user. Extensive research is needed to find accurate and precise decision techniques, which may include one or more parameters. The objective of this paper is to show how decision parameters or policies affect VHO. A brief survey of the various decision-making techniques used has been provided.
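One of the simplest decision-phase policies surveyed in the literature is a weighted cost (or score) function over normalised attributes, with a hysteresis margin to avoid ping-pong handoffs. The Python sketch below illustrates that idea only; the attribute names, weights and margin are placeholders, not values from any of the cited schemes.

WEIGHTS = {"rss": 0.35, "bandwidth": 0.25, "monetary_cost": 0.20, "power": 0.20}

def score(network):
    # network: dict of attributes already normalised to [0, 1], larger = better
    return sum(WEIGHTS[name] * network[name] for name in WEIGHTS)

def handoff_decision(current, candidates, hysteresis=0.05):
    # hand off only if a candidate beats the current network by a clear margin
    best = max(candidates, key=score)
    return best if score(best) > score(current) + hysteresis else current

# example: wlan = {"rss": 0.9, "bandwidth": 0.8, "monetary_cost": 1.0, "power": 0.6}
#          cellular = {"rss": 0.7, "bandwidth": 0.4, "monetary_cost": 0.3, "power": 0.8}
#          handoff_decision(cellular, [wlan, cellular]) selects the WLAN here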

Read More: Click here...

43
Quote
Author : Tonye K. Jack
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract and Program Objective — The familiarity and user friendliness of the Microsoft Excel™ spreadsheet environment allow the practicing engineer to develop engineering desktop companion tools to carry out routine calculations. A multitask, single-screen gas pipeline sizing calculation program is developed in Microsoft Excel™. The required equations and data sources for such a development are provided.

Index Terms— Isothermal pipeline design, pipe sizing, piping program, gas pipelines, engineering on spreadsheet, spreadsheet solutions.

1   INTRODUCTION                                                                     
Gas pipelines are employed for meeting various energy needs. Calculations for the design of such gas piping can often involve repetitive computations, whether for simple horizontal straight pipelines or for pipelines over complex terrain. Advances in computer applications for piping design have created several off-the-shelf canned programs, for which cost might be a limitation to their use for certain quick-check calculations. Microsoft Excel™, with its Visual Basic for Applications (VBA) automation tool, can be used to develop a multi-functional, single-screen desktop tool to carry out such calculations.

2 REQUIRED GENERAL EQUATIONS FOR ISOTHERMAL FLOW
Pipe cross-sections are of the circular type, for which the applicable relations are:

Reynolds number:   Re = ρVD/μ      (1)

Velocity:   V = G/(γA)      (2)

Area:   A = πD²/4      (3)

Friction factor:

For laminar flow,   f = 64/Re      (4)

For turbulent flow, f is obtained from the Colebrook-White equation; the method of solution described in [1] uses the Goal Seek option in Microsoft Excel™.

1/√f = −2 log₁₀[ (ε/D)/3.7 + 2.51/(Re√f) ]      (5)

Flow rate:
G = γAV      (6)
      Where,
γ = P/(RT) = ρg       (6a)

The general relation for evaluating such gas lines, for isothermal flow in a horizontal pipe, is given by equation (7):

P1² − P2² = (G²RT / (gA²)) · [ fL/D + 2 ln(P1/P2) ]      (7)
   
3   FLUID PROPERTIES FUNCTIONS
A database of the physical properties of typical piped gases can be developed using the Microsoft Excel™ Functions category. The developed functions are then available as drop-down lists in the Functions option of the toolbar INSERT menu. Yaws [2], [3], [4] provides density and viscosity data as functions of temperature.

As an example, the curve-fitted gas viscosity relationship derived in [5] for methane (CH4), as a function of temperature, is:

μ = A + B·T + C·T²          (9)

Where A = 15.96, B = 0.3439, C = −8.14 × 10⁻⁵.
The unit of viscosity here is the micropoise, which can be converted to N·s/m² by multiplying by 1 × 10⁻⁷:

Thus, the revised equation is:

μ = (A + B·T + C·T²) × 10⁻⁷          (9a)

The temperature T in (9) is in kelvin (K). The program can be developed to handle temperature data in degrees Celsius (°C) with a built-in conversion option.
The ALIGNAgraphics [6] structured naming convention for the fluid property functions, described in [1], is applied, i.e.

Name of property_ (temperature)

For Methane:    rhoMethane (temperature)
      viscoMethane (temperature)

Where rhoMethane, and viscoMethane are the function names for Methane gas density and gas viscosity respectively. The program developer could also adopt the chemical formula of the fluid type, particularly in cases of long fluid property names as in some hydrocarbons. Thus, using Methane as example, the following gas density function name will apply:

   rhoCH4(temperature).

4   APPLICATION EXAMPLE
Carbon dioxide flows isothermally at 30 °C through a horizontal 250 mm diameter pipe at the rate of 0.12 kN/s. If the pressure at section 1 is 250 kPa, find the pressure at section 2, which is 150 m downstream. Take the pipe roughness as 6 × 10⁻⁴ m.
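For readers without the spreadsheet, the same kind of calculation can be sketched in a few lines of Python using relations (1)-(7); this is a rough stand-in for the Excel™/VBA program, not the program itself, and the gas constant and viscosity are user-supplied inputs (the approximate CO2 values in the comments are ours, for illustration only).

import math

def colebrook(re, rel_roughness, guess=0.02, iters=50):
    # fixed-point iteration on the Colebrook-White equation (plays the role of Goal Seek)
    f = guess
    for _ in range(iters):
        f = (-2.0 * math.log10(rel_roughness / 3.7 + 2.51 / (re * math.sqrt(f)))) ** -2
    return f

def downstream_pressure(P1_kPa, G_kNps, D_m, L_m, T_K, R_mpK, mu_Pa_s, rough_m, g=9.81):
    A = math.pi * D_m ** 2 / 4.0
    m_dot = G_kNps * 1000.0 / g                      # mass flow rate in kg/s
    Re = 4.0 * m_dot / (math.pi * D_m * mu_Pa_s)     # Reynolds number (constant along the pipe)
    f = 64.0 / Re if Re < 2300 else colebrook(Re, rough_m / D_m)
    K = G_kNps ** 2 * R_mpK * T_K / (g * A ** 2)     # kPa^2 factor from equation (7)
    P2 = 0.9 * P1_kPa                                # initial guess
    for _ in range(100):                             # iterate equation (7) for P2
        arg = P1_kPa ** 2 - K * (f * L_m / D_m + 2.0 * math.log(P1_kPa / P2))
        if arg <= 0.0:
            raise ValueError("no isothermal solution: flow may be choked for these data")
        P2 = math.sqrt(arg)
    return P2

# For the example above the inputs would be P1 = 250 kPa, G = 0.12 kN/s, D = 0.25 m,
# L = 150 m, T = 303.15 K, roughness = 6e-4 m; R (~19.3 m/K) and mu (~1.5e-5 N·s/m²)
# for CO2 near 30 °C are approximate look-up values supplied here only for illustration.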

Nomenclature

P1   Upstream pressure (kPa)
P2   Downstream pressure (kPa)
G    Flow rate (kN/s)
g    Gravity constant (m/s²)
A    Area (m²)
f    Friction factor
D    Pipe internal diameter (m)
L    Pipe section length (m)
M    Molecular weight
γ    Specific weight (kN/m³)
ρ    Gas density (kN·s²/m⁴)
μ    Dynamic viscosity (N·s/m²)
ε    Pipe roughness (m)
V    Velocity of flow (m/s)
R    Gas constant (m/K)
T    Temperature (°C or K)

Read More: Click here...

44
Quote
Author : Ganga Reddy Tankasala
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract— This paper deals with the optimization of the fuel cost of the coal-fired generators of a modern power system. The conventional methods of solving economic load dispatch (ELD) use Newton-Raphson, Gauss and Gauss-Seidel techniques, whose computation time increases exponentially with system size. In order to overcome the drawbacks of conventional methods, artificial intelligence (AI) techniques like Genetic Algorithms (GA), Neural Networks (NN), Artificial Immune Systems (AIS) and fuzzy logic are used. One such AI technique is Artificial Bee Colony optimization (ABC), inspired by the foraging behaviour of bees. ABC is applied to ELD and compared with the other AI techniques. The results show that ABC promises the global minimum of the solution, while others may land in a local minimum.

Index Terms— Artificial bee colony, Artificial intelligent techniques, Economic load dispatch, Genetic Algorithm, Power systems

1   INTRODUCTION                                                                     
Artificial Bee Colony optimization algorithms are formulated on the basis of the natural foraging behaviour of honey bees. ABC was first developed by Karaboga. Some artificial ideas are added to construct a robust ABC. Very unlike classical search and optimization methods, ABC starts its search with a random set of solutions (the colony), instead of a single solution, just like GA. Each population member is then evaluated for the given objective function and is assigned a fitness. The best-fit solutions are retained for the next generation, while the others are discarded and compensated by a new set of random solutions in each generation. The only stopping criterion is the completion of the maximum number of cycles or generations. At the end of the cycles, the solution with the best fit is the desired solution.

Economic Load Dispatch (ELD) is one of the important optimization problems in modern Energy Management Systems (EMS). ELD determines the optimal real power settings of generating units in order to minimize the total fuel cost of thermal plants. Various mathematical programming methods and optimization techniques have previously been applied to the solution of ELD. These include the lambda iteration method, the participation factors method and gradient methods. ELD problems in practice are usually hard for traditional mathematical programming methodologies because of the equality and inequality constraints.

ABC is applied to the solution of ELD. A generating-unit-based encoding scheme is used; however, when applied to large systems, the maximum number of iterations or generations has to be increased proportionally. The solution time grows approximately linearly with problem size rather than geometrically.

2 ECONOMIC LOAD DISPATCH

2.1 Problem Formulation
The objective of Economic Load Dispatch (ELD) for a power system consisting of coal-fired thermal generating units is to find the optimal combination of power generations that minimizes the total fuel cost of generation while satisfying the specified equality and inequality constraints. The fuel cost function of each generator is modeled as a quadratic function of the generator active power (P). The minimization function A is obtained as the sum of the fuel costs Fi of all the generating units.

Min A = ∑ Fi            ∀ i ϵ (1, 2,3…, NG)                             (1)

Subjected to   
∑ PGi  = PD + Ploss        ∀ i ϵ (1, 2,3…, NG)                     (2)

PGimin ≤ PGi ≤ PGimax    ∀ i ϵ (1, 2, 3…, NG)                (3)

The fuel cost of generating unit is given by

 Fi = (ai + bi Pi + ciPi2 )                                                  (4)
Where ai, bi, ci are the cost coefficients of generating unit i, and Pi (or PGi) is the real power generation of unit i. PD is the total demand and Ploss represents the transmission losses. PGimin and PGimax are the minimum and maximum generation limits of the i-th unit.

This is a constrained optimization problem that may be solved using calculus methods that involve the Lagrange function. The necessary condition for the minimization of the fuel cost is that the incremental cost rates of all the units be equal to some undetermined value lambda (λ). Along with this condition, the equality constraint requires that the sum of the power outputs be equal to the combined power demand and losses. If transmission system losses are neglected, the equality constraint becomes: the sum of the power outputs must be equal to the total power demanded by the load. Also, the power output of each unit must be within its generation range.

2.2 Transmission system Losses
Since transmission losses are always involved in a network, they must be taken into account in order to achieve exact ELD. Using the B-coefficients method, the network losses are expressed as a quadratic function of the unit generations as

Ploss =ΣΣPi Bij Pj           ∀ i,j ∈ [1,2,3, … NG]                       (5)

In (5), Bij are called B-coefficients or loss coefficients, which are constant under certain assumed conditions. The above loss formula is known as George's formula.

3 ARTIFICIAL BEE COLONY FORAGING BEHAVIOUR
To find the optimal decision variables that optimize the objective function and satisfy the constraints, the variables are bounded to their limits. Eqn. (6) gives a function, defined in [1], that takes care of the variable bounds.
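To make the overall procedure concrete, the following Python sketch applies a minimal ABC loop (employed, onlooker and scout phases) to the ELD cost of equations (1)-(4), with a simple penalty for the power-balance constraint and losses neglected; the unit data are illustrative textbook-style coefficients, not the test system reported in the paper.

import random

# Illustrative unit data (a, b, c, Pmin, Pmax); not the paper's test system.
UNITS = [
    (561.0, 7.92, 0.001562, 150.0, 600.0),
    (310.0, 7.85, 0.001940, 100.0, 400.0),
    (78.0,  7.97, 0.004820,  50.0, 200.0),
]
DEMAND = 850.0   # MW, losses neglected in this sketch

def cost(p):
    fuel = sum(a + b * pi + c * pi * pi for (a, b, c, _, _), pi in zip(UNITS, p))
    return fuel + 1e4 * abs(sum(p) - DEMAND)         # penalty enforces the power balance

def random_solution():
    return [random.uniform(lo, hi) for (_, _, _, lo, hi) in UNITS]

def neighbour(p, food):
    q = p[:]
    j = random.randrange(len(p))
    partner = random.choice(food)
    q[j] += random.uniform(-1.0, 1.0) * (p[j] - partner[j])
    q[j] = min(max(q[j], UNITS[j][3]), UNITS[j][4])   # respect generation limits
    return q

def abc_eld(colony=20, cycles=500, limit=30):
    food = [random_solution() for _ in range(colony)]
    trials = [0] * colony
    best = min(food, key=cost)
    for _ in range(cycles):
        for i in range(colony):                       # employed bee phase
            cand = neighbour(food[i], food)
            if cost(cand) < cost(food[i]):
                food[i], trials[i] = cand, 0
            else:
                trials[i] += 1
        fitness = [1.0 / (1.0 + cost(f)) for f in food]
        for _ in range(colony):                       # onlooker phase, fitness-proportional
            i = random.choices(range(colony), weights=fitness)[0]
            cand = neighbour(food[i], food)
            if cost(cand) < cost(food[i]):
                food[i], trials[i] = cand, 0
            else:
                trials[i] += 1
        for i in range(colony):                       # scout phase, abandon exhausted sources
            if trials[i] > limit:
                food[i], trials[i] = random_solution(), 0
        best = min(food + [best], key=cost)
    return best, cost(best)

# e.g. schedule, total_cost = abc_eld()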

Read More: Click here...

45
Quote
Author : Rishi Asthana, Neelu Jyoti Ahuja, Manuj Darbari, Praveen Kumar Shukla
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract— Modeling and development of control systems to deal with congestion at intersections in urban traffic is a critical research issue. Several approaches have been used for modeling and control in this problem, including Petri nets, fuzzy logic, neural networks, genetic algorithms, activity theory, multi-agent systems and many more. This paper is a survey of the development of urban traffic control systems using these techniques over the last decade.

Index Terms— Fuzzy Logic, Neural Network, Genetic Algorithms, Multi Agent Systems, Activity Theory, Petri Nets.

1   INTRODUCTION                                                                      
The development of control systems to deal with congestion at intersections in urban traffic is a critical research issue. The prime requirements of the developed system are that the signal must not allow ambiguous movements of the traffic and that it must be clear how and when the indication of the signal is to be changed. Two other aspects to be handled are taking decisions about the signal indication sequence so that the system is well optimized, and developing the control logic for signal generation.
 
This paper has been divided into six further sections. In Section 2, Petri net based modeling is studied and reviewed. Section 3 contains the review of multi-agent systems in urban traffic control systems. Neural network based approaches are discussed in Section 4. Section 5 contains the fuzzy logic based approaches in traffic control systems. Several hybrid approaches combining fuzzy logic, neural networks and Petri nets are discussed in Section 6. Section 7 covers various other approaches to traffic control, such as activity theory, complex network theory, and incident and real-time traffic control.
   
2 PETRI NET MODEL BASED APPROACHES
Petri Nets [1] are also known as place/transition nets or P/T nets. A Petri net is a mathematical modeling language for the description of discrete event systems (DES). PN theory was developed in 1962 by Carl Adam Petri. Petri nets are highly applicable in graphical modeling, mathematical modeling, simulation and real-time control through the use of places and transitions.
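As a toy illustration of the place/transition firing rule (not taken from any of the cited models), a two-phase traffic light can be written as a tiny net in Python:

# Places hold tokens (the marking); a transition fires only if all its input places are marked.
places = {"NS_green": 1, "NS_red": 0, "EW_green": 0, "EW_red": 1}

transitions = {
    # name: (input places consumed, output places produced)
    "switch_to_EW": (["NS_green", "EW_red"], ["NS_red", "EW_green"]),
    "switch_to_NS": (["EW_green", "NS_red"], ["EW_red", "NS_green"]),
}

def enabled(name):
    inputs, _ = transitions[name]
    return all(places[p] > 0 for p in inputs)

def fire(name):
    if enabled(name):
        inputs, outputs = transitions[name]
        for p in inputs:
            places[p] -= 1
        for p in outputs:
            places[p] += 1

fire("switch_to_EW")   # north-south goes red, east-west goes green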

Different variations of Petri nets are applied in the modeling and control of traffic systems.
A Colored Timed Petri net (CTPN) model has been used for validating an urban traffic network in [2].
A model for the real-time control of urban traffic networks is proposed in [3]. A modular framework based on a first-order hybrid Petri net model is developed; vehicle flows are represented by a first-order fluid approximation in this approach. The lane interruptions and the signal timing plan controlling the area are modelled by discrete event dynamics integrated with timed Petri nets.

A new hybrid Petri net model for modeling the traffic behavior at intersections is developed in [4]. The important aspects of the flow dynamics in urban networks are captured very well.
A new approach using continuous Petri nets with variable speed (VCPN) is proposed in [5], in which the analysis and control design for urban and interurban networks are carried out.

A network model via hybrid Petri nets [6] is used to demonstrate and implement the solution of the problem of coordinating several traffic lights. It aims at improving the performance of some classes of special vehicles, such as public transport and emergency vehicles.

A model based on TCPN (Timed Control Petri Nets) is used to demonstrate and solve the problem of coordinating several traffic lights in [7]. The analysis of the control TCPN models is done by occurrence graph (OG) techniques.

A Colored Petri Net Model of an urban traffic network for the purpose of performance evaluation is demonstrated in [8]. The subnets for the network, the intersections, the external traffic inputs and control are discussed.

An urban traffic simulation has been carried out using Petri nets in [9]. This approach is based on generating a producer-consumer network and a grid simulation with Petri nets.

3 MULTI AGENT SYSTEMS
A multi-agent system (MAS) [10] is a system consisting of multiple interacting intelligent agents. Multi-agent systems can be used to solve problems that are difficult or impossible for an individual agent or a monolithic system to solve. Intelligence may include methodic, functional, procedural and algorithmic searching, finding and processing techniques.
A multi-agent system approach to develop distributed unsupervised traffic-responsive signal control models has been developed in [11]. Each agent in the system is a local traffic signal controller for one intersection in the traffic network. The first multi-agent system is developed using hybrid soft computing techniques: each agent employs a multistage online learning process to update and adapt its knowledge base and decision-making procedure. The second multi-agent system is produced by integrating the simultaneous perturbation stochastic approximation theorem into fuzzy extended neural networks (NN).

An approach to model the traffic at an important crossroad in Mashhad city using intelligent elements in a multi-agent environment and a large amount of real data has been developed in [12]. The overall traffic behavior at the intersection was first modeled by Bayesian network structures, and probabilistic causal networks were used to model the effective factors.
Among the several ITS applications is the notion of Dynamic Traffic Routing (DTR), which involves generating "optimal" routing recommendations to drivers with the aim of maximizing network utilization. In [13], the feasibility of using a self-learning intelligent agent to solve the DTR problem and achieve traffic user equilibrium in a transportation network has been presented. The agent learns by itself by interacting with a simulation model. Once the agent reaches a satisfactory level of performance, it can be deployed in the real world, where it would continue to learn how to refine its control policies over time.

The integration of a cooperative, distributed multi-agent system to improve the urban traffic control system is proposed in [14]. Real-time control over the urban traffic network is achieved through an agent-based, distributed, hierarchical traffic control system. This system cooperates with a dynamic route guidance system. The cooperative system framework and agent structure are discussed in this work.

A new framework of a hybrid control system for UTC is presented in [15], in which any optimal control strategy can be adopted. Through the D-S interface and the C-S interface, namely the cooperation model, the hybrid UTC system is divided into three layers: the digital control loop, the discrete event module, and the Group Decision-making Support System (GDSS). By integrating a GDSS consisting of a central agent and intersection agents, real-time control and coordinated control with the characteristics of self-decision, cooperation and intelligence are implemented.
An approach to modeling the urban traffic flow system by combining global and local model information for the whole city network is presented in [16]. It is assumed that the traffic digraph consists of several nodes and that those nodes are linked by route lines. The proposed system uses random walk theory. Vehicle flow density and driver strategy independence are also important factors in this approach.

An agent-based approach to model individual driver behavior under the influence of real-time traffic information is proposed in [17]. The driver behaviour models developed in this work are based on a behavioural survey of drivers, conducted on a congested commuting corridor in Brisbane, Australia. Based on the results obtained from the behavioural survey, the agent behaviour parameters which define driver characteristics, knowledge and preferences were identified and their values determined.

Read More: Click here...

Pages: 1 2 [3] 4 5 ... 22