Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - IJSER Content Writer

46
Engineering, IT, Algorithms / Mechanisms of Tunneling IPv6 in IPv4 networks
« on: February 13, 2012, 04:40:43 am »
Author : Nirjhar Vermani
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract— A major requirement in IPv4 is that every IP network must have a unique network number, whether or not it is connected to the Internet. This consumes addresses and, as a result, the IPv4 address space is becoming exhausted. Secondly, IPv4 is structured into classes whose address spaces have different sizes and are administered independently. To manage this problem, Internet experts have focused on Classless Inter-Domain Routing (CIDR) and the Dynamic Host Configuration Protocol (DHCP) to manage the address space, but with the continued growth in Internet usage CIDR and DHCP no longer work well as alternatives. It is becoming challenging to maintain the large routing tables, network authentication and network security that are major requirements in the current cyber age.

Index Terms— Subnetting, IPv6, IPv4, Tunneling, Teredo, Routing, DHCP, Tunnel Broker
 
INTRODUCTION:-
IPv4 is version 4 of the Internet Protocol. It uses a 32-bit addressing scheme, giving 2^32 = 4,294,967,296 distinct IP addresses, and it is the first version that was deployed broadly on the Internet. IPv4 is used mainly over Ethernet. It does not guarantee delivery of packets, the order in which packets arrive, or protection against duplicate delivery; instead it works on a best-effort basis to carry data from source to destination. Its header carries a checksum that allows corrupted packets to be detected and discarded.
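
The header checksum mentioned above can be illustrated with a short sketch. The following Python snippet (not from the paper; the sample header bytes are invented) computes the standard Internet checksum, the ones'-complement sum of 16-bit words defined in RFC 1071, over a 20-byte IPv4 header.

```python
# Minimal sketch of the IPv4 header checksum (RFC 1071 Internet checksum).
# The sample header below is illustrative, not taken from the paper.

def internet_checksum(data: bytes) -> int:
    """Ones'-complement sum of 16-bit words, folded back to 16 bits."""
    if len(data) % 2:                                # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)     # fold the carry back in
    return ~total & 0xFFFF

# 20-byte IPv4 header with the checksum field (bytes 10-11) zeroed before computing.
header = bytes.fromhex("4500003c1c4640004006" + "0000" + "c0a80001c0a800c7")
print(hex(internet_checksum(header)))
```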

A major requirement in IPv4 is that every IP network must have a unique network number, whether or not it is connected to the Internet. This consumes addresses and, as a result, the IPv4 address space is becoming exhausted.

Secondly, IPv4 is structured into classes whose address spaces have different sizes and are administered independently. To manage this problem, Internet experts have focused on Classless Inter-Domain Routing (CIDR) and the Dynamic Host Configuration Protocol (DHCP) to manage the address space, but with the continued growth in Internet usage CIDR and DHCP no longer work well as alternatives. It is becoming challenging to maintain the large routing tables, network authentication and network security that are major requirements in the current cyber age.

IPV6 AS AN ALTERNATIVE:-
IPv6, or IPng (Internet Protocol next generation), is version 6 of the Internet Protocol. It uses a 128-bit addressing scheme, giving 2^128 ≈ 3.4 × 10^38 distinct IP addresses, which is enough to keep the Internet growing for a long time. Although IPv6 has been introduced and its deployment is under way, progress is still very slow.

IPv6 was introduced as an alternative to solve, or at least minimize, the problems we face with Internet Protocol version 4. First of all, IPv6 introduces a much larger address space, which is assumed to be sufficient for the next 30 to 35 years. It also introduces unique addressing with a complete address hierarchy that depends on address prefixes instead of the classes used in IPv4. This keeps routing in the core well organized, with small routing tables as the outcome, and this routing efficiency in turn helps in maintaining better security and authentication in networks.

TRANSITIONS FROM IPV4 TO IPV6:-
The Internet has been running successfully on IPv4 for the last 20 years or so, but now it is time to move forward to IP version 6, because the unallocated IPv4 addresses are expected to be used up within the next 5 or 6 years, after which IPv4 alone cannot meet the requirements of the ever-growing online population. The transition to IPv6 has therefore been accepted as the most promising solution for now.

DIFFICULTIES IN TRANSITION:-
The major difficulty in the transition phase is that the Internet has been running on IPv4 for the last 20 years or so, so it is very hard to move this huge network from IPv4 to IPv6, and it can only be done gradually. Experts are continuously researching the transition and its effects on users and Internet service providers, the best scenarios and most effective mechanisms for transition, and how the transition will raise or solve security issues. "One of the problems faced by organizations, especially website operators, wanting to deploy IPv6 is the lack of information on IPv6 adoption and the quality of service provided by the IPv6 Internet." [Evaluating IPv6 Adoption in the Internet]
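
As a rough illustration of the simplest tunneling idea the title refers to (6in4, where an IPv6 packet rides inside an IPv4 packet with protocol number 41), here is a minimal Python sketch. It is not code from the paper: the endpoint address is a placeholder, the IPv6 packet is assumed to be already built, and sending raw packets requires administrator privileges.

```python
# Illustrative sketch of 6in4 encapsulation: an IPv6 packet is carried as the
# payload of an IPv4 packet whose protocol field is 41. Addresses are placeholders.
import socket

IPPROTO_6IN4 = 41                  # protocol number used by 6in4 tunnels

def send_6in4(ipv6_packet: bytes, tunnel_remote_v4: str) -> None:
    """Send a pre-built IPv6 packet to the remote tunnel endpoint over IPv4."""
    # With this raw socket the kernel builds the IPv4 header for us;
    # the whole IPv6 packet simply becomes the IPv4 payload.
    with socket.socket(socket.AF_INET, socket.SOCK_RAW, IPPROTO_6IN4) as s:
        s.sendto(ipv6_packet, (tunnel_remote_v4, 0))

# Example (requires root privileges and a real tunnel endpoint):
# send_6in4(some_ipv6_packet_bytes, "192.0.2.1")
```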

Read More: Click here...

47
Author : Deepmala Singh Parihar, Prof. Ravi Mohan
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract - Channel estimation has become a very broad field because of the different types of interference present in the wireless channel and in equipment. In this work, estimation algorithms for digital communication systems in the presence of additive white Gaussian noise and a multipath environment are explored and their performance is investigated. In particular, least square error and zero-forcing equalizers are used to provide the optimum solution and compensate for inter-symbol interference. Since the BER performance of equalizers varies in a multipath fading channel, we have combined Equal Gain Combining and Maximal Ratio Combining diversity techniques and found that Maximal Ratio Combining is able to combat the co-channel interference and inter-symbol interference problems.

Keywords: - OFDM, Equalizer, Diversity, QAM
 
1   INTRODUCTION
Wireless communication [1] systems require signal processing techniques that improve the link performance in hostile mobile radio environments. Channel estimation means estimating the complex channel gain, which includes both phase and amplitude. Equalization, diversity and channel coding are three techniques that can be used independently or in tandem to improve received signal quality and link performance over small-scale times and distances. In a flat fading environment, estimation of the channel using a training sequence has been studied and implemented in [2]. Pilot data amounting to some required percentage of the data length is inserted into the source data; it is used to estimate the random phase shift of the fading channel and to train the decision device to adjust the received signal for phase recovery. Thus, phase estimation using training symbols has been implemented in the flat fading environment. The radio channels in mobile radio systems are usually multipath fading channels, which cause intersymbol interference (ISI) and intercarrier interference (ICI) in the received signal. To remove ISI and ICI from the signal, many kinds of equalizer and diversity algorithms can be used. Algorithms such as least square error (LSE) and zero forcing (ZF) for equalization [3], and maximal ratio combining (MRC) and equal gain combining (EGC) for diversity [4], offer good receiver performance at a modest computational cost, which is why they are currently quite popular. Channel estimation in a frequency-selective environment takes a different approach than in the flat fading case.

A semi-analytical method has been used to evaluate the BER of quadrature amplitude modulation (QAM) in additive noise with pilot-assisted linear channel estimation and channel equalization. A novel channel estimation scheme for OFDMA uplink packet transmissions over doubly selective channels was suggested in [5].

2      OFDM
OFDM is a spectrally efficient modulation technique [6]. It is conveniently implemented using the IFFT and FFT operations, and very fast and efficient implementations of the FFT and IFFT exist, which is a big reason for the popularity of OFDM. It handles frequency-selective channels well when combined with error correction coding. In other words, OFDM is frequency-division multiplexing of multiple carriers that are orthogonal to each other, i.e. each carrier is placed exactly at the nulls in the modulation spectra of the others. In OFDM, data is divided into several parallel data streams or sub-channels, one for each subcarrier; the subcarriers are orthogonal to each other although they overlap spectrally. Each subcarrier is modulated with a conventional modulation scheme (QAM or QPSK) at a low symbol rate, maintaining total data rates similar to conventional single-carrier modulation schemes in the same bandwidth.
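
To make the IFFT/FFT view of OFDM concrete, the following NumPy sketch (an illustration, not the paper's implementation) maps QPSK symbols onto subcarriers with an IFFT, adds a cyclic prefix, and recovers the symbols with an FFT over an ideal channel.

```python
# Minimal OFDM modulator/demodulator sketch (ideal channel, no equalization).
import numpy as np

N, CP = 64, 16                                   # subcarriers and cyclic-prefix length
rng = np.random.default_rng(0)

bits = rng.integers(0, 2, size=2 * N)
qpsk = (1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])   # one QPSK symbol per subcarrier

tx_time = np.fft.ifft(qpsk, N)                   # parallel subcarriers -> time domain
tx = np.concatenate([tx_time[-CP:], tx_time])    # prepend the cyclic prefix

rx_time = tx[CP:]                                # receiver strips the cyclic prefix
rx_freq = np.fft.fft(rx_time, N)                 # back to the subcarrier symbols

print(np.allclose(rx_freq, qpsk))                # True: symbols recovered exactly
```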
 
Figure 1 Subdivision of the channel bandwidth W into narrowband sub channels of equal width  ∆f

The advantages of OFDM include its robustness to narrowband co-channel interference, its high spectral efficiency, and its low sensitivity to time synchronization errors. Besides these advantages it has some disadvantages, such as its complexity, its sensitivity to Doppler shift and frequency synchronization problems, and its need for a more linear power amplifier.

 Figure 2 Block diagram of OFDM transmitter and receiver
The FFT is written as

   X(k) = Σ_{n=0}^{N-1} x(n) W_N^{nk},   k = 0, 1, ..., N-1      ... (1)

where W_N = e^{-j2π/N} is the complex-valued phase factor. Thus, X(k) becomes

   X(k) = Σ_{n=0}^{N-1} x(n) e^{-j2πnk/N}      ... (2)

Similarly, the IFFT is written as

   x(n) = (1/N) Σ_{k=0}^{N-1} X(k) e^{j2πnk/N},   n = 0, 1, ..., N-1      ... (3)

3      EQUALIZERS
Equalization is the process of adjusting the balance between frequency components within an electronic signal, and the circuit or equipment used to achieve it is called an equalizer [7]. Equalization compensates for the ISI created by multipath in time-dispersive channels. If the modulation bandwidth exceeds the coherence bandwidth of the radio channel, ISI occurs and modulation pulses are spread in time into adjacent symbols. An equalizer within a receiver compensates for the average range of expected channel amplitude and delay characteristics. Equalizers must be adaptive since the channel is generally unknown and time varying. Because an adaptive equalizer compensates for an unknown and time-varying channel, it requires specific algorithms to update the equalizer coefficients and track the channel variations; here we use the zero forcing (ZF) algorithm and the least square error (LSE) algorithm.

3.1    Zero Forcing Algorithms:
In a zero forcing equalizer, the equalizer coefficients c_n are chosen to force the samples of the combined channel and equalizer impulse response to zero at all sampling points other than the desired one. For a channel with frequency response F(f), the ZF equalizer is C(f) = 1/F(f). Thus the combination of channel and equalizer gives a flat frequency response and linear phase, and must satisfy the Nyquist criterion

   F(f) C(f) = 1      ... (4)
The zero forcing equalizer has the disadvantage that the inverse filter may excessively amplify noise at frequencies where the folded channel spectrum has high attenuation.
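
This noise-amplification drawback can be seen numerically: inverting the channel frequency response boosts noise wherever |F(f)| is small. The sketch below uses an arbitrary two-tap channel chosen only for illustration, not a channel from the paper.

```python
# Sketch of zero-forcing equalization in the frequency domain and its noise amplification.
import numpy as np

N = 64
h = np.array([1.0, 0.9])                 # illustrative two-tap channel with a deep spectral fade
H = np.fft.fft(h, N)                     # channel frequency response F(f)
C = 1.0 / H                              # ZF equalizer C(f) = 1 / F(f), so that F(f) C(f) = 1

rng = np.random.default_rng(1)
noise = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)

print("max |C(f)|            :", np.abs(C).max())                  # large where |F(f)| is small
print("noise power at input  :", np.mean(np.abs(noise) ** 2))
print("noise power at output :", np.mean(np.abs(C * noise) ** 2))  # amplified by the equalizer
```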
3.2   Least Mean Square Algorithms:
A more robust equalizer is the LMS equalizer, where the criterion used is the minimization of the MSE between the desired equalizer output and the actual equalizer output. Define the input signal to the equalizer as a vector y_k.
The mean square error is

   MSE = E[|e_k|^2] = E[|d_k − w_k^T y_k|^2]      ... (5)

where d_k is the desired equalizer output and w_k^T y_k is the actual equalizer output at time k.
Equalization refers to any signal processing operation that minimizes intersymbol interference (ISI). Since the mobile fading channel is random and time varying, the equalizer must track the time-varying characteristics of the mobile channel and is thus called an adaptive equalizer.
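
As a concrete illustration of the MSE criterion in equation (5), the following sketch adapts a linear equalizer with the LMS update w ← w + μ e y over a training sequence. The channel taps, step size, and equalizer length are arbitrary demonstration values, not parameters from the paper.

```python
# Minimal LMS adaptive equalizer sketch (training-directed, arbitrary parameters).
import numpy as np

rng = np.random.default_rng(0)
d = 1 - 2 * rng.integers(0, 2, size=2000).astype(float)    # BPSK training symbols d_k
x = np.convolve(d, [1.0, 0.4, 0.2])[:len(d)]                # illustrative ISI channel
x += 0.05 * rng.normal(size=len(x))                         # additive noise

L, mu = 11, 0.01                                             # equalizer length and step size
w = np.zeros(L)
delay = L // 2

for k in range(L, len(x)):
    y = x[k - L + 1:k + 1][::-1]        # equalizer input vector y_k (most recent sample first)
    e = d[k - delay] - w @ y            # error between desired and actual equalizer output
    w += mu * e * y                     # LMS update, driving the mean square error down

out = np.convolve(x, w)[delay:delay + len(d)]
print("symbol error rate:", np.mean(np.sign(out) != d))
```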

Read More: Click here...

48
Author : B. Uppalaiah, Dr. N. Subhash Chandra, R. V. Gandhi, Prof. G. Charles Babu, N. Vamsi Krishna
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract— A framework for knowledge discovery, knowledge use, and knowledge management is presented in this article to provide knowledge-based access of the domain databases using multi-agent systems approach. This framework encompasses five different agents: namely, knowledge management agent, data filter agent, rule induction agent, dynamic analysis agent, and interface agent. This article suggests an enhancement in the typical Knowledge Query and Manipulation Language (KQML) used to interact recurrently and to share information between multiple agents to achieve their goals by including the notion of linguistic variable and, hence, to support fuzzy decision making. The article also includes a sample KQML query block (along with membership function used by the knowledge management agent), result of the query, and structure of database files for a co-operative dairy. The approach provides advantages like effectiveness, explanation, reasoning, multimedia, and user-friendly interface in accessing multiple databases for an application.

Index Terms— knowledge-based systems, multi-agent system, knowledge query and manipulation language, linguistic variable. 

1   INTRODUCTION                                                                      
Knowledge-Based Systems (KBS) are productive tools of Artificial Intelligence (AI) working in a narrow domain to impart quality, effectiveness, and a knowledge-oriented approach to the decision making process. Being a product of fifth generation computer technology, KBS possess characteristics like (Efraim, 1993):

   providing a high intelligence level;
   assisting people to discover and develop unknown fields;
   offering a vast knowledge base;
   aiding management activities;
   solving social problems in a better way;
   acquiring new perceptions by simulating unknown situations;
   offering significant software productivity improvement; and
   reducing cost and time to develop computerized systems.

One of the main components of a KBS is the knowledge base, in which domain knowledge, knowledge about knowledge, factual data, procedural rules, business heuristics, and so on are available. The inference engine is another component, which infers new knowledge and utilizes existing knowledge for decision making and problem solving. Explanation/reasoning and self-learning are two more components that improve the acceptability and scope of the system; they also provide justification for the decisions taken. Additionally, a user interface is available to interact with users in a more friendly way. Figure 1 shows the position of the KBS in the well-known data pyramid along with its general structure.

Typical relational database management systems deal with data stored in a predefined format in one or more databases/tables. These systems do not deal with knowledge and/or decision processing and do not include features like:

   capability to add powers to the solution and concentrate on effectiveness;
   transfer of expertise, use of expertise in decision making, self-learning, and explanation;
   mainly symbolic manipulation;
   learning by case/mistakes;
   ability to deal with partial and uncertain information; and
   work for a narrow domain in a proactive manner.

In today's information and communication technology era, a large number of processes are automated and generate many large databases. Some applications span multiple dimensions and deal with multiple databases in a distributed fashion. Such large business databases contain staggering amounts of raw data. These data must be examined to find new relationships, emerging lines of business, and ways of improving it. Trying to make sense of these data requires a knowledge-oriented perspective, which is not easily achieved through either statistical processing or multidimensional visualization alone (Cox, 2005). The potential validity or usefulness of data elements or patterns of data elements may differ between users; the relevance of such items is highly contextual, personal, and continuously changing. According to Donovan (2003), making retrieved data or a description of data patterns generally understandable is also highly problematic. Moreover, the structure and size of the data set or database and the nature of the data itself make the procedure more complex and tedious. This leads to the need for the proposed system, in which databases can be accessed in a knowledge-oriented fashion. To achieve this, productive agents like KBS can be utilized to search and manage database content so as to impart quality and effectiveness. Section two of this paper proposes a framework and methodology for knowledge-based access to multiple databases using a modified Knowledge Query and Manipulation Language (KQML) as the communication means between agents. Section three discusses an illustrative situation, along with the structure of the databases, a sample agent communication using a KQML block, and a typical query from one agent to another, with an example from the dairy industry that works on the proposed architecture.
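
To give a flavour of the kind of agent message the paper describes, here is a hypothetical sketch of an interface agent asking the knowledge management agent a query carrying a fuzzy linguistic variable. The field names, membership function, and data are illustrative placeholders, not the paper's actual KQML block or database.

```python
# Hypothetical KQML-style performative carrying a fuzzy linguistic variable.
# Field names, the membership function, and the rows below are illustrative placeholders.

def high_yield_membership(litres_per_day: float) -> float:
    """Toy triangular-shoulder membership function for the linguistic value 'high'."""
    if litres_per_day <= 10:
        return 0.0
    if litres_per_day >= 20:
        return 1.0
    return (litres_per_day - 10) / 10.0

query = {
    "performative": "ask-all",
    "sender": "interface-agent",
    "receiver": "knowledge-management-agent",
    "language": "fuzzy-sql",                       # assumed content language
    "ontology": "dairy",
    "content": {"select": "member_id",
                "from": "milk_collection",
                "where": {"yield": "high"}},       # linguistic value resolved via the membership function
}

# The receiving agent could translate the linguistic value into a fuzzy filter:
rows = [("m01", 8.0), ("m02", 16.0), ("m03", 22.0)]          # (member_id, litres per day)
matches = [(mid, round(high_yield_membership(y), 2))
           for mid, y in rows if high_yield_membership(y) > 0.5]
print(query["performative"], "->", matches)                   # ask-all -> [('m02', 0.6), ('m03', 1.0)]
```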

2 MULTI-AGENT SYSTEM ARCHITECTURE
The term 'agent' is loosely defined. However, an agent can be referred to as a component of software and/or hardware which is capable of acting exactly like a user in order to accomplish tasks on behalf of its user. KBS tools used as agents are autonomous, co-operative, and able to learn on their own. A multi-agent system can be considered as a loosely coupled network of problem-solver entities that work together to find answers to problems that are beyond the individual capabilities or knowledge of each entity (Durfee, Lesser, & Corkill, 1989; Sajja, 2005). A multi-agent system comprised of multiple autonomous components needs to have certain characteristics (Jennings, Sycara, & Wooldridge, 1998; Roberto, 1999):

   each agent has incomplete capabilities to solve a problem;
   there is no global system control;
   data is decentralized; and
   computation is asynchronous.

That is, combining multiple agents in a framework presents a useful software engineering paradigm where problem-solving components are described as individual agents pursuing high-level goals.
      As most business applications deal with several databases of a homogeneous nature, they can interact easily. However, such interaction and content retrieval is limited in scope and is static. In addition, to access the databases in a knowledge-based fashion, explicit (manual) expertise becomes essential. Such expertise includes tasks like meta-knowledge and domain knowledge management, filtering and statistical analysis of data from the databases, and interface and presentation related tasks. Most of these tasks have their own methodology and are highly independent of the other tasks, though carried out for common system objectives. Moreover, for standard tasks like interface, data analysis, and data retrieval, mechanisms once developed can be reused in other systems, increasing the reusability of the system. This leads to the development of the necessary components as different agents, one for every specific independent task, within a common framework enabling these agents to interact. The multi-agent systems developed so far are application specific and cannot be reused. Gibert et al. (2002) developed a system using statistical and knowledge management agents, specifically for the management of environmental databases for effective decision support systems. Another example is an agent-based intelligent environmental monitoring system by Ioannis & Pericles (2004), which presents a multi-agent system for monitoring and assessing air-quality attributes and which uses data coming from a meteorological station.

Read More: Click here...

49
Author : M. Abragam Siyon Sing, K. Vidya
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract—Modern embedded multiprocessor design presents challenges and opportunities that stem from task coarse granularity and the large number of inputs and outputs for each task. These are complex systems that often require years to design and verify. A significant factor is that engineers must allocate a disproportionate share of their effort to ensuring that modern FPGA chip architectures behave correctly. Therefore, in order to reduce the complexity of design and verification, a new architecture is proposed and implemented on an FPGA. In this architecture, the embedded processors are integrated with a shared memory system, the system is synthesized in an FPGA environment, and the ARISE interface is used to extend the processor; this interface is added once. Then an arbitrary number of processors can be attached via the interface, each of which can be a reconfigurable unit. Using this interface, more processor cores can be attached and bit-to-bit conversions from one processor to another are also possible, that is, asymmetric processors can be built.

Index Terms – ARISE Interface, VLIW Processor, FPGA, Wrapper.
 
1   INTRODUCTION
Any system that incorporates two or more microprocessors working together to perform one or more related tasks is commonly referred to as a multiprocessor system. In a multiprocessing system, all CPUs may be equal, or some may be reserved for special purposes. A combination of hardware and operating-system software design considerations determines the symmetry in a given system. For example, hardware or software considerations may require that only one CPU respond to all hardware interrupts, whereas all other work in the system may be distributed equally among CPUs; or execution of kernel-mode code may be restricted to only one processor (either a specific processor, or only one processor at a time), whereas user-mode code may be executed in any combination of processors. Multiprocessing systems are often easier to design if such restrictions are imposed, but they tend to be less efficient than systems in which all CPUs are utilized.

Systems that treat all CPUs equally are called symmetric multiprocessing (SMP) systems. Because of the flexibility of SMP and because of its cost being relatively low, this architecture has become the standard for mainstream multiprocessing. Multitasking operating systems can run processes on any CPU in a SMP system because each processor has the same view of the machine.
In systems where all CPUs are not equal, system resources may be divided in a number of ways, including asymmetric multiprocessing (ASMP), non-uniform memory access (NUMA) multiprocessing, and clustered multiprocessing. In multiprocessing, the processors can be used to execute a single sequence of instructions in multiple contexts (single-instruction, multiple-data or SIMD, often used in vector processing), multiple sequences of instructions in a single context (multiple-instruction, single-data or MISD, used for redundancy in fail-safe systems and sometimes applied to describe pipelined processors or hyper-threading), or multiple sequences of instructions in multiple contexts (multiple-instruction, multiple-data or MIMD).

Tightly-coupled multiprocessor systems contain multiple CPUs that are connected at the bus level. These CPUs may have access to a central shared memory (SMP or UMA), or may participate in a memory hierarchy with both local and shared memory (NUMA). The IBM p690 Regatta is an example of a high end SMP system. Intel Xeon processors dominated the multiprocessor market for business PCs and were the only x86 option until the release of AMD's Opteron range of processors in 2004. Both ranges of processors had their own on board cache but provided access to shared memory; the Xeon processors via a common pipe and the Opteron processors via independent pathways to the system RAM.

Chip multiprocessors, also known as multi-core computing, involve more than one processor placed on a single chip and can be thought of as the most extreme form of tightly-coupled multiprocessing. Mainframe systems with multiple processors are often tightly-coupled.
Loosely-coupled multiprocessor systems (often referred to as clusters) are based on multiple standalone single or dual processor commodity computers interconnected via a high speed communication system (Gigabit Ethernet is common). A Linux Beowulf cluster is an example of a loosely-coupled system.

Tightly-coupled systems perform better and are physically smaller than loosely-coupled systems, but have historically required greater initial investments and may depreciate rapidly; nodes in a loosely-coupled system are usually inexpensive commodity computers and can be recycled as independent machines upon retirement from the cluster. Power consumption is also a consideration. Tightly-coupled systems tend to be much more energy efficient than clusters. This is because considerable economies can be realized by designing components to work together from the beginning in tightly-coupled systems, whereas loosely-coupled systems use components that were not necessarily intended specifically for use in such systems.

2   MULTIPROCESSORS
A multiprocessor system consists of two or more connected processors that are capable of communicating. This can be done on a single chip, where the processors are typically connected by either a bus or a NoC. Alternatively, the multiprocessor system can span more than one chip, typically connected by some type of bus, and each chip can then itself be a multiprocessor system. A third option is a multiprocessor system built from more than one computer connected by a network, in which each computer can contain more than one chip, and each chip can contain more than one processor. Most modern supercomputers are built this way.
 A parallel system is presented with more than one task at a time; these tasks are known as threads. It is important to spread the workload over all the processors, keeping the differences in idle time as low as possible. To do this, it is important to coordinate the work and workload between the processors. Here, it is especially crucial to consider whether or not some processors are special-purpose IP cores. To keep a system with N processors effective, it has to work with N or more threads so that each processor constantly has something to do. Furthermore, it is necessary for the processors to be able to communicate with each other, usually via a shared memory where values that other processors can use are stored. This introduces the new problem of thread safety. Thread safety is violated when two processors (working threads) access the same value at the same time. Consider the following code:
                A = A + 1
When two processors P1 and P2 execute this code, a number of different outcomes may arise due to the fact that the code will be split into three parts.
L1 : get A;
L2 : add 1 to A;
L3 : store A;

It could be that P1 will first execute L1, L2 and L3 and afterward P2 will execute L1, L2 and L3. It could also be that P1 will first execute L1, followed by P2 executing L1 and L2, giving another result. Therefore, some methods for restricting access to shared resources are necessary; these methods are known as thread safety or synchronization (a minimal illustration follows the list below). Moreover, it is necessary for each processor to have some private memory where it does not have to worry about thread safety, in order to speed up the processor; for example, each processor needs to have a private stack. The benefits of having a multiprocessor are as follows:

1.   Faster calculations are made possible.
2.   A more responsive system is created.
3.   Different processors can be utilized for different tasks.
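
Returning to the shared-counter example above (A = A + 1), the following generic Python sketch shows how interleaving the get/add/store steps from two threads can lose updates, and how a lock restores thread safety. It is an illustration, not code from the paper.

```python
# Illustration of the A = A + 1 race between two threads, and its repair with a lock.
import threading

A = 0
lock = threading.Lock()

def unsafe_increment(n):
    global A
    for _ in range(n):
        tmp = A          # L1: get A
        tmp = tmp + 1    # L2: add 1 to A
        A = tmp          # L3: store A (another thread may have stored a value in between)

def safe_increment(n):
    global A
    for _ in range(n):
        with lock:       # only one thread at a time may run the L1-L3 sequence
            A = A + 1

for worker, label in [(unsafe_increment, "without lock"), (safe_increment, "with lock")]:
    A = 0
    threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(label, "->", A, "(200000 expected; the unlocked version may fall short)")
```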

In the future, we expect thread and process parallelism to become widespread for two reasons: the nature of the applications and the nature of the operating system. Researchers have therefore proposed two alternative micro architectures that exploit multiple threads of control: simultaneous multithreading (SMT) and chip multiprocessors (CMP).

Chip multiprocessors (CMPs) use relatively simple single-thread processor cores that exploit only moderate amounts of parallelism within any one thread, while executing multiple threads in parallel across multiple processor cores.

Wide-issue superscalar processors exploit instruction level parallelism (ILP) by executing multiple instructions from a single program in a single cycle. Multiprocessors (MP) exploit thread-level parallelism (TLP) by executing different threads in parallel on different processors.

Read More: Click here...

50
Others / Lower and Upper Approximation of Fuzzy Ideals in a Semiring
« on: February 13, 2012, 04:24:02 am »
Author : G. Senthil Kumar, V. Selvan
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract— In this paper, we introduce the rough fuzzy ideals of a semiring. We also introduce and study rough fuzzy prime ideals of a semiring.

Index Terms— Semiring, lower approximation, upper approximation, fuzzy ideal, fuzzy prime ideal, rough ideal.

1   INTRODUCTION                                                                      
The fuzzy set, introduced by L.A. Zadeh [16] in 1965, and the rough set, introduced by Pawlak [12] in 1982, are generalizations of classical set theory. Both of these set theories are new mathematical tools for dealing with uncertain, vague, imprecise and inexact data. In Zadeh's fuzzy set theory, the degree of membership of the elements of a set plays the key role, whereas in Pawlak's rough set theory, the equivalence classes of a set are used to define the lower and upper approximations of a set.

Rosenfeld [13] applied the notion of fuzzy sets to groups and introduced the notion of fuzzy subgroups. After this paper, many researchers applied the theory of fuzzy sets to several algebraic concepts such as rings, fields, vector spaces, etc.

The notion of rough subgroups was introduced by Biswas and Nanda [1]. The concept of rough ideals in a semigroup was introduced by Kuroki in [11]. B. Davvaz [3], [2], [4] studied roughness in many algebraic systems such as rings, modules, n-ary systems,  -groups, etc. Osman Kazanci and B. Davvaz [10] introduced rough prime and rough primary ideals in commutative rings and also discussed the roughness of fuzzy ideals in rings. The roughness of ideals in BCK-algebras was considered by Y.B. Jun in [8]. In [14] the present authors studied rough ideals in semirings.

In this paper, we introduce the concept of rough fuzzy ideal of a semiring. Also we study the notion of rough fuzzy prime ideal in a semiring.

2    CONGRUENCE IN SEMIRINGS
Definition 2.1. A semiring is a nonempty set R on which operations of addition and multiplication have been defined such that the following conditions are satisfied:
(i) (R, +) is a commutative monoid with identity element 0;
(ii) (R, ·) is a monoid with identity element 1;
(iii) multiplication distributes over addition from either side;
(iv) 0 · r = r · 0 = 0, for all r ∈ R.

Throughout this paper R denotes a semiring.

 Definition 2.2. [6] Let ρ be an equivalence relation on R. Then ρ is called a congruence relation if (a, b) ∈ ρ implies (a + c, b + c) ∈ ρ, (ac, bc) ∈ ρ and (ca, cb) ∈ ρ for all a, b, c ∈ R.

Theorem 2.3. [6] Let ρ be a congruence relation on R. Then (a, b) ∈ ρ and (c, d) ∈ ρ imply (a + c, b + d) ∈ ρ and (ac, bd) ∈ ρ for all a, b, c, d ∈ R.

Lemma 2.4. [6] Let ρ be a congruence relation on a semiring R. If a, b ∈ R then
(i)   [a]_ρ + [b]_ρ ⊆ [a + b]_ρ
(ii)  [a]_ρ [b]_ρ ⊆ [ab]_ρ

Definition 2.5. A congruence relation ρ on R is called complete if
(i)   [a]_ρ + [b]_ρ = [a + b]_ρ and
(ii)  [a]_ρ [b]_ρ = [ab]_ρ
for all a, b ∈ R.

Definition 2.6. An ideal I of a semiring R is a nonempty subset of R satisfying the following conditions:
(i)   If a, b ∈ I then a + b ∈ I.
(ii)  If a ∈ I and r ∈ R then ar ∈ I and ra ∈ I.

An ideal I of a semiring R defines an equivalence relation ρ_I on R, called the Bourne relation, given by (a, b) ∈ ρ_I if and only if there exist elements i and j of I satisfying a + i = b + j. The relation ρ_I is a congruence relation on R [6], [7].



We denote the set of all equivalence classes of elements of R under this relation by R/ρ_I, and we will denote the equivalence class of an element r of R by [r].

 Throughout this paper ρ denotes the Bourne congruence relation induced by an ideal I of a semiring R.
   
Definition 2.7. An ideal I of a semiring R is called a k-ideal if a + b ∈ I implies a ∈ I, for each a ∈ R and each b ∈ I.


3   LOWER AND UPPER APPROXIMATION OF A FUZZY IDEAL IN A SEMIRING

A mapping μ : R → [0, 1] is called a fuzzy subset of R.
A fuzzy subset μ of a semiring R is called a fuzzy ideal of R if it has the following properties:
(i)   μ(x + y) ≥ min{μ(x), μ(y)}
(ii)  μ(xy) ≥ max{μ(x), μ(y)}
for all x, y ∈ R. A fuzzy ideal μ of R is said to be normal if μ(0) = 1.

Definition 3.1. A fuzzy ideal μ of a semiring R is said to be prime if μ is not a constant function and, for any two fuzzy ideals σ and θ of R, σθ ⊆ μ implies either σ ⊆ μ or θ ⊆ μ.

Definition 3.2. Let ρ be the Bourne congruence relation on R induced by I and let μ be a fuzzy subset of R. Then we define the lower and upper approximations of μ, denoted ρ̲(μ) and ρ̄(μ), as follows:

   ρ̲(μ)(x) = inf{ μ(a) : a ∈ [x]_ρ }   and   ρ̄(μ)(x) = sup{ μ(a) : a ∈ [x]_ρ },   for all x ∈ R.
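
As a small numerical illustration of Definition 3.2 (with invented membership values, not an example from the paper), take the semiring Z_4 with the ideal I = {0, 2}; its Bourne congruence classes are {0, 2} and {1, 3}. The lower and upper approximations are then just the infimum and supremum of the membership values over each class:

```python
# Toy illustration of the lower/upper rough approximations of a fuzzy subset
# (semiring Z_4, ideal I = {0, 2}; the membership values are made up).

classes = {0: {0, 2}, 1: {1, 3}, 2: {0, 2}, 3: {1, 3}}    # Bourne congruence classes [x]
mu = {0: 0.9, 1: 0.4, 2: 0.6, 3: 0.7}                     # a fuzzy subset mu of Z_4

lower = {x: min(mu[a] for a in classes[x]) for x in mu}   # lower approximation: inf over [x]
upper = {x: max(mu[a] for a in classes[x]) for x in mu}   # upper approximation: sup over [x]

print(lower)   # {0: 0.6, 1: 0.4, 2: 0.6, 3: 0.4}
print(upper)   # {0: 0.9, 1: 0.7, 2: 0.9, 3: 0.7}
```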

Read More: Click here...

51
Others / Application of wavelet packet analysis for speech synthesis
« on: February 13, 2012, 04:22:39 am »
Author : Vaishali Jagrit, Subhra Debdas, Chinmay chandrakar
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract— Wavelets are mathematical functions that cut up data into different frequency components and then study each component with a resolution matched to its scale. They have advantages over traditional Fourier methods in analyzing physical situations where the signal contains discontinuities. Wavelet packet analysis is used here to analyze the different entropies of a voice signal.

KEYWORDS: wavelet packet trees, entropy.
 
1    INTRODUCTION                                                                      
Wavelets are functions that satisfy certain mathematical requirements and are used in representing data or other functions. The fundamental idea behind wavelets is to analyze [1,2] according to scale. Indeed, some researchers in the wavelet field feel that, by using wavelets, one is adopting a whole new mindset or perspective in processing data. This idea is not new: approximation using superposition of functions has existed since the early 1800s, when Joseph Fourier discovered that he could superpose sines and cosines to represent other functions. However, in wavelet analysis, the scale that we use to look at data plays a special role. Wavelet algorithms process data at different scales or resolutions. If we look at a signal with a large "window", we notice gross features; similarly, if we look at a signal with a small "window", we notice small features. The result in wavelet analysis is to see both the forest and the trees, so to speak. This makes wavelets interesting and useful. For many decades, scientists have wanted more appropriate functions than the sines and cosines which comprise the bases of Fourier analysis to approximate choppy signals. By their definition, these functions are non-local (and stretch out to infinity); they therefore do a very poor job in approximating sharp spikes. But with wavelet analysis, we can use approximating functions [3] that are contained neatly in finite domains. Wavelets are well-suited for approximating data with sharp discontinuities. The wavelet analysis procedure is to adopt a wavelet prototype function [4,5], called an analyzing wavelet or mother wavelet. Temporal analysis is performed with a contracted, high-frequency version of the prototype wavelet, while frequency analysis is performed with a dilated, low-frequency version of the same wavelet. Because the original signal or function can be represented in terms of a wavelet expansion (using coefficients in a linear combination of the wavelet functions), data operations can be performed using just the corresponding [6] wavelet coefficients. And if you further choose the best wavelets adapted to your data, or truncate the coefficients below a threshold, your data is sparsely represented. This sparse coding [7,8] makes wavelets an excellent tool in the field of data compression. Other applied fields that are making use of wavelets include astronomy, acoustics, nuclear engineering, sub-band coding, signal and image processing, neurophysiology, music, magnetic resonance imaging, speech discrimination, optics, fractals, turbulence, earthquake prediction, radar, human vision, and pure mathematics applications such as solving partial differential equations.

1.1 Discrete Wavelet Transform
Wavelets are functions that satisfy certain mathematical requirements and are used in representing data or other functions. The basic idea of the wavelet transform is to represent any arbitrary signal X as a superposition of a set of such wavelets or basis functions. These basis functions are obtained from a single prototype wavelet, called the mother wavelet, by dilation (scaling) and translation (shifts).
 Low frequencies are examined with low temporal resolution while high frequencies are examined with more temporal resolution. A wavelet transform combines both low-pass and high-pass filtering in the spectral decomposition of signals.

1.2 Wavelet Packet and Wavelet Packet Tree

The idea of wavelet packets is the same as that of wavelets; the only difference is that wavelet packets offer a more complex and flexible analysis, because in wavelet packet analysis the details as well as the approximations are split. Wavelet packet decomposition gives many bases from which one can look for the best representation with respect to the design objectives. A wavelet packet tree for 3-level decomposition is constructed and the best tree is selected; the Shannon entropy criterion measures the information content of a signal S.

       The information content of a decomposed component (approximation and details) may be greater than the information content of the component that was decomposed. In this paper, the sum of the information of the decomposed components (child nodes) is compared with the information of the parent component.
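
As an illustration of ranking nodes by Shannon entropy, the following sketch (assuming the PyWavelets package is installed; the test signal and wavelet are arbitrary choices, not the paper's data) builds a 3-level wavelet packet tree and prints the entropy of each of the 8 leaf nodes.

```python
# Sketch of a 3-level wavelet packet decomposition with per-node Shannon entropy.
# Assumes the PyWavelets package (pywt) is available; signal and wavelet are arbitrary.
import numpy as np
import pywt

t = np.linspace(0, 1, 1024, endpoint=False)
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)   # toy stand-in for a voice frame

def shannon_entropy(coeffs):
    """Non-normalized Shannon entropy of a coefficient vector, -sum(c^2 * log(c^2))."""
    c2 = np.asarray(coeffs) ** 2
    c2 = c2[c2 > 0]
    return float(-np.sum(c2 * np.log(c2)))

wp = pywt.WaveletPacket(data=signal, wavelet='db4', mode='symmetric', maxlevel=3)
for node in wp.get_level(3, order='natural'):       # the 8 leaf nodes of the level-3 tree
    print(node.path, shannon_entropy(node.data))
```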

2    WAVELET PACKET ANALYSIS
In wavelet analysis, every coefficient is associated with a function, either a scaling function or a wavelet function, depending on whether it is a 'smooth' or a 'detail' coefficient. In wavelet analysis the values move from a higher scale to a lower scale and the basis functions do not change. In wavelet packet analysis, the detail coefficients can also be split, which leads to a change in the basis set; these basis sets are called 'wavelet packets'.

      The 8 leaf nodes of the tree hold 8 sets of coefficients, which are associated with 8 different functions. The functions associated with the first two (on the left) are the scaling and wavelet functions we started with. All the others are complex-shaped functions derived from the wavelet function. This change in shape poses a problem in interpreting the original signal. Wavelet packet analysis thus leads to different basis functions.

Read More: Click here...

52
Author : Prof. S. B. Srivastava
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract: When the design parameters of any equipment are fixed, this marks the beginning stage of that equipment, and at this stage it is not possible to determine the exact requirements the equipment will have to fulfill. For utility equipment, the actual operating requirements cannot be anticipated. The engineering work during commissioning of the equipment is well guided by experts, but their guidance lasts only for a short period. With the passage of time and changes in environment, the operating condition of the equipment changes. At this stage, the performance of the equipment or industry shows a downward trend. Sometimes the system, situations and circumstances of working change, affecting the operating condition of industries or institutions and even causing huge losses. Every piece of work follows certain technicalities, and if those technicalities are deviated from, the outcome will affect performance. This is the main reason for continuous analysis and a methodical approach to improve performance and move toward system perfection. Technical Audit is a "tool" to create awareness, develop skills, integrate knowledge, upgrade technicality, increase profitability and productivity, and improve working conditions and quality of life. Technical Audit delights the owners of industries and also their customers.

   Technical Audit is one of the most important improvement tools for large industries and multinational companies. It is a well-known fact that scope for improvement always exists everywhere; the need is only to identify what that improvement should be. Individually, small improvements may not seem like big things, but all the small improvements together make a great difference. Technical Audit is a systematic approach to studying and identifying improvements for system perfection, productivity and profitability.

Index Terms: Technicality, Profitability, Efficiency, Consumption, Performance, Owner’s delight, Standardisation, Idle Spindle, Data, Technical Audit.

Abbreviations: AICTE – All India Council of Technical Education, ESP – Electrostatic precipitator, DM – Demineralised, ETP – Effluent treatment plant, DG – Diesel generator, NDT – Non-destructive testing, RH – Relative humidity, ETME – Emerging Trends in Mechanical Engineering, MMM – Madan Mohan Malviya, DC – Direct current, et al. – and others

INTRODUCTION                                                             
Technical Audit is an audit in which facts are searched for, studied, indicated and suggested upon; it is not a fault-finding audit. When any institution, industry or enterprise is launched, the situations, circumstances and conditions are of one kind; with the passage of time, all of these are altered. At the start of a unit, certain practices are begun which are economical at that time, but after some period the same practices become uneconomical and a burden on the employer. A telling example is that at the start of Grasim Nagda (a chemical factory), a system was established whereby every employee would be given some litres of milk free of cost on reporting for duty. At the start, the total operation was on a small scale, but over time the capacity was enhanced manifold, the number of employees also increased manifold, and a new chemical division was opened alongside. Now all the employees demand milk, and millions are spent on its cost. Similarly, for cement plants, the wet manufacturing process was feasible earlier, but with technological development wet-process cement units became outdated, and units which did not adopt the dry process are now closed. Technical audit gives the right suggestions at the right point and the right time, paving the way to system perfection and increased profitability.
In a technical audit, the technicality of every system, equipment, process, stores and inventories, spare parts, administration, commercial activities and every input is studied without any prejudice. During the study, the areas where improvement is possible are highlighted.
Literature Survey:

A survey of the research literature indicates that research has been directed either at general auditing principles or at procedures, and not at the effectiveness of the quality audit itself. This has also been confirmed by Rajendran and Devadasan (2005). The only exceptions are Health and Milne (2002) and Franka Piskar (2006), who have made some contribution to value-added quality audit.

The contribution of Zutshi and Sohal (2002) represents the practical experience of eight prominent auditors with respect to the adoption of EMS/ISO 14001 (a quality system) by Australian organizations. The issues and benefits relating to the quality auditing process are discussed. The aim of the research by Terziovski et al. (2002) was to examine the role of non-financial auditors and the audit process with respect to the existing ISO 9000 quality standards. They concluded that conformance auditing has a role in the early stage of quality system implementation.

However, the effectiveness diminishes as the quality system matures. Research results show that 89% of the organizations firmly follow the implementation of audit recommendations. Audit results showing the thrust on quality audit are recognized [Beecroft, (1996); Pivka and Ursi, (1999); Seddon, (2001); Heras et al., (2002); Magd and Curry, (2003); Fuentes et al., (2003); Pan, (2003); Piskar, (2003); Pivika, (2004); Marki, (2005)] for their theoretical and empirical work. Bhatt et al. (2004) worked on quality and cost improvements in neonatal prescribing through clinical audit. By completing the audit cycle, improved therapeutic care was achieved with more accurate drug monitoring targets and reduced drug cost. Similar findings have also been reported by Wickramasinghe and Sharma (2005), Smith and Manna (2005), and Souillard et al. (2005). Oliverio Mary Ellen (2007) gave thrust to audit quality in the U.K. Financial Report Counsel in February 2007. S. Nagata et al. (2008) have given valuable information on improving product quality through an audit system in April 2008.

Duraisamy, P., James, Estelle, Lane, Julia, and Jee-Peng Tan (1997), in their work "Is there a quantity–quality tradeoff as enrollment increases: evidence from Tamil Nadu, India", highlighted that increased enrollment of students requires increased resources and also decreases quality. Deolalikar, Anil, Hasan, Rana, Khan, Haider, and Quibria, M.G. (1997) pointed out, in their research "Competitiveness and human resource development" (University Library of Munich, Germany, revised 1997), the importance of human resources in quality education. David de la Croix and Matthias Doepke (2007), in their work "To segregate or to integrate: education, politics and democracy", said that it is the responsibility of government to provide quality education to its citizens and that resources for education should be managed by the government. Alderman suggests that the role of private delivery of schooling services to poor households in developing countries is important if the college maintains good resources. Puja Vasudeva Dutta (2006) discusses the gap between the wages of teachers and its effect on quality education. Monazza Aslam (2003) finds in her research the difference between government and private education in Pakistan and their quality. Geeta Kingdon and Francis Teal (2004) point out that the performance of students is related to the wages of the teachers. Geeta Gandhi Kingdon (1997) describes the condition of female education in India.

Srivastava, S.B., March 2009, "Technical Audit for Improvement of Educational Quality" (a case study of Indian engineering colleges where the customer itself is the input and the final product), gave a good thrust to Technical Audit. Srivastava, S.B., October 2009, "Quality and Profitability Improvement by Technical Audit", a case study of a process plant published in the International Journal of Computer Science and Engineering, indicates the importance of Technical Audit in process plants.

Srivastava, S.B., October 2009, "Technical Audit to Improve Maintenance Effectiveness", published in the proceedings of the national conference "Emerging Trends in Mechanical Engineering, ETME 2009" at MMM Engineering College, Gorakhpur, sponsored by AICTE, is one of the important eye-openers for industry owners. Again, Srivastava, S.B., January 2010, "Manpower Assessment of a Chemical Plant by Technical Audit – a case study", published in the International Journal of Engineering, Science and Technology, indicates the importance of Technical Audit.

The present work aims at giving more value to the Technical Audit, which will result in profitability for organizations, e.g. audits of equipment effectiveness, system effectiveness, process effectiveness, audit methods, etc., as a step forward toward zero defects in product quality. A case study of a chemical industry is presented below for this purpose.

Read More: Click here...

53
Author : Omar Tarawneh, Faudziah Ahmad, Jamaiah Yahaya
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract— Many websites fail to help companies reach their objectives because they neglect consumers' needs in their website development. The consumer in B2C business plays a significant role in sustaining B2C business organizations; therefore, companies must identify their consumers' behavioral characteristics. This study aims to investigate B2C quality factors from the consumers' perspective. Specifically, it investigates the current practice of quality development for B2C e-commerce websites in terms of satisfaction, online buying habits, obstacles surrounding B2C e-commerce websites, and the factors affecting and considered in B2C evaluation from the consumers' perspective. Data was gathered through questionnaires and interviews. Simple descriptive statistics such as means, frequency calculations, and percentages were used for analysis. Out of thirty-three factors, only seventeen were found to be important. These are web site visibility, safety, serviceability, price savings, high responsiveness, online shop credibility, enjoyment and entertainment, website information, the value of the web, promotional activities, clarity, relevance, diversity of goods, services and information, currency and updating of web documents, a user-friendly web interface, trust or trustworthiness, and accuracy and authority of web documents.

Index Terms— Business to consumer, consumer perspective factors, e-commerce evaluation, Likert scale, questionnaire, website evaluation, website quality.

1   INTRODUCTION                                                                      
The technological advances of the twenty-first century have led to a significant increase in Internet use for commercial purposes [1]. Since the development of the first commercial website in 1994, e-commerce has grown rapidly, and it is predicted that e-commerce usage will continue to increase rapidly over the coming years. Laudon and Traver [2] support this view and also predict that all commerce will be e-commerce by 2050.

In addition, consumers are no longer bound or loyal to specific times or specific locations when they want to shop; they can purchase virtually any product or service at any time and from any place. In other words, online shopping is the process used by a consumer who decides to shop via the Internet from anywhere and at any time, which is known as e-commerce. E-commerce is considered one of the most important contributions of the information technology revolution [3].

    In general, e-commerce can be defined as the business process of selling and buying products, goods, and services through online communications or via the Internet medium [4]. In other words, e-commerce means exchanging goods and services on the Internet, as in online shopping [5]. Indeed, e-commerce is considered one of the best methods for buying and selling products, services, and information electronically. Besides this, e-commerce is also considered one of the factors affecting the way payment is made. As in [6], [7], a company's interactive communication channels can be classified into four main types of e-commerce: Business to Business (B2B), Business to Consumer (B2C), Consumer to Business (C2B), and Consumer to Consumer (C2C). B2B refers to online transactions conducted between business organizations. B2C refers to transactions conducted electronically between businesses and consumers. C2B refers to consumers selling their goods or services to businesses online. C2C involves online interaction conducted between consumers.

     There are limited studies in Jordan regarding e-commerce. These studies have focused on the challenges and limitations of adopting e-commerce in Jordan, reviews of how Jordan has adapted to some e-commerce challenges, and infrastructural problems that affect e-commerce. Many studies agree that organizations in Jordan face a number of obstacles and barriers which affect the spread of e-commerce in Jordan. They claim that the reasons for limited buying and selling through the Internet are the lack of cooperation between the public and private sectors, lack of trust, infrastructure problems, lack of knowledge, weakness of e-commerce organizations in promoting e-commerce well, the high cost of personal computers, the high cost of connecting to the Internet, lack of training, and cultural resistance [8],[9]. These studies suggest that e-commerce organizations improve their existing websites so as to improve their business.

     In order to improve the quality of e-commerce websites and thus increase online purchasing, the important factors affecting the success of e-commerce websites need to be investigated and addressed, specifically from the consumers' perspective. Many researchers have reported that more than seventy-five percent of dot-com companies do not last longer than two years [10],[11],[12], and many relate this failure to the neglect of consumers' needs [13],[14] or to ignoring the consumer element in website development [15].

     An e-commerce website is considered the 'front door' of an online shop that mediates between the organization and its consumers, and many websites fail to help organizations reach their objectives because consumers' needs are not catered for in the development, which results in consumers' dissatisfaction in using the websites.

    Existing literature has pointed out that the consumers' perspective in website evaluation has not been given due consideration [16], [14], [17]. [15], [14], [18] relate this failure to designers who did not take the consumer aspect or human element into consideration in their website development. [19] relates this failure to the lack of a comprehensive set of criteria for B2C website development, which means there is a need to develop a framework that includes a comprehensive set of characteristics covering both user and technical aspects.

    This research presents the findings of a study that was conducted on Jordanian firms. The purpose of the study was to understand the preliminary issues underlying website quality evaluation, find out the current practices of website quality evaluation, determine consumer factors related to B2C transactions, investigate users' opinions on the need for website quality evaluation for B2C websites, investigate the importance of consumer perspectives in B2C website evaluation and development, and investigate the mechanisms and procedures that organizations currently follow in their website development.

2 RESEARCH APPROACH
A survey method was used to conduct an empirical study on Jordanian organizations. The following sections describe the methodology of the study.

2.1 Questionnaire development and interviews
A five-point Likert scale questionnaire was first developed. The questionnaire consists of thirty-two questions divided into four main sections: respondent background, current practices for quality models for business-to-consumer e-commerce websites, website quality and the obstacles surrounding business-to-consumer websites, and quality factors. The respondent background section consists of nine questions. It presents general information about the respondent (demographic data) and some trigger questions to increase the reliability of the study, such as gender, educational level, online buying habits, the websites most frequently visited, Internet connection type, and online purchasing experience. See Table 1.
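
For instance, the descriptive statistics mentioned in the abstract (mean, frequency, percentage) for a single five-point Likert item can be computed as follows; the responses are invented for illustration, not data from the study.

```python
# Descriptive statistics for one five-point Likert item (invented responses).
from collections import Counter

responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4, 1, 4, 5, 3, 4]   # 1 = strongly disagree ... 5 = strongly agree

mean = sum(responses) / len(responses)
freq = Counter(responses)
percentages = {score: 100 * count / len(responses) for score, count in sorted(freq.items())}

print(f"mean = {mean:.2f}")
print("frequencies:", dict(sorted(freq.items())))
print("percentages:", {k: f"{v:.1f}%" for k, v in percentages.items()})
```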

Read More: Click here...

54
Author : Ahlam T.Al-Sarraf
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract— Currently, many scientists believe that electronics, as a vast modern discipline, needs relevant tools and tactics to develop its already existing systems. This technical note outlines a broad path for the development of computing artificial intelligence techniques, software design and database descriptions that address human needs for a variety of computing functions: (a) training and testing knowledge of tactical situations; (b) better planning and decision making for planning situations interfacing with tactical artificial intelligence systems; and (c) providing an experiment for studying tactical decision making. This study aims at answering some of the queries raised above, such as what architects in electronics can provide in tactical graphics. This paper is undertaken to give a workable answer to some or part of these queries in the area of electronics, and is hoped to be a reliable source within a variety of computing fields. It provides an answer to what is being asked about the integration between software and hardware made to develop tactical graphics in electronics. It has now become possible for the designer to connect the personal computer (buses) with any outside apparatus by designing a suitable card (adapter) fixed in the expansion slot. Thus, the primary aim of this study is to develop reprogrammable prototype boards aimed at producing some reliable features suitable for final-use graphics cards.
 
1-INTRODUCTION
    The great role of our age of information has had a clear impact on the immense modern development of computer technology; electronics, as a modern discipline, has engaged the development of relevant tools and tactics to ensure and facilitate the role and use of its living systems. This role is not limited to introducing computer systems and their programs, programming language statement rules and drawing design programs. We believe that the above note outlines a broad path for the development of computing artificial intelligence techniques and designs. Many of these are invested in administration to establish systems for shopping, propaganda and advertisement, and they have grown to become even larger in the industrial, scientific and technological fields. It seems that such an important role is rather neglected by us, the Arabs. The economic and industrial sectors occupy a vital part of the capability of computer technology, since they are directly connected with control systems, statements resulting from measurement processes and adjustment. Software design and database descriptions mark the human need for a variety of computing functions such as training and testing knowledge for tactical situations, better planning decisions interfacing tactical artificial intelligence systems, and providing a prototype experiment for studying tactical decision making. Their astonishing development and surprising performance, together with their widespread use, made them a proper model choice in this vast area of knowledge. Here lies the importance of this humble study. We consider it an encouraging beginning concerning the suitability of computing card design, since it does not represent a recipe for setting up controlling systems. It is an attempt to join the personal computer with the outside surrounding environment, with the hope of being a suitable guide for university students to set up some advantageous controlling systems for their own purposes during their study of tactical design.

2- PREVIOUS STUDIES-
   Katz (1989) discusses the graphic tactic to enhance architectures for high-performance computing. Similarly, King, Kuose and Rose (2005) describe ways to improve an accurate performance model, using a hierarchy of programmable interconnect to allow logic blocks to be interconnected as needed by the system designer. Moreover, in digital circuits a flip-flop is a kind of bistable multivibrator, an electronic circuit which has two stable states and is thereby capable of serving as one bit of memory. Kuose and Rose (2005) view the term flip-flop as generally denoting non-transparent (clocked or edge-triggered) one-bit memory devices, while the simpler transparent ones are often referred to as latches. It is realized that before the development of low-cost integrated circuits, chains of multivibrators were used as frequency dividers. This technique was used in early electronic organs, especially in some applications of early version systems. A transition from one state of minimal free energy requires some form of activation energy to penetrate the barrier; Kuose (ibid.) views the time it takes as the relaxation time, since the system will relax into the next state of lowest energy. Wolf (ibid.) and Nutt (ibid.) believe that the above roles and activities have been greatly enlarged to include scientific, industrial and vast technological fields. Science and engineering have been greatly developed through the invention of computing devices, which help scientists and research students to collect, manipulate and interpret relevant data with much greater speed, accuracy and precision.

3- AIM OF THE PAPER
The primary aim of this paper is to answer the question of what architects, electronics and computers can offer to people with vision impairments. This question encompasses the immense role of the endless information available these days and the noticeable development in computer technology, which is not only used to introduce operating systems and their feeding programs, but is also utilized to develop different types of administration of human activities related to shopping, propaganda, advertisement and the like. Here lies the importance of this paper, with its remarkable aim of encouraging a start concerning suitable card design. Humbly speaking, this research paper is not made to present a noticeable process for setting up sophisticated controlling designs or watching systems; we consider it an attempt to join the personal computer with outside human activities. At the same time, a tactical graphics study (TGS) defines a tactical open-source hardware design, architecture and standard for a graphics card, primarily targeting free software and open-source operating systems. Hence, this study develops programmable prototype boards aimed at producing some noticeably featured, end-user graphics cards.

4-   PROCEDURAL METHODOLOGY-
    The procedural methodology and function of the circuit used in this paper are mainly based on the conversion of the digital signal provided by the computer into an analog signal, which is widely used in industrial applications. Thus, all components of the procedural principle are built on a card known as an XT board in order to facilitate the connection. This card has 62 pins arranged along two sides, A and B, each side comprising 31 pins devoted to transferring data and achieving communication between the computer and the card. The bus used in the computer is of the ISA (Industry Standard Architecture) type, one of the adequate methods known to be used for connection and communication.
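As a rough illustration of the digital-to-analog conversion performed by such a card, a minimal sketch follows; the 8-bit resolution and 5 V reference are assumptions made for the example and are not specifications taken from the paper.

def dac_output_voltage(code, v_ref=5.0, bits=8):
    """Ideal analog output of an n-bit DAC for a given digital input code."""
    if not 0 <= code < 2 ** bits:
        raise ValueError("digital code out of range for the chosen resolution")
    return v_ref * code / (2 ** bits - 1)

# The computer writes a digital value to the card; the card drives the analog line.
for code in (0, 64, 128, 255):
    print(f"code {code:3d} -> {dac_output_voltage(code):.3f} V")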

4-1    Contents for Methods-
It is realized that the adequacy of the procedural method requires the following component units (see the diagram above). U1 and U2 (74LS244 octal buffers) are needed to form a protection circuit between the input and output, represented by the computer, for the output of data to the card. U1 and U2 are the components of the circuit made of three-state buffers. The interface buffering requires several additional integrated circuits to be added to the circuit load of the computer; this method is hoped to eliminate any potential problem from fan-out, which is a measure of the number of logic gate inputs driven by the current from a single gate output. This buffering system is accomplished by using three 74LS244 chips, each of which is an octal buffer and line driver with three-state outputs, providing unidirectional buffering for sixteen computer lines. The chip is also designed to improve both the performance and density of three-state memory address drivers, clock drivers, and bus-oriented receivers and transmitters.

Read More: Click here...

55
Quote
Author : Madhusudhan, Narendranath S, G C Mohan Kumar
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract- The rate of solidification affects the microstructure, quality and mechanical properties of castings. The analysis of heat transfer in centrifugal casting is very complex due to rapid solidification, the rotating mould, the opaque mould and high temperature. As the grain size depends directly on the rate of solidification of the casting, the rate of solidification of a centrifugal casting can be determined from its grain size. Grain size has been measured for gravity castings at different cooling rates, and using these results the rates of solidification of centrifugal castings produced at different rotational speeds have been determined.

Keywords- Centrifugal Casting, Microstructure, Rate of solidification, Rotational speed.
 
1 INTRODUCTION 
Centrifugal casting is a process of producing hollow castings by causing molten metal to solidify in a rotating mould. The operations involved in centrifugal casting are rotation of the mould at a known speed, pouring of the molten metal and extraction of the casting from the mould. Solidification is quite rapid and hence good metallurgical quality is achieved: solidification starts from the mould inner surface, corresponding to the casting outer surface, so low-melting-point impurities are carried by the solidification front to the casting inner surface, gas porosity is also forced towards the casting inner surface because of its low density, and fine grain structures are formed [1]. The inner impurity surface can be removed by machining. The casting parameters which influence the solidification structure include the mould rotational speed, mould dimensions, preheating temperature of the mould, pouring temperature and metal composition; the rotational speed of the mould is one of the important process variables which affect the rate of solidification of the molten metal [2]. Determination of the temperature distribution during the centrifugal casting process, and hence of the solidification time of centrifugal castings, by experimental techniques is very difficult because the mould rotates at very high speed during solidification. In view of this, accurate data on the solidification time of centrifugal castings are not available [3]. Simulation by a CFD program can, however, be treated as an attractive and useful tool for modelling the centrifugal casting process [4]. As the rotational speed is increased, the centrifugal force increases in square proportion, which may create a strong convection in the liquid pool and thus produce a homogenization of temperature in the bulk liquid [5]. As a result, the growth of equiaxed grains is favored [5]. The rate of solidification is a very important phenomenon, as it has a great influence on the formation of a fine grain structure by increasing the degree of constitutional supercooling [6]. Generally, an area of the casting which is cooled quickly will have a fine structure and an area which cools slowly will have a coarse grain structure. Several studies have explained the effect of the above-mentioned process variables of centrifugal casting, but the influence of casting parameters on solidification morphology, and its theoretical and quantitative description, are still far from being clearly understood. One method to study the solidification rate is therefore based on grain size. In this experiment, gravity castings were initially produced at different solidification rates and the corresponding grain sizes were determined. Using these results, the solidification rates of centrifugal castings produced under different process conditions were determined from their grain sizes.
                                                   
2. EXPERIMENTAL DETAILS

Initially, gravity castings are made and the grain size is determined corresponding to different solidification rates. Using these results, the solidification rates of the centrifugal castings are determined.

2.1 Gravity Die Casting

Three dies with wall thicknesses of 10 mm, 20 mm and 30 mm are used, giving three different cooling rates (cooling rate 1, cooling rate 2 and cooling rate 3). A thermocouple junction is kept inside the mould and molten tin at about 450 °C is poured into these moulds. Cooling curves are drawn for the above three cases, and the slopes of these cooling curves represent the solidification rates (°C/s). The microstructures are obtained and a graph of solidification rate versus grain size is plotted.
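A minimal sketch of how the solidification rate could be extracted from such thermocouple data, and how a measured grain size could then be mapped back to a solidification rate, is given below; the time-temperature readings and the grain-size calibration pairs are hypothetical, not data from the paper.

import numpy as np

# Hypothetical thermocouple log for one die: time (s) vs temperature (°C).
time_s = np.array([0, 10, 20, 30, 40, 50])
temp_c = np.array([450, 420, 392, 365, 338, 312])

# Slope of the cooling curve (°C/s); its magnitude is taken as the solidification rate.
cooling_rate = -np.polyfit(time_s, temp_c, 1)[0]
print(f"solidification rate ≈ {cooling_rate:.2f} °C/s")

# Hypothetical calibration from the gravity castings: grain size (µm) vs rate (°C/s).
grain_size_um = np.array([120.0, 80.0, 55.0])   # coarser grains correspond to slower cooling
rate_c_per_s = np.array([1.2, 2.8, 4.5])

# Estimate the solidification rate of a centrifugal casting from its measured grain size.
measured_grain = 70.0
estimated_rate = np.interp(measured_grain, grain_size_um[::-1], rate_c_per_s[::-1])
print(f"estimated rate for {measured_grain} µm grains ≈ {estimated_rate:.2f} °C/s")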

2.2 Centrifugal Casting
 

Figure 1 shows the experimental setup of centrifugal casting, which consists of a mild steel cylindrical die fixed to a driving flange. The driving flange is connected to the shaft of a DC motor, whose speed can be varied from 0 to 2000 rpm with a highly accurate speed controller. The flow of metal into the mould is confined to the horizontally oriented, axially rotating cylindrical die. Centrifugal castings are obtained at three different speeds: 200 rpm, 400 rpm and 800 rpm.

Read More: Click here...

56
Quote
Author : Mrs.Waykule J.M., Ms. Patil V.A
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract— A new algorithm is proposed for removing large objects from digital images. The challenge is to fill in the hole that is left behind in a visually plausible way. In the past, this problem has been addressed by two classes of algorithms: (i) “texture synthesis” algorithms for generating large image regions from sample textures, and (ii) “inpainting” techniques for filling in small image gaps. The former has been demonstrated for “textures” – repeating two-dimensional patterns with some stochasticity; the latter focuses on linear “structures” which can be thought of as one-dimensional patterns, such as lines and object contours. This paper presents a novel and efficient algorithm that combines the advantages of these two approaches.

Index Terms— Image Inpainting, Texture Synthesis, Simultaneous Texture and Structure Propagation.
 
1   INTRODUCTION                                                                    
 A New algorithm is proposed for removing large objects from digital images. The challenge is to fill in the hole that is left behind in a visually plausible way.
 Figure 1 shows an example of this task, where the foreground person (manually selected as the target region) is automatically replaced by data sampled from the remainder of the image.
Fig. 1 Removing large objects from images. (a) Original photograph. (b) The region corresponding to the foreground person (covering about 19% of the image) has been manually selected and then automatically removed. Notice that the horizontal structures of the fountain have been synthesized in the occluded area together with the water, grass and rock textures.

2. PRESENT THEORY AND PRACTICES
 In the past, this problem has been addressed by two classes of algorithms: (i) "texture synthesis" algorithms for generating large image regions from sample textures, and (ii) "inpainting" techniques for filling in small image gaps. The former work well for "textures" -- repeating two-dimensional patterns with some stochasticity; the latter focus on linear "structures" which can be thought of as one-dimensional patterns, such as lines and object contours.

Fig. 2 Removing large objects from photographs. (a) Original image. (b) The result of region filling by traditional image inpainting. Notice the blur introduced by the diffusion process and the complete lack of texture in the synthesized area. (c) The final image, where the bungee jumper has been completely removed and the occluded region reconstructed by our automatic algorithm.

3. KEY OBSERVATIONS
 3.1 Exemplar-based synthesis suffices
The core of our algorithm is an isophote-driven image sampling process. It is well understood that exemplar-based approaches perform well for two-dimensional textures [1], [11], [17]. But we note in addition that exemplar-based texture synthesis is sufficient for propagating extended linear image structures as well; i.e., a separate synthesis mechanism is not required for handling isophotes.
Figure 3 illustrates this point. For ease of comparison, we adopt notation similar to that used in the inpainting literature.
The region to be filled, i.e., the target region, is indicated by Ω, and its contour is denoted δΩ. The contour evolves inward as the algorithm progresses, and so we also refer to it as the “fill front”. The source region, Φ, which remains fixed throughout the algorithm, provides samples used in the filling process.

Fig. 3 Structure propagation by exemplar-based texture synthesis. (a) Original image, with the target region Ω, its contour δΩ, and the source region Φ clearly marked. (b) We want to synthesize the area delimited by the patch Ψp centered on the point p ∈ δΩ. (c) The most likely candidate matches for Ψp lie along the boundary between the two textures in the source region. (d) The best matching patch in the candidate set has been copied into the position occupied by Ψp, thus achieving partial filling of Ψp. Notice that both texture and structure (the separating line) have been propagated inside the target region. The target region Ω has now shrunk and its front δΩ has assumed a different shape.
The user is asked to select a target region, Ω, manually. (a) The contour of the target region is denoted δΩ. (b) For every point p on the contour δΩ, a patch Ψp is constructed, with p at the centre of the patch. A priority is calculated based on how much reliable information surrounds the pixel, as well as the isophote at this point. (c) The patch with the highest priority is the target to fill. A global search is performed on the whole image to find the patch Ψq that is most similar to Ψp. (d) The last step is to copy the pixels from Ψq to fill Ψp. With a new contour, the next round of finding the patch with the highest priority continues, until all the gaps are filled.
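To make the loop just described concrete, here is a minimal, unoptimized Python sketch of an exemplar-based filling procedure; the priority term and patch-distance measure are simplified stand-ins for the definitions in the paper, and the helper names are illustrative only.

import numpy as np

HALF = 4  # half patch width; patches are (2*HALF + 1) x (2*HALF + 1) pixels

def _patch(p):
    r, c = p
    return slice(r - HALF, r + HALF + 1), slice(c - HALF, c + HALF + 1)

def exemplar_fill(img, mask):
    """Fill the masked (unknown) region of a 2-D grayscale image by repeatedly copying
    the best-matching known patch onto the highest-priority patch on the fill front."""
    img = img.astype(float)
    known = ~mask
    h, w = img.shape
    # Candidate source-patch centres: patches that are fully known and inside the image.
    sources = [(r, c) for r in range(HALF, h - HALF) for c in range(HALF, w - HALF)
               if known[_patch((r, c))].all()]
    while not known.all():
        # Fill front: unknown pixels adjacent to at least one known pixel.
        front = ~known & (np.roll(known, 1, 0) | np.roll(known, -1, 0) |
                          np.roll(known, 1, 1) | np.roll(known, -1, 1))
        candidates = [(r, c) for r, c in zip(*np.nonzero(front))
                      if HALF <= r < h - HALF and HALF <= c < w - HALF]
        if not candidates:
            break
        # Simplified priority: fraction of already-known pixels around the point
        # (the paper additionally weights this confidence term by an isophote/data term).
        p = max(candidates, key=lambda q: known[_patch(q)].mean())
        tgt, tgt_known = img[_patch(p)], known[_patch(p)].copy()
        # Best source patch: sum of squared differences over the target's known pixels.
        q = min(sources, key=lambda s: ((img[_patch(s)] - tgt)[tgt_known] ** 2).sum())
        tgt[~tgt_known] = img[_patch(q)][~tgt_known]   # copy only the missing pixels
        known[_patch(p)] = True                        # the whole patch is now filled
    return img

# Tiny demo: a vertical edge image with a square hole straddling the edge.
demo = np.zeros((32, 32)); demo[:, 16:] = 1.0
hole = np.zeros_like(demo, dtype=bool); hole[12:20, 12:20] = True
restored = exemplar_fill(np.where(hole, 0.0, demo), hole)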

Read More: Click here...

57
Quote
Author : Dr Irfan Zafar
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract—This research project is a longitudinal field study designed to examine the antecedents and consequences of office Internet use among middle-grade employees of a government organization working in research and development. Does using the Internet affect professionals’ development? Do employees become more efficient in their work? Do professional skills and performance suffer or improve? Does it help them complete their official assignments? A wealth of opinions and anecdotal evidence has attempted to answer these basic questions. At one extreme are the Internet enthusiasts who view Internet use as the panacea for all that plagues society, including inadequacies in the organizational system. At the other extreme are the Internet alarmists who view Internet use as undermining the very fabric of society, including the healthy development of its workforce. Most people fall somewhere between these extremes, and most are waiting for research to answer these questions. This research is primarily aimed at answering the issues agitating their minds pragmatically, in the light of the detailed research undertaken.

Index Terms— Middle Graded officers, Internet Use, Social behavior, Professional skills, Official assignments.
                                                                   
1   INTRODUCTION
Whether the use of the Internet affects the professional development, efficiency, skills, output and performance of the employees in an organization is among the pertinent questions which this research study attempts to answer.

A group of 50 officers of an organization were selected to participate in the study of the social, professional and working effects of internet use during office hours. These officers have been provided with internet access in their offices during office hours.

The major effects under study are: (a) how frequently people use the internet during their working hours, (b) the social behavior of the people using the internet with their colleagues and staff, (c) the improvement of professional skills and knowledge, and (d) the effect of internet use on their work and the tasks assigned by the organization/superiors.

2   MATERIALS AND METHODS

The selected participants in the project were all medium-income, middle-graded officers (grade 17-19), male (80%) and female (20%), married (34%) and unmarried (66%). More than 90% of the selected people were aged between 25 and 35 years, and 10% between 35 and 45 years.

How frequently do People use the Internet during working hours?

Numerous surveys have attempted to measure how frequently people use the Internet at offices. Estimates vary from as high as several hours a day to as low as one hour daily, depending on how Internet use is measured (e.g., self-report, automatically recorded). It also depends on factors such as the age of the person and the type of assignments and tasks. Despite high variability in the empirical estimates, the general perception is that people spend a great deal of time online. Most people use the internet as a research tool, while nearly every person uses e-mail daily.

This research observed multiple measures of Internet use to permit a more fine-grained analysis of how people are spending their time online. Analysis was carried out of the internet usage of the selected persons in the following dimensions; Email, Chatting, Research, Office assignments solutions, Entertainments, E-shopping.

3   DOES INTERNET USE AFFECT PEOPLE'S SOCIAL BEHAVIOR?

Few studies and inconsistent findings render uncertain whether using the Internet has any influence on people's behavior with their colleagues and staff. On the one hand, time spent online isolates the people from each other. On the other hand, the Internet facilitates communication with geographically distant workmates and friends, and makes it easier to communicate frequently with those nearby. Two independent reviews of this research have concluded that there are few documented social effects, either positive or negative. We examined two types of social outcomes that may be influenced by Official's Internet use: the behavior and links of the officers with their colleagues and workmates and secondly their management and behavior with staff persons.

The Internet's social impact may depend on the use of its tools to build new relationships and/or strengthen existing ones. Social impact may also depend on personal and situational factors which have yet to be identified. Alternatively, it may be that Internet use has no social impact. Like media that have preceded it (e.g., books), the Internet may be seamlessly integrated into people's ongoing lives.

4   DOES INTERNET USE AFFECT PEOPLE'S OFFICIAL ASSIGNMENTS?

As was the case for social outcomes, few studies have examined the relationship between a person's Internet use and its effect on that person's official assignments.

The efficiency of the workers (officers or staff) of an organization depends on the field of work of the person concerned, the types of assignments assigned to him, and the nature and competency of the person in extracting solutions from the available resources.

Read More: Click here...

58
Quote
Author : Jožica Bezjak A. Professor
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract—The aim of this work was to analyse the mechanisms of hindered internal passivation of silver-based alloys, which was obtained by the modification of the basic chemical composition. A generalisation of the phenomenon, experimental verification and the estimated range of micro-element concentration are also introduced. The ability for inoculation of a particular alloy is determined by the differences between the formation energies of the oxides, as well as their crystallographic similarity. Therefore, for the investigated Ag-Zn alloys, Mg was selected as the micro-alloying element. The influence of 0.001 up to 0.5 mass % of Mg added to the selected alloys was analysed. By changing the chemical composition, internal passivation was hindered, the internal oxidation rate was increased, and considerably greater (redoubled) depths of internal oxidation were achieved.

Keywords - internal oxidation, modification, hindrance of passivation

1   INTRODUCTION
Ag, Cu and Ni based alloys with small additions of less noble alloying elements like indium, tin and antimony oxidise internally if they are exposed to an oxidising atmosphere at elevated temperatures. For internal oxidation of a particular alloy, the following conditions also have to be fulfilled:

•   the base metal must have high solubility of oxygen
•   the alloying element has to be strongly electronegative
•   the diffusion rate of oxygen into the base metal has to be several orders of magnitude higher than the diffusion rate of the alloying elements
•   the concentration of the alloying elements has to be inside certain limits
•   sufficient partial pressure of oxygen has to be ensured in the atmosphere

If the selected alloy is isothermally annealed at an elevated temperature and all the other above-mentioned conditions are fulfilled, dissolution of oxygen begins on the surface of the alloy, followed by its diffusion into the interior and its reaction with atoms of the alloying element. The effect of the oxidation reaction is the precipitation of the finely dispersed oxide of the less noble component in the metal matrix (Pictures 1a and 1b).

Picture 1 a, b: Concentration gradient of oxygen and the alloying element in the zone of internal oxidation, as well as in the non-oxidised part of the alloy(1).
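The depth of the internal oxidation zone reached after a given annealing time is often estimated with the classical parabolic (Wagner-type) relation; the sketch below uses that textbook relation with illustrative parameter values, and neither the equation nor the numbers are taken from this paper.

from math import sqrt

def internal_oxidation_depth(n_o_surface, d_o, n_b, nu, t):
    """Classical parabolic estimate of the internal oxidation zone depth (m):
    xi = sqrt(2 * N_O(s) * D_O * t / (nu * N_B)), where N_O(s) is the oxygen mole
    fraction at the surface, D_O the oxygen diffusivity (m^2/s), N_B the mole
    fraction of the alloying element and nu the O atoms bound per B atom."""
    return sqrt(2.0 * n_o_surface * d_o * t / (nu * n_b))

# Illustrative order-of-magnitude values only (not measurements from the paper):
depth = internal_oxidation_depth(n_o_surface=1e-4, d_o=1e-12, n_b=0.02, nu=1.0, t=24 * 3600)
print(f"estimated internal oxidation depth after 24 h ≈ {depth * 1e6:.0f} µm")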

2   EXPERIMENTAL WORK
In internally oxidised Ag based alloys, where passivation normally occurs, we tried to hinder passivation with small modifications of the chemical composition. The hindrance of passivation, undisturbed oxidation of the main alloying element and growth of the internal oxidation zone were obtained with small additions of micro-alloying elements (from 0.001 up to 0.5 mass %) which possess a very large free energy of oxide formation. During the selection of the alloys' chemical composition, the following criteria were taken into consideration:

-   the selected Ag based alloys are mono-phased
-   the concentration of the main alloying element is approximately half the critical one (N (3))
-   on the basis of the Ag-Zn binary phase diagram and relatively small free formation energy for ZnO, Zn was selected as the main alloying element
-   in the investigated binary alloying system (Ag-Zn), the added micro-alloying element has very large free oxide formation energy in comparison with the free oxide formation energy of the main alloying element
-   the concentrations of added Mg were relatively small (between 0.5 and 0.001 mass %); therefore, this element is  designated as a micro-alloying element
-   inoculation with Mg was analysed from two standpoints: differences in free formation energy of oxides and appropriate crystallographic features with regard to the silver matrix (Table 1).

On the basis of the above-mentioned criteria, the selected Ag-Zn and Ag-Zn-Mg alloys are given in Table 2 for observation of the passivation phenomenon and the conditions for its hindrance.


Read More: Click here...

59
Others / A Simplified Pipeline Calculations Program: Liquid Flow (1)
« on: February 13, 2012, 04:06:05 am »
Quote
Author : Tonye K. Jack
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract and Program Objective— A multi-functional single-screen desktop companion program for piping calculations, using Microsoft EXCELTM with its Visual Basic for Applications (VBA) automation tool, is presented. The program can be used for the following piping geometries: circular, rectangular, triangular, square, elliptical and annular. Fluid properties are obtained from built-in fluid properties functions.

Index Terms— engineered spreadsheet solutions, liquid pipeline flow, pipeline design, pipeline fluid properties, piping program, pipeline sizing.

1   INTRODUCTION                                                                     
THE piping designer will often be saddled with the task of designing for different pipe configurations (circular, square ducts, etc.). Conducting such piping designs can often involve repetitive calculations, whether for simple horizontal pipelines or piping over complex terrain. Modern computer-assisted tools are now often employed as aids, if time and cost permit. Often, for minor changes to existing installations or for retrofitting, a customer (pipeline owner) will contract an engineering consultancy to conduct an analysis check involving routine desktop calculations, such as determining pressure drop, head loss, flow rate or pipe geometry (diameter, length, cross-sectional area, etc.), which can be assigned to an engineer for quick answers. Simple spreadsheet calculators can be developed to aid such small routine calculations. One such program is shown here, with all the equations required to develop one.

2 REQUIRED GENERAL EQUATIONS FOR INCOMPRESSIBLE FLOW

Reynolds Number:

Re = ρVD/μ                  (1)

Flow Velocity:

V = Q/A                  (2)

Area:              (3)



Head Loss:

hf = f (L/D) (V²/2g)                  (4)

Friction Factor:  The friction factor, f, is obtained as follows:

For Laminar Flow: The applicable equations for laminar flows (Re≤2100) can be defined in terms of a laminar flow factor, Lf, which varies depending on the pipe geometry. The equation is of the form:

f = Lf / Re      (5)
   
For Turbulent Flow, the friction factor, f, is obtained by the Colebrook-White equation:

1/√f = -2 log10[(ε/D)/3.7 + 2.51/(Re √f)]      (6)

Flowrate:

   For Laminar Flow:
      

              (7)                              
For Turbulent Flow:   

            (8)                                            

(9)

Range of application:    10⁻⁶ ≤ (ε/D) ≤ 2 x 10⁻²
      3 x 10³ ≤ Re ≤ 3 x 10⁸

Solution for Diameter:


          (10)                  

          (11)

Range of application:    10⁻⁶ ≤ (ε/D) ≤ 2 x 10⁻²
      3 x 10³ ≤ Re ≤ 3 x 10⁸

Pressure Drop:

ΔP = ρ g hf                  (12)

Shear Stress in Wall:

τw = f ρ V² / 8             (13)

Power required to pump through the line:   

                (14)


3 PIPE GEOMETRY AND FRICTION FACTOR
3.1 CIRCULAR SECTION PIPE:


The Laminar Flow factor is defined by the relation:           

Lf = Laminar Flow factor = f·Re = 64            (15)

For Turbulent Flow, f is obtained by the Colebrook-White formula (6).

Also, for Turbulent Flow within the limits defined below, explicit values for the friction factor, f, are obtained from the Swamee-Jain relationship (16).

f = 0.25 / [log10((ε/D)/3.7 + 5.74/Re^0.9)]²                  (16)


Range of application:    10⁻⁶ ≤ (ε/D) ≤ 2 x 10⁻²
      3 x 10³ ≤ Re ≤ 3 x 10⁸

The Microsoft ExcelTM Solver Add-in has two built-in interpolation search solution methods – the Newton method and the Conjugate Gradient method. By rewriting the equation to be solved in the required solution form (see (17)) in the Microsoft ExcelTM cells, the Solver Add-in options dialog box under the Tools menu allows the desired constraints to be set as follows:

Set Target Cell:
Equal To:
Subject to:   Guess value:

        (17)            
The Microsoft ExcelTM Goal Seek option is also useful.
Furthermore, the solution method provides for limiting the number of iterations, the degree of precision desired and the level of convergence (i.e. the decimal floating points). The error margin involved in the iteration calculation is indicated by the Tolerance percentage.  Care should be exercised to avoid a risk of having a circular reference – repeated recalculation of particular cell values as input and output.

Miller [1] suggests that a single iteration will produce a result within 1% of the Colebrook-White formula if the initial estimate is calculated from the Swamee-Jain equation.
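A minimal sketch of this approach outside the spreadsheet is given below, assuming the standard forms of the Swamee-Jain and Colebrook-White relations and SI units; the sample pipe data are illustrative only and are not taken from the paper.

from math import log10, sqrt, pi

def swamee_jain(re, rel_rough):
    """Explicit friction factor estimate for turbulent flow (used as the initial guess)."""
    return 0.25 / log10(rel_rough / 3.7 + 5.74 / re ** 0.9) ** 2

def colebrook(re, rel_rough, f0, iterations=1):
    """Refine f with the Colebrook-White equation, starting from the estimate f0."""
    f = f0
    for _ in range(iterations):
        f = (-2.0 * log10(rel_rough / 3.7 + 2.51 / (re * sqrt(f)))) ** -2
    return f

# Illustrative pipe: water at about 20 °C in a 0.1 m commercial steel line.
q, d, length = 0.01, 0.10, 250.0          # flow (m^3/s), diameter (m), length (m)
nu, rho, g, eps = 1.0e-6, 998.0, 9.81, 4.5e-5
v = q / (pi * d ** 2 / 4)                 # mean velocity (m/s)
re = v * d / nu                           # Reynolds number
f = colebrook(re, eps / d, swamee_jain(re, eps / d))
hf = f * (length / d) * v ** 2 / (2 * g)  # Darcy head loss (m)
print(f"Re = {re:.3e}, f = {f:.4f}, head loss = {hf:.2f} m, dP = {rho * g * hf / 1e3:.1f} kPa")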

Read More: Click here...

60
Quote
Author : Tarawalie Ismail Foday, Wengang Xing, Guangcheng Shao, Chunli Hua
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract— Partial root-zone drip irrigation was tested to investigate the effect of water use efficiency on the growth and yield of hot pepper under greenhouse conditions. This study was conducted to compare the effects of partial root-zone drip irrigation (PRDI) and examine how it affected soil water distribution, water use, growth, photosynthesis rate, stomatal conductance, transpiration rate and yield of hot pepper. The experiment was designed with three irrigation schedules of 30%, 50% and 100% of ETo respectively, over four stages of plant growth: (i) seedling and vegetative, (ii) flowering and fruit setting, (iii) vigorous fruit bearing, and (iv) late fruit bearing. The irrigation water amount was calculated according to daily evaporation. There were nine treatment rows and irrigation was carried out three times per week. Results showed that the average moisture content on both sides of each treatment was relatively constant or rose slightly: the highest was 25.20±0.23 and the lowest 21.05±0.69 for the right side, while for the left side the highest was 24.66±0.68 and the lowest 21.59±0.22, as shown in Tables 3.2 and 3.3. A high photosynthesis rate was recorded in treatment 1, with an initial irrigation schedule of 30% of ETo at the seedling and vegetative stage, whereas high stomatal conductance and transpiration rates were recorded in treatment 9 (control row) with a 100% of ETo irrigation schedule throughout the four stages of plant growth. At 150 days after transplanting, the hot pepper plants were harvested. The results further showed that moderate water at 50% of ETo during vigorous fruit bearing can increase yield: treatment 7, with an irrigation schedule of 50% of ETo, had the highest harvest yield of 3501 g, followed by treatment 9 (control) with 2982 g, while the lowest was recorded in treatment 6 with 1239 g. The results also indicated that treatments 1 and 8 responded poorly in yield, with 1489.86 g and 1506.17 g respectively, and also in dry biomass and water use efficiency, whereas treatments 4, 2 and 3 recorded moderate yields. Fig. 3.6 gives the full details of the data analysis of yield, dry biomass and the two water use efficiencies.

Index Terms— partial root-zone drip irrigation; irrigation frequency; water use efficiency (WUE); hot pepper; growth stage; reference evapotranspiration (ETo)

1   INTRODUCTION                                                                     
It will be much more difficult to meet food requirements in the future with declining water resources and limited clean water reserves, as 70% to 90% of the available water resources are used in food production. To cope with the water shortage problem, it is necessary to adopt effective water-saving agricultural countermeasures [19]. Efficient use of water in irrigation is becoming increasingly important. Agronomic measures such as varying tillage practices, mulching and anti-transpirants can reduce the demand for irrigation water and improve irrigation water use efficiency (IWUE). The development of novel water-saving irrigation techniques represents another option for increasing water use efficiency.

During the last two decades, water-saving irrigation techniques such as deficit irrigation (DI) and partial root-zone drying (PRD), or alternate irrigation (AI), have been widely developed and tested on field crops and fruit trees. Most recently, these irrigation techniques are also being tested on vegetable crops such as tomatoes and hot pepper [23]. In this paper, the principles of water-saving irrigation strategies such as the PRD mode, and their prospects for improving irrigation and crop water use efficiency in horticultural and agricultural production, are discussed.
Particularly, the effective use of irrigation water has become a key component in the production of field crops and high-quality fruit crops in arid and semi-arid areas. Irrigation has been the major driving force for agricultural development in these areas for some time. Efficient water use has become an important issue in recent years under the critical situation of water resource shortage in some areas. Much effort has been devoted to developing techniques such as RDI (regulated deficit irrigation) and CAPRI (controlled alternate partial root-zone irrigation, or partial root-zone drying, PRD, in the literature) to improve field and fruit crop water use efficiency [5], [6], [16], [17], [20], [9].

Natural soil is the product of the physical and chemical weathering of rock, and therefore exists universally in nature. The Earth's surface layer, about 0.5-1.0 m deep, is constituted of soil and organic humus; it is often used for cultivation and is then called agricultural soil. Water covers a high percentage of the world's surface and plays a key role in maintaining the Earth's temperature equilibrium, while also being a main factor in the ceaseless reshaping of the Earth's surface.

Nowadays, the tendency in developing irrigation in many countries is the sensible exploitation of existing hydraulic systems and a deepening of irrigation techniques and methods to raise economic returns from the usable water resources. Selection and application of a sensible irrigation method is of critical importance, because irrigation techniques play a crucial role in water supply and distribution for crops and determine, to some extent, the water losses in the field.

Besides, the currently common irrigation techniques, such as flood and canal irrigation, still incur a high degree of water loss.

Increasing water use efficiency (WUE) is one of the main strategic goals for researchers worldwide as well as for decision makers, due to water scarcity and the continuing high demand for water for agricultural irrigation. With the present low efficiency of irrigation water utilization, about 50% more water would be required; part of this demand could instead be met by increasing the effectiveness of irrigation. Agricultural irrigation uses over 70% of the world's clean water, most of which is used in protected environments [11]. Meanwhile, it is quite costly to use clean water and chemical solutions as fertilizers. In addition, the fast-growing industrial sector competes with agriculture for water resources, and the pollutants it emits have become the source of most water pollution, which will push agricultural activities to remote areas where water scarcity and salinity may be major problems.

The basic purpose of irrigation is to supply enough water to the soil to ensure that crops achieve their best development and growth. Traditional popular irrigation methods do not maintain suitable moisture for crop requirements during development and growth; the extent of change in soil moisture is fairly significant (higher or lower than the suitable moisture). Water-saving irrigation is the best water supply technique, contributing to considerably higher productivity and quality of crops. Therefore, the development of water-saving irrigation technology is urgent, and it will open up great prospects for planting industrial crops, fruit trees, vegetables and other crops of high economic value.

Traditional irrigation principles and methods have been challenged and modified [14]. Ideally, WUE should be improved by reducing leaf transpiration. Stomata control plant gas exchange and transpiration/water loss, and investigations have shown that stomata may reduce their opening according to the amount of water available in the soil [3], [22]. The advantage of such regulation is that the plant may delay the onset of an injurious leaf water deficit and so enhance its chance of survival under unpredictable rainfall, the so-called optimization of water use for CO2 uptake and survival [12], [2]. Recent evidence has shown that such feed-forward stomatal regulation works through a chemical signal, the increased concentration of abscisic acid (ABA) in the xylem flow from roots to shoots. The part of the root system in drying soil can produce a large amount of ABA, while the rest of the root system in wet soil can function normally to keep the plant hydrated [23]. The result of such a response is that the plant can have a reduced stomatal opening in the absence of a visible leaf water deficit.

In this study, in order to determine the effect of partial root-zone drip irrigation on hot pepper plant growth in a greenhouse, a partial root-zone irrigation field experiment was designed and carried out in a greenhouse of the Key Laboratory of Agricultural Engineering Water saving-park at the Jiangning campus of Hohai University, Jiangsu Province, Nanjing, China. In this experiment, quantitative monitoring with indoor positioning systems and a practical validation approach, combined with production practice, were applied.

In the term “partial root-zone drip irrigation”, “partial” means that at least part of the soil water content was controlled above a certain percentage of field capacity; the root system was divided into two parts, north and south, and water was applied to these parts.
Nine rows were established with 22 plants per row, and comparisons were made in terms of plant growth, shoot physiology and water use efficiency (WUE) using irrigation levels of 30%, 50% and 100% of ETo at the various growth stages.
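A minimal sketch of how irrigation amounts and water use efficiency could be tallied per treatment under these assumptions is given below; the daily ETo value, plot area and the use of the 3501 g yield figure from the abstract are for illustration only, not a reproduction of the study's analysis.

def water_applied_mm(eto_mm_per_day, fraction, days):
    """Cumulative irrigation depth (mm) for a schedule applying fraction*ETo each day."""
    return eto_mm_per_day * fraction * days

def water_use_efficiency(yield_g, depth_mm, plot_area_m2):
    """WUE as grams of fruit per litre of irrigation water (1 mm over 1 m^2 = 1 L)."""
    litres = depth_mm * plot_area_m2
    return yield_g / litres

# Placeholder treatment: 50% of ETo for 150 days on an assumed 2 m^2 plot yielding 3501 g.
depth = water_applied_mm(eto_mm_per_day=4.0, fraction=0.5, days=150)
print(f"water applied ≈ {depth:.0f} mm, WUE ≈ {water_use_efficiency(3501, depth, 2.0):.2f} g/L")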

Read More: Click here...

Pages: 1 2 3 [4] 5 6 ... 22