Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - IJSER Content Writer

1
Author : Shoush K. A., Mohamed M. K., Zaini H. and Ali W. Y.
International Journal of Scientific & Engineering Research Volume 4, Issue 10, October-2013
ISSN 2229-5518
Download Full Paper : PDF

Abstract- Measurement of static electricity is of critical importance in assessing the electric properties of car seat covers and their suitability for applications that enhance the safety and stability of the driver. The present work aims to measure the static electric charge generated from the contact and separation of materials used as car seat covers and clothes. Different materials of car seat covers and clothes were tested to measure the voltage generated from electrostatic charge. It was found that the voltage generated by the contact and separation of the tested upholstery materials of car seat covers against the materials of clothes showed great variance according to the type of the materials. The materials tested showed different trends with increasing load. The contact and separation of the tested materials against polyamide textiles generated a negative voltage, where the voltage increased down to a minimum and then decreased with increasing load. This behaviour can be interpreted by the fact that, as the load increased, the two rubbed surfaces, charged by free electrons, easily exchanged electrons of dissimilar charges so that the resultant voltage became relatively lower. High-density polyethylene displayed relatively lower voltage than cotton and polyamide textiles, while polypropylene textiles displayed relatively higher voltage than high-density polyethylene, and the variation of the voltage with load was much more pronounced. Voltage generated from polyester textiles showed reasonable values. A remarkable voltage increase was observed for contact with synthetic rubber; this observation can limit the application of synthetic rubber in tailoring clothes. Based on the experimental results, the materials of car seat covers can be classified according to their electric properties. Materials of high static electricity can be avoided and new materials of low static electricity can be recommended.

Keywords-Electric static charge, voltage, cotton, polyester, textiles, upholstery materials, car seat covers, triboelectrification


1 INTRODUCTION
The wide use of polymer fibres in textiles necessitates studying their electrification when they rub against other surfaces. The electrostatic charge generated from the friction of different polymeric textiles sliding against cotton textiles, which were used as a reference material, was discussed in [1]. Experiments were carried out to measure the electrostatic charge generated from the friction of different polymeric textiles sliding against cotton under varying sliding distance and velocity as well as load. It was found that an increase of cotton content decreased the generated voltage. Besides, as the load increased, the voltage generated from rubbing of 100 % spun polyester specimens increased, and mixing polyester with rayon (viscose) showed the same behavior as mixing it with cotton. Generally, increasing velocity increased the voltage. The voltage increase with increasing velocity may be attributed to the increase in the mobility of the free electrons to one of the rubbed surfaces. The fineness of the fibres strongly influences the movement of the free electrons. The electrostatic charge generated from the friction of polytetrafluoroethylene (PTFE) textiles was tested to propose developed textile materials with low or neutral electrostatic charge which can be used for industrial applications, especially as textile materials, [2]. Test specimens of composites containing PTFE and different types of common textile fibers such as cotton, wool and nylon, in percentages up to 50 vol. %, were prepared and tested by sliding under different loads against house and car padding textiles. An Ultra surface DC Voltmeter was used to measure the electrostatic charge of the tested textile composites. The results showed that the addition of wool, cotton and nylon fibers remarkably decreases the electrostatic discharge and consequently the proposed composites will become environmentally safe textile materials.
Research on electrostatic discharge (ESD) ignition hazards of textiles is important for the safety of astronauts. The likelihood of ESD ignitions depends on the environment and the different models used to simulate ESD events, [3]. Materials can be assessed for risks from static electricity by measurement of charge decay and by measurement of capacitance loading, [4]. Tribology is the science and technology of two interacting surfaces in relative motion and of related subjects and practices; the popular equivalent is friction, wear, and lubrication, [5]. The tribological behavior of polymers has been reviewed from the mid-20th century to the present day. The surface energy of different coatings was determined with a contact adhesion meter, adhesion and deformation components of friction were discussed, and it was shown how load, sliding velocity, and temperature affect friction. Different modes of wear of polymers and friction transfer were considered, [6]. The ability to engineer a product's tactile character to produce favorable sensory perceptions has the potential to revolutionize product design. Another major consideration is the potential for products to produce friction-induced injuries to skin such as blistering, [7, 8]. Sports activities may cause different types of injuries induced by friction between the skin and sport textiles. Focusing on runners, who are often bothered by blisters, the textile-foot skin interface was studied in order to measure and predict friction. The characteristics of mechanical contacts between foot, sock and shoe during running were determined. It was found that textiles with conductive threads did not give ignitions provided they were adequately earthed, [9]. When isolated, all textiles were capable of causing ignitions regardless of the anti-static strategy employed.

Read More: Click here...

2
Author : N M Eman, M S Alam, S M Khurshed Alam, Q M R Nizam
International Journal of Scientific & Engineering Research Volume 4, Issue 10, October-2013
ISSN 2229-5518
Download Full Paper : PDF

Abstract
The static and spherically symmetric Morris-Thorne traversable wormhole solutions in the presence of a cosmological constant are analyzed. We match an interior solution of a spherically symmetric traversable wormhole to a unique exterior vacuum solution at a junction surface. The surface tangential pressure on the thin shell is deduced. Specific wormhole solutions are constructed with a generic cosmological constant.

I INTRODUCTION
Wormholes are handles or tunnels in the spacetime topology connecting two separate and distinct regions of spacetime. These regions may be part of our Universe or of different Universes. The static and spherically symmetric traversable wormhole was first introduced by Morris and Thorne in their classic paper [1]. From the standpoint of cosmology, the cosmological constant Λ served to create a kind of repulsive pressure to yield a stationary Universe. Zel'dovich [2] identified Λ with the vacuum energy density due to quantum fluctuations. Morris-Thorne wormholes with a cosmological constant Λ have been studied extensively, even allowing Λ to be replaced by a space-variable scalar field. These wormholes cannot exist, however, if Λ is both space and time dependent. Such a Λ will therefore act as a topological censor.
In this article, we introduce an exact black hole solution of the Einstein field equations in four dimensions with a positive cosmological constant, coupled to an electromagnetic field and a conformally coupled scalar field. This solution is often called the Martinez-Troncoso-Zanelli (MTZ) black hole solution. In agreement with recent observations [3], this black hole only exists for a positive cosmological constant Λ, and if a quartic self-interaction coupling is considered. Static scalar field configurations such as those presented here, which are regular both at the horizon and outside, are unexpected in view of the no-hair conjecture [4]. The conformal coupling for the scalar field is the unique prescription that guarantees the validity of the equivalence principle in curved spacetime [5]. In the literature, a number of traversable wormhole solutions with a cosmological constant are available [6-21]. A general class of wormhole geometries with a cosmological constant and junction conditions was analyzed by De Benedictis and Das [9], and further explored in higher dimensions [10]. It is of interest to study a positive cosmological constant, as the inflationary phase of the ultra-early universe demands it and, in addition, recent astronomical observations point to Λ > 0. Lobo [12], with the intention of minimizing the exotic matter used, matched a static and spherically symmetric wormhole solution to an exterior vacuum solution with a cosmological constant, and he calculated the surface stresses of the resulting shell and the total amount of exotic matter using a volume integral quantifier [13]. The construction of traversable wormhole solutions by matching an interior wormhole spacetime to an exterior solution at a junction surface was analyzed in [13-15]. A thin-shell traversable wormhole with zero surface energy density was analyzed in [15], and with generic surface stresses in [14]. A general class of wormhole geometries with a cosmological constant and junction conditions was explored in [9], and a linearized stability analysis for the plane symmetric case with a negative cosmological constant was carried out in [17].
Morris-Thorne wormholes, with Λ = 0, have two asymptotically flat regions of spacetime. By adding a positive cosmological constant, Λ > 0, the wormholes have two asymptotically de Sitter regions, and by adding a negative cosmological constant, Λ < 0, the wormholes have two asymptotically anti-de Sitter regions. We analyze asymptotically flat and static traversable Morris-Thorne wormholes in the presence of a cosmological constant. An equation connecting the radial tension at the mouth with the tangential surface pressure of the thin shell is derived. The structure as well as several physical properties and characteristics of traversable wormholes due to the effects of the cosmological term are studied.
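The discussion above assumes the standard Morris-Thorne form of the static, spherically symmetric line element; it is quoted here for orientation only (standard textbook form, not reproduced from this excerpt):

```latex
% Morris-Thorne static, spherically symmetric wormhole metric (standard form).
% \Phi(r) is the redshift function and b(r) the shape function, with b(r_0) = r_0
% at the throat; \Lambda enters through the exterior vacuum solution it is matched to.
ds^{2} = -e^{2\Phi(r)}\,dt^{2} + \frac{dr^{2}}{1 - b(r)/r}
         + r^{2}\left(d\theta^{2} + \sin^{2}\theta\, d\varphi^{2}\right)
```

With Λ > 0 the interior is matched at the junction surface to a Schwarzschild-de Sitter exterior, and with Λ < 0 to a Schwarzschild-anti-de Sitter exterior, which is what produces the asymptotically de Sitter and anti-de Sitter regions mentioned above.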
This article is organized as follows: In Sec. II we study Einstein's field equations and the total stress-energy with a cosmological constant Λ. In Sec. III, we introduce an exact black hole solution with electromagnetic and conformally coupled scalar fields. The junction conditions and the surface tangential pressure are discussed in Sec.

Read More: Click here...

3
Author : Adedapo Ayo Aiyeloja and Adekunle Anthony Ogunjinmi
International Journal of Scientific & Engineering Research Volume 4, Issue 10, October-2013
ISSN 2229-5518
Download Full Paper : PDF

Abstract - Economic aspects of grasscutter farming and their implications for sustainable adoption and conservation were studied in Ondo, Osun and Oyo States, southwest Nigeria. Data were collected through questionnaire administration from 4 Local Government Areas in Ondo and Osun States and from 5 Local Government Areas in Oyo State, where grasscutter farming has been adopted. Thirty grasscutter farms were randomly selected from 150 farms in the three states; thus, 20% of the farms were selected. Data covered the demographics of the grasscutter farmers and the amounts invested and income generated from 2003 to 2005. Data were analysed using descriptive statistics, Student's t-distribution, multiple regression and cost-benefit analysis. The rate of return on investment and its trend for the enterprise were also determined. The results indicated that the enterprise was below the poverty line in each of the three states. Osun State had the highest cost-benefit ratio with 3.64 while Ondo State had the least with 1.77. Also, Osun State had the highest rate of return on investment while Ondo State had the least. The trend in the rate of return on investment showed that Oyo State had the highest with an R2 of 0.9934, while Ondo State had the least with an R2 of 0.7135. The study concluded that grasscutter farming is relatively young and, as such, its profitability and poverty alleviation potential may take several years of investment to materialize.

Index Terms Economic, grasscutter farming, poverty line, sustainable adoption, conservation.


1 INTRODUCTION
Rural communities in many parts of Africa, Asia, central Europe and the Americas are increasingly concerned about losing self-sufficiency as their local wild populations of animals used for bushmeat dwindle, because the wildlife biomass of tropical forests is generally low [1]. Wildlife hunting may be sustained, but only where human population densities are low [2]. It has been suggested that for people depending exclusively on wild meat, hunting may not be sustainable if human population densities are greater than 1 or 2 persons/km2 [3]. Unrestricted access to valued but vulnerable species may provide a high initial harvest, but this will merely be a temporary "bonanza" followed by loss of local self-sufficiency and higher effort or prices to get the species elsewhere [1].
The shortage of animal protein in third world countries can be ameliorated by improving the existing wildlife conservation programme, particularly the domestication of rodents that are tractable, prolific, and widely accepted by the public for consumption [4]. Captive breeding of game species as a possible way to satisfy local demand without compromising the wild stock has also been recommended by several authors [5, 6, 7, 8].
This has obvious attractions where bushmeat fetches a high price [9], and logically, it could lead to reduced demand for wild-caught specimens [8]. Again, captive rearing of rodents in enclosures might augment the bushmeat supply from the wild [10]. The grasscutter or cane rat has been suggested as one of the minilivestock with potential for domestication. Grasscutter rearing has been stated to have health-related advantages, including better nutrition from consumption of meat [11]. There is also strong evidence that local diets in some parts of Africa frequently include non-conventional livestock such as cane rats that make significant contributions to the nutritional well-being of marginal households [12, 13].
The economic viability of grasscutter farms depends on the socio-economic context of the farm. If the farm is placed near urban centers where bushmeat prices and demand are high, a middle-sized cane rat farm can certainly be profitable [14]. In Libreville, Gabon's capital city, for example, wild cane rat meat is sold at 2.8 US$/kg (1 US$ = 695 FCFA) but farmed animals are sold at 5 US$/kg without any difficulty [14]. A World Bank study showed that small-scale cane rat farming with a yearly stock of 260 animals (40 reproductive females) was the most profitable system of animal exploitation in Ghana, followed by poultry and rabbit farming [15].
A farm of this size could easily reach a profitability threshold of between 350 and 400 US$/year with the sale of 14 to 20 animals for meat at 5 US$/kg [14]. Several authors in different African countries seem to agree that a small-scale farm of 40 reproductive does is the most profitable scale of production for that species and that well-managed cane rat farms can substantially contribute to local economies and produce enough profit to make a living [16, 17]. It has been noted that grasscutter breeders generally earn two (2) times more than what they invested in grasscutter husbandry [18]. This is a crucial point for the development of grasscutter farming in Africa that deserves further analysis or investigation [14]. Generally speaking, cane rat farming profits vary depending on the country and the area where the farm is based, and show better prospects of economic success in peri-urban areas where demand for bushmeat is higher, transport costs are limited and game is sold at high prices. In rural areas, hunting management of wild cane rats certainly shows more promise than farming, since these rodents are abundant and their capture reduces predation on and damage to feeding crops. Moreover, prices in rural areas are at least two times lower than those paid in urban centres [19], and spending money on producing animals that are abundant in the wild seems unrealistic, unless hunting is prohibited and respect for the law can be guaranteed [14]. Studies indicate that grasscutter farming possesses environment-related advantages such as reduction in poaching and bushfires [11]. It also reduces bushfires caused by poachers [11, 20, 21].
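As a rough check of the profitability arithmetic quoted above, the sketch below reproduces the 350-400 US$/year range from the stated sales volume and price; the assumed carcass weight per animal is a hypothetical figure, not given in the text:

```python
# Back-of-the-envelope check of the profitability threshold cited in the text:
# 14-20 animals sold per year at 5 US$/kg should yield roughly 350-400 US$/year.
# The carcass weight per animal is an assumption used only for illustration.

PRICE_PER_KG = 5.0  # US$/kg for farmed cane rat meat (from the text)

def annual_revenue(animals_sold: int, carcass_kg: float) -> float:
    """Gross revenue from meat sales in US$ per year."""
    return animals_sold * carcass_kg * PRICE_PER_KG

# Assumed average carcass weights of roughly 4-5 kg reproduce the quoted range.
print(annual_revenue(14, 5.0))   # 350.0 US$
print(annual_revenue(20, 4.0))   # 400.0 US$
```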
There is a large body of literature on grasscutter domestication, especially from the last twenty years, and some enterprises specialized in its rearing already exist in Nigeria and other parts of West Africa. In the savanna area of West Africa, people have traditionally captured wild grasscutters and raised them at home. As an extension of this, organized grasscutter husbandry has been initiated. Many researchers have reported the potential inherent in domesticated grasscutter in West Africa [22, 23, 24, 25] and reported various degrees of successful domestication of grasscutter in Ghana, Benin and Nigeria. It has also been reported that grasscutter contributes to both local and export earnings of countries like Kenya, Benin Republic and Nigeria [26]. Its meat, said to resemble suckling pig, often sells for more per kilogram than chicken, beef, pork or lamb. It is the preferred, and perhaps most expensive, meat in West Africa. Indeed, in Ivory Coast it sells for about US$9 per kilogram [27]. With prices like that, grasscutter is a culinary luxury that only the wealthy can afford. If domestication of this wild species is successful in providing meat at a price similar to that of poultry, markets would be unlimited. In an effort to capitalize on the markets for this delicacy, the agricultural extension services of Cameroon, Ghana, Ivory Coast, Nigeria and Togo, and particularly Benin, are already encouraging farmers to rear grasscutter as backyard livestock. The need to evaluate the profitability and economic viability of grasscutter farming, as well as the implications for sustainable and continued adoption of the technology and for conservation, justifies the present study.

2 MATERIALS AND METHODS
The study areas - Ondo, Osun and Oyo States - are in the southwest of Nigeria. Ondo State lies between latitudes 5°45′ and 6°05′E. It is bounded on the east by Edo and Delta States, on the north by Ekiti and Kogi States and on the south by the Bight of Benin and the Atlantic Ocean. Osun State covers an area of approximately 14,875 square kilometers, lies between longitude 4°33′E and latitude 7°28′N, and is bounded by Ogun, Kwara, Oyo, and Ondo States in the south, north, west, and east respectively. Oyo State lies between latitude 7°00′N and longitude 3°00′E. Oyo State is bounded by Kwara State on the north, Osun on the east, Ogun on the south and the Republic of Benin on the west.
The climate of southwest Nigeria is tropical, characterized by wet and dry seasons. The temperature ranges between 21°C and 34°C while the annual rainfall ranges between 1250 mm and 3000 mm. The wet season is associated with the southwest monsoon winds from the Atlantic Ocean while the dry season is associated with the northeast trade winds from the Sahara desert. The vegetation of southwest Nigeria is made up of freshwater swamp and mangrove forest at the coastal belt; the lowland rainforest stretches to Ogun and parts of Ondo State, while secondary forest lies towards the northern boundary, where derived and southern Guinea savanna exist [28].

Read More: Click here...

4
Author : Nulla, Yusuf Mohammed
International Journal of Scientific & Engineering Research Volume 4, Issue 10, October-2013
ISSN 2229-5518
Download Full Paper : PDF

Abstract - This research investigated ownership structure and its impact on the CEO compensation system in TSX/S&P and NYSE index companies over the period 2005 to 2010. A total of two hundred and forty companies were selected through a random sampling method. The research question for this study was: is there a relationship between CEO compensation, firm size, accounting performance, and corporate governance among owner-managed and management-controlled companies? To answer this question, thirty-six statistical models were created. It was found that there was a relationship between CEO compensation, firm size, accounting performance, and corporate governance, both in owner-managed and management-controlled companies, except for the relationship between CEO bonus and firm size in owner-managed companies.

Keywords: Accounting Performance, Corporate Governance, Corporate Ownership, Accounting Earnings, TSX/S&P CEO compensation, and NYSE CEO compensation.


1 INTRODUCTION
The purpose of this research is to understand the influence of firm ownership on CEO compensation in a combined study of TSX/S&P and NYSE index companies over the period 2005 to 2010; that is, the extent of the influence of owner-controlled and management-controlled companies on the CEO compensation system. This interesting and important study in the executive compensation area will reveal some scientific methodologies or trends for understanding the nature of the CEO contract under the respective ownerships. This study was also motivated by the fact that, over the past decade, the Canadian and United States public has raised concerns over large bonuses awarded to CEOs by their boards of directors. The public's failure to understand the determinants of CEO compensation has led to accusations that CEOs engage in rent grabbing, misuse their power over the board, and monopolize the compensation system. Thus, these ever-growing concerns bring to the foreground the need to further study the CEO compensation system, especially the effect of the type of ownership on CEO compensation as one important variable of executive compensation research.
CEOs and other executives would like to eliminate the risk exposure in their compensation packages by decoupling their pay from performance and linking it to a more stable factor, firm size. This strategy indeed deviates from obtaining the optimum results from principal-agent contracting. In general, previous studies have found a strong relationship between CEO compensation and firm size, but the correlation results ranged from nil to strongly positive. The variables used in previous studies as a proxy for firm size were either total sales, total number of employees, or total assets. Therefore, firm size needs to be studied with CEO cash compensation in greater detail, for example using both total sales and total number of employees.
The most researched topic in executive compensation is the relationship between CEO compensation and firm performance. Although executive compensation and firm performance have been the subject of debate amongst academics, there is little consensus on the precise nature of the relationship; as such, further research in greater detail needs to be conducted to understand in finer terms the true extent of the relationship between them. For this reason, this research has, unprecedentedly, used eight variables to test against CEO compensation, that is, return on assets (ROA), return on equity (ROE), earnings per share (EPS), cash flow per share (CFPS), net profit margin (NPM), book value per common stock outstanding (BVCSO), and market value per common stock outstanding (MVCSO).

The relationship between CEO compensation and corporate governance (CEO power) has not been tested extensively in the past, especially in Canada; in fact, only a few credible research papers have been written. That is, CEO power has only recently become a focus among researchers, primarily because researchers had failed to find a strong relationship between CEO compensation, firm size, and firm performance. The variables used in previous studies as proxies for corporate governance, such as CEO age, CEO tenure, and CEO turnover, were found to have a negligible to weak relationship with CEO compensation. In addition, third-party data collection, different population samples such as industry and market, and the use of different statistical methods all led to a divergence in results. Therefore, corporate governance needs to be studied with CEO compensation on an extensive basis, for example using CEO age, CEO stocks outstanding, CEO stock value, CEO tenure, CEO turnover, management 5 percent ownership, and individual/institutional 5 percent ownership.

2 LITERATURE REVIEW
2.1 CEO COMPENSATION AND FIRM SIZE
Prasad (1974) believed that executive salaries appear to be far more closely correlated with the scale of operations than with its profitability. He also believed that executive compensation is primarily a reward for previous sales performance and is not necessarily an incentive for future sales efforts. McEachern (1975) believed that executives are risk averse: they can reduce or eliminate risk exposure in their compensation package by linking it to a more stable factor, firm size. Gomez-Mejia, Tosi, and Hinkin (1987) believed that firm size is a less risky basis for setting executives' pay than performance, which is subject to many uncontrollable forces outside the managerial sphere of influence. Deckop (1988) believed that a strong sales-compensation relationship would suggest that CEOs are given an incentive to maximize size rather than profitability. Tosi and Gomez-Mejia (1994) believed that the measurement of firm size is the composite score of the standardized values of reported total sales and number of employees. Gomez-Mejia and Barkema (1998) defined the relationship between CEO compensation and firm size as "positive"; that is, CEOs in large companies earn higher incomes than CEOs in small companies. This is supported by Finkelstein and Hambrick (1996), who believed that firm size is related to the level of executive compensation. This is further supported by Murphy (1985), who finds that, holding the value of a firm constant, a firm whose sales grow by 10% will increase CEO salary or bonus by between 2% and 3%. Therefore, it shows that the size-pay relation is causal, and CEOs can increase their pay by increasing firm size, even when the increase in size reduces the firm's market value. Shafer (1998) showed that pay sensitivity, measured as the change in CEO wealth per dollar change in firm value, falls with the square root of firm size; that is, CEO incentives are roughly 10 times lower for a $10 billion firm than for a $100 million firm.
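The two quantitative claims just cited lend themselves to a quick numerical sketch; the dollar figures below are hypothetical, and only the functional forms (the 10%-sales-growth to 2-3%-pay relation and the square-root scaling of pay sensitivity) come from the studies named above:

```python
# Illustrative sketch of two stylized facts from the literature review.
# All dollar figures are hypothetical; only the functional forms come from the text.

def pay_after_sales_growth(current_pay: float, sales_growth: float,
                           elasticity: float = 0.25) -> float:
    """Murphy (1985): a 10% sales increase raises salary/bonus by 2-3%,
    i.e. an elasticity of roughly 0.2-0.3 (0.25 used here as a midpoint)."""
    return current_pay * (1.0 + elasticity * sales_growth)

def relative_pay_sensitivity(size_a: float, size_b: float) -> float:
    """Shafer (1998): pay-performance sensitivity falls with the square root of
    firm size, so sensitivity(size_a)/sensitivity(size_b) = sqrt(size_b/size_a)."""
    return (size_b / size_a) ** 0.5

print(pay_after_sales_growth(1_000_000, 0.10))    # 1,025,000.0: a 2.5% raise
print(relative_pay_sensitivity(10e9, 100e6))      # 0.1: the $10B firm's CEO has
                                                  # 10x lower per-dollar incentives
```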

2.2 CEO COMPENSATION AND FIRM PERFORMANCE LINKAGE
According to previous studies conducted in the United States and the United Kingdom, CEO compensation is believed to be weakly related to firm performance. Loomis (1982) argued that pay is unrelated to performance. Henderson and Fredrickson (1996) and Sanders and Carpenter (1998, 2002) argued that CEO total pay may be unrelated to performance but is related to the organizational complexity they manage. Likewise, studies conducted by Murphy (1985), Jensen and Murphy (1990), and Joskow and Rose (1994) reached similar conclusions. Jensen and Murphy (1990) argued that incentive alignment as an explanatory agency construct for CEO pay is weakly supported at best; that is, the objective provisions of the principal-agent contract are not comprehensive enough to effectively create a direct link between CEO pay and performance. They find that pay-performance sensitivity for executives is approximately $3.25 per $1000 change in shareholder wealth, which is small for an occupation in which incentive pay is expected to play an important role. This is supported by Tosi, Werner, Katz, and Gomez-Mejia (2000), who find that the overall ratio of change in CEO pay to change in financial performance is 0.203, accounting for about 4% of the variance. This weak relationship is explained by Borman and Motowidlo (1993) and Rosen (1990), who stated that archival performance data focus only on a small portion of a CEO's job performance requirements; as such, it is difficult to reach a robust conclusion. Jensen and Murphy (1990) also considered whether CEO bonuses might be tied to performance measures unobservable to outsiders: if bonuses depend on performance measures observable only to the board of directors, they could provide a significant incentive. They believed that one way to detect the existence of such phantom performance measures is to examine the magnitude of year-to-year fluctuations in CEO compensation, and that such fluctuations signify that CEO pay is unrelated to accounting performance. In addition, they argued that although bonuses represent 50% of CEO salary, such bonuses are awarded in ways that are not highly sensitive to performance, and the variation in CEO pay is better explained by changes in accounting profits than by stock market value. Overall, they believed that pay-performance sensitivity remains insignificant.

Read More: Click here...

5
Author : Samedin Krrabaj, Xhelal Susuri
International Journal of Scientific & Engineering Research Volume 4, Issue 10, October-2013
ISSN 2229-5518
Download Full Paper : PDF

Abstract - The rapid development of computer technology has contributed to increased accuracy, quality and productivity in the metal-forming industry. This paper describes some key problems in the process of designing tools for blanking and punching; to solve these problems, a program for generating technological parameters was compiled and used, which provides solutions that can be realized in real conditions with a high degree of reliability. The development of modern sheet-processing processes requires, in the design phase, the support of FEM numerical methods and powerful CAD software. If full integration is achieved, it offers us realistic conditions and competitive advantages. The model defined in this paper is constructed on a PC, integrated with the Solid Works CAD system, and provides the basis for analysis and simulation of the process, which should enable us to determine the optimal construction of the tool.
Index Terms - CAD system, finite element methods, generating, modeling, parameters, optimization, simulation.


1 INTRODUCTION
These days, it is impossible to speak of any blanking and punching process parameters without implementing the finite element method, both in modeling continuum behaviour and in the structural analysis of the tool. Modeling and simulation offer many possibilities for solving various problems in the blanking process. In our case, a software program was developed to solve this problem; in the first working step it offers two options for the arrangement of the parts in the strip and allows us to choose the most suitable constructive solution. With full use of 3D design capabilities in cross-sections through the complete tool, the intended clearance can be transferred to the virtual model for any mutual contact of the cutting parts of the tool with the strip material. In this paper, special attention was paid to the finite element method, which is undoubtedly a powerful tool for the numerical simulation of the blanking-punching process. The selected criterion for optimizing the whole process is the real value of the clearance reached between the mobile upper elements of the tool and the cutting plate in the area of material separation. This parameter has a crucial role in the quality of the finished part, the required geometric accuracy and the service life of the tool. On the other hand, large clearance values between the working elements reduce the quality of the finished part, both on its side surfaces and in terms of the accuracy of its geometry.
For these reasons, the software code allows potential users to introduce the clearance values considered necessary for the given case on the tool working surfaces and allows continuous monitoring of the change in the blanking and punching force.
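The force being monitored here is conventionally estimated with the textbook relation F = L · t · τ (cut-contour perimeter × sheet thickness × shear strength); the sketch below illustrates that generic relation and a typical clearance rule of thumb. It is not taken from the "prog. Blank" program described later, and the material values are assumptions:

```python
# Generic blanking/punching force estimate, F = L * t * tau_s,
# where L is the cut perimeter, t the sheet thickness and tau_s the shear strength.
# Illustrative only; material data below are assumed, not from the paper.

def blanking_force_kn(perimeter_mm: float, thickness_mm: float,
                      shear_strength_mpa: float) -> float:
    """Cutting force in kN for a single blanking/punching stroke."""
    force_n = perimeter_mm * thickness_mm * shear_strength_mpa  # N, since mm^2 * MPa = N
    return force_n / 1000.0

def radial_clearance_mm(thickness_mm: float, ratio: float = 0.06) -> float:
    """Die-punch clearance per side, commonly taken as ~5-10% of sheet thickness."""
    return ratio * thickness_mm

# Example: 200 mm cut contour, 2 mm mild-steel sheet, assumed shear strength 300 MPa.
print(blanking_force_kn(200.0, 2.0, 300.0))   # 120.0 kN
print(radial_clearance_mm(2.0))               # 0.12 mm per side
```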

2 BLANKING AND PUNCHING TOOL DESIGN
2.1 The geometric model of the sheets parts
The CAD model of the examined part in our case is presented in Figure 1. The model of the desired part is generated in Solid Works as a solid model. However, since FEM systems for sheet processing require surface models, the transformation is made, when necessary, with a simple command; such capabilities exist in almost all CAD systems.
Determining the optimal model is especially important when defining the complex contours of the desired part, where, in addition to the processed contours, the contours of any connection of the part's segments must be taken into account if the part is complex. These are also very important input data for simulation of the process, which is based on the model given in the experiment.

Samedin Krrabaj, Dr.sc., Department of Production and Automation, FME, Prishtina University, Kosovo, +37744143575, E-mail: samedinkrrabaj@gmail.com
Xhelal Susuri, Msc., Technical High School in Prizren, Kosovo, +37744218081, E-mail: xhelalsusuri@gmail.com

Fig. 1. CAD model ready for production
2.2 The programming solution for the design of the tools
In order to achieve the objectives of this dissertation paper, our efforts have resulted in the creation of a program called "prog. Blank", which enables the automation of the tool modeling process for punching and blanking of sheet parts. This program has modules which enable the automation of the process of placing the part drawings in Solid Works.

Read More: Click here...

6
Electronics / Dyon Solutions in Non-Temporal Gauge
« on: February 18, 2012, 02:35:06 am »
Author : Vinod Singh and D C Joshi
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract
Employing the Cabbibo-Ferrari type non-Abelian field tensor, we consider the gauge theory under non-temporal gauge conditions and show that the obtained solutions are dyonic and have finite energy.

Index Terms: Dyon Solutions, Non-Abelian Field Tensor, Gauge Field Theory

1. Introduction
In the 1930s Dirac [1] advanced the idea that isolated magnetic poles might exist. The idea of magnetic monopoles got a boost in the 1970s when 't Hooft [2] and Polyakov [3] showed that gauge field theories in which the symmetry group is spontaneously broken possess classical solutions with the natural interpretation of magnetic monopoles. Soon the conjecture of Julia and Zee [4] was seen as the non-Abelian analogue of Schwinger's Abelian dyons [5]. The interest in monopoles and dyons generated by Dirac [1], 't Hooft [2], Polyakov [3] and Julia and Zee [4] has remained undiminished, and extensive theoretical and experimental works on the related topics have been undertaken [6-21, 30].
Since the solutions which were interpreted as magnetic monopoles were originally found in the SO(3) gauge group, and this group is too small for unifying the electromagnetic and weak interactions, larger gauge groups like SU(3) were explored [8-12, 22, 23]. A key factor in such theories is the twin combination of the choice of gauge and the choice of gauge field tensor. Theories have in general followed the approach of Julia and Zee [4], employed the usual Yang-Mills type field tensor, used temporal gauge conditions to arrive at monopole solutions, and obtained dyon solutions in the non-temporal gauge.
In the 1960s, Cabbibo and Ferrari [24] developed a two-potential field tensor for a theory of Abelian dyons, while the Yang-Mills type field tensor continued to be used for dyon solutions in non-Abelian gauge theories. One of the authors (DCJ) has in earlier papers [11] developed a Cabbibo-Ferrari [24] type field tensor for non-Abelian fields and employed it [12-13] in non-Abelian gauge theories with electric and magnetic sources. Using the same field tensor and the Kyriakopoulos [22] technique, we showed in the previous paper [31] that dyon solutions can be obtained in the temporal gauge. The Kyriakopoulos [22] technique under the temporal gauge conditions reduced the gauge field equations to first-order differential equations whose solutions depicted a set of dyon solutions. Extending that analysis, in the present paper we examine the gauge under non-temporal gauge conditions and find that in this case too we obtain finite-energy dyon solutions, but unlike the previous case they emerge as solutions of second-order differential equations. The paper is divided into six sections. Section 2 defines the Lagrangian density, the gauge group of the theory, the field equations and the matrix notation. The ansatz for obtaining the solutions is presented in Section 3. The solutions are shown to have finite energy in Section 4, and the adjoining solutions are obtained in Section 5. That the obtained solutions belong to electric and magnetic charges is shown in Section 6, which is followed by the concluding remarks.
2. The Gauge Group and the Lagrangian Density
In this section we briefly recapitulate the steps from the previous paper [31].
The system whose gauge group is  , is described by the Lagrangian density
    (1)
where(31)
    (2a)
and its dual is
    (2b)
in which gauge fields   and   transform as
          (3a)
and          (3b)
where   is a gauge function
             (4)
with   the real functions of space-time and   representing the group generators of   group obeying
             (5)
The  are the  structure constants, with a, b, c running from 1 to 8, where  (a = 1, 2, ..., 8) are the eight Gell-Mann matrices [25].
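Equation (5) itself is not reproduced in this excerpt; for generators built from the Gell-Mann matrices, the relation referred to is presumably the standard SU(3) algebra, quoted here for completeness:

```latex
% Standard SU(3) algebra assumed here: generators T_a = \lambda_a / 2, with
% \lambda_a the Gell-Mann matrices and f_{abc} the totally antisymmetric
% structure constants.
[T_a, T_b] = i\, f_{abc}\, T_c, \qquad a, b, c = 1, \dots, 8
```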
The   in the Lagrangian density (1) indicates the products in which the fields have been assumed mutually non-interacting. As a result of this assumption the mutual interaction terms, i.e. the cross-terms, disappear leaving
          (6)
 
 (7)
and   
             (8)
where
          (9)
and                (10)
with
       (11)
             (12)
                (13)
and                (14)
The covariant derivative, which is expressed as
             (15)
transform as
             (16)
The potential energy  in the Lagrangian density (1) describes the self-interaction of the field  and has the form
          (17)
in which the two constants are real. The fields  may denote the Higgs [26, 27] triplet fields.
The Euler-Lagrange variations of the Lagrangian density (1) with respect to  ,  ,   and   lead to the field equations
       (18)
       (19)
          (20)
and          (21)
   Introducing the notation
                (22a)
and                (22b)
and also express the Higgs field  as
    (22c)
where  (a = 1, 2, ..., 8) label the Gell-Mann matrices [25], we may express the field equations (18) to (21) in matrix notation as
       (23)
       (24)
       (25)
       (26)
respectively. It is obvious from the above that [11]
3. The Ansatz
In the previous paper [31] the gauge fields obeyed the temporal gauge conditions; here the temporal parts  and  do not vanish, so we are required to use the ansatz [28]
    
 
    (27)
where x1, x2 and x3 are the components of the distance three-vector. We also introduce the three-vector functions  and  expressed by [28]
                (28)
                (29)
                (30)
                (31)
                (31)
             (32)
and [29]
           (33)
          (34)
             (35)
and                (36)
where the functions are purely r-dependent.
The ansatz for the Higgs fields, as before [28, 29], is
             (37)
and             (38)
where the coefficients   and   too are purely r-dependent. We also introduce the vector
4. Finite energy Solutions.
In the earlier paper [31] we defined
    (39)
 (40)
where   have been defined in equations (28) to (30) and
                (41)
          (42)
                (43)
             (44)
and
       (45)
       (46)
where
             (47)
             (48)
          (49)
          (50)
with similar relations with   and  .
As shown in the following subsection, the ansatz (33), (34), (37) and (38) allows us to write the field equations (18)-(21) in terms of field equations without indices.
We use the same ansatz and notation as in the earlier paper [30] for the temporal gauge. We also employ the ansatz for the non-temporal gauge [22].
We can express their space-time components as [28]
 (51)
Where
             (52)
             (53)
             (54)
             (55)
and
       (56)
where
              (57)
             (58)
             (59)
             (60)
Now we look at the field equations [31], (23) to (26), and separate their space and time components. Using equations (51) and (56), the respective space and time components of (23) and (24) can be expressed as
 (61)
     (62)
          (63)
and             (64)
where the terms defined in (39) and (40) appear. For the space and time parts of eqs. (25) and (26), we observe that V = 0 and find that, due to the static nature of the fields and the ansatz, their time parts vanish, leaving the space parts as
          (65)
          (66)
    Now we first look at the set of eqs. (61), (62) and (65) that contain the space parts  of the gauge field  . Using eqs (34) in these equations we can calculate the individual terms as

Read More: Click here...

7
Author : Prof. Malay Niraj, Praveen Kumar, Dr. A. Mishra
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract -- The present study is an approach for finding the suitable maintenance practice and frequency of maintenance with the help of the criticality factor of equipment; it is based on failure mode evaluation and criticality analysis. Criticality means that the failure probability of the equipment is very high. A minor failure of critical equipment may lead to a severe impact on the performance of the equipment, so critical equipment needs a very high degree of maintenance activity and maintenance frequency to prevent any failure. This model has been implemented in a process industry and many OEE-like factors have been improved.

Key-words- FMECA, Criticality Factor, and Overall Equipment Effectiveness

1.   INTRODUCTION
The Failure Modes, Effects and Criticality Analysis (FMECA) is really an extension of the FMEA, focusing on the quantitative parameters for a criticality assigned to each probable failure mode, and is discussed below. A widely accepted military standard for conducting FMEAs is Mil-Std-1629; this military standard details the specifics of conducting an FMEA. Like any analytical tool, if used and implemented correctly the FMEA is a powerful design engineering aid, and it is used in the aerospace, military, automotive and space sectors. These industries have their own variants of how and why to conduct an FMEA; however, their intent is the same. For instance, NASA focuses on the qualitative aspect of failure modes and their effect on a system, rather than a quantitative approach, which would not be the case in conducting a FMECA as opposed to a FMEA alone. Supporting the NASA FMEA process is a Critical Items List (CIL); this list contains all the failure modes that would have catastrophic effects on a system or mission. The Failure Modes and Effect (Criticality) Analysis is termed a bottom-up analysis. The FMEA is based on a qualitative approach, whilst the FMECA takes a quantitative approach and is an extension of the FMEA, assigning a criticality and probability of occurrence to each given failure mode. Maintenance is now a significant activity in industrial practice. According to Halasz et al. [1], reporting on the 1996 costs of maintenance across 11 Canadian industry sectors, "in addition to every dollar spent on new machinery, an additional 58 cents is spent on maintaining existing equipment. This amounts to repair costs of approximately $15 billion per year." As a consequence, the importance of maintenance optimization becomes obvious. According to a survey conducted by Jensen [2] based on the MATH DATABASE of STY, from 1972 to 1994 the number of publications with the keyword "Reliability" was 3,521 and, in addition, 1,909 papers had the keywords "Maintenance" or "Repair". These papers, which are related to reliability and maintenance, account for about 0.8% of all mathematical publications. This shows the importance of this field and, at the same time, the difficulty of providing a complete overview of the subject. Several intensive surveys can be found in the journal Naval Research Logistics Quarterly, where Pieskalla and Voelker [3] has 259 references, Sherif and Smith [4] has an extensive bibliography of 521 references, and Valdez-Flores and Feldman [5] has 129 references. Certainly it is getting harder and harder to grasp this huge and growing field, and attempting to summarize it with several universal optimization models is definitely infeasible. Different maintenance policies are used depending on the characteristics of the equipment. The complexity of maintenance planning is even higher because of some characteristics that distinguish it from other types of scheduling (Noemi & William [6]). Waeyenberg and Pintelon [7] propose a maintenance policy decision model to identify the correct maintenance policy for a particular component.

2. CRITICALITY ANALYSIS
Criticality analysis is based on failure mode evaluation analysis. Criticality means that the failure probability of the equipment is very high. A minor failure of critical equipment may lead to a severe impact on the performance of the equipment, so critical equipment needs a very high degree of maintenance activity and maintenance frequency to prevent any failure.
Where,
 
Frequency factor: a number awarded depending on the frequency of failure. The more failures occur, the higher the value given to the factor.

Protection factor: a number awarded according to the ease of protecting the equipment from failure. The minimum number is given when protection against failure is easy; the maximum number is given when protection against failure is very difficult.

Severity factor: represents the level of effect of a failure on the equipment, on the basis of downtime, scrap rate and safety.

Downtime factor: a number awarded in accordance with the failure (down) time associated with the equipment. The greater the downtime, the higher the factor; the shorter the downtime, the lower the factor.

Scrap rate factor: if the chances of scrapping the whole equipment or component in the case of failure are high, the scrap factor value is taken to be higher; where there is less chance of scrapping the equipment or component, the factor value is taken to be lower.

Safety factor: represents the risk associated with a failure. If the chances of injury (to both man and machine) are high in the case of equipment failure, a higher value is given to the safety factor; the lower the chances of injury, the lower the value given to the safety factor.

On this basis, the criticality factor of all the components of any industry is calculated. This process is given the name failure mode, effects and criticality analysis (FMECA). The factors associated with the criticality analysis have different levels of impact on the criticality of the equipment, so different ranges or weightages are assigned to them.
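Since the exact formula and weightages are not reproduced in this excerpt, the following is only a hypothetical sketch of how frequency, protection and severity sub-factors might be combined into a single criticality score:

```python
# Hypothetical criticality scoring in the spirit of the FMECA scheme described above.
# The study's actual formula and weightages are not given in the excerpt; the
# multiplicative combination and the 1-10 ranges below are illustrative assumptions.

def severity_factor(downtime: int, scrap_rate: int, safety: int) -> float:
    """Combine the three severity sub-factors (each rated 1-10) into one score."""
    return (downtime + scrap_rate + safety) / 3.0

def criticality_factor(frequency: int, protection: int,
                       downtime: int, scrap_rate: int, safety: int) -> float:
    """Higher values mean more critical equipment needing tighter maintenance."""
    return frequency * protection * severity_factor(downtime, scrap_rate, safety)

# Example: frequent failures (8), hard to protect (7), long downtime (9),
# moderate scrap risk (5), high injury risk (8).
print(criticality_factor(8, 7, 9, 5, 8))   # about 410.7 -> very critical equipment
```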

Read More: Click here...

8
Networking / Future Internet Plan Using IPv6 Protocol
« on: February 18, 2012, 02:32:54 am »
Author : Krishna Kumar Mohbey, Sachin Tiwari
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract - Internet users are increasing day by day and they want to access data more quickly and safely, so higher-capability internet services are very important. Today's internet has many limitations which it is important to remove. In the future internet we use the IPv6 protocol instead of IPv4, which has a larger address space. This is important because the number of users and systems is larger. In this paper we outline the scope of the future internet, which will provide higher data transfer rates and high-speed access to users. By designing a new architecture and using the new protocol version we can quickly access live TV and multimedia data streaming on our computers. We can also enjoy live video conferences because internet speeds will be faster and more powerful. Here we also describe the term dynamic caching, which is important for accessing the same data stream in multiple places at the same time.

Index Terms Future Internet (FI), FI Entry Point (FI-EP), IPv4, IPv6, Dynamic Caching

1   INTRODUCTION                                                                    
 
TODAY, the Internet is the most important information exchange ecosystem. It has become the core communication environment not only for business relations, but also for social and human interaction. The immense success of the Internet has created even higher expectations for new applications and services, which the current Internet may not be able to support. Advances in video capturing and encoding have led to the massive creation of new multimedia content and applications, providing richer immersive experiences: 3D videos, interactive environments, network gaming, virtual worlds, etc. Thus, scientists and researchers from companies and research institutes world-wide are working towards realizing the Future Internet.
The Future Internet (FI) is expected to be a holistic information exchange ecosystem, which will interface, interconnect, integrate and expand today's Internet, public and private intranets and networks of any type and scale, in order to provide efficiently, transparently, timely and securely any type of service (from best-effort information retrieval to highly demanding, performance-critical services) to humans and systems. This complex networking environment may be considered from various interrelated perspectives: the networks & infrastructure perspective, the services perspective and the media & information perspective.

The Future Media Internet is the FI viewpoint that covers the delivery, in-the-network adaptation/enrichment and consumption of media over the Future Internet ecosystem.

2 TODAY'S INTERNET DATA DELIVERY LIMITATIONS
Here we describe how content discovery, retrieval and delivery take place in the current Internet. Users want text, audio or videos from YouTube, or weather information, but they do not know or care on which machine the desired data or service resides. Information/content retrieval and delivery may be realized by today's Internet network architecture as shown in Figure 1. The network consists of: a) Content Servers or Content Caches (for either professional or user-generated content and services), b) centralized or clustered Search Engines, c) core and edge Routers and optionally Residential Gateways (represented as R1 to R5) and d) Users connected via fixed, wireless or mobile terminals.

Figure 1: Today's Internet Architecture

The first step is Content Discovery by the Search Engines: the Search Engines crawl the Internet to find, classify and index content and/or services. The second step is Content Discovery by the User: the user queries a Search Engine and gets as feedback a list of URLs where the content is stored. The last step is Content Delivery/Streaming: the user selects a URL and the content is delivered or streamed to him.

In order to show with an example the limitations of today's Internet, let us consider the simple case of the delivery of a popular video from a Content Server (e.g. a YouTube server). If a few dozen users from a large building block request a video, the same video chunks will be streamed a few dozen times. If a neighborhood has a few dozen blocks, and a city a few hundred neighborhoods, the very same video chunks will traverse the same network links thousands of times. If we continue aggregating at country and world-wide level, we will soon run out of existing bandwidth just for a single popular video stream.

This means that the three steps of content discovery and delivery can be significantly improved:
- (In the network) dynamic caching: If the content could be stored/cached closer to the end users, not only at the end-points as local proxies but also transparently in the network (routers, servers, nodes, data centre), then content delivery would be more efficient (a minimal caching sketch follows this list).
- Content identification: If the routers could identify/analyse what content is flowing through them, and in some cases were able to replicate it efficiently, the search engines would gain much better knowledge of content popularity and provide information even when dealing with "live" video streams.
- Network topology & traffic: If the network topology and the network traffic per link were known, the best end-to-end path (less congestion, lower delay, more bandwidth) would be selected for data delivery.
- Content-centric delivery: If the content caching location, the network topology and the traffic were known, more efficient content-aware delivery could be achieved based on the content name, rather than where the content is initially located.
- Dynamic content adaptation & enrichment: If the content could be interactively adapted and even enriched in the network, the user experience would be improved.
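To make the in-network dynamic caching idea concrete, here is a minimal sketch of an edge-node chunk cache that serves repeated requests locally instead of re-streaming them across the core; the chunk naming, cache size and fetch callback are illustrative assumptions, not part of the architecture described in this paper:

```python
# Minimal in-network chunk cache (LRU) sketched for illustration only.
# Chunk identifiers and the origin-fetch callback are hypothetical.

from collections import OrderedDict
from typing import Callable

class EdgeChunkCache:
    """Caches recently requested content chunks at an edge node/router."""

    def __init__(self, capacity: int, fetch_from_origin: Callable[[str], bytes]):
        self.capacity = capacity
        self.fetch_from_origin = fetch_from_origin
        self._store = OrderedDict()              # chunk_id -> bytes, in LRU order

    def get(self, chunk_id: str) -> bytes:
        if chunk_id in self._store:              # cache hit: serve locally,
            self._store.move_to_end(chunk_id)    # no core-network traffic
            return self._store[chunk_id]
        data = self.fetch_from_origin(chunk_id)  # cache miss: one upstream fetch
        self._store[chunk_id] = data
        if len(self._store) > self.capacity:     # evict least recently used chunk
            self._store.popitem(last=False)
        return data

# A popular chunk requested by many users in the same building is fetched only once:
cache = EdgeChunkCache(capacity=1000, fetch_from_origin=lambda cid: b"<video bytes>")
for _ in range(50):
    cache.get("youtube/video123/chunk0007")      # 1 origin fetch, 49 local hits
```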
3 HIGH-LEVEL FUTURE INTERNET NETWORK ARCHITECTURE

We envision an FI architecture which will consist of different virtual hierarchies of nodes (overlays) with different functionalities. In Figure 3, three layers are depicted; however, this model could easily be scaled to multiple levels of hierarchy (even mesh instantiations, where nodes may belong to more than one layer) and multiple variations, based on the content and the service delivery requirements and constraints.
In a realistic roll-out scenario, FI deployment is expected to be incremental. This is because we expect that today's existing legacy network nodes (core routers, switches, access points) will not only remain but will even be the majority for a number of years; thus the proposed architecture should be backwards compatible with the current Internet deployment. As shown in Figure 2, the Service/Network Provider Infrastructure Overlay is located at the lower layer. Users are considered as Content Producers (user-generated content) and Consumers (we can then call them "Prosumers").
Figure 2: FI high level architecture

This Network Infrastructure Overlay is the service, ISP and network provider network infrastructure, which consists of nodes with limited functionality and intelligence (due to cost and network constraints). Content will be routed, assuming basic quality requirements, and if possible and needed cached in this layer. The middle layer is the Distributed Content/Services Aware Overlay. Content-Aware Network Nodes (e.g. edge routers, home gateways, terminal devices) will be located at this overlay. These nodes will have the intelligence to filter content and Web services that flow through them (e.g. via deep packet inspection or signalling processing), identify streaming sessions and traffic (via signalling analysis) and provide qualification of the content. This information will be reported to the higher layer of the hierarchy (the Information Overlay). Virtual overlays (not shown in the figure) may be considered or dynamically constructed at this layer. We may consider overlays for specific purposes, e.g. content caching, content classification (and, depending on future capabilities, indexing), network monitoring, content adaptation, and optimal delivery/streaming. With respect to content delivery, nodes at this layer may operate as hybrid client-server and/or peer-to-peer (P2P) networks, according to the delivery requirements. As the nodes will have information about the content and the content type/context that they deliver, hybrid topologies may be constructed, customized for streaming complex media such as Scalable Video Coding (SVC) and Multi-view Video Coding (MVC). At the highest layer, the Content/Services Information Overlay can be found. It will consist of intelligent nodes or servers that have distributed knowledge of both the content/web-service location/caching and the (mobile) network instantiation/conditions. Based on the actual network deployment and instantiation, the service scenario, the service requirements and the service quality agreements, these nodes may vary from unreliable peers in a P2P topology to secure corporate routers or even Data Centers in a distributed carrier-grade cloud network. The content may be stored/cached at the Information Overlay or at lower hierarchy layers. Through the Information Overlay we can always be aware of the content/services location/caching and the network information. Based on this information, a decision can be made on the way content will be optimally retrieved and delivered to the subscribers, inquiring users or services.

Read More: Click here...

9
Quote
Author : Er. Neha Gulati, Er. Ajay Kaushik
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract - In the imaging process of remote sensing, there is a degradation phenomenon in the acquired images. In order to reduce the image blur caused by this degradation, the remote sensing images are restored to give prominence to the characteristic objects in the images. Image restoration is an important issue in high-level image processing. The purpose of image restoration is to estimate the original image from the degraded data. It is widely used in various fields of application, such as medical imaging, astronomical imaging, remote sensing, microscopy imaging, photography deblurring and forensic science. Restoration is beneficial to interpreting and analyzing remote sensing images: after restoration the blur is reduced, the characteristic objects are highlighted, and the visual effect of the images is clearer. This paper reviews different image restoration techniques, namely the Richardson-Lucy algorithm, the Wiener filter, neural networks and blind deconvolution.

Keywords - Image Restoration, Degradation Model, Richardson-Lucy Algorithm, Wiener Filter, Neural Network, Blind Deconvolution.
 
I.   INTRODUCTION

For the space remote sensing camera, many factors cause image degradation during the image acquisition process, such as aberration of the optical system, performance of the CCD sensors, motion of the satellite platform and atmospheric turbulence [14]. The degradation results in image blur, affecting identification and extraction of the useful information in the images. The degradation of the acquired images causes serious economic loss; therefore, restoring the degraded images is an urgent task in order to expand the uses of the images. There are several classical image restoration methods, for example Wiener filtering, regularized filtering and the Lucy-Richardson algorithm. These methods require prior knowledge of the degradation phenomenon [16][19], which can be expressed as the degradation function of the imaging system, i.e., the point spread function (PSF). As the operational environment of the remote sensing camera is special and the atmospheric conditions during image acquisition vary, it is usually impossible to obtain an accurate degradation function. The field of image restoration (sometimes referred to as image deblurring or image deconvolution) is concerned with the reconstruction or estimation of the uncorrupted image from a blurred and noisy one. Essentially, it tries to perform an operation on the image that is the inverse of the imperfections in the image formation system. The remote sensing images dealt with in this paper have high resolution. With the PSF as a parameter, the images can be restored by the various techniques.

II.   RELATED WORK
The task of deblurring an image is image deconvolution; if the blur kernel is not known, then the problem is said to be "blind". For a survey of the extensive literature in this area, see [Kundur and Hatzinakos 1996]. Existing blind deconvolution methods typically assume that the blur kernel has a simple parametric form, such as a Gaussian or low-frequency Fourier components. However, as illustrated by our examples, the blur kernels induced during camera shake do not have simple forms, and often contain very sharp edges. Similar low-frequency assumptions are typically made for the input image, e.g., applying a quadratic regularization. Such assumptions can prevent high frequencies (such as edges) from appearing in the reconstruction. Caron et al. [2002] assume a power-law distribution on the image frequencies; power-laws are a simple form of natural image statistics that do not preserve local structure. Some methods [Jalobeanu et al. 2002; Neelamani et al. 2004] combine power-laws with wavelet domain constraints but do not work for the complex blur kernels in our examples.

Deconvolution methods have been developed for astronomical images [Gull 1998; Richardson 1972; Tsumuraya et al. 1994; Zarowin 1994], which have statistics quite different from the natural scenes we address in this paper. Performing blind deconvolution in this domain is usually straightforward, as the blurry image of an isolated star reveals the point-spread function.

Another approach is to assume that multiple images of the same scene are available [Bascle et al. 1996; Rav-Acha and Peleg 2005]. Hardware approaches include optically stabilized lenses [Canon Inc. 2006], specially designed CMOS sensors [Liu and Gamal 2001], and hybrid imaging systems [Ben-Ezra and Nayar 2004]. Since we would like our method to work with existing cameras and imagery and to work for as many situations as possible, we do not assume that any such hardware or extra imagery is available.

Recent work in computer vision has shown the usefulness of heavy-tailed natural image priors in a variety of applications, including denoising [Roth and Black 2005], superresolution [Tappen et al. 2003], intrinsic images [Weiss 2001], video matting [Apostoloff and Fitzgibbon 2005], inpainting [Levin et al. 2003], and separating reflections [Levin and Weiss 2004]. Each of these methods is effectively "non-blind", in that the image formation process (e.g., the blur kernel in superresolution) is assumed to be known in advance. Miskin and MacKay [2000] perform blind deconvolution on line-art images using a prior on raw pixel intensities. Results are shown for small amounts of synthesized image blur. We apply a similar variational scheme for natural images, using image gradients in place of intensities, and augment the algorithm to achieve results for photographic images with significant blur.

III.   IMAGE DEGRADATION THEORY
A. Image degradation model
As Fig. 1 shows, the image degradation process can be modeled as a degradation function, together with an additive noise term, that operates on an input image f(x,y) to produce a degraded image g(x,y) [4]. As a result of the degradation process and the noise, the original image becomes a degraded image, exhibiting image blur in different degrees. If the degradation function h(x,y) is linear and spatially invariant, the degradation process in the spatial domain is expressed as the convolution of f(x,y) and h(x,y), given by

g(x,y)=f(x,y) * h(x,y)+n(x,y)               (1)

 
Figure 1. Image degradation model

According to the convolution theorem, convolution of two spatial functions corresponds to the product of their Fourier transforms in the frequency domain. Thus, the degradation process in the frequency domain can be written as

G(u,v)=F(u,v)H(u,v)+N(u,v)                (2)
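As a worked illustration of equations (1) and (2), the degraded image g can be simulated by convolving an image with a PSF in the frequency domain and adding noise. This is only a sketch; the image, kernel size and noise level are arbitrary assumptions, not values from the paper.

import numpy as np

def degrade(f, h, noise_sigma=0.01, seed=0):
    # G = F.H + N : circular convolution via the FFT plus additive Gaussian noise
    rng = np.random.default_rng(seed)
    F = np.fft.fft2(f)
    H = np.fft.fft2(np.fft.ifftshift(h))      # PSF shifted so its centre sits at the origin
    n = rng.normal(0.0, noise_sigma, f.shape)
    g = np.real(np.fft.ifft2(F * H)) + n
    return g, H

# Example: a 64x64 image with a bright square, blurred by a normalized 5x5 box PSF
f = np.zeros((64, 64)); f[24:40, 24:40] = 1.0
h = np.zeros((64, 64)); h[30:35, 30:35] = 1.0 / 25.0
g, H = degrade(f, h)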
B.  Image restoration theory
The objective of image restoration is to reduce the image blur introduced during the imaging process. Given prior knowledge of the degradation function and of the noise, the inverse process against degradation can be applied for restoration, comprising denoising and deconvolution. In the frequency domain, the restoration process is given by the expression

  F(u,v) = [G(u,v) - N(u,v)] / H(u,v)                          (3)

Because restoration amplifies the noise, denoising is performed before restoration. Denoising can be carried out both in the spatial domain and in the frequency domain; the usual method is to select an appropriate filter, according to the characteristics of the noise, to filter it out. Convolution in the spatial domain corresponds to multiplication in the frequency domain, and its inverse operation is division.
Therefore, deconvolution is, as a rule, carried out in the frequency domain. Finally, the inverse Fourier transform is applied to F(u,v) to complete the restoration.
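A matching sketch of the restoration step of equation (3). Since the noise spectrum N(u,v) is not known here, it is simply ignored and the magnitude of H is clipped, which is only a crude stand-in for the denoising step described above; eps is an assumed regularization constant, not a value from the paper.

import numpy as np

def restore(g, H, eps=1e-2):
    # F = (G - N)/H with N unknown; clip |H| so the division does not amplify noise unboundedly
    G = np.fft.fft2(g)
    H_safe = np.where(np.abs(H) < eps, eps, H)
    F_hat = G / H_safe
    return np.real(np.fft.ifft2(F_hat))

# Applied to the g and H produced by the degradation sketch above:
# f_hat = restore(g, H)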

C.  Blurring
Blur is an unsharp image area caused by camera or subject movement, inaccurate focusing, or the use of an aperture that gives shallow depth of field [7]. Blur effects are filters that make smooth transitions and decrease contrast by averaging the pixels next to the hard edges of defined lines and areas where there are significant color transitions [15].

Read More: Click here...

10
Others / Effect of Nanofluids in a Vacuum Single Basin Solar Still
« on: February 18, 2012, 02:30:23 am »
Quote
Author : M. Koilraj Gnanadason, P. Senthil Kumar, G.Jemilda, S.Raja Kumar
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract - Clean water is a basic human necessity, and without water life would be impossible. The provision of fresh water is becoming an increasingly important issue in many areas of the world. Solar distillation is among the non-conventional methods of desalinating brackish water or seawater, and the solar still is the most economical way to accomplish this objective. Tamilnadu lies in the high solar radiation band, and this vast solar potential can be utilized to convert saline water into potable water. Solar distillation has a low yield, but provides safe and pure supplies of water in remote areas. Attempts are made to increase the productivity of the solar still by using different absorbing materials, water depths, heat storage media and nanofluids, and also by providing low pressure inside the still basin. Heat transfer enhancement in the solar still is one of the key issues for energy saving and compact designs. The use of additives is a technique applied to enhance the heat transfer performance of the water in the still basin. Recently, as an innovative material, nanosized particles have been used in suspension in conventional solar still water. Fluids with nanosized solid particles suspended in them are called "nanofluids". The suspended metallic or nonmetallic nanoparticles change the transport properties, heat transfer characteristics and evaporative properties of the base fluid. Nanofluids are expected to exhibit a superior evaporation rate compared with conventional water. The aim of this paper is to analyze and compare the enhanced performance of a vacuum single basin solar still using nanofluids with that of one using conventional water. Nanofluids greatly improve the rate of evaporation and hence the rate of condensation on the cooler surface.

Keywords - Solar still; Nanofluid; Nanoparticles; Productivity.

1     INTRODUCTION
Water is essential to life. The origin and continuation of mankind is based on water. The supply of drinking water is an important problem for the developing countries. Increasing world population growth, together with increasing industrial and agricultural activities all over the world, contributes to the depletion and pollution of fresh water resources. Worldwide drought and desertification are expected to increase the problem [1]. The importance of supplying potable water can hardly be overstressed. Water is an abundant natural resource that covers three quarters of the earth's surface. However, only about 3% of all water sources are potable. Less than 1% of fresh water is within human reach and the rest is ice. Even this small fraction (ground water, lakes and rivers) is believed to be adequate to support life and vegetation on the earth. About 25% of the world's population does not have access to an adequate quality and quantity of fresh water, and more than 80 countries face severe water problems [2]. In some instances the salinity is too high for the water to be considered fresh drinking water; instead it is called brackish water. Salinity is usually expressed in parts per million (ppm). In such cases, fresh water has to be either transported over long distances or supplied through an expensive water distribution network at extremely high cost for a small population [3]. Solar distillation is one of the available methods for water distillation, and sunlight is one of several forms of heat energy that can be used to power the process. Solar stills can easily provide enough water for the drinking and cooking needs of a family. Distilled water can also be used for industrial purposes as it is cleaner [4].
In this context, the distilled water evaporation rate is enhanced by using a solar still made of copper sheet instead of cast iron. Attempts are also made to increase the productivity by applying a black coating inside the basin and by providing low pressure inside the still. The novel approach, however, is to introduce nanofluids into the solar still together with the conventional water. The poor heat transfer properties of conventional fluids compared with most solids are the primary obstacle to the high compactness and effectiveness of the system. The essential initiative is to seek solid particles having thermal conductivities several hundred times higher than those of conventional fluids. An innovative idea is to suspend ultrafine solid particles in the fluid to improve its thermal conductivity [6]. Fluids with nanosized solid particles suspended in them are called nanofluids. The suspended metallic or nonmetallic nanoparticles change the transport properties, heat transfer characteristics and evaporation rate of the base fluid. Carbon nanotube (CNT)-based nanofluids are expected to exhibit superior heat transfer properties compared with conventional water in the solar still and with other types of nanofluids, and hence to increase the productivity and efficiency of the solar still [7].

2    SOLAR STILL
As the available fresh water on the earth is fixed and its demand is increasing day by day due to the growing population and rapidly expanding industry, there is an essential and earnest need to obtain fresh water from the saline/brackish water present on or inside the earth. This process of getting fresh water from saline/brackish water can be done easily and economically by desalination [3]. Solar stills are simple, have no moving parts, and can be used almost anywhere with few problems. The operation of a solar still is very simple and no special skill is required for its operation and maintenance [4]. The use of solar energy is more economical than the use of fossil fuel in remote areas having low population densities, low rainfall and abundant solar energy. Various parameters affect both the efficiency and the productivity of the still. The distilled water production rate can be increased by varying the design of the solar still, the water depth, the salt concentration, the location, the absorbing materials, the evaporative techniques and the use of nanofluids [9].

2.1  Distillation: the Same Process as Rainwater
Desalination is one of the most important methods of obtaining potable water from brackish and sea water using the free energy supply from the sun. In nature, solar desalination produces rain when solar radiation absorbed by the sea causes water to evaporate. The evaporated water rises above the earth's surface and is moved by the wind. Once this vapour cools down to its dew point, condensation occurs and the fresh water comes down as rain. The same principle is used in all man-made distillation systems, based on the simple scientific principles of evaporation and condensation. There are several types of solar stills, the simplest of which is the single basin still; its yield, however, is low and falls in the range of 3-4 litres per day per square metre [5].

2.2   Working of Solar Still
In a conventional basin-type solar still, the still consists of a shallow airtight basin lined with a black, impervious material, which contains the brackish or saline water. Solar radiation received at the surface is absorbed effectively by the black surface and heat is transferred to the water in the basin. The temperature of the water increases, which increases the rate of evaporation. A sloping transparent glass cover is provided at the top. Water vapour produced by evaporation rises and condenses on the inner surface of the glass cover, which is relatively cold. The condensed water vapour trickles down into the trough and from there is collected in the storage container as distilled water. The distilled water from a solar still has excellent taste compared with commercially distilled water, since the water is not boiled (which lowers pH). The stills are made of quality materials designed to stand up to the harsh conditions produced by water and sunlight. Provision is made to add water to the stills. Purified drinking water is collected from the output collection port as distillate.

3    EXPERIMENTAL SETUP
3.1   Solar Still Made Up of Copper
As shown in Figure 1, the solar still consists of a shallow triangular basin made of copper sheet instead of cast iron. As copper has a thermal conductivity of 401 W/mK, considerably higher than that of cast iron, the rate of heat transfer to the water in the still is higher. The bottom of the basin is painted black to absorb the sun's heat, which in turn increases the evaporation rate. The top of the basin is covered with 4 mm thick glass, fixed at a tilt of 32° so as to allow maximum transmission of solar radiation and to help the condensed vapour trickle down into the trough, a built-in channel in the still basin. The edge of the glass is sealed with tar tape so as to make the basin airtight. The entire assembly is placed on a stand made of M.S. angles. The outlet is connected to a storage container through a pipe.
The basin liner is made of a copper sheet of 900 x 400 x 50 mm and 1.5 mm thickness. The copper sheet is painted with red-lead primer and then with matt-type black paint.
 
Fig. 1. Experimental setup

The glass cover has been sealed with silicone rubber, which plays an important role in promoting efficient operation as it can accommodate the expansion and contraction between dissimilar materials. A Thermocool sheet of 2.5 cm thickness, with a thermal conductivity of 0.045 W/mK, is used as insulating material to reduce the heat losses from the bottom and the side walls of the solar still. The still is filled with the brackish water in a thin layer. The outer box is made of plywood. When solar radiation falls on the solar still, the glass cover and the water inside it are heated; the water temperature increases and vapour is formed. The vapour has low density, so it rises and condenses on the glass cover; due to the slope it runs downward and is collected. Research in heat transfer has been carried out over the past several decades, leading to the development of the currently used heat transfer enhancement techniques. The use of additives is a technique applied to enhance the heat transfer performance of the water in the still basin. Recently, as an innovative material, nanosized particles have been used in suspension in the conventional solar still [7]. Fluids with nanosized solid particles suspended in them are called "nanofluids". The suspended metallic or nonmetallic nanoparticles change the transport properties and heat transfer characteristics of the water in the still. Thus the water temperature in the basin increases. Carbon nanotube (CNT)-based nanofluids are expected to exhibit superior heat transfer properties compared with conventional water and with other types of nanofluids [8].
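To give a feel for why the low-conductivity insulation matters, the conductive loss through the bottom can be estimated with Fourier's law, Q = k x A x dT / L. This is only a back-of-the-envelope sketch: the basin area and the temperature difference are assumed values, not measurements reported in the paper.

# Rough conductive heat loss through the bottom insulation: Q = k * A * dT / L
k = 0.045          # W/mK, thermal conductivity of the insulation (value given in the paper)
L = 0.025          # m, insulation thickness (2.5 cm, given in the paper)
A = 0.9 * 0.4      # m^2, footprint assumed equal to the 900 x 400 mm basin liner
dT = 40.0          # K, assumed water-to-ambient temperature difference
Q = k * A * dT / L
print(f"Estimated bottom loss: {Q:.1f} W")   # about 26 W for these assumed values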

Read More: Click here...

11
Networking / Performance Evaluation of LEACH Protocol in Wireless Network
« on: February 18, 2012, 02:28:55 am »
Quote
Author : M.Shankar, Dr.M.Sridar, Dr.M.Rajani, Dr.Soma V.Chetty
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract - Wireless micro sensor networks lend themselves to trade-offs between energy and quality. By ensuring that the system operates at minimum energy for each quality point, the system can achieve both flexibility and energy efficiency, allowing the end-user to maximize system lifetime. Simulation results show that the proposed adaptive clustering protocol effectively produces optimal energy consumption for wireless sensor networks, resulting in an extension of the network lifetime. The preparation phase is performed only once, before the set-up phase of the first round. The processes of the following set-up and steady-state phases in every round are the same as in LEACH. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.

Index Terms - Cluster, energy efficiency, LEACH protocol, network lifetime, wireless sensor networks.
 
1   INTRODUCTION
A number of technologies currently exist to provide users with wireless connectivity. The challenges in the hierarchy of detecting the relevant quantities, monitoring and collecting the data, assessing and evaluating the information, formulating meaningful user displays, and performing decision-making and alarm functions are enormous. The information needed by smart environments is provided by Wireless Sensor Networks [1], which are responsible for sensing as well as for the first stages of the processing hierarchy. Security has become a major challenge in wired and wireless networks. Sensor networks are self-organized networks, which makes them suitable for dangerous and harmful situations, but at the same time makes them easy targets for attack. For this reason some level of security should be applied so that they are difficult to attack, especially when they are used in critical applications. Wireless Sensor Networks (WSNs) are special kinds [2, 3, 4, 5] of ad hoc networks that have become one of the most interesting areas for researchers to study. The most important property affecting these networks is the limitation of the available resources, especially energy. A clustered organization provides some energy saving, and that was the main idea behind proposing it.
LEACH (Low Energy Adaptive Clustering Hierarchy) added another interesting dimension to this kind of network. By analyzing the advantages and disadvantages of conventional routing protocols, LEACH was developed as a clustering-based protocol that minimizes energy dissipation in sensor networks. This work focuses on LEACH, a communication protocol for micro sensor networks. LEACH collects data from distributed micro sensors and transmits it to a base station. LEACH was one of the first major improvements on conventional clustering approaches in wireless sensor networks. Conventional algorithms such as MTE (Minimum-Transmission-Energy) or direct transmission do not lead to even energy dissipation throughout a network. LEACH balances energy usage by random rotation of the cluster heads. The algorithm is also organized in such a manner that data fusion can be used to reduce the amount of data transmission.

Figure1. Cluster organization for sensor networks

Types of Broadcast

1. Probabilistic
    1. Distance Mode
    2. Location Mode
    3. Counters Mode

2. Deterministic
    1. Self Pruning
    2. Scalable Broadcasting
    3. Ad hoc Broadcasting
    4. Cluster Based
    5. Simple Flooding

The development of clustered sensor networks has recently been shown to decrease system delay, save energy while performing data aggregation, and increase system throughput. These are strong motivations for selecting LEACH as the baseline protocol for the analytical study. LEACH also has a few but very significant disadvantages: it assumes all nodes to have the same energy, which is not always the case in real problems; it cannot be applied to mobile nodes; failure of cluster-heads creates a lot of problems; and it does not take into account that systems might have multiple base stations. Low Energy Adaptive Clustering Hierarchy (LEACH) is an energy-efficient hierarchical routing protocol. Our prime focus was on the analysis of LEACH based upon certain parameters such as network lifetime and stability period, and also on the effect of selective forwarding attacks and of the degree of heterogeneity on the LEACH protocol.

2   LEACH PROTOCOL
LEACH (Low Energy Adaptive Clustering Hierarchy) is a hierarchical routing protocol which uses random rotation of the nodes selected as cluster-heads to evenly distribute energy consumption in the network. Sensor network protocols are quite simple and hence are very susceptible to attacks such as sinkhole attacks, selective forwarding, Sybil attacks, wormholes, HELLO flood attacks, acknowledgement spoofing, and altering or replaying routing information. For example, selective forwarding and HELLO flood attacks affect networks with clustering-based protocols like LEACH.

2.1 Description
Heinzelman introduced a hierarchical clustering algorithm for sensor networks, called Low Energy Adaptive Clustering Hierarchy (LEACH). LEACH arranges the nodes in the network into small clusters and chooses one of them as the cluster-head. A node first senses its target and then sends the relevant information to its cluster-head. The cluster-head then aggregates and compresses the information received from all the nodes and sends it to the base station. LEACH is the first hierarchical cluster-based routing protocol for wireless sensor networks; it partitions the nodes into clusters, each of which uses a CDMA (Code Division Multiple Access) code, and delivers the aggregated data to the base station where it is needed. The remaining nodes are cluster members. The protocol is divided into rounds; each round consists of two phases.

Set-up Phase
(1) Advertisement Phase
(2) Cluster Set-up Phase

Steady Phase
(1) Schedule Creation
(2) Data Transmission
Set-up Phase
Each node decides, independently of the other nodes, whether it will become a CH or not. This decision takes into account when the node last served as a CH (a node that has not been a CH for a long time is more likely to elect itself than a node that has been a CH recently). This is done according to a threshold value, T(n). The threshold depends on the desired percentage of cluster-heads p, the current round r, and the set of nodes that have not become cluster-heads in the last 1/p rounds, denoted by G. Every node wanting to be the cluster-head chooses a value between 0 and 1; if this random number is less than the threshold T(n), the node becomes the cluster-head for the current round. Each elected CH then broadcasts an advertisement message to the rest of the nodes in the network to invite them to join its cluster. Based on the strength of the advertisement signal, the non-cluster-head nodes decide which cluster to join. Based on all the messages received within the cluster, the CH creates a TDMA schedule, picks a CDMA code randomly, and broadcasts the TDMA table to the cluster members. In this way, the cluster-head nodes are randomly selected from all the sensor nodes in the set-up phase and several clusters are constructed dynamically.
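A minimal sketch of the cluster-head election just described, using the commonly cited LEACH threshold T(n) = p / (1 - p*(r mod 1/p)) for nodes in G and 0 otherwise. The node count, the value of p and the data structure are assumptions for illustration, not taken from the paper.

import random

def threshold(p, r, in_G):
    # T(n) is zero for nodes that served as cluster-head in the last 1/p rounds (i.e. not in G)
    if not in_G:
        return 0.0
    return p / (1.0 - p * (r % int(round(1.0 / p))))

def elect_cluster_heads(nodes, p, r):
    # Each node independently draws a number in [0,1) and compares it with T(n)
    heads = []
    for node in nodes:
        if random.random() < threshold(p, r, node["in_G"]):
            heads.append(node["id"])
            node["in_G"] = False       # excluded from G for the next 1/p rounds
    return heads

nodes = [{"id": i, "in_G": True} for i in range(100)]
print(elect_cluster_heads(nodes, p=0.05, r=0))   # roughly 5 cluster-heads expected in round 0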

Read More: Click here...

12
Quote
Author : C. Vijayaraghavan, Dr. D. Thirumalaivasan, Dr. R. Venkatesan
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract - During the past five decades, natural hazards such as floods, earthquakes, severe storms and tropical cyclones, droughts and wildland fires, as well as manmade disasters such as nuclear disasters, oil spills and terrorist attacks, have caused major loss of human lives and livelihoods, the destruction of economic and social infrastructure, and environmental damage. Disaster reduction is both an issue for consideration in the sustainable development agenda and a cross-cutting issue relating to the social, economic, environmental and humanitarian sectors. These important aspects have to be analyzed, and there is a need to study them. Although in recent years Open GIS technology standards have been developed by several agencies, providing the basis for the utilization of geographic information services and an opportunity for data interoperability, data integration and data sharing between different emergency management agencies, finding suitable services and visualizing geospatial information for decision makers is still a crucial task. The objective of this paper is to review the state-of-the-art literature on the different methodologies of utilizing geospatial technology in managing both natural and manmade disasters, as presented by different authors, and to find new directions in this important area.

Index Terms: Natural disasters, Geographical Information system, man-made disasters, Nuclear disaster

1.   INTRODUCTION
A disaster is defined as a serious disruption of the functioning of a community or a society, causing widespread human, material, economic or environmental losses that exceed the ability of the affected community or society to cope using its own resources. Disasters interrupt society by claiming lives, creating victims and destroying infrastructure and houses. Disasters also have negative impacts on the environment as they affect natural resources. Therefore, considering society, economy and environment as the three main components of sustainable development, disasters have a negative impact on sustainable development, which makes appropriate disaster management a necessity.

When a disaster occurs, funds and budgets that have been assigned for development purposes are diverted to responding to that disaster and returning quality of life to normal. It is estimated that 70-80% of information is resolvable to a geographic location; therefore the nature and characteristics of geographic information (GI), and the way in which it is used, are paramount in managing crises effectively. Spatial data and related technologies have thus proven to be crucial for effective collaborative decision-making in disaster management. However, current studies show that although spatial data can facilitate disaster management, there are substantial problems with the collection, access, dissemination and usage of the spatial data required for disaster management. Such problems become more serious in the disaster response phase, with its dynamic and time-sensitive nature.

2.   NATURAL AND MANMADE DISASTERS
   Disasters may be natural or manmade. Natural disasters such as earthquakes, floods and tsunamis will occur, and it is not possible to prevent them. Nevertheless, their effect on human health and property can be mitigated effectively if appropriate anticipatory measures based on geospatial technology are planned and implemented, in a timely and coordinated manner, in each phase of operation: relief, rehabilitation and reconstruction. Earthquakes are caused by the motion of tectonic plates - individual sections that make up the earth's surface like panels on a football. Immense strain accumulates along fault lines where adjacent plates meet. When the rock separating the plates gives way, sudden seismic ground-shaking movements occur. The point where the seismic activity is strongest is the epicenter. The seismic waves travel out from the epicenter, sometimes creating widespread destruction as they pass. Earthquakes lead the list of natural disasters in terms of damage and human loss; they affect very large areas, causing death and destruction on a massive scale. Cyclones are the deadliest of all natural disasters worldwide, associated with strong winds, storm surges, heavy precipitation and floods. The property damage caused by winds depends on the quality of construction and the maximum wind speed. Storm surges, which are rapid increases in sea level along the coast due to strong winds driving the water ashore, cause maximum damage. When rain falls, it drains down from hillsides to streams, along rivers and out into the sea. When this rainfall is incessant, the land becomes saturated and the natural drainage system fails. The upper reaches of rivers quickly fill and force the excess water downstream. In the lower reaches water flows more slowly; here the river swells and begins to break its banks. This results in flooding of the plains, especially the low-lying flat wide areas in the lower reaches of a river.
A tsunami is a chain of fast-moving waves caused by a sudden disturbance in the ocean. Tsunamis can be generated by earthquakes, volcanic eruptions or even the impact of meteorites. They are not tidal waves, as they are not caused by changes in tides. They are most common around the edge of the Pacific, where more than half the world's volcanoes are found. These seismic surges can assault coastlines, often with little or no warning. Tsunamis are rare in the Indian Ocean; the tsunami of December 2004 started with an earthquake off the north Sumatra coast, which generated devastating tsunami waves affecting several countries in South East Asia. Landslides are a frequent and annually recurring phenomenon in hilly areas. The outward and downward movement of mass consisting of rock and soil, due to natural or man-made causes, is termed a landslide. High-intensity rainfall triggers most landslides. As long as landslides occur in remote, unpopulated regions, they are treated as just another denudation process sculpting the landscape, but when they occur in populated regions they become subjects of serious study. Most landslides occur due to exhaustive deforestation for urbanization and plantation; in these areas rainwater directly penetrates the soil and causes landslides.
Among the manmade disasters, probably the most devastating (after wars) are industrial disasters. These disasters may be caused by chemical, mechanical, civil, electrical or other process failures in an industrial plant, due to accident or negligence, and may cause widespread damage within and/or outside the plant. The worst example globally was the methyl isocyanate gas leak in 1984 from the Union Carbide factory in Bhopal, which has so far claimed more than 20,000 lives and injured a few million people, besides stunting the growth of a generation born from the affected population. This disaster triggered a completely new legal regime and new practices for preventing such disasters. With increased emphasis on power generation through nuclear technology, the threat of nuclear hazards has also increased. The Department of Atomic Energy (DAE) has been identified as the nodal agency in the country in respect of manmade radiological emergencies in the public domain. Nuclear facilities in India have adopted internationally accepted guidelines for ensuring the safety of the public and the environment. Apart from this, there is also the threat of nuclear disaster through a terrorist attack in any of the major cities in India. To cope with these kinds of disasters, a proper decision support system is essential.
3. DISASTER MANAGEMENT USING GIS
Most natural and manmade disaster management activities can be accomplished faster with the help of a Geographic Information System (GIS), a computerized database, analysis and visualization system for spatial data. Geographic Information Systems (GIS) provide a range of techniques which allow ready access to data, and the opportunity to overlay graphical location-based information for ease of interpretation. They can be used to solve complex planning and management problems. Disaster management consists of three important phases: the pre-disaster phase (planning, preparedness and mitigation), the on-disaster or impact phase (response, recovery, evacuation, etc.) and the post-disaster phase (rehabilitation, damage assessment, provision of food and medical facilities, etc.). All phases of emergency management (reduction, readiness, response and recovery) can benefit from GIS, including applications related to disaster management systems, a critical element in managing effective lifelines in an emergency. Considering GIS as the underpinning technology for spatial technologies, and its role in facilitating data collection and storage as well as decision-making based on spatial data processing and analysis, GIS is a good tool for improving decision-making for disaster management.
Geographic Information System (GIS)-based methodologies are now being developed for disaster loss estimation and risk modeling. These data can be used not only for real-time damage assessment but also for long-term planning of efficient land use measures and the adoption of building codes (minimum construction standards) or retrofitting methods. The easy availability of such maps, which include details of infrastructure, roads, hospitals, schools, shelters, engineering structures, etc., simplifies disaster management and rehabilitation efforts.
In this paper, we survey the literature to identify potential research directions in disaster operations, discuss relevant issues, and provide a starting point for interested researchers. From the literature review it is concluded that most of the authors worked on the following three distinct phases of disaster.

Read More: Click here...

13
Quote
Author : Mr. Sumedh S. Jadhav, Prof. C. N. Bhoyar
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract - Embedded multiprocessor design presents challenges and opportunities that stem from task coarse granularity and the large number of inputs and outputs for each task. We have therefore designed a new architecture called embedded concurrent computing (ECC), which is implemented on an FPGA chip using VHDL. The design methodology is expected to allow scalable embedded multiprocessors for system expansion. In recent decades, two forces have driven the increase in processor performance: advances in very large-scale integration (VLSI) technology and microarchitectural enhancements. We therefore aim to design the full architecture of an embedded processor able to perform realistic arithmetic, logical, shifting and branching operations. The embedded system is synthesized and evaluated in the Xilinx environment. Processor performance is improved through clock speed increases and the exploitation of instruction-level parallelism. The reported data have been gathered by synthesis. The implementation achieves low complexity in terms of FPGA resource usage and frequency. In addition, the design methodology allows scalable embedded multiprocessors for system expansion.
 
Index Terms - Embedded System Design, FPGA, Memory Architecture, Real-time Processor, Scheduler, VHDL Environment, Xilinx Environment.
 
1 INTRODUCTION
In recent decades, two forces have driven the increase in processor performance: firstly, advances in very large-scale integration (VLSI) technology, and secondly, microarchitectural enhancements [1].
     Processor performance has been improved through clock speed increases and the exploitation of instruction-level parallelism. While transistor counts continue to increase, recent attempts to achieve even more significant increases in single-core performance have brought diminishing returns [2, 3]. In response, architects are building chips with multiple energy-efficient processing cores instead of investing the whole transistor count into a single, complex, and power-inefficient core [3, 4]. Modern embedded systems are designed as systems-on-a-chip (SoC) that incorporate multiple programmable cores on a single chip, ranging from processors to custom-designed accelerators. This paradigm allows the reuse of pre-designed cores, simplifying the design of billion-transistor chips and amortizing costs.

In the past few years, parallel-programmable SoCs (PPSoCs) have emerged. Successful PPSoCs are high-performance embedded multiprocessors such as the STI Cell [3]. They are dubbed single-chip heterogeneous multiprocessors (SCHMs) because they have a dedicated processor that coordinates the rest of the processing units. A multiprocessor design with SoC-like integration of less-efficient, general-purpose processor cores with more-efficient, special-purpose helper engines is projected to be the next step in computer evolution [5].
    First, we aim to design the full architecture of an embedded processor for realistic throughput. We used FPGA technology not only for architectural exploration but also as our target deployment platform, because we believe that this approach is best for validating the feasibility of an efficient hardware implementation.
     The architecture of the embedded processor resembles a superscalar pipeline, including the fetch, decode, rename, and dispatch units as parts of the in-order front-end. The out-of-order execution core contains the task queue, dynamic scheduler, execute unit, and physical register file. The in-order back-end comprises only the retire unit. The embedded architecture is implemented with the help of RTL descriptions in VHDL.
     We integrate the embedded processor with a shared memory system, synthesize this system in an FPGA environment, and perform several experiments using realistic benchmarks. The methodology to design and implement a microprocessor or multiprocessors is presented. To illustrate it in detail and in a useful way, the design of the most complex practical session is shown. In most cases, computer architecture has been taught with software simulators [1], [2]. These simulators are useful to show internal values in registers, memory accesses, cache misses, etc. However, the structure of the microprocessor is not visible.
     In this work, a methodology for easy design and real implementation of microprocessors is proposed, in order to provide students with a user-friendly tool. Simple designs of microprocessors are shown to the students at the beginning, raising the complexity gradually toward a final design with two processors integrated in an FPGA, each of which has an independent memory system, the two being interconnected by a unidirectional serial channel.

2   MULTIPROCESSOR
A multiprocessor system consists of two or more connected processors that are capable of communicating. This can be done on a single chip where the processors are typically connected by a bus. Alternatively, the multiprocessor system can span more than one chip, typically connected by some type of bus, and each chip can then itself be a multiprocessor system. A third option is a multiprocessor system working with more than one computer connected by a network, in which each computer can contain more than one chip, and each chip can contain more than one processor.
     A parallel system is presented with more than one task; these are known as threads. It is important to spread the workload over all the processors, keeping the difference in idle time as low as possible. To do this, it is important to coordinate the work and the workload between the processors. Here, it is especially crucial to consider whether or not some processors are special-purpose IP cores. To keep a system with N processors effective, it has to work with N or more threads so that each processor constantly has something to do. Furthermore, it is necessary for the processors to be able to communicate with each other, usually via a shared memory, where values that other processors can use are stored. This introduces the new problem of thread safety. Thread safety is violated when two processors (working threads) access the same value at the same time, so some methods for restricting access to shared resources are necessary; these methods are known as thread safety or synchronization mechanisms. Moreover, it is necessary for each processor to have some private memory, where it does not have to worry about thread safety, in order to speed up processing. As an example, each processor needs to have a private stack. The benefits of having a multiprocessor are as follows:
1. Faster calculations are made possible.
2. A more responsive system is created.
3. Different processors can be utilized for different tasks.
In the future, we expect thread and process parallelism to become widespread, for two reasons: the nature of the applications and the nature of the operating system. Researchers have therefore proposed two alternative microarchitectures that exploit multiple threads of control: simultaneous multithreading (SMT) and chip multiprocessors (CMP). Chip multiprocessors (CMPs) use relatively simple single-thread processor cores that exploit only moderate amounts of parallelism within any one thread, while executing multiple threads in parallel across multiple processor cores. Wide-issue superscalar processors exploit instruction-level parallelism (ILP) by executing multiple instructions from a single program in a single cycle. Multiprocessors (MP) exploit thread-level parallelism (TLP) by executing different threads in parallel on different processors.
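As a small illustration of the thread-safety point above (a generic example of my own, not code from the paper), several workers accumulate into their own private sums and only take a lock when touching the shared total:

import threading

shared_total = 0                # shared memory visible to all workers
lock = threading.Lock()         # synchronization primitive guarding it

def worker(samples):
    global shared_total
    private_sum = 0             # private memory: no synchronization needed here
    for s in samples:
        private_sum += s
    with lock:                  # thread safety: only one writer at a time
        shared_total += private_sum

threads = [threading.Thread(target=worker, args=(range(1000),)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_total)             # 4 * sum(range(1000)) = 1998000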

3 EMBEDDED SYSTEMS AND SYSTEM DESIGN       
3.1 Characteristics of Embedded Systems
The earliest embedded systems were banking and transaction processing systems running on mainframes and arrays of disks. The design of such a system entails hardware-software co-design: given the expected number and type of transactions to be made in a day, a hardware configuration must be chosen that will support the expected traffic, and a software design must be created to make efficient use of that hardware. When microprocessors are used to create specialized, low-cost products, engineering costs must be reduced to a level commensurate with the cost of the underlying hardware.
    Because multiprocessors can be used in such a wide range of products, embedded systems may need to meet widely divergent criteria. Examples of embedded systems include:
   - simple appliances, such as microwave ovens, where the multiprocessor provides a friendly interface and advanced features;
   - an appliance for a computationally intensive task, such as laser printing;

Read More: Click here...

14
Quote
Author : Prithvijit Chakrabarty
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract - This paper describes a formula to predict the number of prime numbers between a known prime 'P' and its square, when all primes up to P are known.
   The formula is developed by considering a continuous section of the number line between P and P^2. The length of this section is repeatedly divided by the primes below P to obtain the number of primes in the region. The process is similar to the Sieve of Eratosthenes. However, instead of eliminating the multiples of the primes below P, it eliminates the number of multiples of these primes. This reduces it to a simplifiable algebraic expression that can easily be implemented in programs.

Index Terms - Divisibility, factors, multiples, number, prime, range, square.

1   INTRODUCTION                                                                      
Any number can be tested to be a prime by checking divisibility by the primes below its square root. If we consider a number P^2, where P is a known prime, then division of P^2 by the prime numbers below P must be performed to test its primality. In other words, if P^2 is composite, it is guaranteed to be divisible by prime numbers below P.
   Similarly, if there are composites between the numbers P and P^2, they must be divisible by primes below their square root. As they are less than P^2, their square root will be less than P and they will be divisible by primes below P. Thus, all composite numbers from P to P^2 are guaranteed to be divisible by primes below P.

2  NUMBER OF MULTIPLES OF PRIMES IN A RANGE OF CONSECUTIVE NATURAL NUMBERS
If we consider 'n' consecutive natural numbers, then half of them will be odd and half will be even; i.e., in this range there will be n/2 even and n/2 odd numbers, or n/2 multiples of 2. Similarly, there will be about n/3 multiples of 3 in this range. However, some of these multiples will be divisible by 2 as well. Thus, to find the numbers divisible only by 3 and by numbers greater than 3, we must find the number of multiples of 3 among the numbers excluding the multiples of 2. In a range of n numbers, there will be (n - n/2)/3 such numbers. Similarly, there will be (n - n/2 - (n - n/2)/3)/5 multiples of 5 that are divisible only by 5 and by numbers greater than 5. In this manner, the number of multiples of any prime 'p' in a range of 'n' consecutive natural numbers can be found by subtracting the total number of multiples of the primes below 'p' from 'n' and dividing the result by 'p'.

3   CALCULATING THE NUMBER OF PRIME NUMBERS FROM P TO P^2
The process of finding the multiples of primes described above can be used to find the total number of multiples of all the primes below P in the range P to P^2. As all composite numbers in this range are multiples of these primes, we can obtain the number of composite numbers in the range.
   In this range, the number of consecutive natural numbers present is (P^2 - P). If this value is N, then:
   Number of multiples of 2 = N/2
   Number of multiples of 3 = (N - N/2)/3   (= N/6)
From the previous step, N - N/2 = N/2.
Thus, the number of multiples of 3 can also be represented as (N/2)(1/3) = N/6.            (1)
   Number of multiples of 5 = (N - N/2 - N/6)/5   (= N/15)
However, from the previous step, N - N/2 - N/6 is N/3.
Thus, we may write the number of multiples of 5 as (N/3)(1/5) = N/15.                      (2)

   Number of multiples of 7 = (N - N/2 - N/6 - N/15)/7   (= 4N/105)
As N - N/2 - N/6 - N/15 is 4N/15, the number of multiples of 7 will be (4N/15)(1/7) = 4N/105.        (3)
   Number of multiples of 11 = (N - N/2 - N/6 - N/15 - 4N/105)/11   (= 8N/385)
The previous step shows that N - N/2 - N/6 - N/15 - 4N/105 is 8N/35.
Hence, the number of multiples of 11 will be (8N/35)(1/11) = 8N/385.                       (4)

      In this manner, the number of multiples of all the prime numbers less than P can be found.
Now, from expressions (1), (2), (3) and (4), the general pattern the numbers follow can easily be deduced.
If we consider the series formed by these numbers, then the ith term t(i) will be as follows:

   t(i) = [n(i-1) x (p(i-1) - 1)] / [d(i-1) x p(i)] x N

where,
p(i-1) is the (i-1)th prime number,
p(i) is the ith prime number,
n(i-1) is the numerator of the (i-1)th term,
d(i-1) is the denominator of the (i-1)th term.

The total number of composite numbers that will be present between P and P^2 will be:

   C = t(1) + t(2) + ... + t(k-1)

where i varies from 1 to k-1, P being the kth prime number. The only exception to the general term is the first term, the number of multiples of 2 (as there is no preceding prime); it is simply N/2.
   As the number of composite numbers can be found, the remaining numbers in the range must be prime. Thus, the number of primes from P to P^2 is:
         (P^2 - P) - C         (excluding the prime number P)
or,
         N x (1 - [c(1) + c(2) + ... + c(k-1)])        (5)

where c(i-1) is the coefficient of N in the (i-1)th term, so that c(1) = 1/2 and c(i) = c(i-1) x (p(i-1) - 1) / p(i).

   The most important property of the result (equation (5)) is that the presence of a large number of prime numbers can be detected by knowing a small number of primes. For example, there are only four primes below 11 (2, 3, 5 and 7). Knowing only these four primes, we can detect the presence of the 25 other primes that lie between 11 and 121. Similarly, by finding out just three more primes, we can find the number of primes between 19 and 361. Thus, the range of the formula increases greatly with every prime number found.
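A minimal sketch of the counting procedure described above (the function name and the use of exact fractions are my own choices): the number of multiples of each known prime below P is removed recursively from the length of the range, and whatever remains is the predicted number of primes between P and P^2.

from fractions import Fraction

def predicted_primes(P, primes_below_P):
    # N = P^2 - P is the length of the section of the number line considered
    N = Fraction(P * P - P)
    remaining = N                   # numbers not yet accounted for as multiples
    composites = Fraction(0)
    for p in primes_below_P:
        multiples = remaining / p   # t(i) = (N - sum of earlier terms) / p(i)
        composites += multiples
        remaining -= multiples
    return float(N - composites)    # the rest of the range is predicted to be prime

print(predicted_primes(11, [2, 3, 5, 7]))   # about 25.1; there are in fact 25 primes between 11 and 121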

4   CONCLUSION
This algorithm efficiently finds the number of primes in a given range. Though it does not predict which primes are present from P to P^2, when implemented in a program the number of primes can be predicted quickly and easily owing to the simplicity of the formula.

Read More: Click here...

15
Engineering, IT, Algorithms / An Automatic Voice-Controlled Audio Amplifier
« on: February 18, 2012, 02:23:43 am »
Quote
Author : Jonathan A. Enokela and Jonathan U. Agber
International Journal of Scientific & Engineering Research Volume 3, Issue 1, January-2012
ISSN 2229-5518
Download Full Paper : PDF

Abstract The delivery of the proper quality of audio signals to the audience in the entertainment, public and other environments is of great, and sometimes critical, importance. This always requires that the audio signals be of the correct intensity to the hearing of the audience, especially if the signals come from different sources. This work presents a system which automatically fades out the main stream signal when signals from other sources are received. By arranging the circuit such that the signal from the other sources continuously drives a pair of bipolar junction transistors towards heavier saturation, the mainstream signal was attenuated by as much as 3 dB.

Index Terms Audio Amplifier, Electronic Control, Attenuation, Voice Control, Public Address System, Audio Fading.

1    INTRODUCTION
In many instances, in public address environments, radio stations, television houses and other places, the need arises for two signals to be sent simultaneously to the listeners. In almost all cases the audio sources have to operate in such a way that one source is attenuated while the other is amplified, so that the listeners' attention is momentarily drawn to the amplified one. In a radio house, for instance, the announcer might want to put out an urgent message to the listeners while the music that he has been playing is attenuated in the background. Most existing facilities require that the announcer use his hand to control the volume of the music being played in the background while he makes his announcements. This process has some drawbacks: in the first place, the degree of attenuation that the announcer imposes on the amplifier is highly subjective, which results in the background music being either too loud or too faint. Secondly, a manual control will wear out with time.
The proposed system operates in such a way that the amount of attenuation is proportional to the loudness of the announcer's voice, and as soon as the announcer stops talking the music is restored to its original volume.

2    SYSTEM BLOCK DIAGRAM
The block diagram of the proposed system is depicted in figure 1. Under normal conditions of operation, the signal input, designated as line input, is the signal that is transmitted to the output through the line and the mixer amplifiers. When a signal is input at the microphone (MIC) input, however, this signal is amplified by the block called MIC amplifier and is passed through the mixer amplifier to the output. Simultaneously the output signal from the MIC amplifier operates the attenuator which under the control of this signal attenuates the output from the line amplifier and reduces the amount of line input signal that is transmitted to the output. The amount of line input signal that is transmitted to the output depends on the strength of the signal from the MIC input.

3    SCHEMATIC DIAGRAM

A schematic diagram that can be used to realise the block diagram of figure 1 is depicted in figure 2. The line amplifier is built around the operational amplifier (Op Amp) IC1 [1], [2], [3] and there is a further amplification after attenuation by IC3, while IC5 is the mixer amplifier.
 The amplification of the MIC signal is done by IC2, while a further amplification by IC4 ensures enough signal level for rectification by the diodes. The positive half cycle of the signal is rectified by D2 and D3, while D1 and D4 rectify the negative half cycle. It is observed that distortion of the line signal results if only one half cycle is used for control. The transistors Q1 and Q2 form the controlled attenuator.
 
   Fig.2: Schematic Diagram of the Voice-Controlled Amplifier
 
4    SPECIFICATIONS

The Voice-Controlled Amplifier (VCA) is expected to be incorporated into existing systems. This implies that the input and output signal levels should be compatible with commercially available audio equipment [4]. Thus the following specifications are used:

 Line input:           300 mV, 10 kΩ
 Mic. input:           20 mV, 100 Ω
 Output:               1 V, 10 kΩ
 Frequency response:   20 Hz - 18 kHz

5    CIRCUIT ANALYSIS AND DESIGN [3], [5]
Each stage of the circuit can be isolated and analysed individually and then designed. Let us consider first the line input stage indicated in figure 3.
The circuit shown in Figure 3 is basically a non-inverting amplifier stage. The capacitor C3 controls the low frequency response while the high frequency response is controlled by C17. The capacitor C1 is chosen so that it has a very low reactance at the lowest frequency of interest. The gain of this amplifier stage is given by (1).
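Equation (1) itself falls outside this excerpt. For a standard non-inverting op-amp stage of the kind described, the closed-loop gain takes the familiar form below, where Rf and R1 stand for the feedback and ground-leg resistors (generic symbols, since the excerpt does not identify which resistors set the gain around IC1):

   Av = Vout / Vin = 1 + Rf / R1

With the values chosen so that a 300 mV line input produces about 1 V at the output, this works out to a gain of roughly 3.3, which is consistent with the specifications listed above.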

Read More: Click here...

Pages: [1] 2 3 ... 22