Author Topic: Application of Reliability Analysis: A Technical Survey



Application of Reliability Analysis: A Technical Survey
« on: April 23, 2011, 05:15:07 pm »
Author : Dr. Anju Khandelwal
International Journal of Scientific & Engineering Research, IJSER - Volume 2, Issue 4, April-2011
ISSN 2229-5518

Abstract— The objective of this paper is to present a survey of recent high-quality research that deals with reliability in different fields of engineering and the physical sciences. The paper covers several important areas of reliability in which significant research efforts are being made all over the world. The survey provides insight into past, current, and future trends of reliability in different fields of engineering, technology, and the medical sciences, with applications to specific problems.

Index Terms— CCN, Coherent Systems, Distributed Computing Systems, Grid Computing, Nanotechnology, Network Reliability, Reliability.

Traditional system-reliability measures include reliability, availability, and interval availability. Reliability is the probability that a system operates without interruption during an interval of interest under specified conditions. Reliability can be extended to include several levels of system performance. A first performance-oriented extension of reliability is to replace a single acceptable level of operation by a set of performance levels. This approach is used for evaluating network performance and reliability. The performance level is based on metrics derived from an application-dependent performance model. For example, the performance level might be the rate of job completion, the response time, or the number of jobs completed in a given time interval.

Availability is the probability that the system is in an operational state at the time of interest. Availability can be computed by summing the state probabilities of the operational states. Reliability, by contrast, is the probability that the system stays in an operational state throughout an interval. In a system without repair, reliability and availability are easily related. In a system with repair, if any repair transitions that leave failed states are deleted, making failure states absorbing states, reliability can be computed using the same methods as availability.

Interval availability is the fraction of time the system spends in an operational state during an interval of interest. The mean interval availability can be computed by determining the mean time the system spends in operational states. Mean interval availability is a cumulative measure that depends on the cumulative amount of time spent in a state.
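These three measures can be sketched for the simplest repairable system, a two-state Markov chain with constant failure and repair rates. The rates below are illustrative assumptions, not values from the paper:

```python
import math

# Two-state repairable system: failure rate lam, repair rate mu (assumed values).
lam, mu = 0.01, 0.5   # per hour

# Steady-state availability: sum of operational-state probabilities
# (here there is only one operational state).
availability = mu / (lam + mu)

# Reliability over [0, t]: delete the repair transition so the failure
# state is absorbing; the up-state survival probability is then exp(-lam*t).
def reliability(t):
    return math.exp(-lam * t)

# Mean interval availability over [0, T]: time-average of the point
# availability A(t), starting from the operational state.
def mean_interval_availability(T):
    A = mu / (lam + mu)
    return A + (1 - A) * (1 - math.exp(-(lam + mu) * T)) / ((lam + mu) * T)

print(availability, reliability(100), mean_interval_availability(1000))
```

Note that reliability is always the most pessimistic of the three: the mean interval availability lies between the reliability and the steady-state availability, because repairs return the system to service after its first failure.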
For example, how can traditional reliability assessment techniques determine the dependability of a manned space vehicle designed to explore Mars, given that humanity has yet to venture that far into space? How can one determine the reliability of a nuclear weapon, given that the world has in place test-ban treaties and international agreements? And, finally, how can one decide which artificial heart to place into a patient, given that neither has ever been inside a human before? To resolve this dilemma, reliability must be 1) reinterpreted, and then 2) quantified. Using the scientific method, researchers use evidence to determine the probability of success or failure; reliability can therefore be seen as an image of probability. The redefined concept of reliability incorporates auxiliary sources of data, such as expert knowledge, corporate memory, and mathematical modeling and simulation. By combining both types of data, reliability assessment is ready to enter the 21st century. Thus, reliability is a quantified measure of uncertainty about a particular type of event (or events).

Reliability is a charged word guaranteed to get attention at its mere mention. Bringing with it a host of connotations, reliability, and in particular its appraisal, faces a critical dilemma at the dawn of a new century. Traditional reliability assessment consists of various real-world assessments driven by the scientific method; i.e., conducting extensive real-world tests over long time periods (often years) enabled scientists to determine a product's reliability under a host of specific conditions. In the 21st century, humanity's technological advances walk hand in hand with myriad testing constraints, such as political and societal principles, economic and time considerations, and gaps in scientific and technological knowledge. Because of these constraints, the accuracy and efficiency of traditional methods of reliability assessment become much more questionable. Applications are an important part of research: any theory has importance only if it is useful and applicable. Many researchers are busy these days applying concepts of reliability in various fields of engineering and the sciences. Some important applications are given here:
2.1   Nano-Technology
Nano-reliability measures the ability of a nano-scaled product to perform its intended functionality. At the nano scale, the physical, chemical, and biological properties of materials differ in fundamental, valuable ways from the properties of individual atoms, molecules, or bulk matter. Conventional reliability theories need to be restudied before they can be applied to nano-engineering. Research on nano-reliability is extremely important because nano-structured components account for a high proportion of costs and serve critical roles in newly designed products. Shuen-Lin Jeng et al. [1] introduce the concepts of reliability to nanotechnology and present work on identifying various physical failure mechanisms of nano-structured materials and devices during the fabrication process and operation. Modeling techniques for degradation, reliability functions, and failure rates of nano-systems are also discussed in their paper.
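The reliability function and failure rate mentioned above can be illustrated with a Weibull lifetime model, a common choice for degradation-driven failures. The shape and scale parameters below are assumed illustrative values, not results from [1]:

```python
import math

# Illustrative Weibull lifetime model (assumed parameters, not from [1]).
k, scale = 1.5, 1000.0   # k > 1 models wear-out; k < 1 models infant mortality

def reliability(t):
    """R(t) = P(device survives past time t)."""
    return math.exp(-((t / scale) ** k))

def failure_rate(t):
    """Hazard rate h(t) = f(t) / R(t) = (k/scale) * (t/scale)**(k-1)."""
    return (k / scale) * (t / scale) ** (k - 1)

print(reliability(500.0), failure_rate(500.0))
```

With k greater than one, the failure rate increases with time, which matches the wear-out behavior expected of degrading nano-structured devices.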
Engineers are required to help increase reliability while maintaining effective production schedules, so as to produce current and future electronics at the lowest possible cost. Without effective quality control, devices dependent on nanotechnology, including transistors, will experience high manufacturing costs, which could disrupt the steady progression of Moore's law. Nanotechnology can potentially transform civilization. Realizing this potential requires a fundamental understanding of friction at the atomic scale. Furthermore, the tribological considerations of these systems are expected to be an integral aspect of system design and will depend on the training of both existing and future scientists and engineers at the nano scale. As nanotechnology is gradually being integrated into new product design, it is important to understand the mechanical and material properties involved, for the sake of both scientific interest and engineering usefulness. The development of nanotechnology will lead to the introduction of new products to the public. In the modern large-scale manufacturing era, reliability issues have to be studied, and the results incorporated into the design and manufacturing phases of new products. Measurement and evaluation of the reliability of nano-devices is an important subject, and new technology is being developed to support this task. As noted by Keller et al. [2], with ongoing miniaturization from MEMS towards NEMS, there is a need for new reliability concepts making use of meso-type (micro to nano) or fully nano-mechanical approaches. Experimental verification will be the major method for validating theoretical models and simulation tools. Therefore, there is a need to develop measurement techniques capable of evaluating strain fields with very local (nano-scale) resolution.

2.2   Computer Communication Network
Network analysis is also an important approach to modeling real-world systems. System reliability and system unreliability are two related performance indices useful for measuring the quality level of a supply-demand system. For a binary-state network without flow, the system unreliability is the probability that the system cannot connect the source and the sink. Extending to a limited-flow network in the single-commodity case, the arc capacities are stochastic and the system capacity (i.e., the maximum flow) is not a fixed number. The system unreliability for demand (d + 1), the probability that the upper bound of the system capacity equals d, can be computed in terms of upper boundary points. An upper boundary point is a maximal system state in which the system fulfills the demand. In his paper, Yi-Kuei Lin [3] discusses a multicommodity limited-flow network (MLFN) in which multiple commodities are transmitted through unreliable nodes and arcs. Here the system capacity cannot be treated as the maximal sum over commodities, because each commodity consumes capacity differently. Lin therefore defines the system capacity as a demand vector if the system fulfills at most such a demand vector. The main problem of the paper is to measure the quality level of an MLFN. For this he proposes a new performance index, the probability that the upper bound of the system capacity equals the demand vector subject to a budget constraint, to evaluate the quality level of an MLFN. A branch-and-bound algorithm based on minimal cuts is also presented to generate all upper boundary points in order to compute the performance index.
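The idea of a stochastic-capacity network can be sketched by brute force on a toy single-commodity example. The sketch below simply enumerates all capacity states of two hypothetical parallel source-to-sink arcs; it is not Lin's branch-and-bound algorithm, and the capacity distributions are assumed for illustration:

```python
import itertools

# Two hypothetical parallel arcs from source to sink; each arc independently
# takes a capacity level with the given probability (assumed distributions).
arc_caps = [
    {0: 0.05, 1: 0.15, 2: 0.80},   # arc 1: P(cap = 0), P(cap = 1), P(cap = 2)
    {0: 0.10, 2: 0.90},            # arc 2
]

def prob_capacity_at_least(d):
    """P(system capacity >= d); for parallel arcs, max flow = sum of capacities."""
    total = 0.0
    for combo in itertools.product(*(c.items() for c in arc_caps)):
        prob = 1.0
        for _, q in combo:
            prob *= q
        if sum(cap for cap, _ in combo) >= d:
            total += prob
    return total

# System unreliability for demand d: probability the network cannot carry d.
d = 3
print(1.0 - prob_capacity_at_least(d))
```

Full enumeration is exponential in the number of arcs, which is exactly why the paper works with upper boundary points and minimal cuts instead.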

In a computer network there are several reliability problems. The probabilistic events of interest are:
* Terminal-pair connectivity
* Tree (broadcast) connectivity
* Multi-terminal connectivity

These reliability problems depend on the network topology, the distribution of resources, the operating environment, and the probability of failures of computing nodes and communication links. The computation of the reliability measures for these events requires the enumeration of all simple paths between the chosen set of nodes. The complexity of these problems therefore increases very rapidly with network size and topological connectivity. The reliability analysis of computer communication networks is generally based on Boolean algebra and probability theory. Raghavendra et al. [4] discuss various reliability problems of computer networks, including terminal-pair connectivity, tree connectivity, and multi-terminal connectivity. They also study dynamic computer-network reliability by deriving time-dependent expressions for reliability measures, assuming Markov behavior for failures and repairs. This allows computation of task- and mission-related measures such as mean time to first failure (MTFF) and mean time between failures (MTBF).
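Terminal-pair connectivity can be computed exactly for small networks by enumerating all link states, which makes the complexity point above concrete: the enumeration doubles with every added link. The triangle topology and link reliability below are hypothetical:

```python
import itertools

# Hypothetical triangle network: each link works independently with prob p.
links = [("s", "a"), ("a", "t"), ("s", "t")]
p = 0.9

def connected(up_links, src, dst):
    """Depth-first search over the links that are currently up."""
    adj = {}
    for u, v in up_links:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    seen, stack = {src}, [src]
    while stack:
        u = stack.pop()
        if u == dst:
            return True
        for w in adj.get(u, []):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return False

def terminal_pair_reliability(src, dst):
    """Sum the probabilities of all link states that connect src to dst."""
    total = 0.0
    for state in itertools.product([True, False], repeat=len(links)):
        prob = 1.0
        for up in state:
            prob *= p if up else (1 - p)
        up_links = [link for link, up in zip(links, state) if up]
        if connected(up_links, src, dst):
            total += prob
    return total

print(terminal_pair_reliability("s", "t"))
```

For this triangle the answer can be checked by hand: s and t are connected if the direct link works, or if it fails but both links through a work, giving p + (1 - p) * p * p = 0.981 for p = 0.9.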
