CHAPTER 1
The History of Reliability and Safety Technology
Failure Data
Failure rates of engineered components were not required, as they are now, for use
in prediction techniques, and consequently there was little incentive for the formal collection
of failure data.
A further factor was that component parts were individually fabricated in a craft environment.
The advent of the electronic age, accelerated by the Second World War, led to the need for
more complex mass-produced component parts with a higher degree of variability in the
parameters and dimensions involved.
The experience of poor field reliability of military equipment throughout the 1940s and 1950s
focused attention on the need for more formal methods of reliability engineering. This gave
rise to the collection of failure information from both the field and from the interpretation of
test data.
Failure rate databanks were created in the mid-1960s as a result of work at such
organizations as UKAEA (UK Atomic Energy Authority), RRE (Royal Radar Establishment,
UK) and RADC (Rome Air Development Corporation, US).
The availability and low cost of desktop personal computing (PC) facilities, together with
versatile and powerful software packages, have permitted the listing and manipulation of
incident data with an order of magnitude less effort. Fast automatic sorting of data
encourages the analysis of failures into failure modes.
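As a hypothetical illustration of this kind of automatic sorting (the record fields, component names and failure modes below are invented for the sketch), a few lines of Python can tally incident records by failure mode:

```python
from collections import Counter

# Invented incident records of the kind a failure databank might hold.
incidents = [
    {"component": "pump", "mode": "seal leak"},
    {"component": "pump", "mode": "bearing seizure"},
    {"component": "valve", "mode": "fail to open"},
    {"component": "pump", "mode": "seal leak"},
]

# Tally failures per (component, mode) pair -- the automatic sorting
# that makes analysis of failures into failure modes practical.
tally = Counter((r["component"], r["mode"]) for r in incidents)
for (component, mode), count in tally.most_common():
    print(f"{component}: {mode} -> {count}")
```

Sorting by frequency in this way immediately highlights the dominant failure modes, which is the starting point for any failure-mode analysis.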
With the rapid growth of built-in test and diagnostic features in equipment, a future trend
ought to be the emergence of automated fault reporting.
Hazardous Failures
In the early 1970s the process industries became aware that, with larger plants involving
higher inventories of hazardous material, the practice of learning by mistakes was no longer
acceptable.
Methods were developed for identifying hazards and for quantifying the consequences of
failures. They were evolved largely to assist in the decision-making process when developing
plants. External pressures to identify and quantify risk were to come later. By the
mid-1970s there was already concern over the lack of formal controls for regulating
those activities which could lead to incidents having a major impact on the health and safety
of the general public.
The Flixborough incident in June 1974 resulted in 28 deaths and focused public and media
attention on this area of technology. Successive events such as the tragedy at Seveso in Italy
in 1976, right through to the Piper Alpha offshore and the more recent Paddington rail and
Texaco Oil Refinery incidents, have kept that interest alive and resulted in guidance and
legislation.
The techniques for quantifying the predicted frequency of failures were originally applied to
assessing plant availability, where the cost of equipment failure was the prime concern. Over
the last twenty years these techniques have also been used for hazard assessment.
Maximum tolerable risks of fatality have been established according to the nature of the risk
and the potential number of fatalities. These are then assessed using reliability techniques.
The need for failure rate data to support these predictions has therefore increased and
Chapter 4 examines the range of data sources and addresses the problem of variability within
and between them.
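As a minimal sketch of such an assessment (all figures below are invented for illustration and are not taken from any standard or data source), a predicted individual risk can be compared against a maximum tolerable target:

```python
# Assumed inputs -- in practice these come from failure rate data
# (Chapter 4) and consequence analysis, not from this sketch.
protection_failure_rate = 1e-4   # dangerous failures per year (assumed)
probability_of_fatality = 0.1    # chance a failure leads to a fatality (assumed)

# Predicted individual risk of fatality per year
predicted_risk = protection_failure_rate * probability_of_fatality

max_tolerable_risk = 1e-5        # assumed maximum tolerable risk target

print(f"Predicted individual risk: {predicted_risk:.1e} per year")
print("meets target" if predicted_risk <= max_tolerable_risk else "needs improvement")
```

The point of the comparison is that the tolerable-risk target drives the required failure rate of the protection equipment, which in turn drives the demand for credible failure rate data.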
Figure 1.1 gives some perspective to the idea of reliability growth. The design
reliability is likely to be the figure suggested by a prediction exercise. However, there will be
many sources of failure in addition to the simple random hardware failures predicted in this
way.
Thus the achieved reliability of a new product or system is likely to be an order of magnitude,
or even more, below the design reliability. Reliability growth is the improvement that takes place as
modifications are made as a result of field failure information. A well-established item, perhaps
with tens of thousands of field hours, might start to approach the design reliability. Section
12.3 deals with methods of plotting and extrapolating reliability growth.
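Section 12.3 covers the plotting methods in detail; as a hedged sketch (not necessarily the method used there), one common approach is a Duane-style power-law fit, in which cumulative MTBF plotted against cumulative time on log-log axes gives a straight line that can be extrapolated. The data points below are invented:

```python
import math

# Invented field data: cumulative hours and observed cumulative MTBF.
times = [100.0, 300.0, 1000.0, 3000.0]    # cumulative field hours
cum_mtbf = [50.0, 75.0, 120.0, 185.0]     # cumulative MTBF (hours)

# Least-squares fit of log(MTBF) = log(k) + alpha * log(T),
# i.e. a straight line on log-log axes with growth slope alpha.
xs = [math.log(t) for t in times]
ys = [math.log(m) for m in cum_mtbf]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
alpha = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
k = math.exp(ybar - alpha * xbar)

# Extrapolate the fitted line to 30,000 cumulative hours.
projected = k * 30000.0 ** alpha
print(f"growth slope alpha = {alpha:.2f}, projected MTBF = {projected:.0f} h")
```

For these invented figures the fitted slope is around 0.4, a typical value for a reasonably effective modification programme; a well-established item with tens of thousands of field hours would sit far along such a curve, approaching the design reliability.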