10
PIPELINE SYSTEM OPERATIONAL RELIABILITY/AVAILABILITY ASSESSMENT
[Figure: pipeline system performance record. 2001 reliability: 99.77%; December throughput: 33,111,000 bbls (1,068,000 bpd); 2001 throughput: 362,131,000 bbls (992,000 bpd).]
RS = ∏(i = 1 to n) Ri        Eq. (10.1)

i.e., the reliability of the system RS is the product (∏) of the reliabilities Ri of the component parts. Availability, particularly steady-state availability, can also be assessed in the same way. Thus:

AS = ∏(i = 1 to n) Ai        Eq. (10.2)
Reliability(%) = [(elapsed time − failure downtime)/(elapsed time)] × 100        Eq. (10.3)

A common approach is to determine the reliability every month, with the elapsed time equal to the number of hours in the month. This can then be trended to track compression or pumping unit reliability and to make comparisons between different types of units.
Unavailability is simply the opposite concept and may be defined by the following equation:

Unavailability(%) = 100 − Reliability(%)        Eq. (10.4)

An alternate approach, based on elapsed time in the same way as for reliability, takes the elapsed time and subtracts both the downtime due to failures and the downtime for maintenance:

Availability(%) = [(elapsed time − failure downtime − maintenance downtime)/(elapsed time)] × 100
10.1.4.3.2 Failure Rate Data Historical failure rate data are used to help quantify the availability of various equipment items. The data are often characterized using two parameters. The first of these is the mean time between failures (MTBF).
MTBF is calculated as the total operating time of all items in the data sample divided by the number of failures. Thus, it is equal to the inverse of the failure frequency. For example, a component with an MTBF of 20,000 hours has a failure frequency of 5 × 10⁻⁵ per hour.
The second parameter is the mean time to restore (MTTR). This is the amount of calendar time required to restore a failed piece of equipment to operable condition. For a given data sample, MTTR is calculated as the total calendar time required to repair all failures divided by the number of failures.
Since MTBF represents the average amount of operational time between failures and MTTR is the average amount of downtime after a failure, availability can be rewritten as follows:

Availability = MTBF/(MTBF + MTTR)        Eq. (10.5)
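As a quick numerical check of Eq. (10.5), the short sketch below uses the 20,000-hour MTBF from the example above together with an assumed 60-hour MTTR (the MTTR value is illustrative, not from the text):

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability per Eq. (10.5): the mean operating time
    between failures divided by the full failure-plus-restore cycle."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# An MTBF of 20,000 h implies a failure frequency of 1/20,000 = 5e-5 per hour,
# as stated in the text.
print(1.0 / 20_000)            # 5e-05

# With an assumed MTTR of 60 h, availability is 20000/20060:
print(availability(20_000, 60))
```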
The OR gate is used when the output event will occur if at least one of the input events occurs (i.e., either occurrence of event A or occurrence of event B will cause event C to occur); see Figure 10.3 below.
Note that the input to each gate is shown at the bottom and output is noted at the top.
Usually, the resultant event is shown as a rectangular shape linked to the top of the gate
below. Gates are not generally connected directly together, but rather are separated by events.
For the analysis of systems with multiple redundancies, a useful gate is the qualified
OR gate, sometimes referred to as the N/M or X/Y gate. This gate is similar to the OR gate,
except that a specified number of inputs are needed to occur in order for the output event
to occur. Y represents the total number of possible failures and X is the number of failures
required for the output event to occur. For example, for a 2/5 OR gate, two (or more) of
the five components must fail in order for system failure to occur. Thus, the standard OR
gate may be thought of as an X/Y OR gate with X equal to 1. An AND gate may be thought
of as an X/Y OR gate with X equal to Y. See Figure 10.4 for a summary of the more important symbols used in fault tree analysis.
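For independent inputs, the X/Y gate can be evaluated by enumerating which inputs occur. The sketch below is illustrative, with assumed input probabilities; it reduces to the ordinary OR gate at X = 1 and to the AND gate at X = Y:

```python
from itertools import combinations
from math import prod

def x_out_of_y(probs, x):
    """Probability that at least x of the independent input events occur
    (the qualified X/Y OR gate). probs[i] is the probability of input i."""
    n = len(probs)
    total = 0.0
    for k in range(x, n + 1):
        # Sum over every way exactly k of the n inputs can occur.
        for occurring in combinations(range(n), k):
            total += prod(probs[i] if i in occurring else 1.0 - probs[i]
                          for i in range(n))
    return total

p = [0.1] * 5                 # assumed per-component failure probabilities
print(x_out_of_y(p, 1))       # ordinary OR gate: 1 - 0.9**5
print(x_out_of_y(p, 2))       # 2/5 gate: two or more of five must fail
print(x_out_of_y(p, 5))       # AND gate: all five must fail
```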
PA + PĀ = 1        Eq. (10.6)

This relation is simply a consequence of the fact that events A and Ā are mutually exclusive (i.e., both cannot occur simultaneously) and exhaustive (i.e., they represent the entire range of possible outcomes, so their probabilities sum to 1, i.e., 100%). Thus, if event A represents system success, then Ā represents system failure and:

Pfailure = 1 − Psuccess        Eq. (10.7)
For any two independent events A and B, with probabilities PA and PB, respectively, the probability that A and B both occur is the intersection of sets A and B, as given below:

PA and B = P(A ∩ B) = PA × PB, where × is the symbol for multiplication        Eq. (10.8)
For any number of independent events A, B, …, N with probabilities PA, PB, …, PN, respectively, the probability of all events occurring simultaneously is:

Pall = ∏(i = 1 to n) Pi = PA × PB × … × PN        Eq. (10.9)
Thus, in a fault tree, for independent events that are connected by an AND gate, the
probability of the output event of the gate is calculated by multiplying the probabilities for
the inputs.
For the same two events A and B, the probability that at least one of them occurs is the union of sets A and B and is given as follows:

PA or B = P(A ∪ B) = PA + PB − (PA × PB)        Eq. (10.10)
For small probabilities, the cross-product terms become negligible and the probability of at least one of any number of events occurring can be approximated by the sum:

Pall = Σ(i = 1 to n) Pi = PA + PB + … + PN        Eq. (10.12)

Thus, in a fault tree, for independent events that are connected by an OR gate, the probability of the output event of the gate is calculated by adding the probabilities of the inputs.
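These gate rules (Eqs. 10.8 to 10.12) translate directly into code. The probabilities below are assumed values for illustration:

```python
def p_and(*probs):
    """AND gate for independent events: the product of the input
    probabilities (Eqs. 10.8 and 10.9)."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def p_or(*probs):
    """OR gate for independent events, computed via the complement:
    P(at least one) = 1 - prod(1 - Pi). For two events this reduces to
    PA + PB - PA*PB, i.e., Eq. (10.10)."""
    result = 1.0
    for p in probs:
        result *= 1.0 - p
    return 1.0 - result

pa, pb = 0.01, 0.02                  # assumed event probabilities
print(p_and(pa, pb))                 # ≈ 2.0e-4
print(p_or(pa, pb))                  # ≈ 0.0298
print(pa + pb)                       # additive approximation of Eq. (10.12)
```

For small input probabilities the additive approximation is close to the exact union, which is why Eq. (10.12) is commonly used in fault tree work.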
RA = 1 − (1 − RA1)(1 − RA2)        Eq. (10.18)

RB = 1 − (1 − RB1)(1 − RB2)        Eq. (10.19)

and so on. By making the necessary substitutions in Eq. (10.13), the equation for RS is obtained:

RS = [1 − (1 − RA1)(1 − RA2)] × [1 − (1 − RB1)(1 − RB2)] × …        Eq. (10.20)

Using this procedure, it is possible to write down system reliability expressions corresponding to Figure 10.8.
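The substitution pattern of Eqs. (10.18) to (10.20) can be sketched as follows; the unit reliabilities are assumed values, not taken from the text:

```python
def series(*rs):
    """Series RBD: every unit must survive, so Rs is the product of the
    Ri, as in Eq. (10.1)."""
    result = 1.0
    for r in rs:
        result *= r
    return result

def parallel(*rs):
    """Active-parallel RBD group: the group fails only if every unit
    fails, R = 1 - prod(1 - Ri), as in Eqs. (10.18) and (10.19)."""
    result = 1.0
    for r in rs:
        result *= 1.0 - r
    return 1.0 - result

# Two redundant pairs in series, the pattern behind Eq. (10.20):
r_a = parallel(0.9, 0.9)      # 1 - 0.1 * 0.1 = 0.99
r_b = parallel(0.8, 0.8)      # 1 - 0.2 * 0.2 = 0.96
print(series(r_a, r_b))       # ≈ 0.9504
```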
10.1.6.6 Standby Units
Figure 10.9 represents a standby arrangement of units (such as in compressor or pump stations) in which one unit (unit A) is working and the other (unit B) is not used until the working unit fails, at which point it takes over for the duration of the mission or until failure. If it can be assumed that unit B does not fail or deteriorate while idle, then the reliability RS of the system is given by:

RS = RA(t) + ∫(0 to t) f(τ) RB(t − τ) dτ        Eq. (10.21)

where t is the mission time, RA(t) is the probability of unit A surviving the entire mission, and the integral is the probability that unit A fails before the end of the mission but unit B then survives for the remainder of the mission. In this context, f(t) is the probability density function (PDF) of the time to failure (TTF) of the working unit.
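Eq. (10.21) can be checked numerically for the special case of exponential failure times. For identical units with rate λ, the closed form is Rs(t) = e^(−λt)(1 + λt); the sketch below compares a midpoint-rule evaluation of the integral against that closed form (the λ and t values are assumed for illustration):

```python
import math

def standby_reliability(lam_a, lam_b, t, steps=100_000):
    """Numerically evaluate Eq. (10.21) for exponential units:
    Rs(t) = RA(t) + integral over u in [0, t] of fA(u) * RB(t - u) du,
    assuming unit B cannot fail or deteriorate while idle."""
    ra = math.exp(-lam_a * t)                  # unit A survives the whole mission
    du = t / steps
    integral = 0.0
    for i in range(steps):
        u = (i + 0.5) * du                     # midpoint rule
        f_a = lam_a * math.exp(-lam_a * u)     # PDF of A's time to failure
        r_b = math.exp(-lam_b * (t - u))       # B survives the remainder
        integral += f_a * r_b * du
    return ra + integral

lam, t = 0.01, 50.0                            # assumed failure rate and mission time
print(standby_reliability(lam, lam, t))        # numerical Eq. (10.21)
print(math.exp(-lam * t) * (1 + lam * t))      # closed form for identical units
```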
TABLE 10.2. Results of 10 Simulations of the Two-Component System (times in hours)

Simulation
serial number   Component 1   Component 2    Min.    Max.     Sum
1                      13.1           1.6     1.6    13.1    14.7
2                       1.1          30.1     1.1    30.1    31.2
3                      10.1           1.0     1.0    10.1    11.1
4                       0.1          39.8     0.1    39.8    39.9
5                      11.9           9.1     9.1    11.9    21.0
6                      16.7          16.1    16.1    16.7    32.8
7                       5.9           9.3     5.9     9.3    15.2
8                       2.7          20.0     2.7    20.0    22.7
9                      13.3          10.1    10.1    13.3    23.4
10                     14.9          18.2    14.9    18.2    31.1
Total                  89.8         155.3    62.6   182.5   243.1
Mean                   8.98         15.53    6.26   18.25   24.31
this way are statistically indistinguishable from the real data. For example, histograms of failure times appear similar, and there are the same correlations, or lack of them, as in the real data.
As an example, consider a system that consists of two components, each with a constant failure rate, and with mean times to failure (MTFs) of 10 h and 15 h, respectively. Table 10.2 shows the results obtained from 10 simulations of the system. Columns 2 and 3 show simulated component data. According to the simulation, component 1 (see column 2) has an MTF of 8.98 h, which is close to the true MTF of 10 h. The difference can be explained by statistical variability. Statistical tests would show no difference between this set of artificially generated data and genuine data recorded from 10 items drawn from a population with a constant failure rate and an MTF of 10 h. Similar remarks apply to the data in column 3, which give a simulated MTF for component 2 of 15.53 h compared with the true value of 15 h.
The validity of the technique is illustrated by consideration of columns 4 to 6 in Table 10.2, which represent the series, parallel, and standby configurations, respectively, of the original two-component system. Column 4, the series system, is calculated using the techniques of RBDs to have an MTF of 6 h; the result of the simulation is 6.26 h. This result is close to the expected figure even though only 10 simulations have been done, and there is no statistical difference between the simulated data and real data (note the mean). Column 5 simulates active redundancy and column 6 standby redundancy. The means are close to the values of 19 h and 25 h, respectively, calculated using the RBD techniques, and no statistical test could tell that the data are the result of a simulation rather than real data.
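The 10-trial example can be replayed with many more trials. The sketch below assumes exponential failure times (the constant-failure-rate case used in the text) with means of 10 h and 15 h, and recovers the RBD values of roughly 6 h, 19 h, and 25 h for the series, active-redundant, and standby configurations:

```python
import random

def simulate(mtf1=10.0, mtf2=15.0, trials=10_000, seed=1):
    """Monte-Carlo simulation of the two-component example. Constant
    failure rates give exponentially distributed times to failure; the
    series life is the min of the pair, the active-parallel life is the
    max, and the standby life is the sum (as in Table 10.2)."""
    rng = random.Random(seed)
    tot_series = tot_parallel = tot_standby = 0.0
    for _ in range(trials):
        t1 = rng.expovariate(1.0 / mtf1)
        t2 = rng.expovariate(1.0 / mtf2)
        tot_series += min(t1, t2)
        tot_parallel += max(t1, t2)
        tot_standby += t1 + t2
    return tot_series / trials, tot_parallel / trials, tot_standby / trials

mean_series, mean_parallel, mean_standby = simulate()
print(round(mean_series, 1))     # expect ~6 h:  1 / (1/10 + 1/15)
print(round(mean_parallel, 1))   # expect ~19 h: 10 + 15 - 6
print(round(mean_standby, 1))    # expect ~25 h: 10 + 15
```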
10.1.7.1 Advantages and Limitations
The principal benefits of Monte-Carlo simulation are as follows:
No simplifying assumptions are required
It is possible to obtain a wide variety of output data
It is easy to adjust the simulation and hence the analysis
It has very broad application and can be used to model a wide variety of situations, including those that have complex failure and repair distributions
[Table: historical failure rate data (per year) for equipment items, including filter/separator blockage at 8.76 × 10⁻³ per year; cited sources include Lees (1976).]
These data can be used to quantify the basic event frequencies. Some of the sources of the failure rate data are also cited. Table 10.4 presents the mean time to restore data that represent the best experience available in the industry. The MTTR is the total downtime associated with a given failure and includes detection time, reaction time, travel time, and repair time.
Phase I (6.21 × 10⁶ m³/day): 264 km of 24-in., 275 km of 22-in., and 153 km of 16-in. pipe, plus one compressor station at KMP 0 (1 + 1 unit)
Phase II (7.63 × 10⁶ m³/day): facilities of Phase I, plus one added unit at Station 1 and a new Station 2 at KMP 264 (1 + 1 unit)
Phase III (8.81 × 10⁶ m³/day): facilities of Phase II, plus one added unit each at Stations 1 and 2, plus a new Station 3 at KMP 630 (1 + 1 unit)
The main cause of unavailability of gas at delivery points on a single-line pipeline is unavailability of one or more of the gas-turbine-driven compressor units. Since availability of these units is crucial, installation of a spare at each compressor station was justified. However, the cases of either two compressor-unit outages occurring simultaneously (i.e., due to repairs) or failure of an operating unit while the spare is out for maintenance are still possible.
While the likelihood of these cases may seem extremely remote, the strict requirements of the sponsor necessitated their consideration.
Transient analysis was used to determine the time response of supply pressure at each delivery point during the cases described above. This response varied considerably depending on where the unit outage occurred, i.e., at which compressor station and in which phase of operation.
A sample graphical presentation for Phase III is provided in Figure 10.12. At each delivery point, curves defining delivery pressure decline and recovery during unit outage and restart were generated for each station and for each phase.
10.3.1.3 Reliability Analysis Based on Reliability Block Diagram
The probabilistic model of the pipeline operation described above, combined with transient analysis of the pipeline, was used to determine the pipeline availability and hence its reliability.
The availability model was built in Visual Basic on an Excel spreadsheet, using Crystal Ball to provide the Monte-Carlo simulation technique. The procedure consisted of constructing RBDs (Figure 10.13), building and running a Monte-Carlo model, and post-processing the output for each of Phases 1, 2, and 3.
cumulative outages over 1-, 5-, 10-, and 26-year periods, which predicted the following unscheduled shutdowns:
95% confident of not exceeding 1 day in any continuous 1-year period (> 99.7% available)
99% confident of not exceeding 2 days in any continuous 5-year period (> 99.9% available)
99% confident of not exceeding 3 days in any continuous 10-year period (> 99.91% available)
MFDT = (1/2) λt        Eq. (22)

where λ is the failure rate and t is the test interval. Note that this formula assumes that, on average, a component fails in the middle of the test interval.
The frequency of the simultaneous failure can then be calculated by multiplying the failure frequency of the item by the MFDT of its spare.
10.3.2.5.4.2 Pressure Reducing Valves It was assumed that the spare valve is tested once every two weeks. Thus, the probability of the valve being in a failed state is:

(1/2) × (9.99 × 10⁻¹/year) × 2 weeks × 1 year/(52 weeks) = 1.92 × 10⁻²        Eq. (23)

Note: 9.99 × 10⁻¹ per year is the combined failure rate of the control valve, pneumatic controller, and pressure transmitter.
10.3.2.5.4.3 Turbine Meters It can be assumed that the turbine meters are tested for dormant faults once every three months. Thus, the probability of a redundant meter being in a failed state is:

(1/2) × (2.63 × 10⁻²/year) × 3 months × 1 year/(12 months) = 3.29 × 10⁻³        Eq. (24)
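The two dead-time figures above follow directly from Eq. (22); the sketch below reproduces them:

```python
def mfdt(failure_rate_per_year, test_interval_years):
    """Mean fractional dead time of a periodically tested standby item,
    MFDT = (1/2) * lambda * t (Eq. 22): on average the item is assumed
    to fail midway through the test interval."""
    return 0.5 * failure_rate_per_year * test_interval_years

# Spare pressure-reducing valve, tested every two weeks (Eq. 23):
print(mfdt(9.99e-1, 2 / 52))      # ≈ 1.92e-2

# Redundant turbine meter, tested every three months (Eq. 24):
print(mfdt(2.63e-2, 3 / 12))      # ≈ 3.29e-3
```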
Equipment                          Failure Rate (per year)
ESD valve                          1.19 × 10⁻²
MOV, spurious operation            2.63 × 10⁻²
Total for ESD valve                3.82 × 10⁻²
Pneumatic control valve            5.96 × 10⁻¹
Pneumatic controller               3.77 × 10⁻¹
Pressure transmitter               2.63 × 10⁻²
Total for PRV (not in parallel)    9.99 × 10⁻¹
PRV (in parallel)                  1.92 × 10⁻²
Turbine meter (not in parallel)    2.63 × 10⁻²
Turbine meter (in parallel)        8.65 × 10⁻⁵
Filter/separator                   8.76 × 10⁻³
Total for single city-gate         6.62 × 10⁻²
Total for all city-gates           3.11
External Leakage

Leakage Mode                              Failure Rate (per year)   Comments
Rupture in main pipeline                  2.57 × 10⁻³               Based on 345 km
Rupture in pipeline lateral               2.36 × 10⁻³               Based on 400 km
Small leak in main pipeline               1.05 × 10⁻²               Based on 345 km
Rupture in city-gate metering station     2.40 × 10⁻³               Based on 0.8 km
Total external leakage                    1.78 × 10⁻²               Sum of the above external leakage modes
False Trip of ESD Valves

Item                                 Failure Rate (per year)
ESD valve (per valve)                1.19 × 10⁻²
Pressure transmitter (per valve)     2.63 × 10⁻²
Total per ESD valve                  3.82 × 10⁻²
All ESD valves on main pipeline      6.88 × 10⁻¹
All ESD valves on laterals           1.15
Total false trips                    1.84

Type of Failure                                     Failure Rate (per year)   MTBF
Equipment failure in city-gate metering stations    3.11                      4 months
External leakage                                    1.78 × 10⁻²               56 years
False trip of ESD valves                            1.84                      6.5 months
TOTAL                                               4.97                      2.4 months
fulfill its intended function. This solution makes use of the downtime estimates associated with the various failures.
For this solution, each basic event is assigned an unavailability, as indicated by the following equation, based on the fundamentals introduced in Section 1:

Unavailability = failure rate × MTTR        Eq. (25)

Thus, the unavailability for each basic event is the product of the MTTR and the failure rate. It must be noted that this approximation assumes that the MTBF is much greater than the MTTR. The fault tree is then solved in a fashion similar to the reliability solution, except that unavailability values, rather than failure rates, are used to describe the basic events.
It should be noted that the unavailability for the standby PRV and turbine meter can be calculated using the approach described in Section 10.3.2.5.4.1, Mean Fractional Dead Time, since the unavailability due to these dormant faults depends on the test interval rather than on repair time.
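Eq. (25) is straightforward to apply. In the sketch below the failure rate is the ESD-valve figure used earlier in the chapter, while the 12-hour MTTR is an assumed illustrative value:

```python
HOURS_PER_YEAR = 8760.0

def unavailability(failure_rate_per_year, mttr_hours):
    """Basic-event unavailability per Eq. (25): failure rate times MTTR,
    valid when the MTBF is much greater than the MTTR."""
    return failure_rate_per_year * mttr_hours / HOURS_PER_YEAR

u = unavailability(1.19e-2, 12.0)   # assumed 12-h MTTR
print(u)                            # ≈ 1.63e-5
print(f"{1 - u:.4%}")               # the corresponding availability
```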
The fault tree depicting this solution is given in Figure 10.17. Table 10.6 provides an alternative presentation of these results.
10.3.2.8 Summary
Two fundamental types of results were produced. The first is a measure of system reliability, expressed in terms of both the expected number of failures per year and the MTBF. The second is system availability, presented as the percentage of time the pipeline is expected to be operating as intended. Availability differs from reliability in that it accounts for the length of downtime associated with the different failures. The results of the study are summarized in Table 10.7.
TABLE 10.6. Summary of Availability Calculations

Equipment                          Unavailability   Comments
ESD valve                          1.63 × 10⁻⁵
MOV, spurious operation            1.80 × 10⁻⁵
Total for ESD valve                3.43 × 10⁻⁵
Pneumatic control valve            8.84 × 10⁻⁴
Pneumatic controller               3.44 × 10⁻⁴
Pressure transmitter               3.00 × 10⁻⁵
Total for PRV (not in parallel)    1.26 × 10⁻³
PRV (in parallel)                  2.42 × 10⁻⁵      Based on MFDT and the combined failure rate of the control valve, pressure transmitter and pneumatic controller
Turbine meter (not in parallel)    3.00 × 10⁻⁵
Turbine meter (in parallel)        9.87 × 10⁻⁸
Filter/separator                   6.00 × 10⁻⁶
Total for single city-gate         6.46 × 10⁻⁵
Total for all city-gates           3.04 × 10⁻³
External Leakage                          Unavailability   Comments
Rupture in main pipeline                  2.11 × 10⁻⁵      Based on 345 km
Rupture in pipeline lateral               5.38 × 10⁻⁶      Based on 400 km
Small leak in main pipeline               7.20 × 10⁻⁵      Based on 345 km
Rupture in city-gate metering station     1.32 × 10⁻⁵      Based on 2,400 ft
Total external leakage                    1.12 × 10⁻⁴      Sum of the above external leakage modes
[Table: unavailability of ESD valve false trips. Per-valve contributions of 1.63 × 10⁻⁵ and 1.80 × 10⁻⁵ (total 3.43 × 10⁻⁵) and of 3.26 × 10⁻⁵ and 1.80 × 10⁻⁵ (total 5.06 × 10⁻⁵); subtotals 9.11 × 10⁻⁴ and 1.03 × 10⁻³, giving a total of 1.94 × 10⁻³.]
Contribution                Unavailability   Availability
City-gates                  3.04 × 10⁻³      99.7%
External leakage            1.12 × 10⁻⁴      99.9%
ESD valves on pipeline      1.94 × 10⁻³      99.8%
Total                       5.09 × 10⁻³      99.5%
TABLE 10.7. Summary of Results

Type of Failure                           Total Failure Rate   MTBF       Unavailability   Availability   Relative Contribution to
                                          (per year)           (Months)                                   System Unavailability
Equipment failure in city-gate
metering station                          3.11                 3.9        3.04 × 10⁻³      99.7%          59.7%
External leakage from pipeline,
lateral, or city-gate metering station    1.78 × 10⁻²          672        1.12 × 10⁻⁴      99.9%          2.2%
False trip of ESD valves on
pipeline or lateral                       1.84                 6.5        1.94 × 10⁻³      99.8%          38.1%
TOTAL                                     4.97                 2.4        5.09 × 10⁻³      99.5%          100%
Units Running   Units Offline
ABC             0
BC              A
AC              B
AB              C
A               BC
B               AC
C               AB
0               ABC

Units Offline    Number of Independent Combinations   Probability of Occurrence (Percent of Time)
None offline     3!/(0!(3 − 0)!) = 1                  91.2673%
One offline      3!/(1!(3 − 1)!) = 3                  8.4681%
Two offline      3!/(2!(3 − 2)!) = 3                  0.2619%
Three offline    3!/(3!(3 − 3)!) = 1                  0.0027%
TOTAL                                                 100.0000%
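The percentages in the table are binomial probabilities. The sketch below reproduces them using a per-unit offline probability of 3%, a value inferred from the tabulated figures rather than stated in the text:

```python
from math import comb

def outage_distribution(n_units, p_offline):
    """Probability that exactly k of n independent, identical units are
    offline at a given time: C(n, k) * p**k * (1 - p)**(n - k)."""
    return [comb(n_units, k) * p_offline ** k * (1 - p_offline) ** (n_units - k)
            for k in range(n_units + 1)]

dist = outage_distribution(3, 0.03)
for k, p in enumerate(dist):
    print(f"{k} offline: {p:.4%}")
print(f"total: {sum(dist):.4%}")    # the four cases are exhaustive
```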
REFERENCES
Abernathy, R. B., 2006. The New Weibull Handbook, Fifth Edition, published and distributed by Robert B. Abernathy.
APSC, 2002. Alyeska Pipeline Service Company, http://www.alyeska-ipe.com/Inthenews/Weeklynews/2002/ar011102.html.
Barringer, P., 2003. Predict Future Failures From Your Maintenance Records, International Maintenance Conference, December 7–10, 2003, pp. 1–12.
Barringer, P., 2004. Predict Failures: Crow-AMSAA 101 and Weibull 101, International Mechanical Engineering Conference, Kuwait, December 5–8, 2004, pp. 1–14.
Barringer, P., 2010. www.barringer1.com.
BSI (British Standards Institution), 1994. BS 5760-2: Reliability of Systems, Equipment and Components, British Standards Institution, UK.
Center for Chemical Process Safety (CCPS), 1989. Guidelines for Process Equipment Reliability Data, American Institute of Chemical Engineers, New York, NY.
Center for Chemical Process Safety (CCPS), 1992. Guidelines for Hazard Evaluation Procedures, American Institute of Chemical Engineers, New York, NY.
Chmilar, W.S., 1996. Pipeline Reliability Evaluation and Design Using the Sustainable Capacity Methodology, Proc., ASME Int. Pipeline Conf., Calgary, Alberta, Canada, Jun., pp. 803–810.
Clement, P.L., 1990. Event Tree Analysis, 2nd Ed., Sverdrup.
Ekstrom, T., 1992. Reliability Measurements for Gas Turbine Warranty Situations, Proc., Int. Gas Turbine and Aeroengine Congress and Symp. and Exposition, 92-GT-208, Cologne, Germany, Jun.
Energy Solutions International, 2004. PipelineStudio TGNET, Houston, Texas.
Green, A.E., and Bourne, A.J., 1966. Safety Assessment With Reference to Automatic Protective Systems for Nuclear Reactors, U.K. Atomic Energy Authority Health and Safety Branch, Rep. AHSB (S) R117, Part 3, Risley, Lancashire.
Heilmann, P.C., and Byran, N.N., 1978. How to Use Probability Theory in Pipeline Design, Pipeline and Gas Journal, Mar.
IEC (International Electrotechnical Commission), 2003. Standards 60300, Dependability Management Systems.