
Multi-objective reliability redundancy allocation in an interval environment


using particle swarm optimization

Enze Zhang*, Qingwei Chen

School of Automation, Nanjing University of Science and Technology, Nanjing 210094, China
*Corresponding author. Tel.: +8615850577123.
E-mail addresses: yzzez8986@gmail.com (E. Zhang).

Abstract
Most of the existing works addressing reliability redundancy allocation problems are based on the assumption of

fixed reliabilities of components. In real-life situations, however, the reliabilities of individual components may be

imprecise, most often given as intervals, under different operating or environmental conditions. This paper deals

with reliability redundancy allocation problems modeled in an interval environment. An interval multi-objective

optimization problem is formulated from the original crisp one, where system reliability and cost are

simultaneously considered. To render the multi-objective particle swarm optimization (MOPSO) algorithm capable

of dealing with interval multi-objective optimization problems, a dominance relation for interval-valued functions

is defined with the help of our newly proposed order relations of interval-valued numbers. Then, the crowding

distance is extended to the multi-objective interval-valued case. Finally, the effectiveness of the proposed approach

has been demonstrated through two numerical examples and a case study of a supervisory control and data acquisition (SCADA) system in water resource management.

Keywords: Reliability optimization; Multi-objective optimization; Particle swarm optimization; Interval number

1. Introduction

The utilization of redundancy plays an important role in enhancing the reliability of a system.

The redundancy allocation problem (RAP) involves the selection of components and a

system-level design configuration to simultaneously optimize some objective functions, such as

system reliability, cost and weight, given certain design constraints. The incorporation of

redundant components improves system reliability, but can also increase system cost, weight, etc.

Thus, a RAP frequently encounters a trade-off between maximization of system reliability and

minimization of system cost and weight.

Traditionally, the RAP has been solved as a single objective optimization problem with the

goal to maximize the system reliability subject to several constraints such as cost, weight, etc.

Various methodologies have been proposed to handle it, e.g., dynamic programming [1], integer

programming [2], column generation [3], branch and bound [4], and heuristic/meta-heuristic

approaches such as genetic algorithm [5], variable neighborhood search [6], bacterial-inspired

evolutionary algorithm [7], bee colony algorithm [8], Tabu search [9], swarm optimization [10-13],

and hybrid algorithm [14].

Multiple objectives are often considered simultaneously in practical problems concerning system design, which makes the multi-objective formulations of the reliability optimization

problem quite natural. Multi-objective approaches for the RAP can be found in [15-22].

Khalili-Damghani et al. [15] recently proposed a decision support system for solving

multi-objective RAPs. Many studies have demonstrated the effectiveness of the

heuristic/meta-heuristic algorithms in solving multi-objective RAPs. Safari [20] proposed a

variant of the non-dominated sorting genetic algorithm (NSGA-II) to solve a novel mathematical

model for multi-objective RAPs. Zhang et al. [22] proposed a practical approach combining

bare-bones multi-objective particle swarm optimization (MOPSO) and sensitivity-based clustering

for solving multi-objective RAPs.

In spite of the existence of diverse studies that address reliability optimization problems,

most of the research studies are based on the assumption of fixed reliabilities of components. In

real-life situations, however, the reliabilities of individual components are often imprecise under

different operating and environmental conditions. The causes may be improper storage facilities,

the vagueness of human judgment and other factors relating to the environment. Moreover, the

vagaries of manufacturing processes make it difficult, if not impossible, to produce different

components with exactly identical reliabilities, and thus the issue is subject to uncertainty. To deal

with uncertainty in system reliability problems, stochastic programming [23,24], fuzzy

programming [25-27], interval programming [31-33] and robust optimization [28,29] are

frequently employed. Actually, reliability optimization incorporating uncertainty has become the

subject of many research efforts in the area of reliability engineering over the last decade. Soltani

[34] presented a comprehensive review on reliability optimization with both deterministic and

non-deterministic parameters.

In the stochastic or fuzzy approaches, the probability distributions or membership functions

of the parameters are required to be known a priori. Unfortunately, it is not an easy task to grasp

this information in real engineering applications. An alternative solution to this problem is to use

an interval-valued number to represent the imprecise quantity. As a coefficient, an interval signifies the extent of tolerance, or the region within which the parameter can possibly lie [35,36]. Some researchers have

addressed reliability optimization problems by considering interval-valued component reliabilities

(see, e.g., [31-33,37-45]). In the early years, the problem was formulated by Yokota et al. [37,38]

and Taguchi et al. [39,40] as a nonlinear integer programming problem and solved by using a GA.

Gupta et al. [33] formulated the RAP with interval reliabilities as an unconstrained integer

programming problem with interval coefficients and developed an advanced GA for solving it.

Bhunia et al. [32] solved a chance-constrained reliability stochastic optimization problem with

resource constraints considering the reliability of each component as an interval number. Sahoo et

al. [31] formulated the problem as constrained multi-objective optimization problems with the

help of their proposed order relations of interval-valued numbers, and then converted the same

into unconstrained single objective ones. Feizollahi et al. [41] proposed a robust deviation

framework to deal with the RAP with interval component reliabilities. More recently, Soltani et al.

[42] presented a redundancy allocation model with choices of a redundancy strategy and

component type and solved it using an interval programming approach. Sadjadi et al. [43]

considered a RAP with choices of active and cold standby strategies, where the components' time to failure

follows an Erlang distribution and formulated it through a Min-Max regret method. Roy et al. [44]

considered a multi-objective RAP with both fixed and interval-valued system parameters using

entropy as a constraint for the system stability. Despite their efforts, all these studies fail to find

the trade-off front of the problem since the problem is either formulated as or converted into a

single objective optimization problem, and a multi-objective optimization approach for obtaining

the trade-off curve of RAPs in an interval environment is still rather lacking.

This paper provides an alternative way to solve multi-objective reliability redundancy

allocation problems, modeled in an interval environment. Our motivation is to handle the interval

multi-objective case as-is and apply particle swarm optimization to obtain the trade-off curve.

Multi-objective reliability optimization problems with interval-valued objectives have been

formulated and then particle swarm optimization has been applied to solve the problem under a

number of constraints. To deal with imprecision, the imprecise Pareto dominance is defined,

which will be used to compare imprecise individuals. Then the crowding distance measure is

extended to handle imprecise objective functions. Finally, three numerical examples are solved

and the trade-off relationship between reliability and cost performance is analyzed.

The rest of the paper is organized as follows. A general interval multi-objective optimization

problem is introduced in Section 2. Section 3 describes the formulation of the interval RAP while

the proposed algorithm for solving the interval optimization problem is discussed in detail in

Section 4. Two numerical examples and a case study are presented in Section 5 to demonstrate the

effectiveness of the proposed algorithm. Finally, the conclusions and suggestions for future

research are provided in Section 6.

2. A general interval multi-objective optimization problem

Very often, real-world problems have several conflicting objectives. Even though many of them can be reduced to a single objective, such a reduction may not adequately represent the problem being faced, and considering multiple objectives often gives a better idea of the task. In that case, there is a vector of M (≥ 2) conflicting objective functions that must be traded off in some way. Multi-objective optimization is concerned with the minimization of y subject to a number of inequality and equality constraints:

min y(x) = (y_1(x), y_2(x), …, y_M(x))^T
s.t. g_i(x) ≤ 0, i = 1, 2, …, P                (1)
     h_j(x) = 0, j = 1, 2, …, Q

where the decision vectors x = (x_1, x_2, …, x_n)^T belong to the feasible region S ⊂ R^n, which is formed by the constraint functions.

Note that because y (x) is a vector, if any of the components of y (x) are competing, there

is no unique solution to this problem. Therefore, it is necessary to establish a criterion to determine what is considered an optimal solution, and this criterion is Pareto dominance.

Without loss of generality, in a minimization problem, for feasible solutions x_1 and x_2, x_1 is said to dominate x_2 (denoted by x_1 ≻ x_2) if and only if both of the following conditions are true:

x_1 is no worse than x_2 in all objectives, i.e., ∀ i = 1, 2, …, M: y_i(x_1) ≤ y_i(x_2);

x_1 is strictly better than x_2 in at least one objective, i.e., ∃ i ∈ {1, 2, …, M}: y_i(x_1) < y_i(x_2).
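As a minimal illustration (our own sketch, not code from the paper), the crisp Pareto dominance test for a minimization problem can be written as follows:

def dominates(y1, y2):
    # Return True if objective vector y1 Pareto-dominates y2 (minimization).
    no_worse = all(a <= b for a, b in zip(y1, y2))
    strictly_better = any(a < b for a, b in zip(y1, y2))
    return no_worse and strictly_better

# Example: (1.0, 2.0) dominates (1.0, 3.0) but not (0.5, 3.0).
print(dominates((1.0, 2.0), (1.0, 3.0)))   # True
print(dominates((1.0, 2.0), (0.5, 3.0)))   # False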

The above relationship is clearly suitable for precise objective values, but when it comes to

objective functions that are imprecise, represented as multidimensional intervals [50] (see Fig. 1),

the normal Pareto dominance relation ceases to work and needs to be extended, as will be discussed

later. In an interval environment, the original problem (1) is converted into an interval

multi-objective optimization problem as follows:

min y(x, c) = (y_1(x, c), y_2(x, c), …, y_M(x, c))^T
s.t. c_k = [c_kL, c_kR], k = 1, 2, …, K
     g_i(x) ≤ 0, i = 1, 2, …, P                (2)
     h_j(x) = 0, j = 1, 2, …, Q

where c = (c_1, c_2, …, c_K)^T is the vector of interval-valued parameters; c_kL and c_kR are the lower and upper bounds of the kth parameter; and y_i(x) = [y_iL(x), y_iR(x)] (i = 1, 2, …, M) is the ith interval-valued objective function.

Fig. 1. Normal and imprecise objective values

3. Problem formulation

The RAP pertains to a system of n subsystems arranged in series. The ith subsystem

consists of xi components arranged in parallel. Each component potentially varies in reliability,

cost, weight and other characteristics. The use of redundant components improves system

reliability, but also increases system cost and weight. The problem arises then regarding how to

optimally allocate redundant components. The typical structure of a series-parallel system is

illustrated in Fig. 2.
Fig. 2. A series-parallel system configuration

3.1. Assumptions

The basic assumptions for the RAP in an interval environment are as follows:

Reliability and cost of each component are imprecise and interval-valued.

Failures of components are mutually statistically independent.

The system will not be damaged when a component failure occurs.

All components are assumed to be non-repairable.

All components are assumed to have binary states, i.e. operating and failure.

3.2. Formulation of the interval RAP

In this paper, the RAP is modeled in an interval environment as a multi-objective

optimization problem, in which the reliabilities and costs of components are interval-valued.

Assuming that all components within the ith subsystem are of the same type, the system reliability R_S equals the product of the interval-valued reliabilities of all subsystems (see [31-33,35,36] for the

definition of the multiplication operator of interval numbers), i.e.


R_S(x) = [R_SL(x), R_SR(x)] = ∏_{i=1}^{n} [1 - (1 - r_iL)^{x_i}, 1 - (1 - r_iR)^{x_i}]                (3)

where R_SL(x) = ∏_{i=1}^{n} [1 - (1 - r_iL)^{x_i}] and R_SR(x) = ∏_{i=1}^{n} [1 - (1 - r_iR)^{x_i}]. The problem is to determine the

number of redundant components xi (i = 1,2,,n) that will maximize the system reliability and

minimize the system cost under given constraints. The mathematical formulation of the problem is

given below:
max [R_SL(x), R_SR(x)] = ∏_{i=1}^{n} [1 - (1 - r_iL)^{x_i}, 1 - (1 - r_iR)^{x_i}]
min [C_SL, C_SR]                                                   (4)
s.t. g_j(x) ≤ 0, j = 1, 2, …, m
     x_i ∈ Z⁺, i = 1, 2, …, n

where [R_SL, R_SR] and [C_SL, C_SR] are the interval-valued system reliability and system cost, respectively.
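As an illustration of Eq. (3) (a sketch under our own naming, not the authors' code), the interval-valued system reliability of a series-parallel system with identical components in each subsystem can be evaluated as follows:

def system_reliability(r_intervals, x):
    # r_intervals: list of (r_iL, r_iR) pairs; x: list of redundancy levels x_i.
    # Returns the interval [R_SL, R_SR] of Eq. (3).
    R_L, R_R = 1.0, 1.0
    for (r_lo, r_hi), xi in zip(r_intervals, x):
        R_L *= 1.0 - (1.0 - r_lo) ** xi   # pessimistic bound of subsystem i
        R_R *= 1.0 - (1.0 - r_hi) ** xi   # optimistic bound of subsystem i
    return R_L, R_R

# Example with two subsystems and two redundant components each:
print(system_reliability([(0.78, 0.82), (0.84, 0.85)], [2, 2]))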

4. Proposed approach

To develop an interval MOPSO algorithm for Problem (4), modifications are necessary in

those steps where imprecise solutions are compared. In the following subsections a dominance

relation for interval-valued objective functions is defined. Then, the crowding distance metric is

extended to handle interval objective functions.

4.1. Dominance relation for interval objective functions

As previously mentioned, the Pareto dominance relation is not suitable for interval-valued

objective functions. A dominance relation therefore has to be defined for the comparison of

individuals. Extending Pareto dominance to the interval-valued case, we face the difficulty of

comparison between two interval numbers. Regarding this, a new partial order relation that

compares intervals inside a single objective dimension is defined.

4.1.1. Order relation of interval numbers

An interval number is defined as A = [a_L, a_R] = {a : a_L ≤ a ≤ a_R, a ∈ R}, where a_L and a_R are the left and right limits on the real line R, respectively. If a_L = a_R, then A = [a, a] is a real number. Interval A can also be denoted by A = ⟨a_c, a_w⟩, where a_c and a_w are the mid-point and half-width of interval A, i.e., a_c = (a_L + a_R)/2 and a_w = (a_R - a_L)/2.
Extensive research on comparing and ranking interval numbers can be found in [36], which gives two approaches for comparing two interval numbers. The first describes an optimistic decision maker's preference index. The second defines strict and fuzzy preference orderings from a pessimistic decision maker's point of view. This approach is somewhat subjective, since the degree of pessimism of the decision maker must be specified beforehand. To overcome this issue, we propose a general definition of order relations that requires no extra knowledge about the underlying distributions or the decision maker's preferences.

Let A = [a_L, a_R] = ⟨a_c, a_w⟩ and B = [b_L, b_R] = ⟨b_c, b_w⟩ be two intervals; then, for a_c ≥ b_c, we define an order relation ≥_IM for which A ≥_IM B implies that A is superior to (or greater than) B:

(i) if a_w ≤ b_w, then A ≥_IM B holds;

(ii) if a_w > b_w, then A ≥_IM B ⇔ a_L ≥ b_L and a_R ≥ b_R.

It is to be noted that when A and B are incomparable using ≥_IM, neither interval is regarded as superior to the other. Clearly, the order relation ≥_IM is reflexive and transitive, but not symmetric.
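A minimal sketch of the ≥_IM comparison in Python is given below. Since part of condition (ii) is garbled in this preprint, the tie-breaking test used here (a_L ≥ b_L and a_R ≥ b_R) is our assumption rather than the authors' exact rule:

def interval_ge(a, b):
    # Return True if interval a = (aL, aR) is superior to b = (bL, bR) under >=_IM.
    aL, aR = a
    bL, bR = b
    ac, aw = (aL + aR) / 2.0, (aR - aL) / 2.0   # mid-point and half-width of A
    bc, bw = (bL + bR) / 2.0, (bR - bL) / 2.0
    if ac < bc:                    # the definition is stated for ac >= bc
        return False
    if aw <= bw:                   # case (i): smaller (or equal) half-width
        return True
    return aL >= bL and aR >= bR   # case (ii): reconstructed condition (assumption)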

4.1.2. Imprecise Pareto dominance

Applying ≥_IM, a relation ≻_IP extending the standard Pareto dominance is proposed for the multi-objective interval-valued case. Assuming that the mth objective functions of solutions x_i and x_j are denoted by y_m(x_i) and y_m(x_j), respectively, the imprecise Pareto dominance relation ≻_IP can be defined as:

x_i ≻_IP x_j  ⇔  ∀ m ∈ {1, 2, …, M}: y_m(x_i) ≥_IM y_m(x_j)  and  ∃ m ∈ {1, 2, …, M}: not (y_m(x_j) ≥_IM y_m(x_i))                (5)
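A sketch of this relation, built on the interval_ge helper above, is shown below. Treating "strictly better in at least one objective" as the failure of the reverse ≥_IM comparison is our reading of Eq. (5), stated here for objectives expressed in maximization form (a minimized objective such as cost can be handled by negating its bounds):

def ip_dominates(fx, fy, ge=interval_ge):
    # fx, fy: lists of interval objectives [(lo, hi), ...] of equal length.
    at_least_as_good = all(ge(a, b) for a, b in zip(fx, fy))
    strictly_better = any(not ge(b, a) for a, b in zip(fx, fy))
    return at_least_as_good and strictly_better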

4.2. Crowding distance for interval objective functions

To produce well-distributed Pareto fronts, the global best position of each particle is selected

from the external repository with respect to the diversity of nondominated solutions. IP-MOPSO

employs the crowding distance to estimate the diversity of nondominated solutions. However, the

crowding distance first introduced in [46] is not suitable for interval-valued objective functions,

and thus, a novel crowding distance extending the previous one needs to be developed.

Considering two optimal solutions xi and x j , the distance between them can be calculated

as follows:


D(x_i, x_j) = [ Σ_{m=1}^{M} d(f_m(x_i), f_m(x_j)) ] / [ V(x_i) + V(x_j) + 1 ]                (6)

where M indicates the number of objective functions; f_m(x_i) = [f_mL(x_i), f_mR(x_i)] and f_m(x_j) = [f_mL(x_j), f_mR(x_j)] denote the interval values of the mth objective function at x_i and x_j; and d(f_m(x_i), f_m(x_j)) denotes the distance between f_m(x_i) and f_m(x_j), which is calculated as follows:

d(f_m(x_i), f_m(x_j)) = sqrt{ [ (f_mL(x_i) - f_mL(x_j))^2 + (f_mR(x_i) - f_mR(x_j))^2 ] / 2 }                (7)

V(x_i) and V(x_j) denote the volumes of the corresponding hyper-cuboids.

Assuming that x_j and x_k are the two nearest points to x_i, the crowding distance of x_i, denoted by CD(x_i), is defined as the sum of D(x_i, x_j) and D(x_i, x_k):

CD(x_i) = D(x_i, x_j) + D(x_i, x_k)                (8)

It is worth noting that the solutions that lie on the boundary of each dimension of the objective

space are assigned an infinite distance value.

It can be observed from Eq. (8) that if the interval objectives regress to points, the distance between x_i and x_j takes the following form:

D(x_i, x_j) = Σ_{m=1}^{M} | f_m(x_i) - f_m(x_j) |                (9)

Accordingly, the crowding distance of x_i becomes:

CD(x_i) = (1/2) Σ_{m=1}^{M} | f_m(x_j) - f_m(x_k) |                (10)

which is consistent with the normal one for the non-interval case. From this point of view, the crowding distance introduced in [46] is a special case of the one defined here.
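The following sketch (our own helper functions, not the authors' code) implements Eqs. (6)-(8) for interval objective vectors stored as lists of (lower, upper) pairs:

import math

def interval_distance(F1, F2):
    # Eq. (6): summed per-objective distances of Eq. (7), scaled by the
    # hyper-cuboid volumes V(x_i) and V(x_j).
    d = sum(math.sqrt(((a_lo - b_lo) ** 2 + (a_hi - b_hi) ** 2) / 2.0)
            for (a_lo, a_hi), (b_lo, b_hi) in zip(F1, F2))
    vol = lambda F: math.prod(hi - lo for lo, hi in F)   # volume of the hyper-cuboid
    return d / (vol(F1) + vol(F2) + 1.0)

def crowding_distance(Fi, Fj, Fk):
    # Eq. (8): Fj and Fk are the objective vectors of the two nearest neighbours of x_i.
    return interval_distance(Fi, Fj) + interval_distance(Fi, Fk)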

4.3. The IP-MOPSO algorithm for interval RAPs

4.3.1. Particle swarm optimization

Particle swarm optimization (PSO), first introduced by Kennedy and Eberhart in 1995 [47], is a

stochastic global optimization technique inspired by the paradigm of birds flocking. In the PSO, a

swarm consists of a set of particles that fly around in a multi-dimensional search space. Each

particle represents a potential solution of an optimization problem. During the flight, each particle

adjusts its position according to the experience of itself and other neighboring particles, making

use of the best position they encounter. Considering the i th particle in the swarm at iteration t ,

the personal best position, known as pbesti , is the best position found so far by itself, while the

global best position, known as gbest , is the global best position found so far by neighbors of this
particle, and its position and velocity are denoted by x_i^t and v_i^t, respectively. Then, the position and velocity of this particle at iteration t+1 can be expressed using the following equations:

v_i^{t+1} = w v_i^t + c_1 r_1 (pbest_i - x_i^t) + c_2 r_2 (gbest - x_i^t)
x_i^{t+1} = x_i^t + v_i^{t+1}                (11)

where c_1 and c_2 are the acceleration coefficients, which control the influence of pbest_i and gbest on the search process; r_1 and r_2 are random numbers between 0 and 1; and w is the inertia weight used to control the impact of the previous velocity on the current one, which influences the particle's ability to explore the search space.

4.3.2. Dynamic inertia weight and acceleration coefficient

It is found that the performance of a PSO algorithm could be improved by linearly varying

the inertia weight w [48]. We adopt in IP-MOPSO the time-varying inertia weight proposed by

Shi and Eberhart [48], which is defined as follows:


w = (w_1 - w_2) (T_max - t) / T_max + w_2                (12)

where w1 and w2 are the initial and final values of the inertia weight respectively, t is the

current iteration number, and Tmax is the maximum number of iterations.

Besides this inertia weight scheme, time-varying acceleration coefficients are applied with the aim of balancing exploration and exploitation. This is achieved by linearly changing the coefficients c_1 and c_2 over time, as suggested in [49]:

c_1 = (c_1f - c_1i) t / T_max + c_1i                (13)
c_2 = (c_2f - c_2i) t / T_max + c_2i                (14)

where c_1i and c_1f are the initial and final values of c_1, and c_2i and c_2f are the initial and final values of c_2. Here, t and T_max have the same meaning as in Eq. (12).
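A sketch of one particle update combining Eqs. (11)-(14) is given below. The parameter defaults are illustrative assumptions (the paper reports 2.5, 0.5, 0.9 and 0.4 for c_1, c_2, w_1 and w_2 in Example 1), and for the integer-coded RAP the updated positions would additionally have to be rounded and repaired to feasible integer values:

import random

def pso_step(x, v, pbest, gbest, t, T_max,
             w1=0.9, w2=0.4, c1i=2.5, c1f=0.5, c2i=0.5, c2f=2.5):
    # Illustrative defaults above are assumptions, not the paper's exact settings.
    w  = (w1 - w2) * (T_max - t) / T_max + w2     # Eq. (12)
    c1 = (c1f - c1i) * t / T_max + c1i            # Eq. (13)
    c2 = (c2f - c2i) * t / T_max + c2i            # Eq. (14)
    new_x, new_v = [], []
    for xd, vd, pd, gd in zip(x, v, pbest, gbest):
        r1, r2 = random.random(), random.random()
        vd_new = w * vd + c1 * r1 * (pd - xd) + c2 * r2 * (gd - xd)   # Eq. (11)
        new_v.append(vd_new)
        new_x.append(xd + vd_new)
    return new_x, new_v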

4.3.3. Constraints handling approach

Since problem (4) is a constrained optimization problem with interval-valued objectives, a constraint-handling scheme needs to be incorporated to deal with the constraints. The Big-M penalty method introduced in [33] is adopted in the IP-MOPSO algorithm; it converts the constrained optimization problem with interval-valued objective functions into an unconstrained one by penalizing infeasible solutions with a large positive number M, written in interval form as [M, M].

Let us consider the constrained optimization problem

Maximize [f_L, f_R]
s.t. g_j(x) ≤ 0, j = 1, 2, …, m.                (15)

Let S = {x : g_j(x) ≤ 0, j = 1, 2, …, m} be the feasible space; then the transformed optimization problem is as follows:

Maximize f̂(x) = f(x) + φ(x)                (16)

where φ(x) = [0, 0] if x ∈ S, and φ(x) = -f(x) - [M, M] otherwise.

Clearly, problem (16) is an unconstrained optimization problem with interval objectives.
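A sketch of this penalty scheme (with an illustrative value for M, which the paper only requires to be large) is:

BIG_M = 1.0e6   # illustrative choice of the Big-M constant (assumption)

def penalized_objective(f_interval, constraint_values):
    # Eqs. (15)-(16): an infeasible solution has its interval objective
    # replaced by [-M, -M] for a maximization objective.
    feasible = all(g <= 0 for g in constraint_values)
    return f_interval if feasible else (-BIG_M, -BIG_M)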

4.3.4. Description of the proposed IP-MOPSO algorithm

The proposed algorithm, IP-MOPSO, which adopts the imprecise Pareto relation and

extended crowding distance, is implemented as follows.

Step 1: Initialize the population of particles. The personal best position is set as the particle

itself. Evaluate each of the particles using Eq. (16) and store the nondominated ones in the

external repository Rep, applying the ≻_IP relation.

Step 2: Randomly select the global best position from the top 10% of the solutions in Rep, ranked by the crowding distance calculated using Eq. (8).

Step 3: Update the velocity and position of the particles according to Eq. (11). A mutation operator whose action range narrows over time is adopted to prevent premature convergence to a false Pareto front [22].

Step 4: Evaluate each of the particles and store the nondominated ones in the external

repository Rep, applying the ≻_IP relation.

Step 5: Update the personal best position of the particles. For every particle, the dominating one of its current position and its previous personal best position, if any, becomes the new personal best position; otherwise, one of the two is selected at random.

Step 6: Update the external repository based on the crowding distance suitable for

interval-valued objectives. This update consists of inserting all the currently nondominated

solutions into the repository and eliminating the dominated ones. Since the size of the

repository is limited, whenever it gets full, the particles located in less populated regions of the objective space, i.e., the N_r solutions with the largest crowding distances, are kept in the

repository.

Step 7: Increase the iteration counter. If the maximum number of cycles is reached, the

algorithm is terminated. Otherwise, go to Step 2.

The IP-MOPSO can be outlined using the pseudo-code below:

Algorithm IP-MOPSO
1:  t ← 0
2:  FOR i ← 1 TO nPop
3:      (x_i, gbest_i, pbest_i) ← Initialize( )      //* initialize the population Pop_0 *//
4:  ENDFOR
5:  F(Pop_0) ← Evaluate(Pop_0)                       //* evaluate each of the particles in Pop_0 *//
6:  Rep ← GetNDParticles(Pop_0)                      //* store the nondominated particles in the repository using ≻_IP *//
7:  WHILE t < T_max DO
8:      FOR i ← 1 TO nPop
9:          gbest_i ← GetGbest( )
10:         pbest_i ← GetPbest( )
11:         x_i ← UpParticle(x_i, gbest_i, pbest_i)
12:         x_i ← Mutate(x_i)
13:     ENDFOR
14:     F(Pop_{t+1}) ← Evaluate(Pop_{t+1})
15:     NDParticles ← GetNDParticles(Pop_{t+1})
16:     Rep ← GetNDParticles(Rep ∪ NDParticles)      //* update the external repository *//
17:     IF |Rep| > nRep THEN Rep ← Prune(Rep)        //* |Rep|: number of particles in Rep *//
18:     t ← t + 1
19: ENDWHILE

5. Numerical examples

Three examples are considered to illustrate and validate the performance of the proposed

techniques for solving multi-objective RAPs with interval-valued reliabilities of components. The

values of the parameters considered in the third example are selected from a real case study.

5.1. Two numerical examples

Example 1: The first example, taken from [33], consists of five subsystems connected in series. For each subsystem, only a single component type is available.


Maximize [R_SL, R_SR] = ∏_{i=1}^{5} [1 - (1 - r_iL)^{x_i}, 1 - (1 - r_iR)^{x_i}]

Minimize [C_SL, C_SR] = Σ_{i=1}^{5} [C_iL, C_iR] (x_i + exp(x_i / 4))

s.t.                                                              (17)
g_1(x) = Σ_{i=1}^{5} P_i x_i^2 - b_1 ≤ 0
g_2(x) = Σ_{i=1}^{5} W_i x_i exp(x_i / 4) - b_2 ≤ 0

with x_i (i = 1, 2, 3, 4, 5) being a nonnegative integer, where the values of C_i, P_i, W_i, b_1 and b_2 are listed in Table 1.

Table 1
Data for example 1
i 1 2 3 4 5
ri [0.78,0.82] [0.84,0.85] [0.87,0.91] [0.63,0.66] [0.74,0.76]
Ci [6,8] [5,8] [3,6] [6,9] [3,6]
Pi 1 2 3 4 2
Wi 7 8 8 6 9

b1 110, b2 200.
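For reference, the sketch below (our own helper, not the authors' code) evaluates the interval objectives and constraints of Eq. (17) with the Table 1 data; it reproduces, up to rounding, solution no. 1 of Table 2:

import math

r = [(0.78, 0.82), (0.84, 0.85), (0.87, 0.91), (0.63, 0.66), (0.74, 0.76)]
C = [(6, 8), (5, 8), (3, 6), (6, 9), (3, 6)]
P = [1, 2, 3, 4, 2]
W = [7, 8, 8, 6, 9]
b1, b2 = 110, 200

def evaluate(x):
    R_L = math.prod(1 - (1 - lo) ** xi for (lo, _), xi in zip(r, x))
    R_R = math.prod(1 - (1 - hi) ** xi for (_, hi), xi in zip(r, x))
    C_L = sum(lo * (xi + math.exp(xi / 4)) for (lo, _), xi in zip(C, x))
    C_R = sum(hi * (xi + math.exp(xi / 4)) for (_, hi), xi in zip(C, x))
    g1 = sum(p * xi ** 2 for p, xi in zip(P, x)) - b1
    g2 = sum(w * xi * math.exp(xi / 4) for w, xi in zip(W, x)) - b2
    return (R_L, R_R), (C_L, C_R), (g1, g2)

print(evaluate([1, 1, 1, 1, 1]))   # reliability ~ [0.2657, 0.3181], cost ~ [52.53, 84.51]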

The problem was solved using the developed algorithm, IP-MOPSO, with a population size

of 30, an archive size of 15 and 50 iterations. The values of c1 , c2 , w1 and w2 have been taken

as 2.5, 0.5, 0.9 and 0.4 respectively. Table 2 presents 15 of the solutions found with their

respective system reliability and cost. The range of system reliability is from [0.2657, 0.3181] to

[0.8839, 0.9132]. Correspondingly, system cost ranges from [52.5326, 84.5089] to [105.9448,

168.7731]. These ranges indicate the extent of the generated Pareto front. This Pareto front provides

decision-makers different options for system implementation. The results obtained are compared

to the ones from [33], which are listed in Table 3.

From Tables 2 and 3, we observe that solution no. 3# is the same as solution no. 2 obtained by IP-MOPSO, which is indicated in italic type. Solution no. 1# is clearly dominated by solutions no. 4 and 13, shown in boldface, and solution no. 2# is dominated by solution no. 13. Fig. 3 further compares the solutions obtained by IP-MOPSO with those from [33], where it can be seen that IP-MOPSO performs better.

Table 2
Results obtained by IP-MOPSO of Example 1
Solution no. x's Reliability Cost
1 [1,1,1,1,1] [0.2657,0.3181] [52.5326,84.5089]
2 [3,2,2,3,3] [0.8839,0.9132] [105.9448,168.7731]
3 [1,1,1,1,2] [0.3348,0.3945] [56.6267,92.6971]
4 [2,1,1,2,2] [0.5596,0.6238] [73.0030,115.8969]
5 [2,2,2,3,3] [0.8502,0.8888] [97.1351,157.0269]
6 [2,2,2,3,2] [0.8069,0.8494] [92.7303,148.2172]
7 [1,1,1,2,2] [0.4587,0.5286] [64.8148,104.9794]
8 [2,1,2,2,2] [0.6324,0.6799] [77.0971,124.0851]
9 [2,2,2,2,2] [0.7336,0.7819] [83.9206,135.0027]
10 [2,2,3,3,3] [0.8629,0.8954] [101.5399,165.8365]
11 [1,1,2,1,2] [0.3784,0.4300] [60.7208,100.8853]
12 [2,2,2,2,3] [0.7729,0.8182] [88.3254,143.8124]
13 [1,1,2,2,2] [0.5184,0.5762] [68.9089,113.1676]
14 [2,1,1,1,2] [0.4085,0.4655] [64.8148,103.6147]
15 [2,1,2,2,3] [0.6663,0.7115] [81.5019,132.8948]

Table 3
Results obtained in [33] of Example 1
Solution no. x's Reliability Cost
1# [2,1,2,1,3] [0.4864,0.5310] [73.3183,120.6125]
2# [1,2,2,1,3] [0.4625,0.5175] [71.9491,120.6125]
3# [3,2,2,3,3] [0.8839,0.9132] [105.9448,168.7731]
Fig. 3. Comparison of results obtained by both algorithms

Example 2: The second example considers a system consisting of three subsystems, with an

option of five, four and five types of components in each subsystem, respectively. The maximum

number of components is eight in each subsystem.


Maximize [R_SL, R_SR] = ∏_{i=1}^{3} [ 1 - ∏_{j=1}^{m_i} (1 - r_ijL)^{x_ij}, 1 - ∏_{j=1}^{m_i} (1 - r_ijR)^{x_ij} ]

Minimize [C_SL, C_SR] = Σ_{i=1}^{3} Σ_{j=1}^{m_i} [C_ijL, C_ijR] x_ij

s.t.                                                              (18)
1 ≤ Σ_{j=1}^{m_i} x_ij ≤ n_max,i, ∀ i = 1, 2, 3
x_ij ∈ {0, 1, 2, …, n_max,i}

where x_ij is the decision variable denoting the number of components of type j used in subsystem i; m_i is the number of available component types for subsystem i; n_max,i is

the user-defined maximum number of components used in subsystem i which is eight in this

case. Table 4 defines the component choices for each subsystem.

Table 4
Component choices of example 2
Component type j Subsystem i
1 2 3
rij cij rij cij rij cij
1 [0.93,0.95] [8,10] [0.96,0.98] [11,13] [0.95,0.97] [9,11]
2 [0.90,0.92] [5,7] [0.85,0.87] [2,4] [0.88,0.90] [5,7]
3 [0.88,0.90] [5,7] [0.69,0.71] [1,3] [0.71,0.73] [3,5]
4 [0.74,0.76] [2,4] [0.65,0.67] [1,3] [0.70,0.72] [2,4]
5 [0.71,0.73] [1,3] [0.66,0.68] [1,3]
Fig. 4. Pareto optimal front obtained for Example 2 (system cost vs. reliability)

Fig. 5. Pareto optimal front generated by trade-off points (average cost vs. average reliability; point A corresponds to system cost minimization and point C to system reliability maximization)

For this case, IP-MOPSO was run considering a population size of 50, and 100 generations.

Fig. 4 shows a sample population plot of IP-MOPSO, where it can be observed that the solutions

are quite diverse and form a broad Pareto optimal front. To better illustrate the trade-off

relationship between reliabilities and costs, Fig. 5 presents the Pareto optimal front generated by the trade-off points, with the mid-points of the intervals representing the objective values. Each point on this

frontier represents an optimal design configuration with different system reliabilities and costs.

Configuration A, as labeled in Fig. 5, provides the least expensive system configuration, which disregards the reliability requirement. On the contrary, configuration C represents the design of maximum reliability, which leads to the highest cost. Moreover, there is one obvious turning point on the Pareto frontier, labeled point B. As observed, a much larger reliability gain is obtained when moving from point A to point B than when moving from point B to point C. In other words, passing from solution A to B brings only a small increase in system cost but a large improvement in system reliability.

Table 5
Solutions corresponding to three trade-off points of Example 2
Sol. Points x's Reliability Cost
A [0,0,0,0,1,0,0,1,0,0,0,0,0,1] [0.323334000, 0.352444000] [3,9]
B [0,0,0,2,2,0,2,0,0,1,0,0,0,3] [0.960482384, 0.969989817] [15,35]
C [0,6,1,1,0,5,1,1,0,3,5,0,0,0] [0.999999961, 0.999999993] [147,193]

A decision-maker can pick any point from the Pareto front according to their specific

design criteria or interest. Once a design point is selected, the system configuration behind it can

be obtained. Table 5 illustrates the solutions representing the design configurations corresponding

to points A , B and C , as well as the respective reliability and cost values.

A natural question now arises: does the imprecise-Pareto-dominance-based MOPSO perform better than a standard, distribution-assuming one? To investigate this point, we choose DA-MOPSO (distribution-assuming MOPSO) for comparison purposes; it handles imprecision by assuming a uniform distribution within each interval objective, and the expected value of the objective vector is then used for optimization, so that the algorithm reduces to a normal MOPSO.

Both algorithms were run for 100 generations with a population size of 50. Results were

compared using the following four performance metrics.

Hypervolume metric (S): The S metric was introduced by Limbourg and Aponte [50] to reflect the approximation performance of a nondominated set X, and it also takes an interval form. It measures the volume that X dominates, restricted by a reference point y_ref.

Spacing metric ( SP ) : To evaluate the distribution of solutions throughout the found Pareto

front, we introduce a metric that is an extension of the spacing metric originally proposed in [51].

The spacing or SP metric can be defined as:

SP = sqrt{ (1/(n-1)) Σ_{i=1}^{n} ( D̄ - D_i )^2 }                (19)

where D_i = min_j { D(x_i, x_j) | j = 1, 2, …, n, j ≠ i }, n is the number of nondominated solutions, and D̄ is the mean value of all D_i. A value of zero for this measure indicates that all the nondominated solutions found are equidistantly spaced.

Two-set coverage metric (C): In order to evaluate the closeness of the Pareto front found to the true one, which is unknown in advance, we extend the two-set coverage (C) metric originally proposed in [52] to the imprecise case. This metric takes a pair of nondominated solution sets X and Y as inputs. Applying the ≻_IP relation, it can be defined as:

C(X, Y) = | { y ∈ Y | ∃ x ∈ X : x ≻_IP y or x = y } | / | Y |                (20)

where |·| denotes the number of members of a solution set; C(X, Y) = 1 indicates that all solutions of Y are dominated by or equal to some solutions of X.

Imprecision metric (I): Another performance measure considered for the imprecise case is the amount of uncertainty in a population. This can be measured using the I metric [50], defined as the added volume of all solutions in the front.
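Sketches of the extended C and SP metrics, reusing the ip_dominates and interval_distance helpers outlined in Section 4, are given below (our own code, provided only to make the definitions concrete):

import math

def coverage(X, Y):
    # Eq. (20): fraction of Y that is dominated by, or equal to, some member of X.
    covered = sum(1 for y in Y if any(ip_dominates(x, y) or x == y for x in X))
    return covered / len(Y)

def spacing(front):
    # Eq. (19); `front` is a list of interval objective vectors, len(front) >= 2.
    n = len(front)
    D = [min(interval_distance(front[i], front[j]) for j in range(n) if j != i)
         for i in range(n)]
    D_bar = sum(D) / n
    return math.sqrt(sum((D_bar - d) ** 2 for d in D) / (n - 1))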

Table 6
Comparison results of the C, SP and I metrics.
C SP I
C (IP-*, DA-*) C(DA-*, IP-*) Mean Std. Mean Std.
IP-MOPSO 0.2613 0.0157 0.0036 0.1812 0.0038
DA-MOPSO 0.4706 0.0243 0.0025 0.3709 0.0072

* represents MOPSO

Fig. 6. Comparison results of the S metric (best-case and worst-case runs of IP-MOPSO and DA-MOPSO over the iterations)

Detailed plots of the S metric over the optimization run, shown in Fig. 6, indicate that IP-MOPSO seems to have a better convergence performance than DA-MOPSO. The C metric values in Table 6 reinforce this observation: while more than 26.1% of the solutions generated by IP-MOPSO are dominated by those of DA-MOPSO, nearly 47.1% of the solutions obtained by DA-MOPSO are dominated by those of IP-MOPSO. Besides, the mean and standard

deviation of the SP and I results over 30 independent simulation runs are given in Table 6.

These results indicate that the IP-MOPSO algorithm performs better than DA-MOPSO in terms of

diversity and imprecision.

The conclusion to draw from all these observations is that the imprecise Pareto dominance

based MOPSO (IP-MOPSO) is an interesting alternative to the distribution assuming one

(DA-MOPSO).

5.2. A real case study of SCADA system

Next, the proposed approach for solving interval-valued multi-objective RAP is illustrated

through the design of SCADA system in water resource management.

The SCADA system considered has three main sub-systems, i.e., modems, FEPs, and servers,

which work serially. Modems are responsible for communication between stations and the main

control center. FEPs collect data from stations via a proper protocol. Servers are used for acquiring information from the FEPs and sending it to the SCADA software [19].

In each subsystem, five types of redundant components can be used. The maximum number

of components is five in each subsystem.


The problem takes the same form as Eq. (18), with the user-defined maximum number of components n_max,i equal to five for each subsystem.

The reliabilities and costs of all the components are selected from the case study in [19], and

considered here with a variation of +/- 2 percent, as presented in Table 7.

Table 7
Component choices of example 3
Component Subsystem i
type j 1 (Server) 2 (FEP) 3 (Modem)
rij cij rij cij rij cij
1 [0.784,0.816] [98,102] [0.931,0.969] [490,510] [0.931,0.969] [1960,2040]
2 [0.882,0.918] [196,204] [0.882,0.918] [441,459] [0.588,0.612] [1764,1836]
3 [0.686,0.714] [78.4,81.6] [0.784,0.816] [392,408] [0.882,0.918] [2050,2150]
4 [0.735,0.765] [83.3,86.7] [0.735,0.765] [343,357] [0.931,0.969] [2131.5,2218.5]
5 [0.833,0.867] [147,153] [0.9506,0.9894] [514.5,535.5] [0.833,0.867] [2009,2091]

An integer-coded scheme is adopted, in which the number of redundant components of each type is used as the encoded element. Several elements compose a particle representing a candidate solution of the RAP. The structure of a particle is illustrated in Fig. 7, where x_ij represents the number of redundant components of type j in subsystem i. Each particle contains an integer-coded string of 15 elements, corresponding to 3 subsystems with an option of 5 component types in each subsystem. This integer-encoding scheme conveniently maps the particle representation to the RAP to be solved.

[ x_11 x_12 … x_15 | x_21 … x_25 | x_31 … x_35 ]
  subsystem 1 (Servers)   subsystem 2 (FEPs)   subsystem 3 (Modems)

Fig. 7. Encoding scheme for each particle
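A small sketch of how such a particle could be decoded into per-subsystem allocations (hypothetical helper, assuming 5 component types per subsystem as in Table 7):

def decode(particle, types_per_subsystem=(5, 5, 5)):
    # particle: flat list [x11, ..., x15, x21, ..., x25, x31, ..., x35].
    allocation, start = [], 0
    for m in types_per_subsystem:
        allocation.append(particle[start:start + m])
        start += m
    return allocation   # one list of counts per subsystem (Servers, FEPs, Modems)

print(decode([0, 2, 0, 0, 1, 1, 0, 0, 0, 2, 3, 0, 0, 0, 0]))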

Fig. 8. Pareto optimal front obtained by IP-MOPSO (reliability vs. cost)


Fig. 9. Pareto optimal front obtained by IP*-MOPSO (reliability vs. cost)

The Pareto optimal front generated by IP-MOPSO is shown in Fig. 8, where it can be seen that the solutions obtained are quite diverse and form a broad optimal set. The system reliability varies from [0.3162, 0.3566] to [0.99994403, 0.99998972], with the corresponding total cost varying from [2234.4, 2325.6] to [13254, 13796]. To further show the superiority of the proposed order relation for interval-valued numbers and the resulting Pareto dominance relation, a well-known imprecise Pareto relation proposed in [50], which adopts the interval order relation >_IN, is used for comparison purposes. To tell them apart, the MOPSO algorithm adopting this relation is named IP*-MOPSO, with all the other parts of the algorithm exactly the same as in IP-MOPSO. Fig. 9 shows the Pareto optimal front obtained by IP*-MOPSO. We observe from Figs. 8 and 9 that a wide range of values is covered by the solutions of both algorithms. However, it is evident that the solutions obtained using IP-MOPSO are better distributed along the trade-off curve, which means that IP-MOPSO performs better in terms of diversity of solutions.

Fig. 10. Pareto front with a lower bound of 0.8 for reliability

Table 8
Example system design configurations from the knee region (configuration diagrams not reproduced here)
Sol. No.    Reliability            Cost
19          [0.9956, 0.9993]       [7497, 7803]
13          [0.9971, 0.9998]       [7673.4, 7986.6]
43          [0.9951, 0.9990]       [6639.5, 6910.5]

A lower bound of 0.8 for reliability is set to identify the solutions more likely to be chosen by

decision-makers, which are plotted in Fig. 10. To further find the good compromises, solutions from

the knee region are considered. The knee region is formed by the most interesting solutions of the

Pareto front, i.e., those where a small improvement of one objective would lead to a large deterioration

in at least one other objective [18,25]. In this case, solutions from the knee region are those where a

small improvement of reliability would lead to a huge increase of cost. Table 8 illustrates three

solutions from this region representing system design configurations, with their respective

interval-valued reliability and cost.

6. Conclusion

The reliability redundancy allocation problem with interval-valued reliability of each

component has been discussed here, where the reliability and cost of a system are simultaneously

considered and properly traded off. A multi-objective particle swarm optimization algorithm,

which adopts the imprecise Pareto relation and extended crowding distance, is proposed for

solving it. A Pareto frontier can be obtained, which captures all possible types of system design

under certain design criteria and conditions, and thus provides a decision maker with a full set of design

configurations for implementation. For future research, this paper could be extended in four

directions:

1. When the number of objective functions is more than two, the proposed crowding distance

may result in losing spatial information. Thus, a first future research direction is to propose a

crowding distance which can reflect the spatial information well and works better when the

number of objectives increases.

2. It should be noted that the general idea presented here could also be applied to other

system structures such as k-out-of-n, circular, complex, and bridge. Hence, another future research

direction is investigating these different systems.

3. Researchers have rarely considered variability involved in the reliability optimization.

However, decision makers, generally risk-averse, prefer designs with a high reliability assured by

a low variability. Therefore, an important issue remains in the simultaneous optimization of

system reliability, system cost, and their associated variance.

4. The choice of multi-state and multi-choice components, as well as standby redundancy

strategy can be considered. Also, the model can be extended to allow component mixing that

makes the model more complicated.

Acknowledgement

The authors would like to thank the editor, the associate editor and the anonymous reviewers

for their insightful comments and suggestions that help us to improve the quality of the paper. This

work is supported by the National Natural Science Foundation of China under Grant No.

61333008.

References

[1] Yalaoui A, Châtelet E, Chu C. A new dynamic programming method for reliability & redundancy allocation in a parallel-series system. IEEE Trans Reliab 2005;54(2):254-61.
[2] Billionnet A. Redundancy allocation for series-parallel systems using integer linear programming. IEEE
Trans Reliab 2008;57:507-16.
[3] Zia L, Coit DW. Redundancy allocation for series-parallel systems using a column generation approach.
IEEE Trans Reliab 2010;59:706-17.
[4] Ha C, Kuo W. Reliability redundancy allocation: An improved realization for nonconvex nonlinear
programming problems. Eur J Oper Res 2006;171(1):24-38.
[5] Ardakan MA, Hamadani AZ. Reliability optimization of series-parallel systems with mixed redundancy
strategy in subsystems. Reliab Eng Syst Saf 2014;130:132-9.
[6] Liang YC, Chen YC. Redundancy allocation of series-parallel systems using a variable neighborhood search
algorithm. Reliab Eng Syst Saf 2007;92(3):323-31.
[7] Hsieh TJ. Hierarchical redundancy allocation for multi-level reliability systems employing a
bacterial-inspired evolutionary algorithm. Inf Sci 2014;288:174-93.
[8] Yeh W C, Hsieh TJ. Solving reliability redundancy allocation problems using an artificial bee colony
algorithm. Comput Oper Res 2011;38(11):1465-73.

[9] Ouzineb M, Nourelfath M, Gendreau M. Tabu search for the redundancy allocation problem of
homogenous series-parallel multi-state systems. Reliab Eng Syst Saf 2008;93(8):1257-72.
[10] Yeh WC. Orthogonal simplified swarm optimization for the series-parallel redundancy allocation
problem with a mix of components. Knowledge-Based Systems 2014;64:1-12.
[11] Huang CL. A particle-based simplified swarm optimization algorithm for reliability redundancy allocation
problems. Reliab Eng Syst Saf 2015;142:221-30.
[12] Wang Y, Li L. A PSO algorithm for constrained redundancy allocation in multi-state systems with bridge
topology. Comput Ind Eng 2014;68:13-22.
[13] Kong X, Gao L, Ouyang H, Li S. Solving the redundancy allocation problem with multiple strategy choices
using a new simplified particle swarm optimization. Reliab Eng Syst Saf 2015;144:147-58.
[14] Kanagaraj G, Ponnambalam SG, Jawahar N. A hybrid cuckoo search and genetic algorithm for
reliability-redundancy allocation problems. Comput Ind Eng 2013;66(4):1115-24.
[15] Khalili-Damghani K, Abtahi AR, Tavana M. A Decision Support System for Solving MultiObjective
Redundancy Allocation Problems. Qual Reliab Eng Int 2014;30(8):1249-62.
[16] Cao D, Murat A, Chinnam RB. Efficient exact optimization of multi-objective redundancy allocation
problems in series-parallel systems. Reliab Eng Syst Saf 2013;111:154-63.
[17] Zaretalab A, Hajipour V, Sharifi M, Shahriari MR. A knowledge-based archive multi-objective simulated
annealing algorithm to optimize series-parallel system with choice of redundancy strategies. Comput Ind Eng
2015;80:33-44.
[18] Dolatshahi-Zand A, Khalili-Damghani K. Design of SCADA water resource management control center
by a bi-objective redundancy allocation problem and particle swarm optimization. Reliab Eng Syst Saf
2015;133:11-21.
[19] Li ZJ, Liao HT, Coit DW. A two-stage approach for multi-objective decision making with applications to
system reliability optimization. Reliab Eng Syst Saf 2009;94:1585-92.
[20] Safari J. Multi-objective reliability optimization of series-parallel systems with a choice of redundancy
strategies. Reliab Eng Syst Saf 2012;108:10-20.
[21] Khalili-Damghani K, Abtahi AR, Tavana M. A new multi-objective particle swarm optimization method
for solving reliability redundancy allocation problems. Reliab Eng Syst Saf 2013;111(2):58-75.
[22] Zhang EZ, Wu YF, Chen QW. A practical approach for solving multi-objective reliability redundancy
allocation problems using extended bare-bones particle swarm optimization. Reliab Eng Syst Saf
2014;127:65-76.
[23] Zhao R, Liu B. Stochastic programming models for general redundancy-optimization problems. IEEE
Trans Reliab 2003;52(2):181-91.
[24] Tekiner-Mogulkoc H, Coit D W. System reliability optimization considering uncertainty: Minimization
of the coefficient of variation for series-parallel systems. IEEE Trans Reliab 2011;60(3):667-74.
[25] Wang S, Watada J. Modelling redundancy allocation for a fuzzy random parallel-series system. J
Comput Appl Math 2009;232(2):539-57.
[26] Garg H, Rani M, Sharma SP, Vishwakarma Y. Bi-objective optimization of the reliability-redundancy
allocation problem for series-parallel system. J Manuf Syst 2014;33(3):335-47.
[27] Soltani R, Sadjadi SJ. Reliability optimization through robust redundancy allocation models with
choice of component type under fuzziness. Proceedings of the Institution of Mechanical Engineers, Part
O: Journal of Risk and Reliability 2014;228(5):449-59.
[28] Feizollahi MJ, Ahmed S, Modarres M. The robust redundancy allocation problem in series-parallel
systems with budgeted uncertainty. IEEE Trans Reliab 2014;63(1):239-50.
[29] Feizollahi MJ, Soltani R, Feyzollahi H. The robust cold standby redundancy allocation in series-parallel
systems with budgeted uncertainty. IEEE Trans Reliab 2015;64(2):799-806.
[30] Do DM, Gao W, Song C, Tangramvong S. Dynamic analysis and reliability assessment of structures
with uncertain-but-bounded parameters under stochastic process excitations. Reliab Eng Syst Saf
2014;132:46-59.
[31] Sahoo L, Bhunia AK, Kapur PK. Genetic algorithm based multi-objective reliability optimization in
interval environment. Comput Ind Eng 2012;62(1):152-60.
[32] Bhunia AK, Sahoo L, Roy D. Reliability stochastic optimization for a series system with interval component reliability via genetic algorithm. Appl Math Comput 2010;216(3):929-39.
[33] Gupta RK, Bhunia AK, Roy D. A GA based penalty function technique for solving constrained
redundancy allocation problem of series system with interval valued reliability of components. J
Comput Appl Math, 2009;232(2): 275-84.
[34] Soltani R. Reliability optimization of binary state non-repairable systems: A state of the art survey. Int J
Ind Eng Comput 2014;5(3):339-64.
[35] Sengupta A, Pal TK, Chakraborty D. Interpretation of inequality constraints involving interval
coefficients and a solution to interval linear programming. Fuzzy Sets Syst 2001 ;119(1):129-38.

[36] Sengupta A, Pal TK. On comparing interval numbers. Eur J Oper Res 2000;127(1):28-43.
[37] Yokota T, Gen M, Taguchi T, et al. A method for interval 0-1 nonlinear programming problem using a genetic algorithm. Comput Ind Eng 1995;29(1):531-5.
[38] Yokota T, Gen M, Li Y, Kim CE. A genetic algorithm for interval nonlinear integer programming
problem. Comput Ind Eng 1996;31(3):913-7.
[39] Taguchi T, Ida K, Gen M. Method for solving nonlinear goal programming with interval coefficients using genetic algorithm. Comput Ind Eng 1997;33(3):597-600.
[40] Taguchi T, Yokota T. Optimal design problem of system reliability with interval coefficient using
improved genetic algorithms. Comput Ind Eng 1999;37(1):145-9.
[41] Feizollahi MJ, Modarres M. The robust deviation redundancy allocation problem with interval
component reliabilities. IEEE Trans Reliab 2012;61(4):957-65.
[42] Soltani R, Sadjadi SJ, Tavakkoli-Moghaddam R. Interval programming for the redundancy allocation
with choices of redundancy strategy and component type under uncertainty: Erlang time to failure
distribution. Appl Math Comput 2014;244:413-21.
[43] Sadjadi SJ, Soltani R. Minimum-Maximum regret redundancy allocation with the choice of redundancy
strategy and multiple choice of component type under uncertainty. Comput Ind Eng 2015 ;79:204-13.
[44] Roy P, Mahapatra BS, Mahapatra GS, Roy PK. Entropy based region reducing genetic algorithm for
reliability redundancy allocation in interval environment. Exp Syst Appl 2014;41:6147-60.
[45] Soltani R, Sadjadi SJ, Tavakkoli-Moghaddam R. Robust cold standby redundancy allocation for
nonrepairable series-parallel systems through Min-Max regret formulation and Benders decomposition
method. Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability
2014;228(3):254-64.
[46] Deb K, Pratap A, Agarwal S, Meyarivan T. A fast and elitist multiobjective genetic algorithm: NSGA-II.
IEEE Trans Evol Comput 2002;6(2):182-97.
[47] Kennedy J, Eberhart RC. Particle swarm optimization. In: Proceedings of the 4th IEEE International
conference on neural networks. Piscataway, NJ; 1995.p. 1942-8.
[48] Shi Y, Eberhart RC. Empirical study of particle swarm optimization. In: Proceedings of the 1999
Congress on Evolutionary Computation. Washington, DC; 1999. p. 1945-50.
[49] Ratnaweera A, Halgamuge S, Watson HC. Self-organizing hierarchical particle swarm optimizer with
time-varying acceleration coefficients. IEEE Trans Evol Comput 2004;8(3):240-55.
[50] Limbourg P, Aponte DES. An optimization algorithm for imprecise multi-objective problem functions.
In: Proceedings of the 2005 IEEE Congress on Evolutionary Computation. Edinburgh; 2005. p. 459-66.
[51] Schott JR. Fault tolerant design using single and multicriteria genetic algorithm optimization.
Cambridge: Massachusetts Institute of Technology, 1995.
[52] Zitzler E, Deb K, Thiele L. Comparison of multiobjective evolutionary algorithms: Empirical results.
Evol Comput 2000;8(2):173-95.

Highlights
We model the reliability redundancy allocation problem in an interval
environment.
We apply the particle swarm optimization directly on the interval values.
A dominance relation for interval-valued multi-objective functions is defined.
The crowding distance metric is extended to handle imprecise objective
functions.
