Evolutionary Many-objective Optimization based on Dynamical Decomposition

Xiaoyu He, Yuren Zhou, Zefeng Chen, and Qingfu Zhang

This work was supported in part by the National Natural Science Foundation of China (61472143, 61773410, 61673403), and in part by a grant from the ANR/RCC Joint Research Scheme sponsored by the Research Grants Council of the Hong Kong Special Administrative Region, China and the France National Research Agency (Project No. A-CityU101/16). Xiaoyu He, Yuren Zhou, and Zefeng Chen are with the School of Data and Computer Science & Collaborative Innovation Center of High Performance Computing, Sun Yat-sen University, Guangzhou, P. R. China. Qingfu Zhang is with the Department of Computer Science, City University of Hong Kong, Hong Kong. Yuren Zhou is also with the Department of Computer Science, City University of Hong Kong, Hong Kong. (E-mail: hxyokokok@foxmail.com (X. He), zhouyuren@mail.sysu.edu.cn (Y. Zhou), chzfeng@mail2.sysu.edu.cn (Z. Chen), and qingfu.zhang@cityu.edu.hk (Q. Zhang).) *Corresponding author: Yuren Zhou.

Abstract—Decomposition-based many-objective evolutionary algorithms generally decompose the objective space into multiple subregions with the help of a set of reference vectors. The resulting subregions are fixed since the reference vectors are usually pre-defined. When the optimization problem has a complicated Pareto front, this decomposition may decrease the algorithm performance. To deal with this problem, this paper proposes a dynamical decomposition strategy. Instead of using pre-defined reference vectors, solutions themselves are used as reference vectors. Thus, they are adapted to the shape of the Pareto front automatically. Besides, the subregions are produced one by one through successively bipartitioning the objective space. The resulting subregions are not fixed but dynamically determined by the population solutions as well as the subregions produced previously. Based on this strategy, a solution ranking method, named DDR, is proposed which can be employed in the mating selection and environmental selection of commonly used algorithm frameworks. Compared with those in other decomposition-based algorithms, DDR has the following properties: (1) no pre-defined reference vectors are required; (2) fewer parameters are involved; and (3) the ranking results can not only be utilized directly to select solutions but also serve as a secondary criterion in traditional Pareto-based algorithms. In this paper, DDR is equipped in two algorithm frameworks for handling many-objective optimization problems. Comparisons with five state-of-the-art algorithms on 31 widely used test problems are carried out to test the performance of the proposed approach. The experimental results have shown the effectiveness of the proposed approach in keeping a good trade-off between convergence and diversity.

Index Terms—Many-objective optimization, evolutionary algorithm, dynamical decomposition.

I. INTRODUCTION

In the real world, there are a variety of problems involving more than one conflicting objective. These problems, referred to as multiobjective optimization problems (MOPs), can be stated as follows:

    minimize   F(x) = (f_1(x), f_2(x), . . . , f_m(x))^T,
    subject to x ∈ Ω,

where x = (x_1, x_2, . . . , x_n)^T is the n-dimensional decision vector, Ω is the decision space, F : Ω → R^m consists of m real-valued objective functions, and R^m is called the objective space [1].

Unlike single-objective optimization, the target of solving an MOP is to find a set of trade-off solutions known as the Pareto front (PF) in the objective space. In the past two decades, a lot of multiobjective evolutionary algorithms (MOEAs) have been proposed to solve MOPs. These MOEAs have inherent superiority over classical methods in solving MOPs due to their capacity of finding a set of Pareto optimal solutions in a single run [2]. One of the most famous approaches is to rank the solutions based on Pareto-dominance to achieve convergence. Remarkable MOEAs based on Pareto-dominance include PAES [3], PESA-II [4], SPEA2 [5], and NSGA-II [6].

Recent studies have suggested that the conventional Pareto-based MOEAs are faced with difficulties when tackling MOPs with more than three objectives. These MOPs are known as many-objective optimization problems (MaOPs) [7]. The main challenge brought by MaOPs is the deterioration of selection pressure due to the increasing number of objectives [8]. A number of many-objective evolutionary algorithms (MaOEAs) have been proposed to deal with these problems. The most straightforward approach is to relax the dominance relation to enhance the selection pressure toward the PF [9]–[12]. Another approach is to design novel strategies in order to preserve the population diversity after applying traditional Pareto-based methods [13]–[15]. Recently proposed algorithms such as KnEA [16] also suggest further promoting convergence by exploring the knee region of the problem PF.

Indicator-based MaOEAs provide an alternative approach to handle MaOPs. Their core idea is to evaluate the solutions by using a single indicator. Having the capacity of measuring both convergence and diversity, the hypervolume (HV) [17] is usually used as the main indicator in some popular MaOEAs such as HypE [18] and SMS-EMOA [19]. These algorithms do not suffer from the selection pressure problem since they use the indicator values to guide the search. However, they do have their own issues. For example, calculating the HV indicator is very time consuming [20]; thus, these algorithms may be inefficient in high-dimensional objective spaces.
Moreover, they prefer knee points and have difficulties in producing a uniformly distributed population when the PF is not linear [21]. To solve the low-efficiency problem of HV calculation, computationally efficient indicators such as R2 [22] and Δp [23] have been studied and used in indicator-based MaOEAs [24], [25]. However, a pre-defined set of reference points or a subset of the real PF is usually required, which is also a challenging task. Recently, state-of-the-art MaOEAs such as BiGE [26], SRA [27], and 1by1EA [28] generally make use of multiple simple indicators to measure the convergence and diversity separately. While showing significant performance improvements, these algorithms introduce some problem-dependent parameters which should be tuned carefully.

As another non-Pareto-based algorithm, the multiobjective evolutionary algorithm based on decomposition (MOEA/D) [29] decomposes an MOP into a number of single-objective optimization problems (called subproblems). Usually, a pre-defined set of well-spread reference vectors is employed to facilitate this decomposition. A neighborhood structure is also constructed with the help of these reference vectors. Then, a subproblem is optimized by using the information mainly from its neighbor subproblems.

Though designed mainly for solving MOPs, the decomposition strategy used in MOEA/D has been considered as a promising tool in solving MaOPs. One main approach in recent studies focuses on improving the diversity management mechanism in existing MOEAs. These algorithms (e.g., NSGA-III [30], I-DBEA [31], and MOEA/D-DU [32]) try to measure the population diversity using the distance between the solutions and the reference vectors. Since the reference vectors are well-spread, this helps to maintain the population diversity in high-dimensional objective spaces. The decomposition strategy can also be used to partition the objective space into small subregions. Good solutions are emphasized by using traditional methods in each subregion to maintain a balance between convergence and diversity [33]. Representative examples of this type include MOEA/DD [34], RVEA [35], θ-DEA [36], and SPEA/R [37].

The set of reference vectors plays a key role in decomposition-based MaOEAs. However, how to set these reference vectors is still an open question. In the early studies of MOEA/D, the reference vectors are produced using the systematic approach [38]. The number of obtained reference vectors is a combinatorial number determined by the number of objectives and the number of divisions along each objective. This means the number will increase significantly as the objective number increases. To fix this problem, a two-layer method [30] is adopted to produce reference vectors in the boundary and inside layers separately. This method makes the number of reference vectors acceptable in most scenarios. In [37], it is further extended to generate reference vectors with an arbitrary number of layers. Nevertheless, the number of layers is still an external parameter required to be tuned.

Another big problem concerning the reference vectors is that the performance of decomposition-based MaOEAs strongly depends on the shape of the PF [39]. They perform well only when the distribution of the reference vectors is consistent with the PF shape. Otherwise, their performance deteriorates significantly due to the loss of population diversity.

Recent studies have proposed two approaches to alleviate this problem. The first approach is to employ multiple sets of reference vectors or multiple aggregation functions. For example, [40]–[42] suggest using two aggregation functions simultaneously, one for pulling the solutions toward the ideal point, and the other one for pushing the solutions away from the nadir point. Similarly, [43] utilizes two sets of reference vectors, one for achieving fast convergence, and the other one for approximating a more complete PF. One main disadvantage of this approach is that it usually requires a specially designed mechanism to control the interactions between the multiple sets of reference vectors or multiple aggregation functions. The second approach is reference vector adaptation. One simple implementation is to employ a fixed number of reference vectors and relocate them on the fly [35], [44]. To achieve a fast migration to the desired distribution, [43], [45]–[48] further suggest adaptively inserting "useful" reference vectors and deleting "non-useful" ones. These adaptation techniques require problem-dependent parameters and extra computational burden to calculate the "usefulness" of the reference vectors. Besides, a pool containing the candidate reference vectors is usually incorporated to facilitate the insertion and deletion. However, how to maintain this pool itself is exactly an optimization problem which should be carefully handled.

Aiming to deal with the above problems, a dynamical decomposition strategy and a ranking method are proposed in this paper. Being different from existing approaches, the decomposition in this study does not rely on pre-defined reference vectors. Detailed properties of this approach are summarized as follows.
• No extra pre-defined set of reference vectors is needed. Contrarily, the solutions themselves are considered as reference vectors.
• Apart from some common parameters (e.g., the population size and termination condition), no other parameters are required.
• It is robust in coping with problems with complicated PFs.
• An MaOEA is constructed based on the proposed ranking method. Also, this ranking method can serve as a secondary criterion in traditional Pareto-based algorithms.

In the remainder of this paper, we first provide the basic ideas of the dynamical decomposition strategy and the dynamical-decomposition-based ranking method (DDR for short) in Section II. The detailed implementations of DDR and two examples of using DDR in MaOEAs are provided in Section III. Then, we present the simulation results on a set of test instances in Section IV. Finally, Section V concludes this paper and gives some remarks for future studies.

II. BASIC IDEAS

In this section, we first outline the basic idea of the dynamical decomposition strategy. Three concepts are then presented because they are essential to the further explanation of our ranking method. Finally, we describe the whole process and the properties of DDR.
A. Dynamical Decomposition Strategy

Given a population P = {x_1, x_2, . . . , x_N}, the goal of DDR is to assign a rank value to each of its solutions. Suppose the ranked and the unranked solutions are contained in the sets Q and W, respectively. DDR iteratively assigns a rank value to a solution s ∈ W and moves it to Q until all solutions have been ranked. The ranking results can be utilized in the environmental selection or the mating selection in MaOEAs. Hence, at each iteration, this solution s should be selected elaborately so that Q ∪ {s} obtains a good performance in terms of both convergence and diversity.

Clearly, how to select a promising solution s ∈ W is the essential step of DDR. Our idea for solving this selection problem is inspired by the classic algorithm quickselect [49]. Quickselect is a selection algorithm to find the k-th smallest number in an unsorted number list. It works as follows: find a pivot in the list; partition the original list into two parts using the pivot; identify the sublist in which the target number is located; recursively execute the above operations in the chosen sublist until the pivot itself is the target number. Indeed, selecting a number in a list is similar to finding a solution in a set. So we design a similar framework consisting of the following steps:

S1. Find a pivot. An unranked solution p is chosen as the pivot by means of maximizing its distance to Q:

    p = arg max_{x ∈ W} distance(x, Q),        (1)
    distance(x, Q) = min_{y ∈ Q} dst(x, y),    (2)

where dst(x, y) denotes the distance between solutions x and y (discussed later in Section II-C). The distance between a certain solution x and the set Q is defined as the minimum distance between x and the solutions in Q. The above method is commonly adopted in statistical sampling [50], and the resulting p is helpful in preserving diversity.

S2. Identify a set of candidate solutions by decomposition. Bipartition the objective space R^m into two subregions S_A and S_B, where

    S_A = {F(x) ∈ R^m | dst(x, p) ≤ distance(x, Q)},    (3)
    S_B = {F(x) ∈ R^m | dst(x, p) > distance(x, Q)}.    (4)

It is obvious that, compared with those in S_A, the solutions in S_B are much closer to Q. Adding solutions from S_B to Q would contribute badly to the diversity, so S_B is discarded from further consideration. Then, the set of unranked solutions located in S_A (denoted by A) can be formed as:

    A = {x | x ∈ W, F(x) ∈ S_A} = {x | x ∈ W, dst(x, p) ≤ distance(x, Q)}.    (5)

S3. Find a promising solution. Find an unranked solution s from A which optimizes the aggregation function:

    s = arg min_{x ∈ A} g(x | λ(p)),    (6)

where g is an aggregation function¹, and λ(p) is a reference vector constructed using p (discussed later in Section II-B). This step is designed to maintain convergence since the above two steps only focus on preserving diversity. Since s is close to p and is able to optimize the aggregation function, it is of high quality in terms of both diversity and convergence.

Fig. 1 provides an illustration of the dynamical decomposition strategy in handling a bi-objective problem². Suppose there are 9 solutions (x_1, x_2, . . . , x_9) contained in the current population P and depicted by the circles. Among them, x_1 and x_2, painted blue, are ranked ones. That is, Q = {x_1, x_2}. The other solutions are unranked ones, indicating W = {x_3, . . . , x_9}. First, for each unranked solution in W, we calculate its distance to Q (i.e., the minimum distance to the solutions in Q). The unranked solution having the largest distance to Q is then chosen to be the pivot p. In this case, x_3, depicted by the green circle, is chosen. Next, with x_3 being the pivot, the objective space is partitioned into two subregions S_A (depicted by the light yellow area) and S_B (depicted by the white area). Also, the solutions in P are classified into two groups depending on which subregion each is located in. For example, since distance(x_6, Q) = min{dst(x_6, x_1), dst(x_6, x_2)} = dst(x_6, x_2) < dst(x_6, p), x_6 is located in the subregion S_B. Two sets of solutions can be obtained after performing this procedure for each solution in W. Specifically, the set A = {x_3, x_4, x_7, x_8} consists of the solutions close to the pivot, whereas its complementary set P − A = {x_1, x_2, x_5, x_6, x_9} contains those close to the ranked solutions in Q. In the end, a reference vector λ(p) is constructed (depicted by the dashed arrow). With this reference vector and a pre-defined aggregation function, we are able to calculate a scalar value for each solution in A. In this example, x_4 is chosen as the promising solution s since it has the smallest scalar value. It will be assigned a rank value based on the method discussed in Section II-E.

Fig. 1. Dynamical decomposition strategy for a bi-objective problem.

¹ A conventional aggregation function usually employs an ideal point. However, since all solutions have been normalized in this study, the ideal point is exactly the origin point. Hence, it is not employed in calculating the aggregation function.
² This example assumes that all solutions have positive objective values. This can be achieved with the normalization procedure described in Section II-B.
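To make steps S1-S3 concrete, the following minimal Python/NumPy sketch (an illustration written for this text, not the implementation used in this paper) performs a single selection iteration on already-normalized objective vectors, mirroring the Fig. 1 setting. The helpers dst, g, and ref_vec stand in for the distance, aggregation function, and reference-vector construction of Sections II-B and II-C; the Euclidean distance and weighted-sum aggregation plugged in below are assumptions made only to keep the sketch runnable.

    import numpy as np

    def select_one(F_ranked, F_unranked, dst, g, ref_vec):
        """One S1-S3 iteration: return the index (into F_unranked) of the chosen solution s."""
        # S1: pivot = unranked solution farthest from the ranked set Q (Eqs. (1)-(2))
        d_to_Q = np.array([min(dst(x, y) for y in F_ranked) for x in F_unranked])
        p = F_unranked[int(np.argmax(d_to_Q))]
        # S2: candidates A = unranked solutions closer to the pivot than to Q (Eq. (5))
        in_A = [i for i, x in enumerate(F_unranked) if dst(x, p) <= d_to_Q[i]]
        # S3: among A, pick the solution minimizing the aggregation g(. | lambda(p)) (Eq. (6))
        lam = ref_vec(p)
        return min(in_A, key=lambda i: g(F_unranked[i], lam))

    # Stand-in helpers (assumptions, not the paper's definitions):
    euclid = lambda a, b: float(np.linalg.norm(a - b))
    weighted_sum = lambda f, lam: float(np.dot(f, lam))   # simple aggregation
    unit_sum = lambda f: f / f.sum()                      # Eq. (8)-style direction

    F_Q = [np.array([0.1, 0.9]), np.array([0.9, 0.1])]    # ranked (cf. x1, x2 in Fig. 1)
    F_W = [np.array([0.5, 0.5]), np.array([0.45, 0.5]), np.array([0.8, 0.2])]
    print(select_one(F_Q, F_W, euclid, weighted_sum, unit_sum))   # -> 1

In this toy run the pivot is the central solution, the third unranked solution falls into S_B and is excluded, and the aggregation then picks the better of the two remaining candidates.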
Note that Eq. (5) can be considered as a centroid-based clustering process, though it is not performed explicitly throughout the algorithm. This process yields |Q| + 1 clusters of solutions, where the cluster centers are the solutions in Q ∪ {p}. A is exactly one of the resulting clusters, whose center is p. This is the reason why p is utilized to construct the reference vector in Eq. (6). In the above example (Fig. 1), x_1, x_2, and x_3 are the three cluster centers. Each of the other solutions is assigned to one of the clusters by checking its minimum distance to the three cluster centers, and we obtain three clusters, namely {x_3, x_4, x_7, x_8}, {x_2, x_6}, and {x_1, x_5, x_9}. Here, the first cluster is the set A while the last two form the complementary set P − A. Also, pursuing the diversity performance is the only criterion in forming the candidate set A. In this way, the proposed approach adopts the "diversity first and convergence second" strategy [51].

In traditional decomposition-based algorithms, the objective space is decomposed into many subregions simultaneously. This kind of decomposition only relies on the reference vectors. Since the reference vectors are pre-defined in most algorithms, the produced subregions are always fixed. On the contrary, in our strategy, the objective space is decomposed successively and a new subregion is produced in each decomposition. Moreover, the new subregion relies on the subregions produced previously and is affected by the population solutions. Since the decomposition results in this approach are not determined by fixed reference vectors, we call it the dynamical decomposition.

B. Reference Vectors

The dynamical decomposition strategy relies on reference vectors when comparing multiple solutions (as shown in step S3 in Section II-A). Considering the nature of evolutionary algorithms, the population can be treated as an approximation to the PF. This leads to the idea that we can consider the solutions themselves as reference vectors. Details are stated as follows.

We firstly normalize each solution x in P so that its objective values are inside the range [0, 1]:

    f_i'(x) = (f_i(x) − z_i^min) / (z_i^max − z_i^min),   i = 1, 2, . . . , m,    (7)

where z_i^min = min_{x ∈ P} f_i(x) and z_i^max = max_{x ∈ P} f_i(x). To prevent the denominator in Eq. (7) from becoming zero in the case of z_i^max = z_i^min, we fix f_i'(x) = 1e−10 if z_i^max − z_i^min < 1e−10. Note that this method is chosen due to its simplicity. Other methods can also be used to normalize the objective values. For more details of the normalization mechanisms and their effects on the algorithm performance, interested readers may refer to [52].

Then, for each solution x, we calculate its corresponding reference vector (denoted by λ(x)) using the following formulation:

    λ(x) = F'(x) / Σ_{j=1}^{m} f_j'(x),    (8)

where F'(x) = (f_1'(x), f_2'(x), . . . , f_m'(x))^T is the normalized objective vector. That is, λ(x) points from the origin point 0 to F'(x).

Using a set of pre-defined reference vectors in decomposition-based algorithms has the disadvantage that some reference vectors may have no intersection with the PF. Optimizing with these reference vectors contributes nothing but wastes computational effort. This issue is well settled in the above method since λ(x) always intersects with the approximated PF at F'(x) for all x. Given that solutions get closer and closer to the PF in the evolving population, their corresponding reference vectors are very likely to have intersections with the true PF.

Also, all these reference vectors are located on the hyperplane f_1' + f_2' + · · · + f_m' = 1. This design is based on the assumption that the problem PF in the normalized objective space is this hyperplane, and we hope all solutions will be pulled towards this hyperplane along the directions of these reference vectors. Note that this assumption is reasonable if we lack a priori knowledge of the PF shape.
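Eqs. (7) and (8) can be transcribed directly. The following Python/NumPy sketch (an illustration, not the authors' code) normalizes a population's objective matrix and builds one reference vector per solution; the handling of a vanishing denominator follows the rule stated above.

    import numpy as np

    def normalize(F):
        """Eq. (7): map each objective of the population matrix F (N x m) into [0, 1]."""
        z_min, z_max = F.min(axis=0), F.max(axis=0)
        span = z_max - z_min
        # if an objective's range is (almost) zero, fix the normalized value to 1e-10
        return np.where(span < 1e-10, 1e-10,
                        (F - z_min) / np.where(span < 1e-10, 1.0, span))

    def reference_vectors(Fn):
        """Eq. (8): scale each normalized objective vector so its components sum to 1."""
        return Fn / Fn.sum(axis=1, keepdims=True)

    F = np.array([[1.0, 4.0, 3.0],
                  [2.0, 2.0, 3.0],
                  [3.0, 1.0, 3.0]])   # third objective is constant -> degenerate case
    print(reference_vectors(normalize(F)))

Each returned row lies on the hyperplane f_1' + f_2' + · · · + f_m' = 1 and points from the origin towards the normalized objective vector of the corresponding solution.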
C. Distance Measure

Since we assume the problem PF is a hyperplane in the normalized objective space, the distance between two solutions is measured by the Euclidean distance between their corresponding reference vectors:

    dst(x, y) = ||λ(x) − λ(y)||_2,    (9)

where ||·||_2 denotes the 2-norm of a vector.

According to Eq. (9), the distance between two solutions is only determined by their directions rather than their real positions. Besides, its value is within the range [0, √2] regardless of the dimensionality of the space. These properties make it scalable in high-dimensional objective spaces. In this respect, it is very similar to the angular distance. The angular distance is widely used in MaOEAs since it provides nice performance in high-dimensional spaces compared with the Euclidean distance [15], [28], [35], [37]. In fact, if the reference vectors in our proposed approach were located on the hypersphere (f_1')^2 + (f_2')^2 + · · · + (f_m')^2 = 1, the above distance could be transformed to the angular distance. This also demonstrates the fact that using the angular distance carries the assumption that the normalized PF should be a hypersphere. If this assumption is not satisfied, the population diversity may not be achieved.

Fig. 2. Well-spread solutions (circles) measured by (a) the angular distance and (b) the proposed distance.
Fig. 2 provides an illustration of the difference between the proposed distance and the angular distance when the PF is convex. The thin black curve depicts the normalized PF. The empty circles denote some solutions located on the PF and the red arrows describe the corresponding vectors. In Fig. 2a, each pair of adjacent solutions has the same angular distance (e.g., ⟨x_1, x_2⟩ = ⟨x_2, x_3⟩, where ⟨·, ·⟩ denotes the acute angle). However, it is observed that these solutions are not uniformly distributed over the PF. For example, the distance between x_1 and x_2 is much larger than that between x_2 and x_3. In Fig. 2b, the blue line depicts the hyperplane f_1' + f_2' = 1 and the blue points depict the reference vectors. Each pair of adjacent reference vectors has the same Euclidean distance (e.g., ||λ(x_1) − λ(x_2)||_2 = ||λ(x_2) − λ(x_3)||_2). It is found that the distribution of x_1, x_2, and x_3 is more uniform than that shown in Fig. 2a. Therefore, the proposed distance rather than the angular distance is utilized in this study.
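The distance of Eq. (9) is then nothing more than the Euclidean distance between the two reference vectors. The small sketch below (Python/NumPy, an illustration only) computes it and, for comparison, the angular distance mentioned above; the two-objective inputs are assumed to be already normalized.

    import numpy as np

    def dst(lam_x, lam_y):
        """Eq. (9): Euclidean distance between the reference vectors of two solutions."""
        return float(np.linalg.norm(lam_x - lam_y))

    def angular(f_x, f_y):
        """Acute angle between two objective vectors (shown only for comparison)."""
        c = np.dot(f_x, f_y) / (np.linalg.norm(f_x) * np.linalg.norm(f_y))
        return float(np.arccos(np.clip(c, -1.0, 1.0)))

    x = np.array([0.9, 0.1])
    y = np.array([0.1, 0.9])
    lam = lambda f: f / f.sum()        # Eq. (8) on already-normalized objectives
    print(dst(lam(x), lam(y)))         # bounded by sqrt(2), independent of m
    print(angular(x, y))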
D. Extreme Solutions

The proposed approach starts with identifying the extreme solution in the population for each axis. This part is not the main contribution of this paper, and thus the method proposed in [30] is employed here. For the j-th axis, its extreme solution e_j is the one minimizing the aggregation function g(x|w) with w = (w_1, . . . , w_m)^T being the axis direction:

    e_j = arg min_{x ∈ P} g(x|w),    (10)

    w_i = 1 if i = j, and w_i = 1e−6 otherwise,   i ∈ {1, 2, . . . , m},    (11)

where g is the same aggregation function as that in Eq. (6).
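The extreme-solution search of Eqs. (10)-(11) can be sketched as follows in Python/NumPy. The paper only states that g is the same aggregation function as in Eq. (6); the achievement-scalarizing form used below is therefore an assumption adopted purely for illustration.

    import numpy as np

    def asf(f, w):
        """Achievement scalarizing function; an assumed form of g(x | w)."""
        return np.max(f / w)

    def extreme_solutions(Fn):
        """Eqs. (10)-(11): index of the extreme solution for each of the m axes."""
        N, m = Fn.shape
        idx = []
        for j in range(m):
            w = np.full(m, 1e-6)
            w[j] = 1.0                  # axis direction of Eq. (11)
            idx.append(int(np.argmin([asf(f, w) for f in Fn])))
        return idx

    Fn = np.array([[0.05, 0.95], [0.5, 0.5], [0.9, 0.1]])
    print(extreme_solutions(Fn))        # -> [2, 0]: the solution lying nearest each axis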
E. Ranking the Solutions

Based on the dynamical decomposition strategy, the ranking method DDR is proposed in this subsection. Suppose P is a population containing N solutions. The task of DDR is to assign a rank value to each solution in P. The rank value of a solution is inside the range {0, 1, . . . , N − 1} and describes its quality in terms of both convergence and diversity. In this study, a smaller rank value is desirable and the best solutions obtain the rank 0. Thus, when applying DDR to evolutionary algorithms, solutions with smaller rank values are more likely to survive in the mating selection and the environmental selection.

DDR works like a selection operator which selects unranked solutions iteratively. It maintains, in its main loop, two sets Q and W which consist of the ranked solutions and unranked solutions, respectively. At the beginning, Q and W are initialized so that Q is the set of all extreme solutions e_1, e_2, . . . , e_m, while W contains all remaining solutions:

    Q = {e_1, e_2, . . . , e_m},   W = P − Q.    (12)

Then all extreme solutions in Q get the rank 0.³

³ On some problems, especially those with degenerate PFs, searching along different axis directions with Eq. (10) may produce duplicated extreme solutions. Nevertheless, the algorithm provided in this study still works since it does not rely on the number of distinct extreme solutions in Q.

After the initialization, we select a solution s from W using the method proposed in Section II-A. Once s is selected in the r-th iteration, it gets the rank r and is moved to Q. These operations are executed repeatedly until W is empty.

To have a clear understanding of the process of DDR, Fig. 3 provides an illustration of its first three iterations. The circles depict the solutions in the whole population P. Among them, the blue ones depict the ranked solutions, the green ones depict the pivot p, and the red ones depict the solution s. If a circle is half green and half red, this solution is selected as p and s at the same time. Light yellow areas describe the subregions (i.e., S_A) and the arrowed dashed lines describe the reference vectors (i.e., λ(p)). At the beginning, x_1 and x_2 get the rank 0 since they are extreme solutions. Then, at the first iteration (shown in Fig. 3a), x_3 is considered as the pivot since it has the largest distance to x_1 and x_2. The subregion S_A is determined with these solutions. It is found that there are 3 solutions located in this subregion and x_4 is the one optimizing the corresponding aggregation function. Hence, x_4 is chosen and gets rank 1. In the next iteration (shown in Fig. 3b), x_5 is used as the pivot and helps to find the optimum solution x_6, which gets the rank 2. In the third iteration, x_7 is selected to be the pivot, and it is found that it is itself the optimum in the subregion. Thus, x_7 is chosen and ranked. After these three iterations, in total five solutions x_1, x_2, x_4, x_6, and x_7 are ranked, with the corresponding ranking results being 0, 0, 1, 2, and 3, respectively.

Fig. 3. An illustration of the first three iterations of the ranking method (panels (a)-(c) correspond to the three iterations).

F. Explanations of DDR

Two properties of the ranking results obtained by DDR are explained as follows:

1) Extreme solutions always obtain the best ranking results: The extreme solutions play a key role in preserving the population diversity. First, with an appropriate aggregation function, the extreme solutions are usually the minimum in each objective, which determines the ideal point used in the normalization. Second, they are on the boundary of the PF and have a disproportionate contribution to capturing the whole PF compared to internal solutions [47], [53]. Third, they form the initial set Q and influence the decomposition of the objective space. Hence, all extreme solutions obtain the rank 0. Note that, in finding the extreme solutions, the calculation of Eq. (10) is independent of the other solutions. This makes the overall performance of DDR less influenced by the initialization, even though the reference vectors constructed in the early stage of the search process may be badly distributed.

Another concern about the extreme solutions in DDR is that they are obtained through searching along the axis directions. It implicitly assumes that the normalized PF is a hyperplane which is equally inclined to all objective axes and intersects each axis at 1. In other words, DDR treats the normalized PF as a simplex. One may wonder whether DDR still works if the problem PF is of non-simplex shape.

Fig. 4 provides an example for the three-objective DTLZ1^-1 problem (discussed in Section S-I-A in the supplementary material), whose PF is an inverted simplex.
The solid black lines depict the assumed simplex PF, which is a triangle with its apexes at (1, 0, 0)^T, (0, 1, 0)^T, and (0, 0, 1)^T. The green lines depict the true PF after normalization, which is a rotated triangle. We also assume that all solutions located on the true PF have already been added into the current population P and the task of DDR is to find a finite set of uniformly distributed solutions. Obviously, the true extreme solutions are g_1, g_2, and g_3. But in DDR, e_1, e_2, and e_3 are regarded as the extreme ones and receive the rank 0. Through some elementary geometric computations, it is found that g_1, g_2, and g_3 are the midpoints of the boundaries of the assumed PF while e_1, e_2, and e_3 are the midpoints of the boundaries of the true PF. Also, g_1, g_2, and g_3 are the solutions furthest away from {e_1, e_2, e_3}. Hence, g_1, g_2, and g_3 will be chosen as the promising solutions during the first three iterations of DDR. Their rank values can only be chosen from {1, 2, 3}, which are also relatively small (good) rank values. This example shows that DDR still works in finding the extreme solutions even when the problem PF is of non-simplex shape.

Fig. 4. Extreme solutions on the 3-objective DTLZ1^-1 problem.

2) Previously ranked solutions obtain smaller rank values than those ranked subsequently: Slightly abusing notation, we denote s^(r), Q^(r), A^(r), and S_A^(r) as the counterparts of s, Q, A, and S_A in the r-th iteration, respectively. Given i < j and i, j ∈ {1, . . . , N}, s^(j) cannot dominate s^(i) unless s^(j) ∉ S_A^(i). Otherwise, s^(j) rather than s^(i) would optimize Eq. (6) in the i-th iteration. In other words, s^(j) is less helpful in improving the convergence performance without decreasing diversity. Since preserving the diversity is the first criterion of DDR to rank solutions, it is reasonable that s^(i) obtains a smaller rank value compared with s^(j).

This property can also be illustrated from another aspect. As mentioned before, we can consider DDR as a selection operator: iteratively select the best solution from the unranked ones. In the above example, since Q^(i) ⊂ Q^(j), distance(x, Q^(j)) ≤ distance(x, Q^(i)) holds for any x. Consequently, S_A^(i) is very likely to be larger than S_A^(j). When solutions are well-distributed in the objective space, A^(i) will contain more candidate solutions than A^(j) does. This means the selection pressure is usually large in the early stage, but gradually decreases as more and more solutions are selected and ranked. From this aspect, previously ranked solutions are of high quality because they have survived fierce competition in the selection. Thus, they obtain smaller rank values compared with those ranked in the later stage.

III. DETAILED IMPLEMENTATIONS

In this section, the detailed implementations of DDR are presented in the form of pseudo-code. We also provide two simple examples to show how to use DDR in MaOEAs.

A. Implementations of DDR

All the details of DDR have been discussed in Section II. However, a simple trick can be used to reduce the computations in Eqs. (1) and (5), based on the following fact:

    distance(x, Q ∪ {s}) = min{distance(x, Q), dst(x, s)}.    (13)

This means we can calculate distance(x, Q) by iterative updating. For convenience, we introduce a new symbol md(x) (the minimum distance to the ranked solutions) to denote distance(x, Q). Once a solution s is ranked and its distance to x is smaller than md(x), md(x) is updated.
Algorithm 1 Dynamical-decomposition-based ranking (DDR)
Input: P = {x_1, . . . , x_N}: Population
Output: π(x ∈ P): Ranking results
 1: Normalize the population using Eq. (7)
 2: for all x ∈ P do
 3:     Generate the reference vector λ(x) using Eq. (8)
 4: end for
 5: for j ← 1 to m do
 6:     Find the j-th extreme solution e_j using Eq. (10)
 7:     π(e_j) = 0                             // extreme solutions obtain the rank 0
 8: end for
 9: Q = {e_1, e_2, . . . , e_m}                // the set of ranked solutions
10: W = P − Q                                  // the set of unranked solutions
11: for all x ∈ W do
12:     md(x) = min_{y ∈ Q} dst(x, y)          // initialize the distance to ranked solutions
13: end for
14: r = 1                                      // current rank value
15: while |W| > 0 do
16:     p = arg max_{x ∈ W} md(x)              // find the pivot furthest away from the ranked solutions
17:     A = {x | x ∈ W, dst(x, p) ≤ md(x)}     // find the candidate solutions close to the pivot
18:     s = arg min_{x ∈ A} g(x|λ(p))          // find s by optimizing the aggregation function
19:     π(s) = r                               // assign the current rank value to s
20:     r = r + 1
21:     Q = Q ∪ {s}, W = W − {s}
22:     for all x ∈ W do                       // update the md values
23:         md(x) = min{dst(x, s), md(x)}
24:     end for
25: end while

Algorithm 1 provides the pseudo-code of DDR. π(x) denotes the ranking result of the solution x. Lines 1-4 perform the normalization and generate the reference vector for each solution. Extreme solutions are identified and ranked in lines 5-8. In lines 9-13, Q and W are initialized and the minimum distance to the ranked ones (i.e., the md value) for each solution in W is initialized. Then, the loop in lines 15-25 ranks the remaining solutions iteratively. Specifically, in the r-th iteration, the pivot p and the set A are first found in lines 16 and 17. The solution s optimizing the aggregation function obtains the rank r in lines 18-20. Finally, in lines 21-24, we move s from W to Q and use it to update the md value for each unranked solution. Since r is incremented by 1 each time the loop is taken, previously ranked solutions always obtain smaller rank values than those ranked in the subsequent iterations.
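For completeness, a compact NumPy transcription of Algorithm 1 (a sketch written for this text, not the authors' implementation) is given below. It folds in the normalization of Eq. (7), the reference vectors of Eq. (8), the distance of Eq. (9), and the incremental md update of Eq. (13); the default aggregation g is again an assumed achievement-scalarizing function.

    import numpy as np

    def ddr(F, g=lambda f, lam: np.max(f / np.maximum(lam, 1e-12))):
        """Dynamical-decomposition-based ranking (Algorithm 1).
        F: (N, m) objective matrix. Returns an integer rank per solution (smaller is better)."""
        N, m = F.shape
        # normalization (Eq. (7)) and per-solution reference vectors (Eq. (8))
        z_min, z_max = F.min(axis=0), F.max(axis=0)
        d = z_max - z_min
        Fn = np.where(d < 1e-10, 1e-10, (F - z_min) / np.where(d < 1e-10, 1.0, d))
        lam = Fn / Fn.sum(axis=1, keepdims=True)
        dst = lambda i, j: np.linalg.norm(lam[i] - lam[j])        # Eq. (9)

        rank = np.full(N, -1, dtype=int)
        Q = set()
        for j in range(m):                                        # extreme solutions, Eqs. (10)-(11)
            w = np.full(m, 1e-6)
            w[j] = 1.0
            e = int(np.argmin([np.max(f / w) for f in Fn]))
            rank[e] = 0
            Q.add(e)
        W = [i for i in range(N) if i not in Q]
        md = {i: min(dst(i, q) for q in Q) for i in W}            # distance to the ranked set

        r = 1
        while W:
            p = max(W, key=lambda i: md[i])                       # S1: pivot (Eq. (1))
            A = [i for i in W if dst(i, p) <= md[i]]              # S2: candidates (Eq. (5))
            s = min(A, key=lambda i: g(Fn[i], lam[p]))            # S3: best in A (Eq. (6))
            rank[s] = r
            r += 1
            W.remove(s)
            for i in W:                                           # Eq. (13): incremental md update
                md[i] = min(md[i], dst(i, s))
        return rank

    rng = np.random.default_rng(1)
    print(ddr(rng.random((12, 3)) + 0.1))

The returned ranks can be used directly for truncation or as tournament keys, as described next.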
B. Two Examples of Using DDR in MaOEAs

Algorithm 2 Framework of DDEA
Input: N: Population size
Output: P: A set of solutions
 1: Initialize the population P containing N solutions
 2: π(x) = 0 for all x ∈ P                     // initialize the rank values
 3: while the termination criterion is not met do
 4:     Pool = binaryTournament(P)
 5:     Q = createOffspring(Pool)
 6:     R = P ∪ Q
 7:     π(x ∈ R) = DDR(R)                      // rank all the solutions
 8:     sort the solutions in R in ascending order based on their rank values and select the top N solutions to form P
 9: end while

Algorithm 3 Framework of DDEA+NS
Input: N: Population size
Output: P: A set of solutions
 1: Initialize the population P containing N solutions
 2: π(x) = 0 for all x ∈ P                     // initialize the rank values
 3: while the termination criterion is not met do
 4:     Pool = binaryTournament(P)
 5:     Q = createOffspring(Pool)
 6:     R = P ∪ Q
 7:     {F_1, F_2, . . .} = fastNondominatedSort(R)
 8:     P = ∅
 9:     i = 1
10:     while |P| + |F_i| ≤ N do
11:         π(x ∈ F_i) = DDR(F_i)              // rank the solutions in the i-th front
12:         P = P ∪ F_i
13:         i = i + 1
14:     end while
15:     π(x ∈ F_i) = DDR(F_i)                  // rank the solutions in the next front
16:     sort the solutions in F_i in ascending order based on their rank values and append the top N − |P| solutions to P
17: end while

Two simple MaOEAs based on the proposed ranking method are implemented in this subsection. The first one adopts the classic EA framework in which the mating selection and environmental selection are based on the ranking results of DDR. The framework of the dynamical-decomposition-based EA (DDEA for short) is presented in Algorithm 2. We firstly initialize a population P and the corresponding rank for each solution. Then, in the main loop, the binary tournament selection is carried out to construct the mating pool based on the ranking results. Genetic operations are executed to generate an offspring population. The offspring solutions and their parents are mixed and ranked using DDR. Finally, the best N solutions according to the DDR results are selected to form the new population.

Generally, DDR adopts the "diversity first and convergence second" strategy. This raises the idea that this ranking method can serve as a secondary criterion in traditional Pareto-based algorithms. Here we provide a non-dominated sorting evolutionary algorithm based on dynamical decomposition (DDEA+NS for short) in Algorithm 3. This algorithm is implemented by replacing the crowding distances in NSGA-II with the DDR ranking results. Specifically, in the mating selection, the non-domination ranks and the ranking results of DDR are utilized as the primary and the secondary criteria to construct the mating pool, respectively. Offspring solutions are then generated in the pool with some genetic operations.
After obtaining the mixed population, the non-dominated sort is performed to partition the population into multiple fronts (i.e., F_1, F_2, . . .). Each non-dominated front is selected one at a time to construct the new population unless the size of the new population exceeds N. Also, the solutions in each front are ranked using DDR. If the solutions in the i-th front cannot all be added into the population but the population size is smaller than N, DDR is performed in the i-th front. Then, the best N − |P| solutions according to the DDR ranking results are selected from the i-th front and appended to P. It is worth noting that, since DDR employs the aggregation function to increase the convergence, the proposed algorithms DDEA and DDEA+NS are also considered as aggregation-based MaOEAs in essence.
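Building on the ddr sketch above, one generation of the DDEA framework (Algorithm 2) reduces to ranking the merged parent and offspring populations and truncating. The listing below is only a toy illustration: the test problem, the tournament, and the Gaussian variation operator are stand-ins (the operators actually used are specified in the experimental settings), so it should not be read as the authors' implementation.

    import numpy as np
    # assumes the ddr() function from the previous listing is in scope

    def evaluate(X):
        """Placeholder 3-objective DTLZ2-like problem, used only to make the demo run."""
        g = np.sum((X[:, 2:] - 0.5) ** 2, axis=1)
        f1 = (1 + g) * np.cos(X[:, 0] * np.pi / 2) * np.cos(X[:, 1] * np.pi / 2)
        f2 = (1 + g) * np.cos(X[:, 0] * np.pi / 2) * np.sin(X[:, 1] * np.pi / 2)
        f3 = (1 + g) * np.sin(X[:, 0] * np.pi / 2)
        return np.column_stack([f1, f2, f3])

    def ddea(N=40, n=7, generations=50, rng=np.random.default_rng(0)):
        X = rng.random((N, n))
        for _ in range(generations):
            # mating selection: binary tournament on DDR ranks, then a simple Gaussian variation
            ranks = ddr(evaluate(X))
            a, b = rng.integers(N, size=N), rng.integers(N, size=N)
            parents = np.where((ranks[a] <= ranks[b])[:, None], X[a], X[b])
            offspring = np.clip(parents + rng.normal(0, 0.05, parents.shape), 0, 1)
            # environmental selection: rank the merged population and keep the best N
            R = np.vstack([X, offspring])
            X = R[np.argsort(ddr(evaluate(R)))[:N]]
        return X

    print(evaluate(ddea())[:5])

A DDEA+NS-style variant would simply apply a non-dominated sort first and call ddr within the critical front, as described above.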
C. Time Complexity

In DDR, identifying the extreme solutions and initializing the md values (line 6 and line 12 in Algorithm 1) takes O(Nm^2) time. The dynamical decomposition (lines 16-18 in Algorithm 1) requires O(Nm) computations for ranking one solution. After ranking a solution, updating the md values for the unranked solutions also takes O(Nm) time. When m < N, the overall time complexity for ranking all the population solutions in DDR is O(N^2 m).

DDEA only relies on DDR to rank the solutions. Thus, the time complexity of one generation of DDEA is O(N^2 m). In DDEA+NS, extra O(N log^{m−2} N) computations are required for the non-dominated sort [54]. The rankings executed in all fronts require O(N^2 m) in the worst case. Hence, the worst-case time complexity of one generation of DDEA+NS is max{O(N^2 m), O(N log^{m−2} N)}.

IV. COMPARATIVE STUDIES

To demonstrate the effectiveness of the dynamical decomposition strategy, we compare DDEA and DDEA+NS with six state-of-the-art algorithms including GDE3 [55], MOEA/D [29], NSGA-III [30], MOMBI-II [35], RVEA [35], and VaEA [15]. 30 widely used benchmark problems are selected, namely WFG1 to WFG9 [56], DTLZ1 to DTLZ4 and DTLZ7 [57], DTLZ1^-1 to DTLZ4^-1 [39], MaF2, MaF5, MaF8, and MaF9 [58], and CPFT1 to CPFT8 [59]. Apart from the above benchmark problems, the Crashworthiness Design of Vehicles problem is chosen to test the algorithm performance in an engineering application. Hypervolume (HV) [60] is employed to measure both the convergence and diversity of the obtained solutions. Additionally, the averaged Hausdorff distance (Δp) [23], generational distance (GD) [61], and spread (Δ) [6] are chosen as assistant indicators. All experiments are conducted using the open source software PlatEMO [62]. The detailed experimental settings are provided in the supplementary material.

A. Comparisons on Problems with Regular Pareto Fronts

Table I summarizes the HV results of all algorithms on the problems with regular PFs. Generally, DDEA, DDEA+NS, and RVEA present a clear advantage over the other competitors on the majority of the test instances. DDEA and DDEA+NS show good performance mainly on WFG4 to WFG8 and MaF2. They also perform well on DTLZ1, DTLZ2, and DTLZ4, especially in high-dimensional objective space. RVEA is outperformed by both DDEA and DDEA+NS on more than 23 test instances, but it performs well on WFG5, WFG9, and DTLZ3. NSGA-III, MOMBI-II, and VaEA also achieve competitive results on high-dimensional DTLZ problems. GDE3 and MOEA/D do not perform well and are surpassed by both DDEA and DDEA+NS on almost all the test instances.

Here, we discuss the correlations between the characteristics of the problems and the algorithm performance. GDE3 and MOEA/D are not especially designed for solving MaOPs, so their performance is relatively poor compared with the other competitors when m ≥ 5. For example, GDE3 fails to converge to the PF region on high-dimensional DTLZ1 and DTLZ3 as it produces a zero HV value. This is because its differential evolution (DE) operator is likely to generate offspring distant from their parents, which is seen as undesirable in the context of many-objective optimization [30], [37], [63]. On the other hand, the performance of the other competitors seems to be problem-dependent. For example, they have similar performance on relatively easier problems such as DTLZ1, DTLZ2, and DTLZ4. However, on DTLZ3, which has a huge number of local optima, algorithms using pre-defined reference vectors perform significantly better than the others. The underlying reason is that DTLZ3 has a regular PF whose shape is consistent with the distribution of the reference vectors. Pulling the solutions to the ideal point along the directions determined by these reference vectors preserves both convergence and diversity. Without pre-defined reference vectors, DDEA, DDEA+NS, and VaEA are faced with difficulties in maintaining diversity and are likely to be trapped in local optima. This phenomenon is also observed on WFG5 and WFG9, since they are highly deceptive and a set of pre-defined reference vectors is crucial to preserving the population diversity. On the other problems, DDEA and DDEA+NS possess substantial advantages. Particularly, though their performance on DTLZ2 is not outstanding, DDEA and DDEA+NS are superior to the other competitors on MaF2. As MaF2 is intended to increase the convergence difficulty of DTLZ2, it may be concluded that DDEA and DDEA+NS show advantages in terms of convergence.

To visually understand the experimental results, Fig. 5 plots the final solutions for all algorithms on the 15-objective WFG5. Both MOEA/D and MOMBI-II perform poorly on this problem and fail to cover more than ten objectives. The other algorithms show high performance in terms of coverage. GDE3 reaches the PF region and obtains a perfectly-distributed solution set, but its solutions may not actually be close to the PF, which explains why it has a small HV value. The solutions obtained by DDEA and DDEA+NS are uniformly distributed on all objectives. Solutions of NSGA-III and RVEA are only located near the PF boundaries. This phenomenon results from the fact that, in the high-dimensional objective space, the reference vectors generated using the systematic approach [38] or its two-layer version [30] are very likely to be distributed near the boundaries. VaEA covers all the objectives and its solutions are well-distributed in the middle area of the PF.
TABLE I
Median and IQR of HV results on problems with regular PFs. The best and the second best results for each test instance are shown with dark and light gray backgrounds, respectively.

Problem m DDEA DDEA+NS GDE3 MOEA/D NSGA-III MOMBI-II RVEA VaEA


3 7.18E-1(3.3E-3) 7.18E-1(2.9E-3) 6.07E-1(1.8E-2) ‡ 7.03E-1(4.6E-3) ‡ 7.15E-1(2.1E-3) ‡ 7.08E-1(3.5E-3) ‡ 7.09E-1(3.8E-3) ‡ 7.09E-1(1.8E-3) ‡
5 8.68E-1(2.3E-3) 8.68E-1(3.1E-3) 5.83E-1(2.2E-2) ‡ 8.21E-1(1.2E-1) ‡ 8.58E-1(4.8E-3) ‡ 8.58E-1(1.2E-2) ‡ 8.64E-1(4.0E-3) ‡ 8.31E-1(4.4E-3) ‡
WFG4 8 9.48E-1(2.1E-3) 9.48E-1(2.3E-3) 4.58E-1(1.7E-2) ‡ 5.88E-2(2.1E-3) ‡ 8.64E-1(6.8E-2) ‡ 6.04E-1(1.2E-1) ‡ 9.12E-1(2.3E-2) ‡ 9.01E-1(1.2E-2) ‡
10 9.76E-1(1.4E-3) 9.76E-1(1.3E-3) 4.60E-1(2.5E-2) ‡ 4.75E-2(4.4E-3) ‡ 9.39E-1(3.1E-2) ‡ 6.77E-1(5.4E-2) ‡ 9.47E-1(8.6E-3) ‡ 9.15E-1(1.1E-2) ‡
15 9.39E-1(2.1E-2) 9.36E-1(9.0E-3) 3.71E-1(1.9E-2) ‡ 3.30E-2(3.7E-3) ‡ 8.90E-1(3.6E-2) ‡ 3.07E-1(7.1E-2) ‡ 9.55E-1(2.0E-2) 9.14E-1(1.3E-2) ‡
3 6.87E-1(1.4E-3) 6.88E-1(5.6E-3) 6.25E-1(1.6E-2) ‡ 6.67E-1(4.8E-3) ‡ 6.87E-1(7.2E-3) 6.62E-1(5.1E-3) ‡ 6.85E-1(4.6E-3) ‡ 6.85E-1(4.9E-3) ‡
5 8.33E-1(6.0E-3) 8.33E-1(4.9E-3) 5.57E-1(1.8E-2) ‡ 8.10E-1(6.8E-3) ‡ 8.33E-1(1.9E-3) 7.85E-1(2.9E-2) ‡ 8.34E-1(3.6E-3) 8.09E-1(3.4E-3) ‡
WFG5 8 8.96E-1(3.0E-3) 8.97E-1(2.2E-3) 4.49E-1(2.4E-2) ‡ 5.19E-2(1.9E-3) ‡ 8.86E-1(2.1E-3) ‡ 7.86E-1(2.6E-2) ‡ 8.95E-1(8.4E-4) † 8.62E-1(8.2E-3) ‡
10 9.19E-1(2.0E-3) 9.18E-1(1.5E-3) 4.55E-1(1.8E-2) ‡ 4.19E-2(4.5E-8) ‡ 9.12E-1(1.7E-3) ‡ 7.11E-1(3.7E-1) ‡ 9.19E-1(4.7E-4) 8.74E-1(7.1E-3) ‡
15 9.06E-1(1.9E-2) 9.11E-1(5.0E-3) 3.46E-1(1.8E-2) ‡ 2.89E-2(1.0E-11) ‡ 9.02E-1(6.6E-2) † 1.47E-1(1.2E-2) ‡ 9.24E-1(2.5E-4) 8.55E-1(6.7E-3) ‡
3 6.93E-1(7.3E-3) 6.94E-1(5.8E-3) 6.41E-1(1.1E-2) ‡ 6.77E-1(1.1E-2) ‡ 6.91E-1(3.7E-3) † 6.75E-1(1.1E-2) ‡ 6.84E-1(7.4E-3) ‡ 6.88E-1(3.6E-3) ‡
5 8.42E-1(6.8E-3) 8.42E-1(9.6E-3) 7.37E-1(3.0E-2) ‡ 8.12E-1(3.5E-2) ‡ 8.36E-1(6.6E-3) ‡ 8.40E-1(1.3E-2) 8.43E-1(1.0E-2) 8.16E-1(7.2E-3) ‡
WFG6 8 9.06E-1(1.2E-2) 9.10E-1(1.1E-2) 7.16E-1(1.2E-2) ‡ 1.23E-1(6.7E-2) ‡ 9.04E-1(1.3E-2) 8.14E-1(5.0E-2) ‡ 8.93E-1(2.2E-2) ‡ 8.88E-1(9.8E-3) ‡
10 9.30E-1(8.7E-3) 9.31E-1(4.6E-3) 7.53E-1(7.2E-3) ‡ 1.18E-1(4.9E-2) ‡ 9.26E-1(1.2E-2) 9.08E-1(2.4E-2) ‡ 9.22E-1(9.9E-3) ‡ 9.05E-1(1.5E-2) ‡
15 9.33E-1(1.4E-2) 9.39E-1(9.4E-3) 7.39E-1(9.7E-3) ‡ 6.56E-2(6.5E-2) ‡ 9.30E-1(2.3E-2) ‡ 2.57E-1(1.4E-1) ‡ 8.90E-1(1.5E-2) ‡ 9.00E-1(1.4E-2) ‡
3 7.24E-1(1.9E-3) 7.24E-1(2.4E-3) 5.81E-1(1.4E-2) ‡ 7.08E-1(6.4E-3) ‡ 7.23E-1(1.9E-3) † 7.10E-1(5.5E-3) ‡ 7.18E-1(3.3E-3) ‡ 7.20E-1(2.1E-3) ‡
5 8.80E-1(1.4E-3) 8.81E-1(1.5E-3) 5.29E-1(1.4E-2) ‡ 8.64E-1(3.8E-2) ‡ 8.77E-1(2.5E-3) ‡ 8.83E-1(1.0E-2) 8.81E-1(1.4E-3) 8.60E-1(3.4E-3) ‡
WFG7 8 9.53E-1(6.5E-4) 9.53E-1(8.5E-4) 4.82E-1(5.0E-2) ‡ 1.31E-1(4.2E-2) ‡ 9.39E-1(2.4E-2) ‡ 8.54E-1(1.1E-1) ‡ 8.90E-1(3.8E-2) ‡ 9.33E-1(3.0E-3) ‡
10 9.80E-1(6.4E-4) 9.80E-1(4.8E-4) 4.96E-1(2.6E-2) ‡ 1.21E-1(1.6E-2) ‡ 9.09E-1(7.1E-2) ‡ 6.85E-1(9.2E-2) ‡ 9.56E-1(9.6E-3) ‡ 9.57E-1(2.4E-3) ‡
15 9.68E-1(2.6E-2) 9.55E-1(2.9E-2) 4.31E-1(2.2E-2) ‡ 1.06E-1(1.7E-2) ‡ 9.44E-1(5.1E-2) ‡ 3.09E-1(8.5E-3) ‡ 9.82E-1(5.9E-3) 9.48E-1(7.1E-3) 
3 6.42E-1(7.1E-3) 6.52E-1(4.9E-3) 5.05E-1(1.5E-2) ‡ 6.39E-1(4.3E-3) ‡ 6.52E-1(3.2E-3) 6.33E-1(4.5E-3) ‡ 6.44E-1(8.6E-3) † 6.45E-1(3.2E-3) †
5 7.50E-1(4.9E-3) 7.54E-1(7.1E-3) 4.34E-1(1.9E-2) ‡ 4.64E-1(5.7E-2) ‡ 7.75E-1(3.3E-3) 5.32E-1(7.0E-3) ‡ 7.61E-1(1.3E-2) 7.30E-1(8.6E-3) ‡
WFG8 8 7.82E-1(7.7E-3) 7.91E-1(7.4E-3) 3.87E-1(1.5E-2) ‡ 1.07E-1(2.6E-2) ‡ 7.60E-1(1.3E-2) ‡ 5.43E-1(3.3E-2) ‡ 5.88E-1(2.3E-1) ‡ 7.14E-1(1.0E-2) ‡
10 8.43E-1(7.7E-3) 8.50E-1(8.4E-3) 4.05E-1(2.0E-2) ‡ 1.03E-1(1.8E-2) ‡ 7.94E-1(3.2E-2) ‡ 5.46E-1(1.0E-1) ‡ 5.78E-1(1.8E-1) ‡ 7.54E-1(2.7E-2) ‡
15 8.76E-1(7.1E-3) 8.79E-1(3.4E-3) 3.34E-1(1.9E-2) ‡ 5.06E-2(5.0E-2) ‡ 8.40E-1(1.9E-2) ‡ 1.76E-1(1.3E-2) ‡ 3.42E-1(2.8E-1) ‡ 7.89E-1(2.3E-2) ‡
3 6.34E-1(2.2E-3) 6.37E-1(1.7E-3) 6.01E-1(9.0E-3) ‡ 6.19E-1(4.1E-3) † 6.39E-1(3.0E-2) 6.17E-1(4.5E-2) 6.40E-1(3.0E-2) 6.38E-1(3.3E-2)
5 7.43E-1(3.8E-3) 7.43E-1(4.8E-3) 5.56E-1(2.5E-2) ‡ 5.64E-1(1.6E-2) ‡ 7.40E-1(5.5E-3) 5.60E-1(1.6E-2) ‡ 7.89E-1(1.3E-2) 7.27E-1(5.7E-3) ‡
WFG9 8 7.56E-1(9.0E-3) 7.59E-1(7.2E-3) 4.16E-1(2.5E-2) ‡ 4.78E-2(1.1E-2) ‡ 7.53E-1(1.9E-2) ‡ 6.71E-1(1.7E-2) ‡ 8.01E-1(2.7E-2) 7.36E-1(8.7E-3) ‡
10 7.71E-1(5.6E-3) 7.70E-1(7.7E-3) 4.34E-1(2.0E-2) ‡ 3.77E-2(4.6E-3) ‡ 7.46E-1(3.2E-2) ‡ 6.86E-1(2.3E-2) ‡ 8.23E-1(1.2E-2) 7.42E-1(8.9E-3) ‡
15 7.07E-1(1.4E-2) 7.14E-1(1.7E-2) 3.25E-1(2.0E-2) ‡ 2.54E-2(3.4E-3) ‡ 7.56E-1(9.1E-2) 1.32E-1(5.0E-2) ‡ 7.83E-1(1.2E-2) 6.60E-1(1.2E-2) ‡
3 9.92E-1(1.1E-4) 9.92E-1(1.2E-4) 2.68E-2(2.0E-1) ‡ 9.92E-1(5.1E-5) 9.92E-1(6.2E-5) 9.92E-1(6.0E-5) 9.92E-1(5.1E-5) 9.86E-1(8.1E-3) ‡
5 1.00E+0(1.7E-5) 1.00E+0(8.4E-6) 1.07E-2(1.1E-1) ‡ 9.91E-1(6.1E-2) ‡ 1.00E+0(7.7E-7) 1.00E+0(1.4E-6) 1.00E+0(7.3E-7) 9.99E-1(9.7E-4) ‡
DTLZ1 8 1.00E+0(6.0E-7) 1.00E+0(1.7E-6) 0.00E+0(5.8E-1) ‡ 9.37E-1(2.7E-1) ‡ 1.00E+0(5.5E-8) 9.98E-1(3.6E-3) ‡ 1.00E+0(1.1E-8) 1.00E+0(1.1E-4) ‡
10 1.00E+0(0.0E+0) 1.00E+0(1.0E-6) 0.00E+0(0.0E+0) ‡ 6.67E-1(2.7E-1) ‡ 1.00E+0(0.0E+0) 1.00E+0(5.2E-4) ‡ 1.00E+0(0.0E+0) 1.00E+0(1.2E-5) ‡
15 1.00E+0(0.0E+0) 1.00E+0(0.0E+0) 0.00E+0(0.0E+0) ‡ 6.67E-1(1.9E-5) ‡ 1.00E+0(0.0E+0) 9.97E-1(2.3E-3) ‡ 1.00E+0(0.0E+0) 1.00E+0(0.0E+0)
3 9.26E-1(2.9E-4) 9.26E-1(1.0E-4) 9.16E-1(1.4E-3) ‡ 9.27E-1(2.7E-5) 9.27E-1(2.0E-5) 9.27E-1(3.9E-5) 9.27E-1(5.9E-5) 9.26E-1(3.4E-4) †
5 9.90E-1(1.3E-4) 9.90E-1(1.2E-4) 9.13E-1(2.0E-2) ‡ 9.90E-1(2.5E-3) 9.91E-1(1.1E-5) 9.91E-1(1.6E-5) 9.91E-1(6.4E-6) 9.90E-1(1.9E-4) ‡
DTLZ2 8 9.99E-1(7.3E-6) 9.99E-1(1.1E-5) 5.41E-1(2.1E-1) ‡ 5.00E-1(2.1E-9) ‡ 9.99E-1(3.2E-6) ‡ 9.99E-1(4.0E-6) 9.99E-1(3.5E-7) ‡ 9.99E-1(5.3E-5) ‡
10 1.00E+0(9.5E-6) 1.00E+0(1.4E-5) 4.70E-1(1.3E-1) ‡ 5.00E-1(3.0E-10) ‡ 1.00E+0(1.6E-5)  1.00E+0(1.1E-5) 1.00E+0(1.2E-5) 1.00E+0(1.3E-5) ‡
15 1.00E+0(2.0E-6) 1.00E+0(2.0E-6) 4.69E-1(1.1E-1) ‡ 5.00E-1(6.1E-10) ‡ 1.00E+0(1.8E-5) ‡ 9.97E-1(3.6E-3) ‡ 1.00E+0(1.3E-11) 1.00E+0(4.0E-6) ‡
3 9.25E-1(7.7E-4) 9.25E-1(1.2E-3) 1.90E-5(4.7E-1) ‡ 9.26E-1(4.6E-4) 9.26E-1(7.2E-4) 9.26E-1(2.8E-4) 9.26E-1(8.8E-4) 9.24E-1(4.7E-3) ‡
5 9.90E-1(1.8E-4) 9.88E-1(1.9E-3) 0.00E+0(0.0E+0) ‡ 9.47E-1(1.2E-1) ‡ 9.90E-1(9.4E-5) 9.90E-1(1.0E-4) 9.91E-1(8.5E-5) 9.78E-1(2.0E-2) ‡
DTLZ3 8 9.98E-1(2.3E-1) 9.98E-1(8.6E-2) 0.00E+0(0.0E+0) ‡ 5.00E-1(1.3E-4) ‡ 9.99E-1(5.4E-5) 9.99E-1(8.9E-4) 9.99E-1(1.8E-5) 9.16E-1(9.5E-1) ‡
10 1.00E+0(2.8E-5) 1.00E+0(3.8E-5) 0.00E+0(0.0E+0) ‡ 5.00E-1(2.5E-5) ‡ 1.00E+0(1.1E-5) 9.96E-1(1.0E-2) ‡ 1.00E+0(9.0E-6) 9.98E-1(5.0E-3) ‡
15 1.00E+0(4.0E-6) 1.00E+0(3.5E-6) 0.00E+0(0.0E+0) ‡ 5.00E-1(9.1E-5) ‡ 1.00E+0(3.0E-4) ‡ 9.60E-1(3.0E-3) ‡ 1.00E+0(2.0E-6) 0.00E+0(4.7E-1) ‡
3 9.26E-1(2.1E-4) 9.26E-1(6.2E-2) 9.16E-1(1.8E-3)  5.00E-1(3.0E-1) ‡ 9.27E-1(6.3E-2) 9.27E-1(2.0E-5) 9.27E-1(4.7E-6) 9.26E-1(2.7E-4)
5 9.90E-1(9.7E-5) 9.90E-1(9.4E-5) 9.73E-1(1.4E-3) ‡ 9.24E-1(1.3E-1) ‡ 9.91E-1(1.6E-5) 9.91E-1(4.7E-6) 9.91E-1(9.8E-7) 9.89E-1(1.8E-4) ‡
DTLZ4 8 9.99E-1(8.3E-6) 9.99E-1(1.0E-5) 5.12E-1(1.9E-1) ‡ 7.88E-1(1.3E-1) ‡ 9.99E-1(2.7E-3) ‡ 9.99E-1(9.0E-4) 9.99E-1(6.4E-7) ‡ 9.99E-1(3.1E-5) ‡
10 1.00E+0(1.1E-5) 1.00E+0(7.5E-6) 4.77E-1(1.3E-1) ‡ 9.68E-1(1.4E-1) ‡ 1.00E+0(2.0E-5) ‡ 1.00E+0(1.2E-5) 1.00E+0(1.0E-5) ‡ 1.00E+0(2.5E-5) ‡
15 1.00E+0(1.5E-6) 1.00E+0(2.0E-6) 5.23E-1(1.9E-1) ‡ 5.00E-1(1.8E-10) ‡ 1.00E+0(1.8E-5) ‡ 1.00E+0(4.0E-6) ‡ 1.00E+0(2.2E-6)  1.00E+0(3.0E-6) ‡
3 5.23E-1(2.6E-3) 5.22E-1(4.3E-3) 4.24E-1(1.4E-2) ‡ 5.13E-1(7.4E-4) ‡ 5.08E-1(4.2E-3) ‡ 5.14E-1(4.0E-4) ‡ 4.90E-1(6.9E-3) ‡ 5.21E-1(2.9E-3) 
5 5.55E-1(7.0E-3) 5.52E-1(4.9E-3) 3.75E-1(2.7E-2) ‡ 2.97E-1(8.0E-3) ‡ 5.07E-1(5.2E-3) ‡ 2.95E-1(6.0E-3) ‡ 4.87E-1(5.4E-3) ‡ 5.45E-1(7.9E-3) ‡
MaF2 8 7.69E-1(1.0E-2) 7.69E-1(9.7E-3) 4.72E-1(3.0E-2) ‡ 9.44E-2(1.8E-4) ‡ 6.79E-1(3.4E-2) ‡ 4.96E-1(1.1E-2) ‡ 5.53E-1(1.9E-2) ‡ 7.26E-1(1.2E-2) ‡
10 9.95E-1(9.7E-3) 9.92E-1(1.3E-2) 7.31E-1(3.0E-2) ‡ 1.17E-1(3.6E-5) ‡ 8.81E-1(4.7E-2) ‡ 3.20E-1(4.0E-2) ‡ 6.84E-1(2.6E-2) ‡ 9.08E-1(1.6E-2) ‡
15 1.48E+0(3.1E-2) 1.48E+0(3.1E-2) 4.36E-1(5.2E-2) ‡ 1.55E-1(2.5E-2) ‡ 1.22E+0(3.6E-2) ‡ 3.88E-1(7.9E-2) ‡ 4.11E-1(3.2E-1) ‡ 1.26E+0(3.7E-2) ‡
“” indicates that the result is significantly outperformed by DDEA.
“†” indicates that the result is significantly outperformed by DDEA+NS.
“‡” indicates that the result is significantly outperformed by both DDEA and DDEA+NS.
The above symbols have the same meanings in other tables.
maintains its population distribution by the systematically-


generated reference vectors, and thus its solutions only take
several equivalent values on all the objectives. DDEA and
DDEA+NS, oppositely, cover all the objectives without losing
uniformity. VaEA also shows good performance in terms
of coverage. Nevertheless, none of its solutions except the
(a) DDEA (b) DDEA+NS (c) GDE3 extreme ones are located inside the range (0.7, 1) on all
objectives.

B. Comparisons on Problems with Irregular Pareto Fronts


The median and IQR results on problems with irregular PFs
in terms of HV are presented in Table II. DDEA+NS performs
(d) MOEA/D (e) NSGA-III (f) MOMBI-II the best among the eight algorithms. It obtains the best and the
second best results on 23 and 15 test instances, respectively.
DDEA is slightly worse than DDEA+NS but achieves good
performance on WFG1 to WFG3 and DTLZ7. VaEA and
MOMBI-II are also competitive with DDEA+NS and DDEA
on the majority of the test instances. Other competitors do not
perform well and are inferior to the above four algorithms.
(g) RVEA (h) VaEA
It is also observed that the results obtained on problems
Fig. 5. Final solution sets on the 15-objective WFG5, shown by parallel
coordinates. with irregular PFs are significantly different from those ob-
tained on problems with regular PFs. Doing poorly in some
problems with regular PFs, algorithms without pre-defined
reference vectors (i.e., DDEA, DDEA+NS, and VaEA) show
clear improvements over the other competitors. Contrarily, the
performance of MOEA/D, NSGA-III, and RVEA deteriorates
significantly, being inferior to both DDEA and DDEA+NS on
most test instances. This phenomenon is due to the fact that
(a) DDEA (b) DDEA+NS (c) GDE3 the PFs of these problems are disconnected, complicated, or
degenerate. Though the set of well-spread reference vectors
are used in MOEA/D, NSGA-III, and RVEA, some of these
reference vectors have no intersection with the PF. Therefore,
it won’t lead to a set of uniformly distributed solutions on the
PF. MOMBI-II does rely on a set of pre-defined reference
vectors but achieves good results especially on the CPFT
(d) MOEA/D (e) NSGA-III (f) MOMBI-II problems. The reason is that its environmental selection is
indicator-based rather than decomposition-based. It does not
directly maintain diversity by using the reference vectors.
However, the distances between the solutions and reference
vectors are implicitly considered in its indicator calculation.
Thus, its performance is still influenced by the PF shapes on
most test instances.
(g) RVEA (h) VaEA
The final solutions on the 10-objective DTLZ3−1 prob-
The final solutions on the 10-objective DTLZ3−1 problem, plotted in parallel coordinates, are provided in Fig. 7. MOEA/D, NSGA-III, MOMBI-II, and RVEA all converge to the PF region but fail to maintain population diversity. The other four competitors show clear advantages in preserving diversity. However, only DDEA and DDEA+NS are able to find the PF boundaries on all objectives.
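Figs. 5-7 visualize solution sets by parallel coordinates. As a reproduction aid, the sketch below shows one way to draw such a plot from an objective matrix with matplotlib; the data in the example are random placeholders rather than the solution sets reported here.

```python
# Minimal parallel-coordinates plot of a many-objective solution set.
import numpy as np
import matplotlib.pyplot as plt

def parallel_coordinates(objectives, ax=None):
    """Draw each row of `objectives` as one polyline across the m objective axes."""
    n, m = objectives.shape
    if ax is None:
        ax = plt.gca()
    xs = np.arange(1, m + 1)
    for row in objectives:
        ax.plot(xs, row, color="steelblue", alpha=0.4)
    ax.set_xticks(xs)
    ax.set_xlabel("Objective index")
    ax.set_ylabel("Objective value")
    return ax

if __name__ == "__main__":
    pts = np.random.dirichlet(np.ones(10), size=100)   # placeholder 10-objective set
    parallel_coordinates(pts)
    plt.tight_layout()
    plt.show()
```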
TABLE II
MEDIAN AND IQR OF HV RESULTS ON PROBLEMS WITH IRREGULAR PFS. THE BEST AND THE SECOND BEST RESULTS FOR EACH TEST INSTANCE ARE SHOWN WITH DARK AND LIGHT GRAY BACKGROUND, RESPECTIVELY.

Problem m DDEA DDEA+NS GDE3 MOEA/D NSGA-III MOMBI-II RVEA VaEA
3 9.67E-1(4.2E-4) 9.67E-1(5.3E-4) 9.60E-1(2.5E-3) ‡ 9.66E-1(6.2E-4) ‡ 9.67E-1(1.1E-4)  9.67E-1(1.5E-4)  9.67E-1(1.6E-4) 9.65E-1(5.9E-4) ‡
5 9.99E-1(1.5E-5) 9.99E-1(2.4E-5) 9.98E-1(1.7E-4) ‡ 9.98E-1(3.1E-3) ‡ 9.99E-1(3.9E-5) ‡ 9.99E-1(8.7E-6) ‡ 9.99E-1(6.3E-4) ‡ 9.99E-1(1.8E-4) ‡
WFG1 8 1.00E+0(1.7E-6) 1.00E+0(7.2E-7) 1.00E+0(2.1E-5) ‡ 9.95E-1(9.0E-2) ‡ 1.00E+0(1.1E-5) ‡ 1.00E+0(1.3E-4) ‡ 9.98E-1(1.8E-3) ‡ 1.00E+0(8.3E-5) ‡
10 1.00E+0(1.2E-8) 1.00E+0(1.0E-6) 1.00E+0(2.0E-6) ‡ 9.04E-1(3.7E-1) ‡ 1.00E+0(3.6E-6) ‡ 1.00E+0(1.1E-6) ‡ 9.98E-1(1.4E-3) ‡ 1.00E+0(7.9E-5) ‡
15 1.00E+0(8.2E-8) 1.00E+0(1.2E-7) 1.00E+0(0.0E+0) 4.81E-1(4.0E-1) ‡ 9.99E-1(1.7E-3) ‡ 1.00E+0(9.4E-4) ‡ 9.96E-1(5.1E-3) ‡ 1.00E+0(4.8E-5) ‡
3 8.03E-1(1.4E-1) 9.44E-1(1.4E-1) 8.67E-1(2.1E-2) 7.99E-1(1.7E-2) ‡ 8.04E-1(1.4E-1) 9.28E-1(1.3E-1) 9.17E-1(1.4E-1) 9.38E-1(7.5E-2)
5 9.93E-1(1.5E-3) 9.93E-1(2.9E-3) 8.38E-1(2.0E-2) ‡ 8.06E-1(2.2E-2) ‡ 9.87E-1(2.9E-3) ‡ 9.95E-1(1.7E-3) 9.84E-1(4.3E-3) ‡ 9.81E-1(2.4E-3) 
WFG2 8 9.95E-1(1.9E-3) 9.95E-1(2.2E-3) 7.63E-1(4.3E-2) ‡ 8.30E-2(5.1E-2) ‡ 9.84E-1(1.8E-1) ‡ 7.89E-1(1.9E-2) ‡ 7.97E-1(1.6E-1) ‡ 9.86E-1(5.7E-3) ‡
10 9.97E-1(7.6E-4) 9.98E-1(1.0E-3) 7.74E-1(2.9E-2) ‡ 2.07E-1(1.6E-1) ‡ 9.94E-1(2.5E-3) ‡ 7.90E-1(2.1E-2) ‡ 9.78E-1(1.8E-1) ‡ 9.91E-1(2.6E-3) ‡
15 9.96E-1(2.4E-3) 9.95E-1(3.5E-3) 7.45E-1(6.5E-2) ‡ 5.62E-2(5.1E-2) ‡ 9.86E-1(2.1E-2) ‡ 5.68E-1(2.1E-1) ‡ 7.90E-1(1.6E-1) ‡ 9.91E-1(7.0E-3) 
3 6.92E-1(5.2E-3) 6.98E-1(5.1E-3) 6.39E-1(1.3E-2) ‡ 7.04E-1(1.1E-2) 6.96E-1(6.0E-3) 7.06E-1(4.1E-3) 6.72E-1(5.9E-3) ‡ 6.92E-1(5.5E-3) †
5 6.53E-1(1.4E-2) 6.62E-1(1.0E-2) 5.85E-1(9.5E-3) ‡ 6.10E-1(1.8E-2) ‡ 6.66E-1(6.1E-3) 5.89E-1(1.8E-2) ‡ 6.37E-1(2.0E-2) ‡ 6.46E-1(1.2E-2) †
WFG3 8 6.38E-1(1.6E-2) 6.36E-1(2.4E-2) 5.59E-1(1.9E-2) ‡ 5.85E-2(1.7E-4) ‡ 6.19E-1(2.5E-2) ‡ 9.19E-2(2.3E-2) ‡ 2.80E-1(1.7E-2) ‡ 6.12E-1(2.5E-2) ‡
10 6.41E-1(1.8E-2) 6.49E-1(3.2E-2) 5.67E-1(2.0E-2) ‡ 4.74E-2(8.1E-5) ‡ 6.36E-1(3.7E-2) † 7.32E-2(1.3E-2) ‡ 2.56E-1(9.6E-3) ‡ 6.19E-1(1.7E-2) ‡
15 6.82E-1(1.7E-2) 6.84E-1(4.5E-2) 5.51E-1(2.7E-2) ‡ 3.20E-2(1.4E-4) ‡ 6.15E-1(1.9E-2) ‡ 3.80E-2(3.5E-4) ‡ 1.98E-1(8.4E-3) ‡ 5.78E-1(2.3E-2) ‡
3 4.09E-1(4.4E-3) 4.21E-1(1.3E-3) 3.63E-1(1.8E-2) ‡ 3.57E-1(6.6E-2) ‡ 4.13E-1(1.7E-3) † 4.19E-1(4.9E-4) † 4.00E-1(3.2E-3) ‡ 4.19E-1(2.1E-3) †
5 3.33E-1(4.6E-3) 3.38E-1(5.5E-3) 9.23E-2(5.5E-2) ‡ 2.79E-1(6.7E-2) ‡ 3.42E-1(5.6E-3) 3.42E-1(1.0E-3) 2.81E-1(7.3E-3) ‡ 3.22E-1(4.4E-3) ‡
DTLZ7 8 2.55E-1(4.1E-3) 2.55E-1(5.7E-3) 7.86E-8(3.7E-7) ‡ 9.09E-2(1.2E-2) ‡ 2.41E-1(5.2E-3) ‡ 2.12E-1(3.1E-2) ‡ 1.93E-1(6.1E-2) ‡ 1.97E-1(1.2E-2) ‡
10 2.30E-1(3.1E-3) 2.31E-1(4.1E-3) 4.21E-10(2.3E-8) ‡ 1.00E-1(3.7E-2) ‡ 2.14E-1(1.6E-2) ‡ 1.86E-1(2.2E-2) ‡ 1.58E-1(6.9E-2) ‡ 1.56E-1(9.9E-3) ‡
15 1.74E-1(6.6E-3) 1.72E-1(8.1E-3) 0.00E+0(2.4E-10) ‡ 1.03E-1(1.1E-1) ‡ 1.51E-1(2.3E-2) ‡ 1.29E-1(1.2E-3) ‡ 1.34E-1(9.3E-3) ‡ 1.04E-1(5.4E-3) ‡
3 2.20E-1(4.2E-4) 2.20E-1(2.7E-4) 1.51E-1(5.0E-3) ‡ 1.94E-1(5.1E-6) ‡ 1.56E-1(2.8E-2) ‡ 1.94E-1(1.3E-4) ‡ 1.85E-1(2.2E-3) ‡ 2.15E-1(1.4E-3) ‡
5 1.25E-2(2.6E-4) 1.25E-2(1.9E-4) 5.82E-3(2.3E-4) ‡ 5.44E-3(6.2E-5) ‡ 4.44E-3(2.2E-3) ‡ 5.38E-3(4.5E-6) ‡ 3.74E-3(4.4E-4) ‡ 1.17E-2(1.9E-4) ‡
DTLZ1−1 8 2.50E-5(9.6E-7) 2.50E-5(5.8E-7) 1.06E-5(6.4E-7) ‡ 1.87E-5(1.2E-6) ‡ 1.20E-5(6.4E-6) ‡ 1.68E-5(1.1E-6) ‡ 9.42E-7(4.4E-7) ‡ 2.19E-5(1.1E-6) ‡
10 0.00E+0(5.0E-7) 0.00E+0(1.0E-6) 0.00E+0(2.9E-7) 2.67E-7(1.2E-8) 2.52E-7(1.0E-7) 2.28E-7(1.7E-8) 1.31E-8(8.3E-9) 0.00E+0(1.0E-6)
15 0.00E+0(0.0E+0) 0.00E+0(0.0E+0) 0.00E+0(0.0E+0) 3.42E-14(2.7E-14) 3.51E-12(1.6E-12) 1.97E-12(4.0E-13) 2.92E-14(1.1E-14) 0.00E+0(0.0E+0)
3 5.29E-1(1.7E-3) 5.30E-1(2.7E-3) 5.12E-1(1.3E-2) ‡ 5.03E-1(8.8E-4) ‡ 4.54E-1(5.1E-2) ‡ 5.05E-1(1.9E-3) ‡ 5.03E-1(4.9E-3) ‡ 5.23E-1(2.0E-3) ‡
5 1.16E-1(1.5E-3) 1.16E-1(1.4E-3) 1.16E-1(5.6E-3) 3.60E-2(8.9E-4) ‡ 5.53E-2(1.2E-2) ‡ 3.52E-2(5.0E-4) ‡ 4.50E-2(3.0E-3) ‡ 1.14E-1(2.6E-3) ‡
DTLZ2−1 8 1.61E-3(1.0E-4) 1.61E-3(1.1E-4) 2.25E-3(5.4E-4) 1.98E-3(1.1E-4) 4.84E-4(2.3E-4) ‡ 2.03E-3(8.8E-5) 1.40E-3(2.0E-4) ‡ 1.93E-3(1.1E-4)
10 1.19E-4(2.0E-5) 1.19E-4(8.9E-6) 1.46E-4(4.3E-5) 4.12E-5(1.5E-5) ‡ 1.76E-5(1.7E-5) ‡ 1.64E-4(1.5E-5) 1.24E-4(1.7E-5) 1.35E-4(1.6E-5)
15 1.32E-8(4.7E-8) 2.99E-8(7.8E-8) 0.00E+0(0.0E+0) ‡ 1.82E-12(3.1E-12) † 2.84E-8(5.2E-8) 8.46E-8(2.0E-8) 1.03E-7(3.3E-8) 3.46E-8(1.0E-7)
3 5.30E-1(1.3E-3) 5.31E-1(1.8E-3) 3.40E-1(2.3E-2) ‡ 5.02E-1(6.4E-5) ‡ 4.57E-1(1.1E-1) ‡ 5.03E-1(7.1E-4) ‡ 5.04E-1(2.9E-3) ‡ 5.22E-1(2.6E-3) ‡
5 1.16E-1(2.2E-3) 1.16E-1(2.4E-3) 3.51E-2(3.0E-3) ‡ 3.52E-2(8.4E-4) ‡ 5.77E-2(9.7E-3) ‡ 3.49E-2(8.4E-5) ‡ 4.22E-2(3.5E-3) ‡ 1.10E-1(2.0E-3) ‡
DTLZ3−1 8 1.56E-3(3.0E-5) 1.58E-3(9.3E-5) 4.69E-4(7.3E-5) ‡ 1.84E-3(9.9E-5) 5.14E-4(2.0E-4) ‡ 1.87E-3(1.0E-4) 1.35E-3(1.6E-4) ‡ 1.86E-3(8.6E-5)
10 1.20E-4(1.6E-5) 1.26E-4(2.1E-5) 2.83E-5(4.4E-6) ‡ 5.17E-5(2.0E-5) ‡ 2.88E-5(1.6E-5) ‡ 1.55E-4(1.0E-5) 1.20E-4(1.3E-5) 1.25E-4(8.5E-6)
15 0.00E+0(4.4E-8) 9.24E-9(4.5E-8) 0.00E+0(0.0E+0) ‡ 2.03E-12(4.9E-12) 2.27E-8(1.4E-8) 7.06E-8(1.8E-8) 9.36E-8(2.7E-8) 6.41E-8(6.3E-8)
3 5.31E-1(1.5E-3) 5.31E-1(1.6E-3) 4.83E-1(1.9E-2) ‡ 5.02E-1(1.3E-4) ‡ 3.89E-1(2.4E-1) ‡ 5.02E-1(2.9E-4) ‡ 5.05E-1(1.9E-3) ‡ 5.23E-1(3.2E-3) ‡
5 1.17E-1(2.0E-3) 1.17E-1(1.8E-3) 9.44E-2(3.8E-3) ‡ 3.50E-2(2.3E-4) ‡ 5.80E-2(1.9E-2) ‡ 3.51E-2(4.0E-4) ‡ 4.06E-2(1.3E-3) ‡ 1.14E-1(2.0E-3) ‡
DTLZ4−1 8 1.57E-3(9.7E-5) 1.55E-3(6.3E-5) 6.14E-4(6.3E-4) ‡ 1.67E-3(3.0E-5) 1.10E-4(6.8E-5) ‡ 1.64E-3(6.6E-5) 2.78E-4(7.8E-4) ‡ 1.88E-3(1.5E-4)
10 1.24E-4(2.3E-5) 1.23E-4(1.4E-5) 9.00E-6(9.0E-6) ‡ 1.49E-4(4.2E-6) 3.24E-6(2.5E-6) ‡ 1.51E-4(4.7E-6) 5.62E-6(5.1E-6) ‡ 1.17E-4(1.8E-5)
15 0.00E+0(7.2E-8) 0.00E+0(0.0E+0) 0.00E+0(0.0E+0)  1.41E-12(6.7E-12) 9.54E-9(5.4E-8) 1.20E-7(2.7E-8) 3.06E-11(4.4E-11) 0.00E+0(9.9E-8)
3 5.55E-1(1.0E-3) 5.54E-1(2.1E-1) 5.00E-1(6.7E-3)  9.09E-2(2.5E-1) ‡ 5.60E-1(1.1E-1) 5.60E-1(4.1E-5) 5.60E-1(1.4E-5) 5.55E-1(2.3E-3)
5 8.01E-1(2.3E-3) 8.02E-1(2.1E-3) 5.29E-1(3.5E-2) ‡ 4.32E-1(2.2E-1) ‡ 8.12E-1(1.8E-4) 8.12E-1(2.5E-4) 8.12E-1(1.1E-4) 7.90E-1(4.3E-3) ‡
MaF5 8 9.28E-1(5.8E-4) 9.29E-1(6.6E-4) 0.00E+0(8.1E-3) ‡ 2.99E-1(2.1E-1) ‡ 9.24E-1(1.8E-4) ‡ 9.02E-1(2.8E-2) ‡ 8.99E-1(1.8E-2) ‡ 9.04E-1(5.7E-3) ‡
10 9.72E-1(7.2E-4) 9.73E-1(4.7E-4) 0.00E+0(4.4E-2) ‡ 4.92E-1(3.9E-1) ‡ 9.70E-1(2.8E-4) ‡ 9.73E-1(5.8E-3) 9.54E-1(2.9E-3) ‡ 9.38E-1(5.9E-3) ‡
15 9.91E-1(1.8E-4) 9.91E-1(1.1E-4) 0.00E+0(0.0E+0) ‡ 9.09E-2(1.1E-9) ‡ 9.91E-1(1.1E-4) ‡ 9.72E-1(9.0E-3) ‡ 9.23E-1(6.3E-2) ‡ 9.64E-1(1.6E-3) ‡
3 2.79E-1(2.1E-3) 2.82E-1(7.7E-4) 2.78E-1(1.3E-3) ‡ 2.64E-1(8.2E-4) ‡ 2.62E-1(4.2E-3) ‡ 2.65E-1(1.2E-3) ‡ 2.45E-1(3.5E-3) ‡ 2.79E-1(1.3E-3) †
5 1.30E-1(3.3E-4) 1.30E-1(2.2E-4) 1.21E-1(1.6E-3) ‡ 8.23E-2(2.6E-3) ‡ 1.09E-1(3.8E-3) ‡ 8.23E-2(3.2E-3) ‡ 8.03E-2(9.4E-3) ‡ 1.30E-1(3.4E-4) †
MaF8 8 3.23E-2(1.4E-4) 3.23E-2(1.9E-4) 2.76E-2(6.3E-4) ‡ 6.65E-3(4.4E-3) ‡ 2.58E-2(1.5E-3) ‡ 9.39E-3(1.9E-3) ‡ 1.31E-2(1.8E-3) ‡ 3.24E-2(8.1E-5)
10 1.17E-2(1.2E-4) 1.18E-2(2.1E-4) 9.96E-3(1.1E-4) ‡ 9.13E-4(8.1E-4) ‡ 9.75E-3(5.2E-4) ‡ 1.99E-3(9.0E-4) ‡ 4.44E-3(1.2E-3) ‡ 1.18E-2(6.9E-5)
15 6.18E-4(2.8E-5) 6.13E-4(3.2E-5) 4.16E-4(4.0E-5) ‡ 8.80E-7(2.4E-6) ‡ 3.66E-4(7.4E-5) ‡ 3.66E-6(2.5E-5) ‡ 1.57E-4(6.7E-5) ‡ 6.05E-4(3.4E-5)
3 8.31E-1(2.2E-3) 6.14E-1(1.7E-2) 7.94E-1(1.4E-2)  7.98E-1(1.5E-2)  8.40E-1(4.7E-4) 7.99E-1(1.5E-2)  8.39E-1(1.4E-2) 5.99E-1(5.2E-2) 
5 2.97E-1(2.9E-3) 2.37E-1(1.1E-1) 2.89E-1(1.5E-2)  1.90E-1(8.4E-3) ‡ 2.39E-1(7.0E-2)  1.89E-1(8.9E-3) ‡ 2.29E-1(2.5E-2)  1.64E-1(1.2E-1) 
MaF9 8 4.04E-2(1.3E-3) 4.34E-2(6.0E-3) 0.00E+0(0.0E+0) ‡ 1.76E-3(1.7E-3) ‡ 2.37E-2(1.1E-2) ‡ 7.67E-3(2.3E-3) ‡ 1.77E-2(4.1E-3) ‡ 4.25E-2(2.8E-3)
10 1.60E-2(3.5E-4) 1.73E-2(8.6E-4) 0.00E+0(0.0E+0) ‡ 0.00E+0(8.6E-4) ‡ 9.03E-3(2.2E-3) ‡ 2.07E-3(7.7E-4) ‡ 5.26E-3(3.0E-3) ‡ 1.70E-2(2.7E-3)
15 8.91E-4(5.1E-5) 1.10E-3(1.1E-4) 3.34E-5(4.5E-5) ‡ 0.00E+0(0.0E+0) ‡ 5.16E-4(5.7E-4) ‡ 0.00E+0(1.7E-5) ‡ 1.50E-4(1.1E-4) ‡ 7.12E-4(4.9E-4) ‡
CPFT1 3 8.59E-1(5.1E-3) 8.76E-1(3.9E-3) 8.51E-1(1.8E-2) ‡ 8.77E-1(1.0E-3) 8.74E-1(4.2E-3) 8.78E-1(6.2E-4) 8.63E-1(4.1E-3) † 8.76E-1(1.3E-3)
CPFT2 3 8.42E-1(9.8E-4) 8.43E-1(4.4E-4) 8.42E-1(2.4E-3) † 8.43E-1(2.9E-4) † 8.42E-1(1.3E-3) † 8.43E-1(4.0E-4) 8.40E-1(1.5E-3) ‡ 8.43E-1(9.8E-4) †
CPFT3 3 8.16E-1(6.8E-3) 8.39E-1(1.6E-3) 8.20E-1(1.1E-2) † 8.39E-1(3.2E-1) † 8.35E-1(5.5E-3) † 8.39E-1(9.9E-4) 8.19E-1(6.6E-3) † 8.39E-1(1.9E-3)
CPFT4 3 8.05E-1(5.5E-3) 8.29E-1(1.6E-3) 8.08E-1(1.6E-2) † 8.31E-1(1.9E-3) 8.25E-1(3.5E-3) † 8.31E-1(1.4E-3) 8.09E-1(7.2E-3) † 8.29E-1(3.3E-3)
CPFT5 3 7.45E-1(4.1E-3) 7.61E-1(4.4E-3) 7.36E-1(1.5E-2) ‡ 7.68E-1(1.2E-3) 5.65E-1(1.1E-1) ‡ 7.68E-1(2.2E-3) 7.33E-1(5.1E-3) ‡ 7.67E-1(2.3E-3)
CPFT6 3 8.03E-1(2.9E-3) 8.19E-1(2.9E-3) 7.91E-1(1.5E-2) ‡ 8.20E-1(3.7E-3) 8.05E-1(1.8E-2) † 8.24E-1(1.8E-3) 7.98E-1(7.4E-3) ‡ 8.19E-1(3.6E-3)
CPFT7 3 8.82E-1(2.3E-3) 8.90E-1(1.2E-3) 8.71E-1(1.0E-2) ‡ 8.89E-1(1.1E-3) † 8.81E-1(3.2E-3) ‡ 8.90E-1(9.5E-4) 8.77E-1(4.2E-3) ‡ 8.89E-1(1.9E-3) †
CPFT8 3 7.81E-1(4.7E-3) 7.91E-1(5.9E-3) 7.67E-1(1.8E-2) ‡ 7.97E-1(2.0E-3) 7.24E-1(9.9E-2) ‡ 7.99E-1(1.8E-3) 7.68E-1(9.2E-3) ‡ 7.91E-1(4.2E-3)
Crashworthiness Design 3 7.70E-1(1.9E-3) 7.74E-1(4.2E-3) 7.72E-1(3.7E-3) † 7.56E-1(1.4E-2) ‡ 7.68E-1(6.3E-3) † 7.59E-1(4.1E-3) ‡ 7.32E-1(1.8E-2) ‡ 7.74E-1(2.4E-3)
[Fig. 7. Final solution sets on the 10-objective DTLZ3−1, shown by parallel coordinates. Panels (a)-(h): DDEA, DDEA+NS, GDE3, MOEA/D, NSGA-III, MOMBI-II, RVEA, VaEA.]

[Fig. 8. Final solution sets on the 3-objective DTLZ1−1. Panels (a)-(h): DDEA, DDEA+NS, GDE3, MOEA/D, NSGA-III, MOMBI-II, RVEA, VaEA.]

[Fig. 9. Final solution sets on the 3-objective DTLZ7. Panels (a)-(h): DDEA, DDEA+NS, GDE3, MOEA/D, NSGA-III, MOMBI-II, RVEA, VaEA.]

[Fig. 10. Final solution sets on the Crashworthiness Design problem. Panels (a)-(h): DDEA, DDEA+NS, GDE3, MOEA/D, NSGA-III, MOMBI-II, RVEA, VaEA.]

To further investigate how sensitive the performance is to the irregularity of the PF shape, Fig. 8 plots the final solutions obtained on the 3-objective DTLZ1−1. The PF of this problem is a rotated hyperplane, which makes it different from its original version. GDE3 cannot find the PF boundary, and all its solutions are located in the middle region of the PF. MOEA/D, NSGA-III, and MOMBI-II also fail to preserve the population diversity, as most of their solutions are obtained on the boundary. The reason is that a large portion of the intersections between the reference vectors and the hyperplane lie outside the range of the PF; hence, the optimal solutions along these reference vectors are only found on the boundary. These results are consistent with the analysis in [39]. RVEA faces the same problem. However, in RVEA, each reference vector is strictly associated with no more than one solution, so the population diversity is maintained. This raises another serious problem: many reference vectors have no associated solutions, and thus the number of non-dominated solutions found decreases. Therefore, only 28 out of 91 solutions are obtained by RVEA. VaEA performs similarly to DDEA and DDEA+NS on this test instance.
Figs. 9 and 10 plot the final solutions on the 3-objective DTLZ7 problem and the Crashworthiness Design problem, respectively. For DTLZ7, MOMBI-II shows relatively poor uniformity, while MOEA/D covers only two out of the four disconnected parts of the PF. RVEA obtains significantly fewer non-dominated solutions. GDE3 encounters difficulties in convergence, as many of its solutions are distant from the PF. DDEA is better than the above competitors, but is still worse than DDEA+NS and NSGA-III in terms of
the number of non-dominated solutions. The solutions of VaEA are all non-dominated and uniformly distributed, but about 17 of them are outside the PF regions. In contrast, DDEA+NS and NSGA-III achieve good performance in terms of both convergence and uniformity. For Crashworthiness Design, the PF mainly consists of two disconnected curved surfaces, though its true shape is unknown. The obtained experimental results are very similar to those obtained on DTLZ7, whose PF is also disconnected. In addition, the PF of this problem seems to have some degenerate segments, which makes it more difficult than the above benchmark problems. Generally, no algorithm can cover the whole PF. GDE3 is the only one that finds both the degenerate and non-degenerate parts of the PF, which may be ascribed to the outstanding exploration ability of its DE operator in a low-dimensional objective space. However, its solutions are not uniformly distributed on the non-degenerate parts. The other algorithms, on the contrary, are more efficient in finding the non-degenerate parts of the PF. Specifically, most solutions of MOEA/D, MOMBI-II, and RVEA are located on the boundaries. The solutions of NSGA-III fail to spread uniformly over the left non-degenerate part of the PF. Again, DDEA yields slightly fewer non-dominated solutions.

The above experiments show that DDEA, DDEA+NS, and VaEA are more robust in coping with complicated problems. Besides, the non-dominated sorting increases the convergence performance of DDR, especially in handling disconnected PFs.
Note that using the HV metric alone to measure the solution quality may be inappropriate, especially on DTLZ1−1 to DTLZ4−1. The reason is that all algorithms produce very small HV values on these problems when m ≥ 10, and the results may be influenced by the errors of the Monte Carlo simulation used in the HV calculation [18]. This explains why MOMBI-II has good HV values in Table II but performs poorly as shown in Figs. 7 and 8. To address this issue, the ∆p metric is employed as a secondary performance indicator on these four problems. As shown in Table S-IV in the supplementary material, DDEA+NS performs the best, followed by DDEA. This is consistent with the observations from Figs. 7 and 8.
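To illustrate why Monte Carlo errors can dominate here, the sketch below estimates HV by uniform sampling inside the reference box, in the spirit of the approximation used in [18]; it is a simplified stand-in rather than the evaluation code of this study, and hv_monte_carlo is a name chosen for this example. When the true HV is many orders of magnitude smaller than the box volume, almost no samples fall into the dominated region, so the relative error of such an estimate becomes very large.

```python
# Simplified Monte Carlo hypervolume estimator (minimization assumed).
import numpy as np

def hv_monte_carlo(front, ref, n_samples=100_000, seed=None):
    """Estimate the volume of the region dominated by `front` within [0, ref]."""
    rng = np.random.default_rng(seed)
    front = np.asarray(front, dtype=float)
    ref = np.asarray(ref, dtype=float)
    samples = rng.uniform(0.0, ref, size=(n_samples, len(ref)))
    dominated = np.zeros(n_samples, dtype=bool)
    for sol in front:                                 # a sample is dominated if some
        dominated |= np.all(sol <= samples, axis=1)   # solution is <= it in every objective
    return dominated.mean() * np.prod(ref)

if __name__ == "__main__":
    front = np.array([[0.2, 0.8], [0.5, 0.5], [0.8, 0.2]])
    print(hv_monte_carlo(front, ref=[1.0, 1.0], n_samples=200_000, seed=1))
```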
C. Convergence Behaviors

In this subsection, the convergence curves of all comparative algorithms are shown in Fig. S-1 for problems with regular PFs and in Fig. S-2 for problems with irregular PFs (in the supplementary material). In these figures, the x-coordinate is the number of function evaluations, while the y-coordinate is the value of the HV metric. On almost all the problems, DDEA and DDEA+NS converge to their near-final values within about 10% of the total function evaluations. Their HV values increase continuously no matter what PF shapes the problems take. This observation demonstrates their high performance in terms of convergence speed. VaEA has similar curves but is slightly slower. RVEA converges much more slowly, but achieves better performance on some problems in the final stage. The behaviors of NSGA-III and MOMBI-II seem to be problem-dependent; for example, MOMBI-II converges quickly on WFG5 to WFG9 but diverges on problems with irregular PFs such as WFG3. GDE3 and MOEA/D generally have slower convergence speed and suffer from premature convergence on the majority of the test instances.
TABLE III
AVERAGE RANKS OF ALL COMPARATIVE ALGORITHMS BASED ON THE FRIEDMAN TEST. THE BEST AND THE SECOND BEST RESULTS ARE HIGHLIGHTED BY BOLD AND UNDERLINED.

Algorithm    Regular PF    Irregular PF
DDEA          46.89 (2)     55.20 (2)
DDEA+NS       43.27 (1)     45.71 (1)
GDE3         136.03 (8)    110.09 (8)
MOEA/D       127.82 (7)    106.34 (7)
NSGA-III      57.87 (4)     84.40 (5)
MOMBI-II      93.29 (6)     75.50 (4)
RVEA          51.81 (3)     99.89 (6)
VaEA          87.01 (5)     66.87 (3)

TABLE IV
SUMMARY OF THE WILCOXON SIGNED TEST ON ALL TEST INSTANCES. THE PAIRWISE WIN-LOSS-TIE COUNTS OF COLUMNS AGAINST ROWS ARE SHOWN IN THE TABLE.

              DDEA vs. (+/-/=)     DDEA+NS vs. (+/-/=)
DDEA                 / / /               24 / 7 / 88
DDEA+NS         7 / 24 / 88                    / / /
GDE3           108 / 4 / 7               107 / 5 / 7
MOEA/D         96 / 16 / 7              100 / 13 / 6
NSGA-III      74 / 22 / 23              81 / 16 / 22
MOMBI-II      76 / 32 / 11              75 / 29 / 15
RVEA          73 / 30 / 16              76 / 28 / 15
VaEA          82 / 16 / 21               85 / 7 / 27
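As a rough illustration of how average ranks such as those in Table III can be obtained, the sketch below ranks the algorithms instance by instance and averages the ranks, using SciPy's rankdata and friedmanchisquare; the HV matrix is toy data, and the exact ranking protocol behind the reported numbers may differ.

```python
# Toy example: Friedman-style average ranks from a matrix of HV values.
import numpy as np
from scipy.stats import rankdata, friedmanchisquare

algorithms = ["DDEA", "DDEA+NS", "GDE3", "MOEA/D"]
# hv[i, j]: median HV of algorithm j on test instance i (made-up numbers)
hv = np.array([
    [0.967, 0.967, 0.960, 0.966],
    [0.999, 0.999, 0.998, 0.998],
    [0.803, 0.944, 0.867, 0.799],
])

ranks = np.apply_along_axis(rankdata, 1, -hv)   # larger HV -> better -> smaller rank
avg_rank = ranks.mean(axis=0)
for name, r in zip(algorithms, avg_rank):
    print(f"{name:8s} average rank {r:.2f}")

stat, p = friedmanchisquare(*hv.T)              # columns are the compared algorithms
print("Friedman statistic:", stat, "p-value:", p)
```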
D. Overall Performance

Table III shows the ranking results of all comparative algorithms according to the Friedman test. On problems with regular PFs, DDEA and DDEA+NS are ranked second and first, respectively. They also obtain the second and the first scores in handling irregular PFs, respectively. Table IV summarizes the Wilcoxon signed test on all 119 test instances. DDEA and DDEA+NS are clearly superior to MOEA/D and GDE3: each of them outperforms MOEA/D or GDE3 on about 100 test instances. DDEA and DDEA+NS also show certain advantages over NSGA-III, MOMBI-II, RVEA, and VaEA, significantly surpassing them on more than 70 test instances. It is worth pointing out, however, that no algorithm can beat all the other algorithms on all problems, and some algorithms may be more suitable for solving certain problems. For example, RVEA and NSGA-III have advantages when the problem PFs are regular, since the pre-defined reference vectors are helpful in preserving the population diversity. In contrast, VaEA is more suitable for solving problems with irregular PFs.
V. CONCLUSION

This paper proposes a dynamical decomposition strategy for many-objective optimization. This approach decomposes the objective space into subregions dynamically, without using a set of pre-defined reference vectors. A solution ranking method named DDR and two MaOEAs including DDEA and
DDEA+NS are proposed. It is demonstrated by the experimental results that DDEA and DDEA+NS are quite competitive with the state-of-the-art algorithms on the majority of the test instances. Moreover, since no reference vectors are employed, the performance of DDEA and DDEA+NS is less dependent on the PF shapes and is robust especially in solving problems having irregular PFs.

In the future, we will further investigate the reference vector generation mechanism in the dynamical decomposition strategy. Identifying dominance resistant solutions may be a promising way to further improve the algorithm performance. It would also be interesting to investigate whether the ranking results of DDR are consistent with those obtained by indicator-based ranking methods [64]. Extending the current work to constrained or dynamic problems is another direction for future study.

REFERENCES

[1] K. Miettinen, Nonlinear Multiobjective Optimization, ser. International Series in Operations Research & Management Science, F. S. Hillier, Ed. Boston, MA: Springer US, 1998, vol. 12.
[2] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, 1st ed., ser. Wiley-Interscience Series in Systems and Optimization. Chichester; New York: John Wiley & Sons, 2001.
[3] J. D. Knowles and D. W. Corne, “Approximating the Nondominated Front Using the Pareto Archived Evolution Strategy,” Evolutionary Computation, vol. 8, no. 2, pp. 149–172, Jun. 2000.
[4] D. W. Corne, N. R. Jerram, J. D. Knowles, M. J. Oates et al., “PESA-II: Region-based selection in evolutionary multiobjective optimization,” in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO’2001), 2001.
[5] E. Zitzler, M. Laumanns, and L. Thiele, “SPEA2: Improving the strength Pareto evolutionary algorithm,” ETH Zurich, Tech. Rep. TIK-Report 103, 2001.
[6] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, Apr. 2002.
[7] B. Li, J. Li, K. Tang, and X. Yao, “Many-Objective Evolutionary Algorithms: A Survey,” ACM Computing Surveys, vol. 48, no. 1, pp. 1–35, Sep. 2015.
[8] R. C. Purshouse and P. J. Fleming, “On the Evolutionary Optimization of Many Conflicting Objectives,” IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 770–784, Dec. 2007.
[9] M. Laumanns, L. Thiele, K. Deb, and E. Zitzler, “Combining Convergence and Diversity in Evolutionary Multiobjective Optimization,” Evolutionary Computation, vol. 10, no. 3, pp. 263–282, Sep. 2002.
[10] A. G. Hernández-Díaz, L. V. Santana-Quintero, C. A. Coello Coello, and J. Molina, “Pareto-adaptive ε-dominance,” Evolutionary Computation, vol. 15, no. 4, pp. 493–517, Dec. 2007.
[11] S. Kukkonen and J. Lampinen, “Ranking-Dominance and Many-Objective Optimization,” in IEEE Congress on Evolutionary Computation, Sep. 2007, pp. 3983–3990.
[12] X. Zou, Y. Chen, M. Liu, and L. Kang, “A New Evolutionary Algorithm for Solving Many-Objective Optimization Problems,” IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 38, no. 5, pp. 1402–1412, Oct. 2008.
[13] M. Li, S. Yang, and X. Liu, “Shift-Based Density Estimation for Pareto-Based Algorithms in Many-Objective Optimization,” IEEE Transactions on Evolutionary Computation, vol. 18, no. 3, pp. 348–365, Jun. 2014.
[14] S. F. Adra and P. J. Fleming, “Diversity Management in Evolutionary Many-Objective Optimization,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 2, pp. 183–195, Apr. 2011.
[15] Y. Xiang, Y. Zhou, M. Li, and Z. Chen, “A Vector Angle-Based Evolutionary Algorithm for Unconstrained Many-Objective Optimization,” IEEE Transactions on Evolutionary Computation, vol. 21, no. 1, pp. 131–152, Feb. 2017.
[16] X. Zhang, Y. Tian, and Y. Jin, “A Knee Point-Driven Evolutionary Algorithm for Many-Objective Optimization,” IEEE Transactions on Evolutionary Computation, vol. 19, no. 6, pp. 761–776, Dec. 2015.
[17] L. M. S. Russo and A. P. Francisco, “Quick Hypervolume,” IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 481–502, Aug. 2014.
[18] J. Bader and E. Zitzler, “HypE: An Algorithm for Fast Hypervolume-Based Many-Objective Optimization,” Evolutionary Computation, vol. 19, no. 1, pp. 45–76, Mar. 2011.
[19] N. Beume, B. Naujoks, and M. Emmerich, “SMS-EMOA: Multiobjective selection based on dominated hypervolume,” European Journal of Operational Research, vol. 181, no. 3, pp. 1653–1669, Sep. 2007.
[20] K. Bringmann and T. Friedrich, “Approximating the volume of unions and intersections of high-dimensional geometric objects,” Computational Geometry, vol. 43, no. 6-7, pp. 601–610, Aug. 2010.
[21] M. Emmerich, N. Beume, and B. Naujoks, “An EMO Algorithm Using the Hypervolume Measure as Selection Criterion,” in Evolutionary Multi-Criterion Optimization. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005, vol. 3410, pp. 62–76.
[22] D. Brockhoff, T. Wagner, and H. Trautmann, “On the Properties of the R2 Indicator,” in Proceedings of the 14th Annual Conference on Genetic and Evolutionary Computation, ser. GECCO ’12. Philadelphia, Pennsylvania, USA: ACM, 2012, pp. 465–472.
[23] O. Schutze, X. Esquivel, A. Lara, and C. A. C. Coello, “Using the Averaged Hausdorff Distance as a Performance Measure in Evolutionary Multiobjective Optimization,” IEEE Transactions on Evolutionary Computation, vol. 16, no. 4, pp. 504–522, Aug. 2012.
[24] R. Hernández Gómez and C. A. Coello Coello, “Improved Metaheuristic Based on the R2 Indicator for Many-Objective Optimization,” in Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, ser. GECCO ’15. Madrid, Spain: ACM, 2015, pp. 679–686.
[25] C. A. Rodríguez Villalobos and C. A. Coello Coello, “A new multi-objective evolutionary algorithm based on a performance assessment indicator.” ACM Press, 2012, p. 505.
[26] M. Li, S. Yang, and X. Liu, “Bi-goal evolution for many-objective optimization problems,” Artificial Intelligence, vol. 228, pp. 45–65, Nov. 2015.
[27] B. Li, K. Tang, J. Li, and X. Yao, “Stochastic Ranking Algorithm for Many-Objective Optimization Based on Multiple Indicators,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 6, pp. 924–938, Dec. 2016.
[28] Y. Liu, D. Gong, J. Sun, and Y. Jin, “A Many-Objective Evolutionary Algorithm Using A One-by-One Selection Strategy,” IEEE Transactions on Cybernetics, pp. 1–14, 2017.
[29] Q. Zhang and H. Li, “MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition,” IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712–731, Dec. 2007.
[30] K. Deb and H. Jain, “An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point-Based Nondominated Sorting Approach, Part I: Solving Problems With Box Constraints,” IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 577–601, Aug. 2014.
[31] M. Asafuddoula, T. Ray, and R. Sarker, “A Decomposition-Based Evolutionary Algorithm for Many Objective Optimization,” IEEE Transactions on Evolutionary Computation, vol. 19, no. 3, pp. 445–460, Jun. 2015.
[32] Y. Yuan, H. Xu, B. Wang, B. Zhang, and X. Yao, “Balancing Convergence and Diversity in Decomposition-Based Many-Objective Optimizers,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 2, pp. 180–198, Apr. 2016.
[33] A. Trivedi, D. Srinivasan, K. Sanyal, and A. Ghosh, “A Survey of Multiobjective Evolutionary Algorithms based on Decomposition,” IEEE Transactions on Evolutionary Computation, vol. 21, no. 3, pp. 440–462, 2016.
[34] K. Li, K. Deb, Q. Zhang, and S. Kwong, “An Evolutionary Many-Objective Optimization Algorithm Based on Dominance and Decomposition,” IEEE Transactions on Evolutionary Computation, vol. 19, no. 5, pp. 694–716, Oct. 2015.
[35] R. Cheng, Y. Jin, M. Olhofer, and B. Sendhoff, “A Reference Vector Guided Evolutionary Algorithm for Many-Objective Optimization,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 5, pp. 773–791, Oct. 2016.
[36] Y. Yuan, H. Xu, B. Wang, and X. Yao, “A New Dominance Relation-Based Evolutionary Algorithm for Many-Objective Optimization,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 1, pp. 16–37, Feb. 2016.
[37] S. Jiang and S. Yang, “A Strength Pareto Evolutionary Algorithm Based on Reference Direction for Multiobjective and Many-Objective Optimization,” IEEE Transactions on Evolutionary Computation, vol. 21, no. 3, pp. 329–346, Jun. 2017.
[38] I. Das and J. E. Dennis, “Normal-Boundary Intersection: A New Method for Generating the Pareto Surface in Nonlinear Multicriteria Optimization Problems,” SIAM Journal on Optimization, vol. 8, no. 3, pp. 631–657, Aug. 1998.
[39] H. Ishibuchi, Y. Setoguchi, H. Masuda, and Y. Nojima, “Performance of Decomposition-Based Many-Objective Algorithms Strongly Depends on Pareto Front Shapes,” IEEE Transactions on Evolutionary Computation, vol. 21, no. 2, pp. 169–190, Apr. 2017.
[40] R. Saborido, A. B. Ruiz, and M. Luque, “Global WASF-GA: An Evolutionary Algorithm in Multiobjective Optimization to Approximate the Whole Pareto Optimal Front,” Evolutionary Computation, pp. 1–41, Feb. 2016.
[41] S. Jiang and S. Yang, “An Improved Multiobjective Optimization Evolutionary Algorithm Based on Decomposition for Complex Pareto Fronts,” IEEE Transactions on Cybernetics, vol. 46, no. 2, pp. 421–437, Feb. 2016.
[42] Z. Wang, Q. Zhang, H. Li, H. Ishibuchi, and L. Jiao, “On the use of two reference points in decomposition based multiobjective evolutionary algorithms,” Swarm and Evolutionary Computation, vol. 34, pp. 89–102, Jun. 2017.
[43] X. Cai, Z. Mei, and Z. Fan, “A Decomposition-Based Many-Objective Evolutionary Algorithm With Two Types of Adjustments for Direction Vectors,” IEEE Transactions on Cybernetics, pp. 1–14, 2017.
[44] S. Jiang, Z. Cai, J. Zhang, and Y.-S. Ong, “Multiobjective optimization by decomposition with Pareto-adaptive weight vectors,” in Seventh International Conference on Natural Computation, vol. 3. IEEE, Jul. 2011, pp. 1260–1264.
[45] H. Jain and K. Deb, “An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point Based Nondominated Sorting Approach, Part II: Handling Constraints and Extending to an Adaptive Approach,” IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 602–622, Aug. 2014.
[46] Y. Qi, X. Ma, F. Liu, L. Jiao, J. Sun, and J. Wu, “MOEA/D with Adaptive Weight Adjustment,” Evolutionary Computation, vol. 22, no. 2, pp. 231–264, Jun. 2014.
[47] M. Asafuddoula, H. K. Singh, and T. Ray, “An Enhanced Decomposition-Based Evolutionary Algorithm With Adaptive Reference Vectors,” IEEE Transactions on Cybernetics, pp. 1–14, 2017.
[48] H. L. Liu, L. Chen, Q. Zhang, and K. Deb, “Adaptively Allocating Search Effort in Challenging Many-Objective Optimization Problems,” IEEE Transactions on Evolutionary Computation, vol. PP, no. 99, pp. 1–1, 2017.
[49] C. A. R. Hoare, “Algorithm 65: Find,” Commun. ACM, vol. 4, no. 7, pp. 321–322, Jul. 1961.
[50] D. P. Mitchell, “Spectrally optimal sampling for distribution ray tracing.” ACM Press, 1991, pp. 157–164.
[51] H.-L. Liu, F. Gu, and Q. Zhang, “Decomposition of a Multiobjective Optimization Problem Into a Number of Simple Multiobjective Subproblems,” IEEE Transactions on Evolutionary Computation, vol. 18, no. 3, pp. 450–455, Jun. 2014.
[52] H. Ishibuchi, K. Doi, and Y. Nojima, “On the effect of normalization in MOEA/D for multi-objective and many-objective optimization,” Complex & Intelligent Systems, vol. 3, no. 4, pp. 279–294, Dec. 2017.
[53] H. K. Singh, A. Isaacs, and T. Ray, “A Pareto Corner Search Evolutionary Algorithm and Dimensionality Reduction in Many-Objective Optimization Problems,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 4, pp. 539–556, Aug. 2011.
[54] M. Jensen, “Reducing the Run-Time Complexity of Multiobjective EAs: The NSGA-II and Other Algorithms,” IEEE Transactions on Evolutionary Computation, vol. 7, no. 5, pp. 503–515, Oct. 2003.
[55] S. Kukkonen and J. Lampinen, “GDE3: The third Evolution Step of Generalized Differential Evolution,” in 2005 IEEE Congress on Evolutionary Computation, vol. 1, Sep. 2005, pp. 443–450.
[56] S. Huband, P. Hingston, L. Barone, and L. While, “A review of multiobjective test problems and a scalable test problem toolkit,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 5, pp. 477–506, Oct. 2006.
[57] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, “Scalable Test Problems for Evolutionary Multiobjective Optimization,” in Evolutionary Multiobjective Optimization, A. Abraham, L. Jain, and R. Goldberg, Eds. London: Springer-Verlag, 2005, pp. 105–145.
[58] R. Cheng, M. Li, Y. Tian, X. Zhang, S. Yang, Y. Jin, and X. Yao, “A benchmark test suite for evolutionary many-objective optimization,” Complex & Intelligent Systems, vol. 3, no. 1, pp. 67–81, Mar. 2017.
[59] H. Li, Q. Zhang, and J. Deng, “Multiobjective test problems with complicated Pareto fronts: Difficulties in degeneracy,” in Evolutionary Computation (CEC), 2014 IEEE Congress on. IEEE, Jul. 2014, pp. 2156–2163.
[60] E. Zitzler and L. Thiele, “Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach,” IEEE Transactions on Evolutionary Computation, vol. 3, no. 4, pp. 257–271, 1999.
[61] D. A. Van Veldhuizen, “Multiobjective Evolutionary Algorithms: Classifications, Analyses, and New Innovations,” Ph.D. dissertation, Air Force Institute of Technology, Wright Patterson AFB, OH, USA, 1999.
[62] Y. Tian, R. Cheng, X. Zhang, and Y. Jin, “PlatEMO: A MATLAB Platform for Evolutionary Multi-Objective Optimization [Educational Forum],” IEEE Computational Intelligence Magazine, vol. 12, no. 4, pp. 73–87, Nov. 2017.
[63] N. Kowatari, A. Oyama, H. E. Aguirre, and K. Tanaka, “A Study on Large Population MOEA Using Adaptive ε-Box Dominance and Neighborhood Recombination for Many-Objective Optimization,” in Learning and Intelligent Optimization. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012, vol. 7219, pp. 86–100.
[64] E. Zitzler and S. Künzli, “Indicator-Based Selection in Multiobjective Search,” in Parallel Problem Solving from Nature - PPSN VIII. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004, vol. 3242, pp. 832–842.

Xiaoyu He received the B.Eng. degree from Beijing Electronic Science and Technology Institute, Beijing, China, in 2010, and the M.P.A. degree from South China University of Technology, Guangzhou, China, in 2016. He is currently pursuing the Ph.D. degree with Sun Yat-sen University, Guangzhou. His current research interests include evolutionary computation and data mining.

Yuren Zhou received the B.Sc. degree in mathematics from Peking University, Beijing, China, in 1988, the M.Sc. degree in mathematics from Wuhan University, Wuhan, China, in 1991, and the Ph.D. degree in computer science from the same university in 2003. He is currently a professor with the School of Data and Computer Science, Sun Yat-sen University, Guangzhou, P. R. China. His current research interests include design and analysis of algorithms, evolutionary computation, and social networks.

Zefeng Chen received the B.Sc. degree in Information and Computational Science from Sun Yat-sen University in 2013, and the M.Sc. degree in Computer Science and Technology from South China University of Technology in 2016. He is currently a Ph.D. candidate at Sun Yat-sen University, Guangzhou, P. R. China. His current research interests include evolutionary computation, multi-objective optimization, data mining, and machine learning.

Qingfu Zhang (M’01-SM’06-F’17) received the B.Sc. degree in mathematics from Shanxi University, China, in 1984, and the M.Sc. degree in applied mathematics and the Ph.D. degree in information engineering from Xidian University, China, in 1991 and 1994, respectively. He is a Professor at the Department of Computer Science, City University of Hong Kong, Hong Kong. His main research interests include evolutionary computation, optimization, neural networks, data analysis, and their applications. He is currently leading the Metaheuristic Optimization Research (MOP) Group at City University of Hong Kong. Dr. Zhang is an Associate Editor of the IEEE Transactions on Evolutionary Computation and the IEEE Transactions on Systems, Man, and Cybernetics-Part B. He is also an Editorial Board Member of three other international journals. He is a Web of Science highly cited researcher in Computer Science.