
Applied Mathematics and Computation 219 (2013) 10967–10973


Solving nonlinear equations system via an efficient genetic algorithm with symmetric and harmonious individuals

Hongmin Ren a,*, Long Wu a, Weihong Bi b, Ioannis K. Argyros c

a College of Information and Engineering, Hangzhou Radio and TV University, Hangzhou 310012, Zhejiang, PR China
b Department of Mathematics, Zhejiang University, Hangzhou 310028, Zhejiang, PR China
c Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA

Article info

Keywords:
Genetic algorithms
Symmetric and harmonious individuals
Nonlinear equations system
Elitist model
Population diversity

Abstract
We present an efficient genetic algorithm as a general tool for solving optimum problems. As a specialized application, this algorithm can be used to approximate a solution of a system of nonlinear equations in an n-dimensional Euclidean space setting. The new idea involves the introduction of some pairs of symmetric and harmonious individuals into the generations of a genetic algorithm. The population diversity is maintained better this way. The elitist model is used to ensure the convergence. Simulation results show that the new method is indeed very effective.
© 2013 Elsevier Inc. All rights reserved.

1. Introduction

In this paper, we introduce genetic algorithms as a general tool for solving optimum problems. As a special case, we use these algorithms to find the solution of the system of nonlinear equations

f_1(x_1, x_2, ..., x_n) = 0,
f_2(x_1, x_2, ..., x_n) = 0,
  ...
f_n(x_1, x_2, ..., x_n) = 0,                                            (1.1)

where f = (f_1, f_2, ..., f_n): D = [a_1, b_1] × [a_2, b_2] × ··· × [a_n, b_n] ⊆ R^n → R^n is continuous.
Genetic algorithms (GA) were first introduced in the 1970s by John Holland at the University of Michigan [8]. Since then, a great deal of developments on GA have been obtained; see [5,6] and the references therein. GA were used as an adaptive machine learning approach in their early period of development, and they have been successfully applied in numerous areas such as artificial intelligence, self-adaptive control, systems engineering, image processing, combinatorial optimization and financial systems; they thus show very extensive application prospects. Genetic algorithms are search algorithms based on the mechanics of natural genetics.
In a genetic algorithm, a population of candidate solutions (called individuals or phenotypes) to an optimization problem is evolved towards better solutions. Each candidate solution has a set of properties (its chromosomes or genotype) which can be mutated and altered; traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible [5]. The evolution usually starts from a population of randomly generated individuals and proceeds in generations. In
* Corresponding author.
E-mail addresses: rhm65@126.com (H. Ren), wl@mail.hzrtvu.edu.cn (L. Wu), bivh@zju.edu.cn (W. Bi), iargyros@cameron.edu (I.K. Argyros).
0096-3003/$ - see front matter © 2013 Elsevier Inc. All rights reserved.
http://dx.doi.org/10.1016/j.amc.2013.04.041


each generation, the fitness of every individual in the population is evaluated, the more fit individuals are stochastically selected from the current population, and each individual's genome is modified (recombined and possibly randomly mutated) to form a new population. The new population is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced, or a satisfactory fitness level has been reached for the population.
When genetic algorithms are used to solve practical problems, premature convergence often appears, which limits the search performance of the genetic algorithm. The cause of this premature phenomenon is that individuals become highly similar after some generations, so the opportunity of generating new individuals by further genetic manipulation is reduced greatly. Many ideas have been proposed to avoid the premature phenomenon [6,15,16,18]. Guang and Yonglin [18] introduced two special individuals at each generation of the genetic algorithm to maintain population diversity in the problem of finding the minimal distance between surfaces, and the computational efficiency was improved. We introduced two other special individuals at each generation of the genetic algorithm for the same problem in [15], and the computational efficiency was further improved. Furthermore, we suggested putting some symmetric and harmonious individuals into each generation of a genetic algorithm applied to a general optimal problem in [16], and good computational efficiency was obtained. Some applications of our methods in reservoir mid-long term hydraulic power operation were given in [17].
Recently, many authors have used GA to solve nonlinear equations systems; see [2,9,10,12,14]. These works give the impression that GA are effective methods for solving nonlinear equations systems. However, efforts are still needed to solve nonlinear equations systems more effectively. In this paper, we present a new genetic algorithm to solve Eq. (1.1).
The paper is organized as follows: We convert the equation problem (1.1) to an optimal problem in Section 2. In Section 3 we present our new genetic algorithm with symmetric and harmonious individuals for the corresponding optimal problem. In Section 4 we give a mixed method combining our method with Newton's method, whereas in Section 5 we provide some numerical examples to show that our new methods are very effective. Some remarks and conclusions are given in the concluding Section 6.
2. Convert (1.1) to an optimal problem

Let us define the function F: D = [a_1, b_1] × [a_2, b_2] × ··· × [a_n, b_n] ⊆ R^n → R as follows

F(x_1, x_2, ..., x_n) = (1/n) Σ_{i=1}^{n} |f_i(x_1, x_2, ..., x_n)|.                    (2.1)

Then, we convert problem (1.1) to the following optimal problem

min F(x_1, x_2, ..., x_n)
s.t. (x_1, x_2, ..., x_n) ∈ D.                                          (2.2)

Suppose x* ∈ D is a solution of Eq. (1.1); then we have F(x*) = 0 from the definition (2.1) of F. Since F(x) ≥ 0 holds for all x ∈ D, we deduce that x* is a solution of problem (2.2). On the other hand, assume x* ∈ D is a solution of problem (2.2). Then for all x ∈ D, we have F(x) ≥ F(x*). Now suppose Eq. (1.1) has at least one solution, denoted y*. Then we have 0 ≤ F(x*) ≤ F(y*) = 0, that is, F(x*) = 0, and x* is a solution of Eq. (1.1). Hence, Eq. (1.1) is equivalent to problem (2.2) if Eq. (1.1) has at least one solution.
In what follows, we always suppose Eq. (1.1) has at least one solution, and try to find its solution by finding a solution of (2.2) via a genetic algorithm.
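The conversion above is straightforward to implement. The following is a minimal sketch (the helper name `make_objective` is ours, not the paper's) that builds the objective (2.1) from a list of component functions, using the two-equation system of Example 5.1 as an illustration:

```python
def make_objective(fs):
    """Build F(x) = (1/n) * sum_i |f_i(x)|, as in Eq. (2.1)."""
    def F(x):
        return sum(abs(f(x)) for f in fs) / len(fs)
    return F

# Illustration with the two-equation system of Example 5.1:
fs = [
    lambda x: x[0]**2 + x[1]**2 + x[0] + x[1] - 8.0,
    lambda x: x[0]*x[1] + x[0] + x[1] - 5.0,
]
F = make_objective(fs)
print(F((1.0, 2.0)))  # a root of the system gives F = 0, so this prints 0.0
```

Any point with F(x) = 0 is a root of the system, which is exactly the equivalence argued above.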
3. New genetic algorithm: SHEGA

Since the simple genetic algorithm (also called the standard genetic algorithm) is not very efficient in practical computation, many variants have been proposed [6]. In this paper, we give a new genetic algorithm. Our main idea is to put pairs of symmetric and harmonious individuals into the generations.
3.1. Coding

We use the binary code in our method, as used in the simple genetic algorithm [6]. The binary code is the most commonly used code in GA; it represents a candidate solution by a string of 0s and 1s. The length of the string is related to the degree of accuracy needed for the solution, and satisfies the following inequality

L_i ≥ log_2((b_i − a_i)/e_i),                                           (3.1)

where L_i is the length of the string standing for the i-th component of an individual, and e_i is the degree of accuracy needed for x_i. In fact, we should choose the minimal positive integer L_i satisfying (3.1). Usually, all e_i are equal to one another, so they can be denoted by e_x in this case. For example, let a_1 = 0, b_1 = 1 and e_x = 10^{-6}; then we can choose L_1 = 20, since


L_1 ≥ log_2(10^6) ≈ 19.93156857. The variable x_1 ∈ [a_1, b_1] is in the form of a real number; it can be represented by 20 digits of the binary code: 00000000000000000000 stands for 0, 00000000000000000001 stands for 1/(2^{L_1} − 1), 00000000000000000010 stands for 2/(2^{L_1} − 1), ..., 11111111111111111111 stands for 1.
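As a sketch of this coding scheme (our helper names; we assume, as the worked example above suggests, that a string of L bits with integer value k decodes to a + k(b − a)/(2^L − 1), so that all-zeros maps to a and all-ones maps to b):

```python
import math

def bit_length(a, b, eps):
    """Smallest positive integer L with L >= log2((b - a)/eps), per Eq. (3.1)."""
    return max(1, math.ceil(math.log2((b - a) / eps)))

def decode(bits, a, b):
    """Map a bit string to a real in [a, b]: all-zeros -> a, all-ones -> b."""
    k = int(bits, 2)
    return a + k * (b - a) / (2 ** len(bits) - 1)

print(bit_length(0.0, 1.0, 1e-6))  # prints 20, matching the example in the text
```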
3.2. Fitness function

Since the simple genetic algorithm is used to find a solution of a maximization problem, one should make some change to obtain the fitness function. In this study, we define the fitness function as follows

g(x_1, x_2, ..., x_n) = 1/(1 + F(x_1, x_2, ..., x_n)).                   (3.2)

This function g satisfies: (1) the fitness value is always positive, which is needed in the following genetic operators; (2) the fitness value is larger when the point (x_1, x_2, ..., x_n) is closer to a solution x* of problem (2.2).
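In code, the fitness transform (3.2) is a one-liner; a small sketch wrapping an objective F illustrates both properties:

```python
def fitness(F, x):
    """g(x) = 1/(1 + F(x)), Eq. (3.2): positive everywhere, and larger
    the closer x is to a minimizer of F (g = 1 exactly at a root)."""
    return 1.0 / (1.0 + F(x))

# With F(x) = |x|, points nearer the root x = 0 get higher fitness:
print(fitness(abs, 0.0), fitness(abs, 3.0))  # prints 1.0 0.25
```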
3.3. Selection

We use Roulette Wheel Selection (also called the proportional selection operator) in our method, as used in the simple genetic algorithm. The probability that an individual is selected is proportional to its fitness value. Suppose the size of the population is N and the fitness of individual i is g_i. Individual i is selected into the next generation with probability p_i given by

p_i = g_i / Σ_{i=1}^{N} g_i,   i = 1, 2, ..., N.                        (3.3)
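A standard way to implement (3.3) is to accumulate fitness values along a "wheel" and pick where a uniform random number lands (a sketch; the final return guards against floating-point round-off):

```python
import random

def roulette_select(fitnesses, rng=random):
    """Return an index i with probability p_i = g_i / sum_j g_j, per Eq. (3.3)."""
    total = sum(fitnesses)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for i, g in enumerate(fitnesses):
        acc += g
        if r <= acc:
            return i
    return len(fitnesses) - 1  # guard against round-off
```

Over many draws, an individual with three times the fitness of another is selected roughly three times as often.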

3.4. Symmetric and harmonious individuals

We first give the definition of symmetric and harmonious individuals.

Definition 3.1. Suppose individuals M_1 = (x_1, x_2, ..., x_n) and M_2 = (y_1, y_2, ..., y_n) can be represented in the binary code by M'_1 = x_{11}x_{12}...x_{1L_1} x_{21}x_{22}...x_{2L_2} ... x_{n1}x_{n2}...x_{nL_n} and M'_2 = y_{11}y_{12}...y_{1L_1} y_{21}y_{22}...y_{2L_2} ... y_{n1}y_{n2}...y_{nL_n}, respectively. They are called symmetric and harmonious individuals if and only if x_{ij} = 1 − y_{ij} holds for any i = 1, 2, ..., n and j = 1, 2, ..., L_i.

In Definition 3.1, individuals M'_1 and M'_2 are complements in the binary sense.
In order to avoid the premature phenomenon of the genetic algorithm, we introduce some pairs of symmetric and harmonious individuals into the generations. We do not use fixed symmetric and harmonious individuals as in [15,18]. Instead, we generate pairs of symmetric and harmonious individuals randomly. On one hand, these symmetric pairs of individuals continue to enrich the diversity of the population. On the other hand, they continue to explore the space even if they have not been selected to participate in the genetic manipulations of crossover or mutation.
Suppose the size of the population is N, and k ∈ (0, 0.5) is a parameter. Let ⌊r⌋ be the biggest integer equal to or less than r. We introduce ⌊k·N⌋ pairs of symmetric and harmonious individuals into the current generation provided that the difference between the best fitness value of the last generation and that of the current generation is less than a preset precision denoted by e_1. Here, we call k the symmetry and harmony factor.
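Generating one such pair is just a random bit string plus its bitwise complement; a sketch (our helper names) of the ⌊k·N⌋-pair injection:

```python
import math
import random

def complement(bits):
    """The symmetric-and-harmonious partner of Definition 3.1: x_ij -> 1 - x_ij."""
    return ''.join('1' if c == '0' else '0' for c in bits)

def sh_pairs(pop_size, total_len, k, rng=random):
    """floor(k*N) randomly generated symmetric-and-harmonious pairs."""
    pairs = []
    for _ in range(math.floor(k * pop_size)):
        m1 = ''.join(rng.choice('01') for _ in range(total_len))
        pairs.append((m1, complement(m1)))
    return pairs
```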
3.5. Crossover and mutation

We use the one-point crossover operator in our genetic algorithm, just as used in the simple genetic algorithm. That is, a single crossover point on both parent organism strings is selected. All data beyond that point in either organism string is swapped between the two parent organisms. The resulting organisms are the children. An example is shown as follows:

A : 10110111 | 001        A' : 10110111 | 110
B : 00011100 | 110        B' : 00011100 | 001

We use bit string mutation, just as used in the simple genetic algorithm. That is, the mutation of bit strings ensues through bit flips at random positions. The following example illustrates this:

A : 1010 1 0101010  =>  A' : 1010 0 0101010.
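Both operators are a few lines each on bit strings (a sketch with our helper names; the crossover reproduces the example above when the cut point happens to be 8):

```python
import random

def one_point_crossover(a, b, rng=random):
    """Swap everything beyond a randomly chosen cut point."""
    cut = rng.randint(1, len(a) - 1)
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def bit_flip_mutation(s, pm, rng=random):
    """Flip each bit independently with probability pm."""
    return ''.join(('1' if c == '0' else '0') if rng.random() < pm else c
                   for c in s)
```

With the cut fixed at position 8, parents 10110111001 and 00011100110 yield exactly the children 10110111110 and 00011100001 shown above.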


3.6. Elitist model

It is well known that the global convergence of the simple genetic algorithm cannot be assured [6]. In order to guarantee the global convergence of the genetic algorithm, we use an elitist model (or optimal preservation strategy) in this paper. That is, at each generation we reserve the individual with the maximum fitness value for the next generation.


3.7. Procedure

For the convenience of discussion, we call our genetic algorithm with symmetric and harmonious individuals SHEGA, and call the simple genetic algorithm with the elitist model EGA. The procedure of SHEGA is as follows:

Step 1. Assignment of the parameters of the genetic algorithm: the size N of the population, the number n of variables of (2.2), the lengths L_1, L_2, ..., L_n (computed from (3.1)) of the binary strings of the components of an individual, the symmetry and harmony factor k, the controlled precision e_1 of Section 3.4 for introducing the symmetric and harmonious individuals, the probability p_c of the crossover operator, the probability p_m of the mutation operator, and the largest genetic generation G.
Step 2. Generate the initial population randomly.
Step 3. Calculate the fitness value of each individual of the contemporary population, and reserve the optimal individual of the contemporary population for the next generation.
Step 4. If the distance between the best fitness value of the last generation and that of the current generation is less than the preset precision e_1, we generate N − 2·⌊k·N⌋ − 1 individuals using Roulette Wheel Selection and ⌊k·N⌋ pairs of symmetric and harmonious individuals randomly. Otherwise, we generate N − 1 individuals using Roulette Wheel Selection directly. The population is then divided into two parts: one is the seed subpopulation constituted by the symmetric and harmonious individuals, and the other is a subpopulation ready to be bred, constituted by the residual individuals.
Step 5. Apply the crossover operator between each individual in the seed subpopulation and one individual selected randomly from the other subpopulation. Apply the crossover operator pairwise with probability p_c and the mutation operator with probability p_m to the residual individuals in the subpopulation ready to be bred.
Step 6. Repeat Steps 3–5 until the maximum genetic generation G is reached.
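The steps above can be put together in a condensed, runnable sketch for a box [a, b]^n. This is our own simplification, not the authors' code: one common bit length is used for all components, and Step 5's seed-versus-breed crossover is collapsed into plain pairwise breeding.

```python
import math
import random

def shega(F, a, b, n, N=40, G=100, k=0.2, e1=1e-3, pc=0.9, pm=0.01,
          eps=1e-3, seed=0):
    """Minimize F over [a, b]^n with an elitist model and random pairs of
    symmetric-and-harmonious (complementary) individuals."""
    rng = random.Random(seed)
    L = max(1, math.ceil(math.log2((b - a) / eps)))      # Eq. (3.1)
    total = n * L

    def decode(bits):
        return tuple(a + int(bits[i*L:(i+1)*L], 2) * (b - a) / (2**L - 1)
                     for i in range(n))

    def g(bits):                                         # fitness, Eq. (3.2)
        return 1.0 / (1.0 + F(decode(bits)))

    def rand_ind():
        return ''.join(rng.choice('01') for _ in range(total))

    def roulette(pop, fits, m):                          # selection, Eq. (3.3)
        out, s = [], sum(fits)
        for _ in range(m):
            r, acc = rng.uniform(0.0, s), 0.0
            for ind, fit in zip(pop, fits):
                acc += fit
                if r <= acc:
                    out.append(ind)
                    break
            else:
                out.append(pop[-1])
        return out

    pop = [rand_ind() for _ in range(N)]
    best = max(pop, key=g)
    prev = g(best)
    for _ in range(G):
        fits = [g(ind) for ind in pop]
        best = max(pop + [best], key=g)                  # elitist model
        stalled = abs(g(best) - prev) < e1               # Step 4 test
        prev = g(best)
        seeds = []
        if stalled:                                      # inject SH pairs
            for _ in range(math.floor(k * N)):
                m1 = rand_ind()
                seeds += [m1, ''.join('1' if c == '0' else '0' for c in m1)]
        breed = roulette(pop, fits, N - 1 - len(seeds))
        rng.shuffle(breed)
        nxt = [best] + seeds
        for i in range(0, len(breed) - 1, 2):            # crossover + mutation
            x, y = breed[i], breed[i + 1]
            if rng.random() < pc:
                c = rng.randint(1, total - 1)
                x, y = x[:c] + y[c:], y[:c] + x[c:]
            for s in (x, y):
                nxt.append(''.join(('1' if ch == '0' else '0')
                                   if rng.random() < pm else ch for ch in s))
        if len(breed) % 2:
            nxt.append(breed[-1])
        pop = nxt[:N]
    x = decode(best)
    return x, F(x)
```

Thanks to the elitist bookkeeping, the best objective value returned never exceeds the best value found in the initial population.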
4. Mixed algorithm: SHEGA–Newton method

In order to further improve the efficiency of SHEGA, we can mix it with a classical iterative method such as Newton's method [3,13]

x^{k+1} = x^k − f'(x^k)^{-1} f(x^k),   k ≥ 0,   x^0 ∈ D.                (4.1)

Here, f'(x) denotes the Fréchet derivative of the function f.


Local as well as semilocal convergence results for Newton's method (4.1) under various assumptions have been given by many authors [1,3,4,7,11,13]. It is well known that Newton's method converges to the solution quadratically provided that certain necessary conditions are satisfied. However, there are two deficiencies that limit the application of Newton's method. First, the function f must be differentiable, which may not be satisfied in practical applications. Second, a good initial point for starting the iteration is key to ensuring the convergence of the iterative sequence, but it is a difficult task to choose such an initial point in advance. In fact, choosing good initial points to begin the corresponding iteration is a common issue for all the classical iterative methods used to solve Eq. (1.1) [3,13].
Here, we use Newton's method as an example. In fact, one can develop other methods by mixing SHEGA with other iterative methods. We can state the SHEGA–Newton method as follows:
Step 1. Given the maximal iterative step S and the precision accuracy e_y, set s = 1.
Step 2. Find an initial guess x^0 ∈ D by using SHEGA as given in Section 3.
Step 3. Compute f_i(x^s_1, x^s_2, ..., x^s_n), i = 1, 2, ..., n. If F(x^s_1, x^s_2, ..., x^s_n) ≤ e_y, report that the approximate solution x* = (x^s_1, x^s_2, ..., x^s_n) is found and exit from the loop, where F is defined in (2.1).
Step 4. Compute the Jacobian J_s = f'(x^s) and solve the linear equations

J_s u^s = −f(x^s).                                                      (4.2)

Set x^{s+1} = x^s + u^s.
Step 5. If s ≤ S, set s = s + 1 and go to Step 3. Otherwise, report that an approximate solution cannot be found.
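A hedged sketch of Steps 3–5 follows (our helper names; a forward-difference Jacobian stands in for f'(x), and the linear solve is plain Gaussian elimination — the paper prescribes neither choice):

```python
def solve(A, b):
    """Solve A u = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            m = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= m * M[c][j]
    u = [0.0] * n
    for r in range(n - 1, -1, -1):
        u[r] = (M[r][n] - sum(M[r][j] * u[j] for j in range(r + 1, n))) / M[r][r]
    return u

def newton_refine(f, x0, S=25, ey=1e-10, h=1e-7):
    """Steps 3-5: iterate x^{s+1} = x^s + u^s with J_s u^s = -f(x^s), Eq. (4.2).
    x0 would be the initial guess produced by SHEGA (Step 2)."""
    n = len(x0)
    x = list(x0)
    for _ in range(S):
        fx = f(x)
        if sum(abs(v) for v in fx) / n <= ey:      # F(x) <= e_y, cf. Eq. (2.1)
            return x
        J = [[0.0] * n for _ in range(n)]          # forward-difference Jacobian
        for j in range(n):
            xp = list(x)
            xp[j] += h
            fxp = f(xp)
            for i in range(n):
                J[i][j] = (fxp[i] - fx[i]) / h
        u = solve(J, [-v for v in fx])
        x = [xi + ui for xi, ui in zip(x, u)]
    return None                                    # no solution within S steps
```

Started from a rough SHEGA output such as (0.9, 2.1) for the system of Example 5.1, this refines the guess to the nearby root (1, 2) in a few steps.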
5. Numerical examples

In this section, we provide some examples to show the efficiency of our new method.

Example 5.1. Let f be defined in D = [−3.5, 2.5] × [−3.5, 2.5] by

f_1(x_1, x_2) = x_1^2 + x_2^2 + x_1 + x_2 − 8 = 0,
f_2(x_1, x_2) = x_1·x_2 + x_1 + x_2 − 5 = 0.                            (5.1)

Let us choose the parameters as follows:

N = 40,  p_c = 0.9,  p_m = 0.005,  e_x = 0.001,  e_1 = 0.001.           (5.2)

Since genetic algorithms are random algorithms, we run each method 30 times, and compare the number of convergent runs under various maximal generations G for EGA and SHEGA. The comparison results are given in Table 1. We also give the comparison results for the average of the best function value F under the fixed maximal generation G = 300 in Table 2. Here, we say the corresponding genetic algorithm is convergent if the function value F(x_1, x_2, ..., x_n) is less than a fixed precision e_y. We set e_y = 0.05 for this example. Tables 1 and 2 show that SHEGA with a proper symmetry and harmony factor k performs better than EGA.
Example 5.2. Let f be defined in D = [−5, 5] × [−1, 3] × [−5, 5] by

f_1(x_1, x_2, x_3) = 3x_1^2 + sin(x_1 x_2) − x_3^2 + 2 = 0,
f_2(x_1, x_2, x_3) = 2x_1^3 − x_2^2 − x_3 + 3 = 0,
f_3(x_1, x_2, x_3) = sin(2x_1) + cos(x_2 x_3) + x_2 − 1 = 0.            (5.3)

Let us choose the parameters as follows:

N = 50,  p_c = 0.8,  p_m = 0.05,  e_x = 0.0001,  e_1 = 0.001.           (5.4)

We run each method 20 times, and compare the number of convergent runs under various maximal generations G for EGA and SHEGA. The comparison results are given in Table 3. We also give the comparison results for the average of the best function value F under the fixed maximal generation G = 500 in Table 4. Here, we say the corresponding genetic algorithm is convergent if the function value F(x_1, x_2, ..., x_n) is less than a fixed precision e_y. We set e_y = 0.02 for this example. Tables 3 and 4 show that SHEGA with a proper symmetry and harmony factor k performs better than EGA.
Next, we provide an example where f is non-differentiable. Newton's method (4.1) and SHEGA–Newton cannot be used to solve this problem since f is non-differentiable. However, SHEGA can still be applied. Moreover, with the same idea used for the SHEGA–Newton method, one can develop mixed methods which do not require the differentiability of the function f to solve the problem.
Example 5.3. Let f be defined in D = [−2, 2] × [1, 6] by

f_1(x_1, x_2) = x_1^2 − x_2 + 1 + (1/9)|x_1 − 1| = 0,
f_2(x_1, x_2) = x_2^2 + x_1 − 7 + (1/9)|x_2| = 0.                       (5.5)
Table 1
The comparison results of convergence number of times for Example 5.1.

                   G = 50   G = 100   G = 150   G = 200   G = 250   G = 300
EGA                  11       11        11        11        12        12
SHEGA (k = 0.05)     13       15        18        20        20        21
SHEGA (k = 0.10)      8       20        23        26        29        29
SHEGA (k = 0.15)     16       21        27        28        28        29
SHEGA (k = 0.20)     16       26        29        30        30        30
SHEGA (k = 0.25)     20       28        29        29        30        30
SHEGA (k = 0.30)     22       28        30        30        30        30
SHEGA (k = 0.35)     17       27        29        30        30        30
SHEGA (k = 0.40)     17       29        30        30        30        30
SHEGA (k = 0.45)     19       30        30        30        30        30

Table 2
The comparison results of the average of the best function value F for Example 5.1 (G = 300).

EGA                 0.129795093953972
SHEGA (k = 0.05)    0.053101422547384
SHEGA (k = 0.10)    0.041427669194903
SHEGA (k = 0.15)    0.034877317449825
SHEGA (k = 0.20)    0.035701604675096
SHEGA (k = 0.25)    0.038051665705034
SHEGA (k = 0.30)    0.039332883168632
SHEGA (k = 0.35)    0.035780879206619
SHEGA (k = 0.40)    0.034509424138501
SHEGA (k = 0.45)    0.037425326021257


Table 3
The comparison results of convergence number of times for Example 5.2.

                   G = 100   G = 200   G = 300   G = 400   G = 500
EGA                   3         4         4         4         4
SHEGA (k = 0.05)      2         6         8         9        11
SHEGA (k = 0.10)      2         7        11        14        15
SHEGA (k = 0.15)      5        11        14        14        16
SHEGA (k = 0.20)      3        11        14        17        18
SHEGA (k = 0.25)      8        11        14        17        17
SHEGA (k = 0.30)      8        16        16        17        17
SHEGA (k = 0.35)      6        14        18        19        20
SHEGA (k = 0.40)      5        14        19        19        20
SHEGA (k = 0.45)      6        17        19        19        20

Table 4
The comparison results of the average of the best function value F for Example 5.2 (G = 500).

EGA                 0.1668487275323187
SHEGA (k = 0.05)    0.0752619855962109
SHEGA (k = 0.10)    0.0500864062405815
SHEGA (k = 0.15)    0.0358268275921585
SHEGA (k = 0.20)    0.0257859494269335
SHEGA (k = 0.25)    0.0239622084932336
SHEGA (k = 0.30)    0.0247106452514721
SHEGA (k = 0.35)    0.0171980128114993
SHEGA (k = 0.40)    0.0179659124369376
SHEGA (k = 0.45)    0.0158999282064303

Table 5
The comparison results of convergence number of times for Example 5.3.

                   G = 100   G = 150   G = 200   G = 250   G = 300
EGA                   6         9        10        10        10
SHEGA (k = 0.05)     10        15        16        17        19
SHEGA (k = 0.10)     12        15        17        18        20
SHEGA (k = 0.15)     15        18        19        20        20
SHEGA (k = 0.20)     15        20        20        20        20
SHEGA (k = 0.25)     11        14        17        17        18
SHEGA (k = 0.30)     10        14        16        17        19
SHEGA (k = 0.35)     16        17        18        19        20
SHEGA (k = 0.40)     10        12        18        18        20
SHEGA (k = 0.45)      9        12        15        15        17

Table 6
The comparison results of the average of the best function value F for Example 5.3 (G = 300).

EGA                 0.0081958187235953
SHEGA (k = 0.05)    0.0021219384775699
SHEGA (k = 0.10)    0.0019286950245614
SHEGA (k = 0.15)    0.0018367719544782
SHEGA (k = 0.20)    0.0022816080103967
SHEGA (k = 0.25)    0.0023297925904943
SHEGA (k = 0.30)    0.0023318357433983
SHEGA (k = 0.35)    0.0021392510790106
SHEGA (k = 0.40)    0.0022381534380744
SHEGA (k = 0.45)    0.0025798012930550

Let us choose the parameters as follows:

N = 40,  p_c = 0.85,  p_m = 0.008,  e_x = 0.001,  e_1 = 0.005.          (5.6)

We run each method 20 times, and compare the number of convergent runs under various maximal generations G for EGA and SHEGA. The comparison results are given in Table 5. We also give the comparison results for the average of the best function value F under the fixed maximal generation G = 300 in Table 6. Here, we say the corresponding genetic algorithm is convergent if the function value F(x_1, x_2, ..., x_n) is less than a fixed precision e_y. We set e_y = 0.003 for this example. Tables 5 and 6 show that SHEGA with a proper symmetry and harmony factor k performs better than EGA.

6. Conclusion

We presented a genetic algorithm as a general tool for solving optimum problems. Note that in the special case of approximating solutions of systems of nonlinear equations there are many deficiencies that limit the application of the usually employed methods. For example, in the case of Newton's method, the function f must be differentiable and a good initial point must be found. To avoid these problems, we have introduced some pairs of symmetric and harmonious individuals into the generations of a genetic algorithm. The population diversity is preserved this way and the method guarantees convergence to a solution of the system. Numerical examples illustrate the efficiency of the new algorithm.
Acknowledgement
This work was supported by National Natural Science Foundation of China (No. 10871178), Zhejiang Radio and Television
University Research Project (No. xkt-2005-40), and Hangzhou Polytechnic Important Research Project (No. KZYZD-2010-9).
References

[1] S. Amat, S. Busquier, M. Negra, Adaptive approximation of nonlinear operators, Numer. Funct. Anal. Optim. 25 (2004) 397–405.
[2] A.F. Kuri-Morales, Solution of simultaneous non-linear equations using genetic algorithms, WSEAS Trans. Systems 2 (2003) 44–51.
[3] I.K. Argyros, Computational Theory of Iterative Methods, in: C.K. Chui, L. Wuytack (Eds.), Studies in Computational Mathematics 15, Elsevier Publ. Co., New York, USA, 2007.
[4] J.A. Ezquerro, M.A. Hernández, On an application of Newton's method to nonlinear equations with ω-conditioned second derivative, BIT 42 (2002) 519–532.
[5] D.B. Fogel, Evolutionary Computation: Toward a New Philosophy of Machine Intelligence, IEEE Press, New York, 2000.
[6] D.E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, second ed., Addison-Wesley, 1989.
[7] J.M. Gutiérrez, A new semilocal convergence theorem for Newton's method, J. Comput. Appl. Math. 79 (1997) 131–145.
[8] J.H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, 1975.
[9] I.M.M. El-Emary, M.M. Abd El-Kareem, Towards using genetic algorithm for solving nonlinear equation systems, World Appl. Sci. J. 5 (2008) 282–289.
[10] G.N. Nasira, D. Sharmila Devi, Solving nonlinear equations through Jacobian sparsity pattern using genetic algorithm, Int. J. Commun. Eng. 5 (2012) 78–82.
[11] B. Neta, A new iterative method for the solution of systems of nonlinear equations, in: Z. Ziegler (Ed.), Approximation Theory and Applications, Academic Press, 1981, pp. 249–263.
[12] N.E. Mastorakis, Solving non-linear equations via genetic algorithms, in: Proceedings of the 6th WSEAS International Conference on Evolutionary Computing, Lisbon, Portugal, June 16–18, 2005, pp. 24–28.
[13] J.M. Ortega, W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
[14] P.S. Mhetre, Genetic algorithm for linear and nonlinear equation, Int. J. Adv. Eng. Tech. 3 (2012) 114–118.
[15] Ren Hongmin, Bi Weihong, Wu Qingbiao, Calculation of minimum distance between free-form surfaces by a type of new improved genetic algorithm, Comput. Eng. Appl. 23 (2004) 62–64 (in Chinese).
[16] Ren Hongmin, Wu Qingbiao, Bi Weihong, A genetic algorithm with symmetric and harmonious individuals, Comput. Eng. Appl. 5 (2005) 24–26, 87 (in Chinese).
[17] Wan Xing, Zhou Jianzhong, Application of genetic algorithm for self-adaption, symmetry and congruity in reservoir mid-long term hydraulic power operation, Adv. Water Sci. 18 (2007) 598–603 (in Chinese).
[18] Xi Guang, Cai Yonglin, Finding the minimized distance between surfaces, J. CAD Graphics 14 (2002) 209–213 (in Chinese).
