Professional Documents
Culture Documents
Abstract: Aggregate blending consists of finding the proportions of fractions to form a final blend satisfying predefined specifications. It is a problem which is posed in many ways, and solved by using different techniques. These techniques range from simple graphical methods to advanced computer methods such as nonlinear programming or dynamic programming. In this article, an aggregate-blending problem is formulated as a multiobjective optimization problem and solved by using genetic algorithms (GAs). It is shown that in this way all existing formulations of an aggregate-blending problem can be covered and solved. The effectiveness of this new application is demonstrated through numerical examples. The technique is shown to be quite versatile in tackling multiple objectives, including cost minimization and approaching at best a given target curve. Linear and nonlinear cost functions can be handled with equal ease; additional objectives may be inserted into the problem with no difficulty. The user has the possibility of defining and finding the best solutions with Pareto optimality considerations.

1 INTRODUCTION

Problems associated with aggregate blending are very common in the construction industry. Mixing aggregate fractions is necessary when making concrete, mortar, asphalt concrete, and any soil recomposition, and when constructing granular bases and sub-bases. In fact, the problem can easily be generalized to any blending problem that can be encountered in the food, chemical, pharmaceutical, and petrochemical industries and the like.

Before the common use of computers, trial-and-error (TE) type and graphical methods were used extensively for solving aggregate-blending problems. These methods were all characterized by choosing a subset of sieve sizes to make the calculations easier. In the TE type calculations, an iterative procedure was followed for obtaining an acceptable solution. In the graphical methods, some triangular and rectangular charts were designed where each side corresponded to one sieve size. The solution was highly dependent on the sizes chosen and also on the experience of the engineer. These methods were effective for at most three or four fractions and for the only objective of finding a mix within the prescribed limits. For a higher number of fractions, other graphical methods were proposed where straight lines approximated the grading curves. Analytical methods, which consisted of solving a system of equations of a number equal to the number of fractions considered, were also being used.

Then came more sophisticated methods, which were adapted for computer applications. With the advances in computer technologies and using the advantages of these methods, more and more complex blending problems have been solved, such as multiobjective or chance-constrained problems or problems with nonlinear constraints. A summary of these methods with a comprehensive literature analysis is given by Toklu (2002b). The common disadvantage of these methods is that all of them are especially designed for the formulation considered. If a problem has different properties, then the method should be changed accordingly.

In the present study, the problem is formulated as a multiobjective optimization problem. It has been shown that this formulation is capable of covering all formulations studied before and actually can be considered as the most general approach. The problem is then solved by using a metaheuristic method, namely, a genetic algorithm (GA). Certain aspects are checked by applying a combinatory approach, scanning the range of all feasible solutions using a step size sufficiently small. Optimality

To whom correspondence should be addressed. E-mail: yct001@yahoo.com.
© 2005 Computer-Aided Civil and Infrastructure Engineering. Published by Blackwell Publishing, 350 Main Street, Malden, MA 02148, USA, and 9600 Garsington Road, Oxford OX4 2DQ, UK.
Aggregate blending using genetic algorithms 451
x_i ≥ 0,  i = 1, 2, . . . , m    (2a)

Σ_{i=1}^{m} x_i = 1    (2b)

In general, the required grading is given by two curves: one specifying the upper limit and the other the lower limit. The upper-limit sieve passing percentages are characterized by r{r_i, i = 1, 2, . . . , n}, and the lower-limit sieve passing percentages are characterized by s{s_i, i = 1, 2, . . . , n}, along with the conditions r_i ≥ s_i, i = 1, 2, . . . , n, so that the final gradation will remain within these limits:

s_i ≤ p_i ≤ r_i,  i = 1, 2, . . . , n    (3)

Thus, the problem can be stated as finding an m-dimensional vector x such that m nonnegativity conditions (Equation (2a)), 2n inequalities (Equation (3)), and one equality constraint (Equation (2b)) (thus totaling m + 2n + 1 constraints) will be satisfied. In some problems, more constraints are added to the ones cited above, such as conditions on the fineness modulus or the plasticity index (Easa and Can, 1985). The additional constraints coming from these conditions are also linear, so they do not change the general character of the problem. Their sole effect would be an increase in the number of constraints.

C_j = c_j x_j    (5a)

where c_j is the cost of a unit amount of fraction j. Then, an objective such as "minimize C" can be inserted into the problem to make it an optimization problem.

2. Closeness to a target curve: As stated above, the usual grading constraints are given as an upper limit and a lower limit, imposing the condition that the final blend should be within the envelope defined by the limits. Actually, any solution found near the limits, though satisfying them, is liable to be violated in the next sampling, due to the probabilistic nature of the problem. Second, if an envelope is given, it is always better to be away from the limits to achieve a qualitatively better result, because these limits mark the start of regions with unacceptable results. Thus, one may impose an objective that the final blend be as far as possible from the limits, though within the given envelope. This argumentation results in defining a target gradation which is the median of the envelope, with a vector q:

q{q_i = (1/2)(r_i + s_i), i = 1, 2, . . . , n}    (6)

If the target is chosen as the median of the envelope, it will be equidistant from the upper and lower bounds, thus not favoring either of them.
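The relations above translate into a few lines of code. The sketch below is illustrative (the names `G`, `blend_gradation`, `is_feasible`, and `median_target` are not from the paper): `G[i][j]` holds the percentage of fraction j passing sieve i, in the spirit of the fraction gradations Gi1-Gi4 of Figure 1, so the blend gradation is p_i = Σ_j G[i][j] x_j.

```python
def blend_gradation(G, x):
    """Passing percentage of the blend at every sieve: p_i = sum_j G[i][j]*x[j]."""
    return [sum(g_ij * x_j for g_ij, x_j in zip(row, x)) for row in G]

def is_feasible(x, p, s, r, tol=1e-9):
    """Check Equations (2a), (2b), and (3) for a candidate proportion vector."""
    nonneg = all(x_i >= -tol for x_i in x)                    # (2a)
    sums_to_one = abs(sum(x) - 1.0) <= tol                    # (2b)
    in_envelope = all(s_i - tol <= p_i <= r_i + tol
                      for p_i, s_i, r_i in zip(p, s, r))      # (3)
    return nonneg and sums_to_one and in_envelope

def median_target(r, s):
    """Target gradation of Equation (6): q_i = (r_i + s_i) / 2."""
    return [(r_i + s_i) / 2.0 for r_i, s_i in zip(r, s)]
```

With two sieves and two fractions (invented numbers), a 50/50 blend of a coarse and a fine fraction is checked against an envelope in exactly this way.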
In the literature, there are other curves that may be chosen as the target, such as Fuller's ideal grading curve, based upon considerations of obtaining a mix with minimum voids (Neville, 2002). In these curves, the cumulative passing percentage at a sieve is equal to the normalized sieve size raised to the power of a number in the order of 0.45 or 0.50. Here, the normalization is obtained by dividing the sieve size by the largest sieve size used for the aggregate at hand. If such a target curve is used, attention should be paid to its concordance with the other constraints of the grading.

In any case, the corresponding objective is to make p as close as possible to q, the target, or, in other words, to make the length of the vector p − q as small as possible. So, the objective becomes the minimization of δ = ‖p − q‖, where δ is a measure of the distance between the vectors p and q and may be calculated in a variety of ways.

3 SOLUTION OF AGGREGATE-BLENDING PROBLEM USING THE GENETIC ALGORITHMS

3.1 Overview of the genetic algorithms

GAs are optimization techniques based on nature's practice of evolution and survival of the fittest and were developed by Holland (1975). They belong to the class of stochastic search methods such as simulated annealing, ant colony optimization, and tabu search. The main characteristic of GAs is that a population of solutions to the problem at hand is taken as an initial generation, and an iterative process that involves operators of genetic origin, such as reproduction, crossover, and mutation, is applied with the aim of obtaining better generations until sufficiency conditions are satisfied (Goldberg, 1989; Haupt and Haupt, 1998).

The successive generations are formed by individuals, chromosomes (Toklu, 2002a). Although the main principles are the same, the definitions of genetic operators and their relative importance vary from application to application (Haupt and Haupt, 1998).

A typical pseudocode of a simple application of GA will look like the following:

    Begin GA
      generation counter := 0
      Initialize Population P(generation counter)
      Evaluate Population P(generation counter)
        {compute fitness values by using objective function}
      While Not Done Do
        generation counter := generation counter + 1
        Select P(generation counter) from P(generation counter − 1)
        Crossover P(generation counter)
        Reproduce P(generation counter)
        Mutate P(generation counter)
        Evaluate P(generation counter)
      End While
    End GA

In the present study, GAs are applied to the aggregate-blending problem with appropriate definitions. The following remarks point out the important points or differences with classical applications.

3.2 Chromosome structure

In the present application, chromosomes are chosen to represent the solution vector x. They are formed by genes, the ith gene in a chromosome representing the proportion of fraction i in a blend. The first m − 1 elements of vector x are generated randomly in the interval [0, 1], the mth one being calculated using Equation (2b):

x_m = 1 − Σ_{i=1}^{m−1} x_i
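The pseudocode and the chromosome scheme can be combined into a minimal GA skeleton. This is a sketch, not the paper's code: selection here is plain truncation, the reproduction proportion and mutation probability follow the example values of Section 3.5 (0.30 and 0.01), offspring are renormalized to satisfy Equation (2b), and the rescue branch for a negative last gene in the initializer is an added assumption.

```python
import random

def init_chromosome(m):
    """First m-1 genes random in [0, 1]; the last closes Equation (2b).
    If the remainder would be negative, rescale to sum to 1 (assumption)."""
    x = [random.random() for _ in range(m - 1)]
    last = 1.0 - sum(x)
    if last >= 0.0:
        return x + [last]
    x.append(0.0)
    total = sum(x)
    return [g / total for g in x]

def run_ga(fitness, m, pop_size=40, generations=100,
           reproduction=0.30, mutation_prob=0.01):
    """Minimize `fitness` over proportion vectors of length m (m >= 2)."""
    pop = [init_chromosome(m) for _ in range(pop_size)]
    n_keep = max(2, int(reproduction * pop_size))
    for _ in range(generations):
        pop.sort(key=fitness)
        nxt = [list(c) for c in pop[:n_keep]]              # reproduction
        while len(nxt) < pop_size:                         # crossover
            a, b = random.sample(pop[:max(2, pop_size // 2)], 2)
            cut = random.randrange(1, m)
            child = a[:cut] + b[cut:]
            total = sum(child)
            if total > 0.0:                                # guard degenerate child
                nxt.append([g / total for g in child])
        for c in nxt:                                      # mutation, then renormalize
            for i in range(m):
                if random.random() < mutation_prob:
                    c[i] = random.random()
            total = sum(c)
            if total > 0.0:
                for i in range(m):
                    c[i] /= total
        pop = nxt
    pop.sort(key=fitness)
    return pop[0]
```

Called with a fitness such as the distance from a target vector, the returned chromosome is nonnegative and sums to one by construction.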
that the crossover operator will be applied by section between the first and second genes. The resulting offspring will be {0.15 0.42 0.11 0.14} and {0.33 0.00 0.63 0.22}, with the summations of the elements as 0.82 and 1.18, respectively. In the present application, a normalization process is applied here to satisfy the unity condition. Hence, the elements of the offspring are multiplied by their respective normalization factors 1.00/0.82 and 1.00/1.18 to yield the normalized crossover offspring as {0.183 0.512 0.134 0.171} and {0.280 0.000 0.534 0.186}.

3.4 Mutation operator

For mutations, a modified definition is also applied to satisfy the conditions given in Equation (2). Consider that the first chromosome above, namely {0.15 0.00 0.63 0.22}, is randomly chosen to be subject to mutation, and that the fourth gene is randomly assigned to take the randomly determined value 0.47. With these considerations, the chromosome becomes {0.15 0.00 0.63 0.47}, with the summation of elements equal to 1.25 instead of 1.00. Applying the normalization procedure defined in the paragraph above, the elements of the mutant will be multiplied by 1.00/1.25 to yield the normalized mutant as {0.120 0.000 0.504 0.376}.

3.5 Proportions of genetic operators

Three operators are used in the formulation: crossover, mutation, and reproduction. The first two operators are defined in a modified way as described above. The reproduction is applied in such a way that a certain percentage of individuals are chosen to be passed to the next generation without being subject to the applications of the crossover operator. For example, for an application where the number of individuals is 40 and the proportions are given such that the reproduction proportion is 0.30 and the mutation probability is 0.01, the next generation will be found as follows: the 12 best individuals will be chosen as they are, and 28 individuals will be obtained through an application of the crossover operator. Then, the mutation operator will be applied to all these offspring with 1/100 probability to form the next generation.

3.6 Feasible solutions—satisfaction of constraints

Dealing with constraints is one of the most critical points in the process. In creating individuals for the initial generation or in producing new individuals for the next generation, one approach is to design a procedure such that only feasible solutions, that is, individuals satisfying all the constraints, will enter the generation. This can be obtained either by introducing a routine which solves the set of equations and inequalities that define the constraints and inputting them to the GA system, or by creating random individuals and rejecting the unfeasible ones. Both of these approaches may have very important disadvantages. For the first approach, it may be difficult or even impossible to find the solution set for the constraints, especially when dealing with nonlinear equations and inequalities. For the second approach, it is possible that the probability of randomly finding feasible solutions may be very low, which makes this part of the procedure highly time-consuming, or even never ending.

Therefore, in this study, a more natural approach is adopted (Toklu, 2002a). Individuals are accepted to the initial or subsequent generations without checking whether they satisfy all the constraints. The procedure is designed in such a manner that, as can be seen in the above paragraphs, the individuals created from the start or generated through crossover or mutation operators satisfy only Equation (2), but they do not necessarily satisfy the constraints given in Equation (3). The elimination of the infeasible individuals is then left to the natural-selection aspect of the GAs through assigning penalties to the unfeasible ones, evaluated in the objective function.

3.7 Objective function

To compare and evaluate the fitness values of chromosomes, a proper objective function has to be defined. For a multiobjective problem like this one, this can be achieved in a number of ways (Ehrgott, 2000; Triantaphyllou, 2000). In this application, an appropriate choice would be to use the weighted sum model and to combine the three objectives into one, as

φ(x) = μ_δ δ(x) + μ_C C(x) + μ_π π(x)    (7)

where φ(x) is the objective function to be minimized. In this equation, δ(x) is the distance of p(x) from the target curve q; C(x) is the cost of the solution x; π(x) is the penalty function for unsatisfied constraints; and μ_δ, μ_C, and μ_π are nonnegative factors for arranging the existence and relative importance of the terms in the objective function.

The distance δ in Equation (7) between the vectors p and q can be calculated in one of the following ways:

δ_1 = ‖p − q‖_1 = Σ_{i=1}^{n} |p_i − q_i|

δ_2 = ‖p − q‖_2 = (Σ_{i=1}^{n} |p_i − q_i|²)^{1/2}    (8)

δ_∞ = ‖p − q‖_∞ = max{|p_i − q_i|, i = 1, . . . , n}
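The normalized crossover and mutation of Sections 3.3 and 3.4 can be sketched directly, reproducing the worked example above. The second parent {0.33 0.42 0.11 0.14} is inferred from the two offspring, and the function names are illustrative.

```python
def normalize(chromosome):
    """Rescale the genes so that they satisfy the unity condition (2b)."""
    total = sum(chromosome)
    return [g / total for g in chromosome]

def crossover(parent_a, parent_b, cut):
    """One-point crossover at position `cut`, followed by normalization."""
    child_1 = normalize(parent_a[:cut] + parent_b[cut:])
    child_2 = normalize(parent_b[:cut] + parent_a[cut:])
    return child_1, child_2

def mutate(chromosome, gene, value):
    """Overwrite one gene with a randomly determined value, then normalize."""
    mutant = list(chromosome)
    mutant[gene] = value
    return normalize(mutant)

a = [0.15, 0.00, 0.63, 0.22]
b = [0.33, 0.42, 0.11, 0.14]          # inferred second parent
c1, c2 = crossover(a, b, cut=1)       # sums 0.82 and 1.18 before normalization
mut = mutate(a, gene=3, value=0.47)   # sum 1.25 before normalization
```

Rounded to three decimals, `c1`, `c2`, and `mut` match the paper's {0.183 0.512 0.134 0.171}, {0.280 0.000 0.534 0.186}, and {0.120 0.000 0.504 0.376}.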
The penalty function π is calculated from

π = Σ_{j=1}^{n} π_j    (9)

π_j = p_j − r_j   if p_j > r_j
π_j = s_j − p_j   if p_j < s_j
π_j = 0           if s_j ≤ p_j ≤ r_j

so that π measures how much a pseudosolution x goes outside the borders of feasible solutions.

With this choice, three objectives are combined into one:

1. being as close as possible to the target curve (minimize δ),
2. obtaining a least-cost solution (minimize C),
3. satisfying the constraints to remain in the prescribed envelope (minimize π until it vanishes, if possible).

It is to be noted that among these objectives, the first two are the only real external objectives. The third one is actually part of the algorithm to find feasible solutions. In fact, alternatively, this same program could be formulated without this third objective, with the condition that only chromosomes satisfying all feasibility conditions would be considered in the populations, and any chromosome which does not satisfy the conditions imposed by Equation (3) would be considered to be prematurely dead. This means that, according to the choice made in this article, individuals in a population are pseudosolutions to the problem, in the sense that they may or may not satisfy the limiting conditions. The nature of the procedure is that, in the succeeding generations, the ones that satisfy these conditions are preferred, and the feasible elements of the solution space are obtained naturally.

It is to be noted that in this formulation, the problem is a multiobjective optimization problem. The third objective is to guarantee the satisfaction of the feasibility conditions. The other objectives may be eliminated or augmented in number, for example, by ignoring the cost-minimization aspect, or by including other objectives such as obtaining a certain plasticity index or fineness modulus.

At the end of the optimization, one may not hope that δ and C would both vanish, thus arriving at a solution with zero cost and exactly equal to the target curve. This is not the case for π. To have a feasible solution, the latter has to go down until it vanishes. At the end of the prescribed number of generations, if π is still not equal to 0, then the iterations may not be sufficient in quantity and quality, or the problem may not have feasible solutions.

4 ILLUSTRATIVE EXAMPLE

The algorithm is applied to the problem with the data presented in Table 1. There are four fractions to be blended; the analysis is carried out with 10 sieves. Gradations for these four fractions, limiting gradations, and the target gradation, which is the median of the upper and lower limits, are shown in Figure 1. Two types of cost function are considered: linear and nonlinear. For the linear cost function, the unit price of each fraction is independent of the quantity used; on the other hand, for the nonlinear cost function, the unit prices are stepwise functions of the quantity used. As shown in Figure 2, the unit cost functions, defined in Equation (5), are taken to be
Table 1
Input data: required gradation and gradations for fractions to be blended; the unit costs of fractions for the linear cost function
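Equations (7)-(9) translate directly into a fitness evaluation. The sketch below uses illustrative names; the weights follow Equation (7), with μ_π = 1,000 and μ_δ + μ_C = 1 as in the runs reported later in this section.

```python
def distances(p, q):
    """The three norms of Equation (8): (d1, d2, d_inf)."""
    diffs = [abs(p_i - q_i) for p_i, q_i in zip(p, q)]
    return sum(diffs), sum(d * d for d in diffs) ** 0.5, max(diffs)

def penalty(p, s, r):
    """Equation (9): how far each sieve's passing value leaves the envelope."""
    total = 0.0
    for p_j, s_j, r_j in zip(p, s, r):
        if p_j > r_j:
            total += p_j - r_j
        elif p_j < s_j:
            total += s_j - p_j
    return total

def objective(p, q, s, r, cost, mu_delta=0.5, mu_c=0.5, mu_pi=1000.0):
    """Weighted-sum objective of Equation (7), here using the d2 norm."""
    _, d2, _ = distances(p, q)
    return mu_delta * d2 + mu_c * cost + mu_pi * penalty(p, s, r)
```

Because μ_π dominates, any envelope violation swamps the distance and cost terms, which is what drives infeasible chromosomes out of the population.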
[Figure 1: Passing % versus Sieve Number for the lower limits (s), upper limits (r), median (q), and Fractions 1-4 (Gi1-Gi4).]
[Figure 2: unit cost function.]

Fig. 4. Typical GA application. Three independent runs (islands). (Best fitness value versus generations.)

[Figure 3: unit cost versus quantity used for the four fractions.]

Fig. 3. Cost functions for the four fractions (dashed lines for the linear cost function, continuous lines for the nonlinear cost function).

view of the curves obtained, seeing that the results are difficult to differentiate in Figure 5. The modification is obtained by a normalization so that the upper and lower gradation limits are mapped to +100% and −100%, respectively, and the median is mapped to 0%.

In the runs D1, D2, D3, L4, and N4, the objective function is formulated with two terms, whereas in the runs L1, L2, L3, N1, N2, and N3 this number is three. One of the terms in the objective function is always related to minimizing total penalties to ensure finding feasible solutions. In all cases this objective was satisfied; that is, feasible solutions are obtained such that, at the end of the iterations, π = 0. The coefficient μ_π is taken to be 1,000 for
Table 2
Data for all runs and corresponding outputs

[Figure 5: Passing % versus Sieve Number for s, r, q, and the TE, D1, D2, D3 solutions.]

Fig. 5. Gradation curves obtained for three different norms and for the TE solution.

[Figure 6: Normalized Passing % versus Sieve Number for s, r, q, and the TE, D1, D2, D3 solutions.]

Fig. 6. Gradation curves obtained for three different norms and for the TE solution on the normalized scale (upper limits are +100%, lower limits are −100%, the median is 0%).
eliminating infeasible solutions with a reasonable speed. The sum μ_δ + μ_C of the other two coefficients is kept equal to 1 to obtain a convex linear combination of these objectives.

[Figure: Normalized Passing % versus Sieve Number for s, r, q, and the TE, D1, L4, N4 solutions.]

4.1 Effect of type of metric used

The use of three different norms is compared in the runs D1, D2, and D3, where the fitness value is calculated using
[Figure: Normalized Passing % versus Sieve Number for s, r, q, and the D1, L1, L2, L3, L4 solutions.]

[Figure: proportions x3 and x4 versus the cost ratio in the final objective (L for linear, N for nonlinear cost function).]
[Figure 9: normalized gradation curves versus sieve number for r, q, and the D1, N1, N2, N3, N4 solutions.]

Fig. 9. Gradation curves for solutions with different importance levels of nonlinear cost.

[Figure 12: cost versus distance from target curve for the runs D1, L2, L3, L4, N1, N2, N3, N4.]

Fig. 12. Pareto optimal points for linear and nonlinear cost functions.
for further applications.

4.2 Minimization of the distance from the target curve

It can be seen from the results that the worst solutions as far as approaching the target curve is concerned are obtained from the TE method and the solutions L4 and N4, as expected (see Figure 7). Effectively, the TE method needs to be improved because it is simply a feasible solution without any optimization, and for L4 and N4 approaching the target curve was not an objective.

[Figure 10: δ_2 and δ_∞ versus δ_1, with fitted lines y = 0.413x + 0.048 (R² = 0.9842) and y = 0.2685x − 0.1861 (R² = 0.9219).]

Fig. 10. Correlation between norm types.

The best solutions for each type of metric are underlined in Table 2 in the columns δ_1, δ_2, and δ_∞. If δ_2 is taken to be the accepted distance measure, obviously the run D2 becomes the absolute best, as expected by definition. This solution gives δ_2(D2) = 3.5983. It is to be noted that the solutions obtained in the run D2 are not the best when compared to other solutions as far as other metric definitions are concerned: δ_1(D1) = 9.4207 < δ_1(D2) = 9.627, δ_∞(D3) = 2.0298 < δ_∞(D2) = 2.2630.

4.3 Minimization of cost

Cost minimization is analyzed for two types of cost definition: linear and nonlinear costs. In the first case the unit costs are constant; in the second case the unit costs decrease nonlinearly as the amount used increases (see Figure 3). Among the fractions, the third one is the cheapest and the fourth one is the most expensive. The
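Keeping μ_δ + μ_C = 1 and sweeping the weights is one way Pareto points such as those of Figure 12 can be traced. The sketch below applies the idea to a small invented set of (distance, cost) pairs — these are not the paper's Table 2 values — and keeps, for each weight, the candidate minimizing the combined objective.

```python
def weighted_sum_front(candidates, steps=10):
    """candidates: list of (distance, cost) pairs.
    Sweep the convex weights mu_delta + mu_C = 1 and collect the minimizers."""
    front = set()
    for k in range(steps + 1):
        mu_c = k / steps
        mu_d = 1.0 - mu_c
        best = min(candidates, key=lambda dc: mu_d * dc[0] + mu_c * dc[1])
        front.add(best)
    return sorted(front)

candidates = [(5.0, 14.0), (10.0, 10.0), (20.0, 8.0), (35.0, 6.0),
              (25.0, 12.0)]                      # (25, 12) is dominated
front = weighted_sum_front(candidates)
```

The dominated pair (25, 12) is never selected, since a weighted sum can only pick nondominated points. The coarse grid misses the nondominated (20, 8), which a finer sweep (e.g. `steps=20`) recovers; weighted-sum scans also cannot reach points on nonconvex parts of a Pareto front, which is a known limitation of the model.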