
CHAPTER 3

GREY WOLF OPTIMIZATION ALGORITHM

3.1 Need for Optimization Techniques

The CHP operational problem is mathematically formulated as a multivariable optimization problem, as detailed in Chapter 2. This problem can be formulated for static and dynamic environments, considering single and multiple objectives. Hence, a suitable optimization method has to be adopted to determine the optimal output settings of the operational units.

Meta-heuristic optimization methods have become extremely popular over the past two decades because of their simplicity, flexibility, derivation-free nature and ability to avoid local minima. These techniques are mostly inspired by simple concepts, typically related to physical phenomena, animal behaviour or evolutionary principles. This simplicity has attracted researchers to develop and propose new meta-heuristics.

Meta-heuristics can be divided into two main classes: single-solution based and population based. The latter has several advantages over the former, which motivates researchers to apply population-based meta-heuristics to various practical optimization problems. Techniques based on swarm intelligence form a branch of population-based meta-heuristics; exploration and exploitation phases are a common feature of all swarm intelligence techniques.

Conventional optimization techniques usually drive the control parameters of a non-linear problem to their limits, and their mathematical methods are difficult to implement with good accuracy. An effective optimization technique is therefore needed to solve such non-linear problems.

3.2 Why Grey Wolf Optimization?


Grey wolf optimization (GWO) is a swarm intelligence technique developed by Mirjalili et al. (2014), which mimics the leadership hierarchy and group-hunting behaviour of grey wolves.

Grey wolves belong to the Canidae family and mostly prefer to live in a pack. They have a strict social dominance hierarchy: the leader, a male or female, is called the alpha (α). The alpha is mostly responsible for decision making, and its orders must be followed by the pack. The betas (β) are subordinate wolves that help the alpha in decision making; the beta acts as an advisor to the alpha and a discipliner for the pack. The lowest-ranking grey wolf is the omega (ω), which has to submit to all the dominant wolves. A wolf that is neither an alpha, beta, nor omega is called a delta (δ); delta wolves dominate omegas but report to the alpha and beta.

The hunting techniques and the social hierarchy of grey wolves are mathematically modelled in order to develop GWO and perform optimization. The GWO algorithm has been tested on standard benchmark functions, and the results indicate that it has superior exploration and exploitation characteristics compared with other swarm intelligence techniques. Further, GWO has been successfully applied to various engineering optimization problems (Hong Mee Song et al., 2014; Ali Madadi and Mahmood Mohseni Motlagh, 2014; Pranjali Rathee et al., 2015; Xianhai Song et al., 2015).

Moreover, most swarm intelligence techniques used to solve optimization problems lack a leader that guides the search over the entire run. This drawback is rectified in GWO, in which the grey wolves have a natural leadership mechanism. Further, the algorithm has only a few parameters and is easy to implement, which makes it superior to earlier techniques. Owing to these versatile properties of the GWO algorithm, attempts have been made to implement GWO for solving the optimization problems.

3.3 Overview of Grey Wolf Optimization Algorithm

The GWO mimics the hunting behaviour and the social hierarchy of grey wolves. In addition to the social hierarchy, pack hunting is another appealing social behaviour of grey wolves. The main phases of GWO are encircling, hunting and attacking the prey. The algorithmic steps of GWO are presented in this section.

3.3.1 Algorithmic steps and Pseudo code

The GWO algorithm is described briefly with the following steps:

Step 1: Initialize the GWO parameters: the number of search agents (Gs), the design variable size (Gd), the vectors a, A and C, and the maximum number of iterations (itermax).

    A = 2a · rand1 − a                                  (3.1)

    C = 2 · rand2                                       (3.2)

where rand1 and rand2 are random vectors in [0, 1]. The value of a is linearly decreased from 2 to 0 over the course of iterations.
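As a sketch, the coefficient vectors of Equations (3.1) and (3.2) can be computed as follows (assuming NumPy; the function name and signature are illustrative, not from the thesis):

```python
import numpy as np

def coefficient_vectors(iteration, iter_max, dim, rng=np.random.default_rng()):
    """Compute a, A and C per Equations (3.1)-(3.2).

    a decays linearly from 2 to 0 over the run; A and C are random
    vectors that balance exploration against exploitation.
    """
    a = 2.0 * (1.0 - iteration / iter_max)   # linear decrease from 2 to 0
    A = 2.0 * a * rng.random(dim) - a        # Eq. (3.1): each component in [-a, a]
    C = 2.0 * rng.random(dim)                # Eq. (3.2): each component in [0, 2]
    return a, A, C
```

Large |A| (> 1) pushes agents away from the leaders (exploration), while |A| < 1 draws them closer (exploitation), which is why the linear decay of a shifts the search from global to local over the iterations.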

Step 2: Generate the wolves randomly based on the size of the pack. Mathematically, these wolves can be expressed as

             | G1^1    G2^1    G3^1    . . .  GGd-1^1    GGd^1   |
             | G1^2    G2^2    G3^2    . . .  GGd-1^2    GGd^2   |
    Wolves = |  .       .       .      . . .    .          .     |        (3.3)
             |  .       .       .      . . .    .          .     |
             | G1^Gs   G2^Gs   G3^Gs   . . .  GGd-1^Gs   GGd^Gs  |

where Gj^i is the initial value of the jth design variable of the ith wolf.
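The initialization in Step 2 can be sketched as a (Gs × Gd) matrix of uniform random positions; the bound arguments `lb` and `ub` are an assumption for illustration, since the thesis defines the variable limits per problem:

```python
import numpy as np

def initialize_wolves(Gs, Gd, lb, ub, rng=np.random.default_rng()):
    """Generate the wolves matrix of Equation (3.3): one row per
    search agent, one column per design variable, drawn uniformly
    between lower bound lb and upper bound ub."""
    return lb + (ub - lb) * rng.random((Gs, Gd))
```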

Step 3: Estimate the fitness value of each hunt agent using Equations (3.4) and (3.5).

    D = |C · Gp(t) − G(t)|                              (3.4)

    G(t + 1) = Gp(t) − A · D                            (3.5)

where t indicates the current iteration, Gp is the position vector of the prey and G is the position vector of a grey wolf.
Step 4: Identify the best hunt agent (Gα), the second best hunt agent (Gβ) and the third best hunt agent (Gδ) using Equations (3.6)-(3.11).

    Dα = |C1 · Gα − G|                                  (3.6)

    Dβ = |C2 · Gβ − G|                                  (3.7)

    Dδ = |C3 · Gδ − G|                                  (3.8)

    G1 = Gα − A1 · Dα                                   (3.9)

    G2 = Gβ − A2 · Dβ                                   (3.10)

    G3 = Gδ − A3 · Dδ                                   (3.11)
Step 5: Renew the location of the current hunt agent using Equation (3.12).

    G(t + 1) = (G1 + G2 + G3) / 3                       (3.12)
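Steps 4 and 5 can be sketched together: each of the three leaders contributes a candidate position via Equations (3.6)-(3.11), and the agent moves to their average per Equation (3.12). The function below is a minimal sketch assuming NumPy; the per-leader random draws follow the standard GWO formulation.

```python
import numpy as np

def update_position(G, G_alpha, G_beta, G_delta, a, rng=np.random.default_rng()):
    """Move one search agent toward the three leaders (Eqs. 3.6-3.12)."""
    candidates = []
    for leader in (G_alpha, G_beta, G_delta):
        A = 2.0 * a * rng.random(G.shape) - a   # Eq. (3.1), drawn per leader
        C = 2.0 * rng.random(G.shape)           # Eq. (3.2), drawn per leader
        D = np.abs(C * leader - G)              # Eqs. (3.6)-(3.8)
        candidates.append(leader - A * D)       # Eqs. (3.9)-(3.11)
    return sum(candidates) / 3.0                # Eq. (3.12): average of G1, G2, G3
```

Note that when a = 0 (the final iteration), A vanishes and the agent moves exactly to the centroid of the three leaders, which is the pure exploitation limit of the algorithm.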
Step 6: Estimate the fitness value of all hunt agents.

Step 7: Update the values of Gα, Gβ and Gδ.

Step 8: Check the stopping condition, i.e., whether Iter has reached itermax; if yes, print the best solution, otherwise go to step 5.

The pseudo code for the GWO algorithm is as follows:

 1: Generate initial search agents Gi (i = 1, 2, …, n)
 2: Initialize the vectors a, A and C
 3: Estimate the fitness value of each hunt agent
      Gα = the best hunt agent
      Gβ = the second best hunt agent
      Gδ = the third best hunt agent
 4: Iter = 1
 5: repeat
 6:   for i = 1 : Gs (grey wolf pack size)
        Renew the location of the current hunt agent using Equation (3.12)
      end for
 7:   Estimate the fitness value of all hunt agents
 8:   Update the vectors a, A and C
 9:   Update the values of Gα, Gβ and Gδ
10:   Iter = Iter + 1
11: until Iter >= maximum number of iterations {stopping criterion}
12: output Gα
    End
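The pseudo code above can be sketched as a complete loop. This is a minimal illustration assuming NumPy; the sphere test function, the bounds and the clipping of positions to those bounds are assumptions for demonstration, not part of the thesis formulation.

```python
import numpy as np

def gwo(fitness, Gd, lb, ub, Gs=30, iter_max=200, seed=0):
    """Minimal Grey Wolf Optimizer following the pseudo code:
    initialize a pack, rank alpha/beta/delta by fitness, and move
    every agent toward the three leaders until iter_max."""
    rng = np.random.default_rng(seed)
    wolves = lb + (ub - lb) * rng.random((Gs, Gd))       # Eq. (3.3)
    for it in range(iter_max):
        a = 2.0 * (1.0 - it / iter_max)                  # linear decrease, 2 -> 0
        order = np.argsort([fitness(w) for w in wolves])
        G_alpha, G_beta, G_delta = wolves[order[:3]]     # three best agents
        for i in range(Gs):
            candidates = []
            for leader in (G_alpha, G_beta, G_delta):
                A = 2.0 * a * rng.random(Gd) - a         # Eq. (3.1)
                C = 2.0 * rng.random(Gd)                 # Eq. (3.2)
                D = np.abs(C * leader - wolves[i])       # Eqs. (3.6)-(3.8)
                candidates.append(leader - A * D)        # Eqs. (3.9)-(3.11)
            wolves[i] = np.clip(sum(candidates) / 3.0, lb, ub)  # Eq. (3.12)
    order = np.argsort([fitness(w) for w in wolves])
    return wolves[order[0]]

# Usage: minimize the 5-dimensional sphere function (optimum at the origin)
best = gwo(lambda x: float(np.sum(x**2)), Gd=5, lb=-10.0, ub=10.0)
```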

3.4 Chapter Summary

This chapter has detailed the justification for using meta-heuristic techniques for solving engineering optimization problems. Further, a background review of the modern meta-heuristic technique, namely GWO, was presented. The computational steps involved in finding a feasible solution to a non-linear optimization problem were also described, and the pseudo code for the GWO algorithm was illustrated.
