Classical Optimization Techniques
Multivariable Optimization with Equality Constraints
Solution by the method of Lagrange Multipliers
Problem with 2 variables and 1 constraint (n = 2, m = 1)
min f(x1, x2)
s.t. g(x1, x2) = 0

Construct the Lagrange function L(x1, x2, λ), where λ is the Lagrange multiplier:

L(x1, x2, λ) = f(x1, x2) + λ g(x1, x2)
The necessary conditions for an extreme point are:

∂L/∂x1 (x1, x2, λ) = ∂f/∂x1 (x1, x2) + λ ∂g/∂x1 (x1, x2) = 0
∂L/∂x2 (x1, x2, λ) = ∂f/∂x2 (x1, x2) + λ ∂g/∂x2 (x1, x2) = 0
∂L/∂λ (x1, x2, λ) = g(x1, x2) = 0
Example
min f(x, y) = k x⁻¹ y⁻²
s.t. g(x, y) = x² + y² − a² = 0
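This example can be checked numerically. The sketch below assumes k = a = 1 (in the slides k and a are just positive constants): equating the two expressions for λ from ∂L/∂x = 0 and ∂L/∂y = 0 gives y² = 2x², so with the constraint the stationary point is x* = a/√3, y* = a√(2/3). A grid search along the constraint circle confirms it.

```python
import math

# Worked check of the example above, assuming k = a = 1.
# From L = k x⁻¹ y⁻² + λ(x² + y² − a²): ∂L/∂x = 0 and ∂L/∂y = 0 give
# λ = k/(2 x³ y²) = k/(x y⁴), hence y² = 2x², so x* = a/√3, y* = a√(2/3).

k, a = 1.0, 1.0

def f(x, y):
    return k / (x * y * y)

# Analytic stationary point from the Lagrange conditions
x_star = a / math.sqrt(3)
y_star = a * math.sqrt(2.0 / 3.0)

# Numerical sanity check: search the constraint circle in the first quadrant
thetas = (i * (math.pi / 2) / 10000 for i in range(1, 10000))
f_best, t_best = min((f(a * math.cos(t), a * math.sin(t)), t) for t in thetas)
x_num, y_num = a * math.cos(t_best), a * math.sin(t_best)

print(round(x_star, 4), round(y_star, 4))      # 0.5774 0.8165
print(abs(x_num - x_star) < 1e-3, abs(y_num - y_star) < 1e-3)
```

The grid search is only a verification device, not part of the method; the Lagrange conditions alone determine the candidate point.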
Problem with 2 variables and 2 constraints (n = 2, m = 2):

min f(x1, x2)
s.t. g1(x1, x2) = 0
     g2(x1, x2) = 0

The necessary condition for a point (x1*, x2*) on the feasible set C defined by the constraints to be an extreme point is:

−∇f = λ1 ∇g1 + λ2 ∇g2
Problem with n variables and m constraints
min_x f(x)
s.t. gj(x) = 0,  j = 1, 2, …, m

Construct the Lagrange function L with a Lagrange multiplier λj for each constraint gj:

L(x1, x2, …, xn, λ1, λ2, …, λm) = f(x) + Σ_{j=1}^{m} λj gj(x)

or, in vector form,

L(x, λ) = f(x) + λᵀ g(x),  x ∈ ℝⁿ,  λ, g ∈ ℝᵐ
Necessary Conditions for a General Problem (n, m)

Treating L as a function of the n + m unknowns:

∂L/∂xi = ∂f/∂xi + Σ_{j=1}^{m} λj ∂gj/∂xi = 0,  i = 1, 2, …, n   (n equations)

∂L/∂λj = gj(x) = 0,  j = 1, 2, …, m   (m equations)

Sufficient Conditions for a General Problem (n, m)

Q = dxᵀ ∇x²L(x*, λ*) dx = Σ_{i=1}^{n} Σ_{j=1}^{n} ∂²L/∂xi∂xj (x*, λ*) dxi dxj > 0

for all admissible variations dxi and dxj, i.e., ∇x²L ≻ 0 on the admissible directions.
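The sufficiency test can be carried out for the earlier example. The sketch below again assumes k = a = 1: it forms the second partials of L analytically at the stationary point, restricts the quadratic form Q to variations tangent to the constraint (the admissible variations), and checks Q > 0.

```python
import math

# Sufficiency check for the slides' example, assuming k = a = 1:
# L(x, y, λ) = k x⁻¹ y⁻² + λ(x² + y² − a²), at the stationary point
# x* = a/√3, y* = a√(2/3), with λ* = k/(2 x*³ y*²) from ∂L/∂x = 0.

k, a = 1.0, 1.0
x = a / math.sqrt(3)
y = a * math.sqrt(2.0 / 3.0)
lam = k / (2 * x**3 * y**2)

# Second partials of L with respect to (x, y) at the stationary point
Lxx = 2 * k / (x**3 * y**2) + 2 * lam
Lyy = 6 * k / (x * y**4) + 2 * lam
Lxy = 2 * k / (x**2 * y**3)

# Admissible variations are tangent to the constraint: ∇g = (2x, 2y),
# so take d ⟂ ∇g, i.e. d = (−y, x), and evaluate Q = dᵀ ∇²L d.
d = (-y, x)
Q = Lxx * d[0]**2 + 2 * Lxy * d[0] * d[1] + Lyy * d[1]**2
print(Q > 0)   # True: the stationary point is a constrained minimum
```

Note that only variations tangent to the constraint need to be tested; ∇x²L need not be positive definite on all of ℝⁿ.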
Classical Optimization Techniques
Multivariable Optimization with Inequality Constraints
Karush-Kuhn-Tucker (KKT) conditions
Consider the problem

min_x f(x)
s.t. gj(x) ≤ 0,  j ∈ J = {1, 2, …, m}

The necessary conditions to be satisfied at a relative minimum of f are:

∂f/∂xi + Σ_{j∈J1} λj ∂gj/∂xi = 0,  i = 1, 2, …, n
λj > 0,  j ∈ J1

where J1 ⊆ J is the set of active constraints (those with gj(x) = 0).
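These conditions are easy to verify numerically. A minimal sketch on an assumed illustrative problem (not from the slides): minimize f(x) = x1² + x2² subject to g(x) = 1 − x1 − x2 ≤ 0, whose optimum is x* = (0.5, 0.5) with the constraint active.

```python
# Minimal KKT check on an assumed illustrative problem (not from the slides):
# minimize f(x) = x1² + x2²  subject to  g(x) = 1 − x1 − x2 ≤ 0.
# The constraint is active at the optimum x* = (0.5, 0.5), so J1 = {1}.

x = (0.5, 0.5)
grad_f = (2 * x[0], 2 * x[1])   # ∇f = (1, 1) at x*
grad_g = (-1.0, -1.0)           # ∇g is constant for this linear constraint
lam = 1.0                       # multiplier of the single active constraint

# Stationarity: ∂f/∂xi + λ ∂g/∂xi = 0 for each i, with λ > 0
stationary = all(abs(grad_f[i] + lam * grad_g[i]) < 1e-12 for i in range(2))
print(stationary, lam > 0)      # True True
```

Only the active constraint enters the stationarity sum; inactive constraints contribute nothing at x*.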
Feasible directions
A vector S is called a feasible direction from a point x if at least a small step can be taken along S without immediately leaving the feasible region. Thus, for problems with sufficiently smooth constraint surfaces, a feasible direction S satisfies, for each active constraint gj,

Sᵀ ∇gj < 0
If directions that move along the boundary of an active constraint are also admitted, the condition relaxes to

Sᵀ ∇gj ≤ 0
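The two conditions can be illustrated on an assumed example (not from the slides): the disk constraint g(x) = x1² + x2² − 1 ≤ 0, testing candidate directions at the boundary point x = (1, 0), where ∇g = (2, 0).

```python
# Illustrative check (assumed example, not from the slides): for the disk
# constraint g(x) = x1² + x2² − 1 ≤ 0, classify candidate directions at the
# boundary point x = (1, 0), where ∇g = (2, 0).

grad_g = (2.0, 0.0)

def is_feasible_direction(S, strict=True):
    """Sᵀ∇g < 0 (strictly into the region) or ≤ 0 (boundary allowed)."""
    ip = S[0] * grad_g[0] + S[1] * grad_g[1]
    return ip < 0 if strict else ip <= 0

inward = is_feasible_direction((-1.0, 0.5))                # points into the disk
outward = is_feasible_direction((1.0, 0.0))                # leaves the disk
tangent = is_feasible_direction((0.0, 1.0), strict=False)  # along the boundary
print(inward, outward, tangent)   # True False True
```

The tangent direction (0, 1) fails the strict test but passes the relaxed one, which is exactly the distinction between the two inequalities above.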
Classical Optimization Techniques
Multivariable Optimization with Mixed Constraints
Karush-Kuhn-Tucker (KKT) conditions

Consider the problem

min_x f(x)
s.t. hk(x) = 0,  k = 1, 2, …, p
     gj(x) ≤ 0,  j = 1, 2, …, m
The necessary conditions to be satisfied at a relative minimum of f are:

∂f/∂xi + Σ_{j=1}^{m} λj ∂gj/∂xi − Σ_{k=1}^{p} βk ∂hk/∂xi = 0,  i = 1, 2, …, n
λj gj(x) = 0,  j = 1, 2, …, m
gj(x) ≤ 0,  j = 1, 2, …, m
hk(x) = 0,  k = 1, 2, …, p
λj ≥ 0,  j = 1, 2, …, m
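All five conditions can be checked at once on a small assumed problem (not from the slides): minimize f = x1² + x2² subject to h(x) = x1 + x2 − 1 = 0 and g(x) = −x1 ≤ 0. The optimum is x* = (0.5, 0.5), where g is inactive, so λ* = 0 and stationarity ∇f = β ∇h gives β* = 1.

```python
# Hedged illustration on an assumed problem (not from the slides):
# minimize f = x1² + x2²  s.t.  h(x) = x1 + x2 − 1 = 0,  g(x) = −x1 ≤ 0.
# Optimum x* = (0.5, 0.5); g is inactive there, so λ* = 0 and β* = 1.

x = (0.5, 0.5)
lam, beta = 0.0, 1.0
grad_f = (2 * x[0], 2 * x[1])   # (1, 1) at x*
grad_g = (-1.0, 0.0)
grad_h = (1.0, 1.0)
g = -x[0]
h = x[0] + x[1] - 1

stationarity = all(abs(grad_f[i] + lam * grad_g[i] - beta * grad_h[i]) < 1e-12
                   for i in range(2))
complementarity = abs(lam * g) < 1e-12    # λ g = 0: inactive g forces λ = 0
feasibility = (g <= 0) and (abs(h) < 1e-12)
print(stationarity, complementarity, feasibility, lam >= 0)   # True True True True
```

Complementary slackness is what ties the two cases together: either a constraint is active (gj = 0) or its multiplier vanishes (λj = 0).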
Constraint Qualification
Theorem
Let x* be a feasible solution to the mixed-constraints problem. If ∇gj(x*), j ∈ J1, and ∇hk(x*), k = 1, 2, …, p, are linearly independent, then there exist λ* and β* such that (x*, λ*, β*) satisfies:

∂f/∂xi + Σ_{j=1}^{m} λj ∂gj/∂xi − Σ_{k=1}^{p} βk ∂hk/∂xi = 0,  i = 1, 2, …, n
λj gj(x) = 0,  j = 1, 2, …, m
gj(x) ≤ 0,  j = 1, 2, …, m
hk(x) = 0,  k = 1, 2, …, p
λj ≥ 0,  j = 1, 2, …, m
Notes
• The requirement that ∇gj(x*), j ∈ J1, and ∇hk(x*), k = 1, 2, …, p, be linearly independent is called the constraint qualification.
• If the constraint qualification holds at x*, then there exists (x*, λ*, β*) satisfying the KKT conditions; the converse may not be true.
• The constraint qualification is always satisfied for problems in which:
  – all the inequality and equality constraint functions are linear; or
  – all the inequality constraint functions are convex, all the equality constraint functions are linear, and at least one strictly feasible point exists.
Example
Consider the following optimization problem:
min_x f(x1, x2) = (x1 − 1)² + x2²
s.t. x1³ + 2x2 ≤ 0
     x1³ − 2x2 ≤ 0
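This example shows why the constraint qualification matters. The two constraints together force x1 ≤ 0, so the minimizer is x* = (0, 0) with f(x*) = 1; yet at x* both constraints are active and their gradients ∇g1 = (0, 2) and ∇g2 = (0, −2) are linearly dependent, and no multipliers can satisfy KKT stationarity. The sketch below verifies both facts numerically.

```python
# Verification of the constraint-qualification example: the feasible set
# forces x1 ≤ 0, so the minimizer is x* = (0, 0) with f(x*) = 1, yet KKT
# stationarity is unsolvable there because both constraints are active and
# ∇g1(x*) = (0, 2), ∇g2(x*) = (0, −2) are linearly dependent.

x = (0.0, 0.0)
grad_f  = (2 * (x[0] - 1), 2 * x[1])   # (−2, 0) at x*
grad_g1 = (3 * x[0]**2,  2.0)          # (0,  2) at x*
grad_g2 = (3 * x[0]**2, -2.0)          # (0, −2) at x*

# Linear dependence: the 2×2 determinant of [∇g1 ∇g2] vanishes
det = grad_g1[0] * grad_g2[1] - grad_g1[1] * grad_g2[0]
print(det == 0)   # True: the constraint qualification fails at x*

# Stationarity would require −2 + λ1·0 + λ2·0 = 0 in the x1-component,
# which no λ1, λ2 can satisfy.

# Brute-force check that (0, 0) is indeed the constrained minimizer
best_f, best_x = min(
    ((u - 1)**2 + v**2, (u, v))
    for u in (i / 100 - 2 for i in range(201))    # u in [−2, 0]
    for v in (j / 100 - 1 for j in range(201))    # v in [−1, 1]
    if u**3 + 2 * v <= 0 and u**3 - 2 * v <= 0)
print(best_f, best_x)   # 1.0 (0.0, 0.0)
```

So x* = (0, 0) is a genuine constrained minimum at which the KKT conditions fail, confirming that the theorem's linear-independence hypothesis cannot be dropped.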