Modern Optimization Methods

Chapter 7. Theory of Constrained Optimization

7.4 Constraint Qualifications (lecture transcript)

Hi, everyone,

welcome to Modern Optimization Methods.

Today we will introduce the fourth part of chapter seven,

the constraint qualifications.

We will basically introduce the following definitions

to derive the constraint qualifications.

One is the tangent cone,

another is the linearized feasible direction,

and finally the linear independence constraint qualification (LICQ).

Now let's start with a brief talk about constraint qualifications.

As we analyzed in the previous parts,

we derived the first-order optimality condition

by using some sort of linear expansion,

and then we obtained the mathematical form

of the first-order optimality condition.

However,

a question is whether such a linear approximation works,

or whether it is sound,

so we have to provide some theoretical justification.

And this theoretical tool is exactly the constraint qualification.

In other words,

the constraint qualification tells us

under what conditions we may apply a linear expansion

to the constraints as well as to the function values.

Now let's first take a look at the tangent cone.

We start with the definition

of a feasible sequence approaching a point x.

Assume that Ω is a closed convex set.

Given a feasible point x,

we call a sequence {zk}

a feasible sequence approaching the point x

if the following conditions hold:

first, for all sufficiently large k,

the points zk lie inside Ω;

second, the limit of zk must be x.

Such a sequence is a feasible sequence approaching x.
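Written out compactly (a direct transcription of the definition just stated, in standard notation):

$$\{z_k\}\ \text{is a feasible sequence approaching}\ x \iff z_k \in \Omega\ \text{for all sufficiently large}\ k, \quad \text{and} \quad \lim_{k\to\infty} z_k = x.$$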

With such feasible sequences approaching x,

we can now define the tangent direction.

What is a tangent direction?

It is defined as follows.

We say that a vector d is a tangent,

or tangent vector, to the set Ω at the point x

if the following conditions hold.

The first condition is that

we can find a feasible sequence zk approaching the point x.

The second condition is that

there is a sequence of positive scalars tk,

with tk approaching zero,

such that the ratio (zk − x)/tk converges,

and its limit is exactly the direction d.

Then we say that

d is a tangent vector to the set Ω at the point x.

All the tangent vectors together

make up the tangent cone to the set Ω at the point x.

This is the definition of the tangent cone.

In other words,

the tangent vector is defined as a limiting direction

along a feasible sequence approaching x.

If we take a look at this limit,

it requires two ingredients:

first, a feasible sequence approaching x;

secondly,

a sequence of scalars tk approaching zero

that makes the limit exist,

and the limit is then the tangent vector.
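In formulas, with T_Ω(x) denoting the tangent cone (standard notation, matching the definition above):

$$d \in T_{\Omega}(x) \iff \exists\, \{z_k\} \subset \Omega,\ z_k \to x,\ \ \exists\, t_k > 0,\ t_k \downarrow 0, \ \text{such that}\ \lim_{k\to\infty} \frac{z_k - x}{t_k} = d.$$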

We give an example on the next slide.

Now on this slide,

the feasible region is a circle,

and we consider

the tangent cone at the nonoptimal point (−√2, 0).

Here,

let's construct two sequences to derive tangent vectors.

One sequence zk approaches x from below:

it moves along the circle

toward the point x,

so as k goes to infinity,

zk approaches x.

We take the scalar tk to be the distance between zk and x.

By taking the limit,

we obtain the limiting direction d = (0, −1),

because we approach x from below.

Similarly,

if we approach x from above,

then we get another tangent direction,

namely (0, 1).

Now the tangent cone

is generated by these two vectors:

the first component is zero,

and the second component can be negative, positive,

or zero.

This means that the second component

is completely free.
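To make the construction concrete, here is one explicit choice, assuming (consistently with the point (−√2, 0) being feasible) that the circle is Ω = {x : x1² + x2² = 2}; the formula for zk is our reconstruction of what the slide depicts:

$$z_k = \begin{pmatrix} -\sqrt{2 - 1/k^2} \\ -1/k \end{pmatrix} \in \Omega, \qquad t_k = \|z_k - x\|, \qquad x = \begin{pmatrix} -\sqrt{2} \\ 0 \end{pmatrix},$$

$$\lim_{k\to\infty} \frac{z_k - x}{t_k} = \begin{pmatrix} 0 \\ -1 \end{pmatrix}, \qquad \text{the mirrored sequence gives}\ \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \qquad \text{so}\ \ T_{\Omega}(x) = \{(0, d_2)^{T} : d_2 \in \mathbb{R}\}.$$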

Now,

having seen the tangent cone,

you will notice that the tangent cone

is not easy to calculate:

given a feasible point,

computing the tangent cone directly may be very difficult.

For that reason,

we need an alternative way

to represent the tangent cone.

This alternative

is called the set of linearized feasible directions,

which we already used in previous parts.

Now let's recall that our constraints consist of equality constraints

and inequality constraints.

We also have the definition of the active set,

which collects those constraints

that hold with equality at the point x.

Now,

with the constraints and the active set,

we can define the set of linearized feasible directions

in the following way.

The set F(x) at the point x is made up of those directions d

that satisfy two conditions.

The first is that for all equality constraints,

the direction must be orthogonal to the gradients

of these constraints.

The second is that for the inequality constraints

that are active,

the inner product of the direction

with the constraint gradient must be greater than

or equal to zero.

This set

thus collects the linearized feasible directions.

Also notice that the set of linearized feasible directions

is a cone:

if you multiply a direction in it by a positive scalar,

the result still lies in the set.
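Written out in the standard form, with E and I the index sets of equality and inequality constraints and A(x) the active set:

$$\mathcal{F}(x) = \left\{\, d \;:\; \nabla c_i(x)^{T} d = 0 \ \text{for all}\ i \in \mathcal{E}, \quad \nabla c_i(x)^{T} d \ge 0 \ \text{for all}\ i \in \mathcal{A}(x) \cap \mathcal{I} \,\right\}.$$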

Now let's take a look at

how we calculate the linearized feasible directions,

still for the constraint from the previous example.

At the same point (−√2, 0),

let's calculate the linearized feasible directions.

Recall that in this case the definition

only requires the direction

to be orthogonal to the gradient of the equality constraint,

so if we take the gradient of the equality constraint,

we get the corresponding condition.

Setting the inner product to zero,

we see that it forces d1 to be zero.

In other words,

the linearized feasible directions also take the form

where the first component is zero

and the second one is free.
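Explicitly, assuming the circle constraint c1(x) = x1² + x2² − 2 = 0 from the example above:

$$\nabla c_1(x) = \begin{pmatrix} 2x_1 \\ 2x_2 \end{pmatrix}, \qquad \nabla c_1\big((-\sqrt{2}, 0)\big) = \begin{pmatrix} -2\sqrt{2} \\ 0 \end{pmatrix}, \qquad \nabla c_1(x)^{T} d = -2\sqrt{2}\, d_1 = 0 \;\Longleftrightarrow\; d_1 = 0,$$

$$\text{so}\quad \mathcal{F}(x) = \{(0, d_2)^{T} : d_2 \in \mathbb{R}\}.$$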

Compared with the tangent cone

we computed before,

you can see that the two sets are exactly the same.

In other words,

under some conditions,

the set of linearized feasible directions

is exactly the tangent cone at the same point.

So you may ask

under what conditions

this conclusion holds.

This brings us to the constraint qualifications.

Now let's give a very special constraint qualification,

which is LICQ.

It stands for

the linear independence constraint qualification,

and the definition is as follows.

Assume that we have a point x

with the corresponding active set of constraints.

Then we say that

the linear independence constraint qualification holds at x

if the gradients of the active constraints

are linearly independent.

If this condition is satisfied,

then we say that LICQ holds.
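In symbols, with A(x) the active set at x:

$$\text{LICQ holds at}\ x \iff \{\nabla c_i(x) : i \in \mathcal{A}(x)\}\ \text{is a linearly independent set of vectors.}$$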

Now we have some remarks about the LICQ property.

As we mentioned before,

the tangent cone

and the set of linearized feasible directions may sometimes coincide

with each other,

so under what conditions do they coincide?

We need some constraint qualification.

Indeed, one can prove that

under the LICQ condition

the set of linearized feasible directions does coincide

with the tangent cone,

so this is a very helpful property.
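Stated as a lemma (the standard result; in general only the inclusion holds, and LICQ upgrades it to an equality):

$$T_{\Omega}(x) \subseteq \mathcal{F}(x)\ \text{at any feasible}\ x; \qquad \text{if LICQ holds at}\ x,\ \text{then}\ T_{\Omega}(x) = \mathcal{F}(x).$$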

Therefore,

in our following optimality condition analysis,

we will always use LICQ.

Actually,

there are also other constraint qualifications,

such as the Mangasarian-Fromovitz constraint qualification

and other types of constraint qualifications,

under which we can also obtain different characterizations

of the linearized feasible directions and the tangent cone.

Still, LICQ is a very special constraint qualification,

under which the two sets

are indeed the same.

This is the main conclusion of this part.
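As a small computational illustration (not from the lecture), the following sketch checks LICQ numerically by testing whether the stacked active constraint gradients have full row rank; the constraint and the point come from the circle example of this section, and the helper name check_licq is ours:

```python
import numpy as np

def check_licq(active_gradients):
    """Return True iff the given active constraint gradients are linearly independent.

    active_gradients: list of 1-D numpy arrays, one gradient per active constraint.
    LICQ holds iff the matrix stacking these gradients has full row rank.
    """
    if not active_gradients:
        return True  # no active constraints: LICQ holds vacuously
    G = np.vstack(active_gradients)            # one gradient per row
    return np.linalg.matrix_rank(G) == G.shape[0]

# Circle example: the single equality constraint c1(x) = x1^2 + x2^2 - 2 = 0
# is active at x = (-sqrt(2), 0); its gradient is (2*x1, 2*x2).
x = np.array([-np.sqrt(2.0), 0.0])
grad_c1 = 2.0 * x
print(check_licq([grad_c1]))  # True: a single nonzero gradient is linearly independent
```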

Now let's make a brief summary of this part.

We have introduced the constraint qualifications,

which play an important role

in analyzing the relation between the linearized feasible directions

and the tangent cone.

We stop here and see you next time.
