Chapter 9. Penalty and Augmented Lagrangian Methods

9.1 Quadratic Penalty Method

Hi, everyone. Welcome to Modern Optimization Methods. Today we will introduce the first part of Chapter 9, on penalty methods and the augmented Lagrangian method.

This chapter is divided into the following parts. First, we will introduce the quadratic penalty method, and then the exact penalty method. Finally, we will introduce the augmented Lagrangian method. If time permits, we will also discuss further properties and applications of these three types of methods.

Now we begin with the first part of this chapter, the quadratic penalty method. Let us take a look at the equality-constrained case first: we minimize a function f(x), subject only to some equality constraints.

In this case, the idea of the penalty method is to move the constraints into the objective function and thereby transform the problem into an unconstrained one. However, once we put the constraints into the objective function, we can no longer guarantee that x satisfies them. Therefore, we need a measure that penalizes the violation of the constraints.

The quadratic penalty function, or the quadratic penalty method, basically means that we add a quadratic penalty term for each constraint to the objective function. For the problem here, we square each equality constraint, sum all of these violations, and multiply the sum by a penalty parameter μ. This forms the quadratic penalty function.
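In symbols, the function just described reads as follows; the factor 1/2 is included here by the common convention (it simplifies the derivatives) and is an assumption about the exact scaling used on the slides:

```latex
Q(x;\mu) \;=\; f(x) \;+\; \frac{\mu}{2}\sum_{i\in\mathcal{E}} c_i^2(x),
\qquad \mathcal{E} := \text{index set of the equality constraints } c_i(x)=0 .
```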

In other words, if we solve this quadratic penalty problem, we obtain a solution and its function value; however, this solution may violate the constraints. But if we drive the penalty parameter μ to infinity, any violation is penalized infinitely heavily, so the solution is forced into the feasible set. This is the idea of the quadratic penalty method.

Now we have several remarks about the quadratic penalty method. The first is, as just mentioned, that driving μ to positive infinity restricts the variable to the feasible region. That is the limiting case, and it motivates us to consider a sequence of penalty parameters μk that grows gradually to infinity.

So a natural question is: can we set μ to a huge value right from the start? We can, but then the penalized problem tends to be severely ill-conditioned. Therefore, we start μ from a fairly small value and increase it gradually, solving a sequence of penalized problems.

For each fixed μk, we obtain an unconstrained problem. We solve it to get an approximate solution, which may still violate the constraints, and this solution then provides the initial guess for the next subproblem. By doing this, we hope that the limit points of the resulting sequence converge to an optimal solution of the original problem.

As for solving the unconstrained problem inside the outer loop, a few iteration steps that reach an approximate solution are enough; we do not need to solve each subproblem exactly to its optimal solution.

Now let us look at an example of how the quadratic penalty method works. The example is the same as in Chapter 7: we minimize the sum of the two components of x, subject to one equality constraint, which describes a circle centered at the origin. The quadratic penalty function takes the square of this constraint, multiplied by the penalty parameter μ; this gives the quadratic penalty function Q(x, μ).
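Written out explicitly (assuming, consistently with the minimizers quoted below, that the Chapter 7 example is to minimize x1 + x2 subject to x1² + x2² − 2 = 0, whose solution is (−1, −1)):

```latex
Q(x;\mu) \;=\; x_1 + x_2 \;+\; \frac{\mu}{2}\,\bigl(x_1^2 + x_2^2 - 2\bigr)^2 .
```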

Now let us take a look at the figure showing the contours of this quadratic penalty function. As we can see, the minimizer is attained near the point (-1.1, -1.1), and near x = (0.3, 0.3) there is a local minimizer. This means that different penalty parameters produce different contours of the function, which may in turn have different minimizers.

If we set the penalty parameter μ to 0.1, we get contours like this. The contour shape is similar; however, the local and global minimizers are different.
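A minimal numerical sketch of this experiment (assuming the example stated above and using scipy's BFGS as the unconstrained solver; the particular values of μ on the slides may differ): minimizing Q for a growing sequence of penalty parameters drives the iterates toward the constrained solution.

```python
import numpy as np
from scipy.optimize import minimize

def Q(x, mu):
    """Quadratic penalty function for: min x1 + x2  s.t.  x1^2 + x2^2 - 2 = 0."""
    c = x[0]**2 + x[1]**2 - 2.0           # equality constraint value
    return x[0] + x[1] + 0.5 * mu * c**2  # f(x) + (mu/2) * c(x)^2

x = np.array([-1.5, -1.5])                # start near the global minimizer
for mu in [0.1, 1.0, 10.0, 100.0]:
    x = minimize(Q, x, args=(mu,), method='BFGS').x   # warm start each solve
    print(f"mu={mu:6.1f}  x={x}  violation={x[0]**2 + x[1]**2 - 2.0:.2e}")
# The minimizers approach (-1, -1) and the constraint violation shrinks.
```

Note that for small μ the penalty can be too weak; for some problems Q(·, μ) is even unbounded below, so the inner minimization may fail until μ is large enough.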

Now, another question: what can we do for inequality constraints, that is, what kind of quadratic penalty function can we use? For an inequality constraint, a feasible point with ci(x) greater than or equal to zero should not be penalized; only ci(x) smaller than zero needs a penalty. Therefore, we add an extra penalty term: when ci(x) is smaller than zero, max(0, −ci(x)) is the positive part of the violation, and taking its square gives the quadratic penalty term for the inequality-constrained case.

Putting the equality penalty terms and the inequality penalty terms together, we get the quadratic penalty function for problems with both equality and inequality constraints. Notice that the penalty parameter μ is the same for the equality part and the inequality part. For general problems, this is the quadratic penalty function we can use.
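In symbols (again with the conventional factor 1/2, an assumption about the slides' scaling):

```latex
Q(x;\mu) \;=\; f(x)
\;+\; \frac{\mu}{2}\sum_{i\in\mathcal{E}} c_i^2(x)
\;+\; \frac{\mu}{2}\sum_{i\in\mathcal{I}} \bigl(\max\{0,\,-c_i(x)\}\bigr)^2,
```

where E and I index the equality constraints ci(x) = 0 and the inequality constraints ci(x) ≥ 0, respectively.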

After that, we are ready to give the outline of the quadratic penalty method. First, some settings: a starting penalty parameter μ0; a sequence {τk}, which controls the tolerance for solving the subproblems and is assumed to decrease to zero; and a starting point xs0 for the first loop.

Then we start the loop. We first find an approximate minimizer xk of the quadratic penalty function Q(·; μk), starting from the point xsk. We stop solving the inner unconstrained problem once the first-order optimality condition approximately holds, that is, once the gradient norm satisfies ‖∇xQ(xk; μk)‖ ≤ τk. This means that xk is an approximate first-order solution of the subproblem.

Now we check whether we can stop: if the final convergence test is satisfied, we stop. Otherwise, we choose a new penalty parameter μ(k+1) greater than μk and solve the problem again from a starting point xs(k+1).

In other words, the quadratic penalty method transforms the constrained problem into a series of unconstrained problems. We increase the penalty parameter gradually, and hopefully the limit is an optimal solution of the original problem.
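The outline above can be condensed into a short sketch. This is a minimal illustration assuming equality constraints only, scipy's BFGS as the inner solver, and a simple tenfold increase of μ; the function names and update rules are illustrative, not fixed by the course:

```python
import numpy as np
from scipy.optimize import minimize

def quadratic_penalty_method(f, grad_f, c, jac_c, x0,
                             mu0=1.0, tau0=1.0, tol=1e-6, max_outer=20):
    """Outer loop of the quadratic penalty framework (equality constraints only)."""
    x, mu, tau = np.asarray(x0, dtype=float), mu0, tau0
    for k in range(max_outer):
        # Q(x; mu_k) = f(x) + (mu_k/2) * sum_i c_i(x)^2, and its gradient.
        Qk  = lambda x: f(x) + 0.5 * mu * np.sum(c(x) ** 2)
        gQk = lambda x: grad_f(x) + mu * jac_c(x).T @ c(x)
        # Inner loop: approximate minimizer, stopped once ||grad Q|| <= tau_k.
        x = minimize(Qk, x, jac=gQk, method='BFGS',
                     options={'gtol': tau}).x        # warm start for next round
        lam = -mu * c(x)                             # Lagrange multiplier estimates
        if np.linalg.norm(c(x)) <= tol:              # final convergence test
            return x, lam
        mu *= 10.0                                   # increase penalty parameter
        tau = max(0.1 * tau, tol)                    # tighten inner tolerance
    return x, lam
```

For the circle example above one would pass f = lambda x: x[0] + x[1], grad_f = lambda x: np.ones(2), c = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0]), and jac_c = lambda x: np.array([[2*x[0], 2*x[1]]]).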

Now, in terms of the convergence of the quadratic penalty method, we have two theorems. The first concerns global minimizers: assume that each xk is an exact global minimizer of the quadratic penalty subproblem. Then every limit point x* of the sequence {xk} is a global solution of the original problem.

In other words, if for each inner problem we can compute the exact global minimizer xk, then, driving μk to positive infinity, every limit point of {xk} is a global minimizer of the original constrained problem. This is a very nice property. In practice, however, if the unconstrained quadratic penalty problem is not convex, it is usually difficult to find an exact global minimizer; we can typically only obtain a local minimizer or a stationary point.

The following theorem addresses exactly this situation: when we can only satisfy the first-order optimality condition of the inner problem approximately, the limit points of the sequence are still first-order stationary, in the sense made precise below.

Let us look at the details. We assume that the tolerances τk tend to zero and the penalty parameters μk tend to positive infinity. Then, if a limit point x* of the sequence {xk} is infeasible, x* must be a stationary point of the squared constraint norm ‖c(x)‖². This is one case.

The other case: if a limit point x* is feasible and the constraint gradients ▽ci(x*) are linearly independent, then x* is a KKT point of the original constrained problem.

For such points we have the following result: for any infinite subsequence K such that xk tends to x* for indices k in K, the products −μk ci(xk) converge to the Lagrange multiplier λi* for every i in the equality part.

In other words, if x* is a feasible point and LICQ holds, then x* is a KKT point, and we can also recover the corresponding Lagrange multipliers in this way, which is essentially a local convergence result.
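The multiplier formula can be read off from the inner stopping test (assuming the Lagrangian convention L(x, λ) = f(x) − Σi λi ci(x) from Chapter 7; with the opposite sign convention the minus sign disappears):

```latex
\nabla_x Q(x_k;\mu_k)
\;=\; \nabla f(x_k) + \sum_{i\in\mathcal{E}} \mu_k\, c_i(x_k)\,\nabla c_i(x_k)
\;\approx\; 0 ,
```

and comparing this with the KKT condition ∇f(x*) − Σi λi* ∇ci(x*) = 0 identifies λi* ≈ −μk ci(xk).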

Now let us make a brief summary of this part. We introduced the quadratic penalty method as well as its global convergence results under different assumptions: one assumes that each xk is an exact global minimizer of the subproblem; the other only assumes that xk is an approximate stationary point. We stop this part here and see you next time.
