9.2 Exact Penalty Function

Hi, everyone. Welcome to optimization methods. Today we will introduce the second part of chapter nine: the exact penalty method.

Recall that in this chapter we are solving a constrained problem with equality and inequality constraints, and our strategy is to transform the constrained problem into an unconstrained one. In the previous part, we handled the constraints with the quadratic penalty method and solved the resulting unconstrained problem.

There is another way: we can penalize the constraints through absolute values. This leads to the nonsmooth penalty method, also referred to as the l1 penalty method. In the l1 penalty method, we penalize the constraints by their absolute values. For each equality constraint, we take the absolute value of the constraint function and sum these terms; for each inequality constraint, we take the maximum of zero and the constraint violation. These terms are then added directly to the objective function, which gives the l1 penalty function. In other words, we replace the quadratic penalty terms by l1 penalty terms, and we obtain the l1 penalty problem.
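Written out (the sign convention c_i(x) ≥ 0 for inequality constraints is an assumption here; the lecture slides may state the constraints differently), the l1 penalty function for a problem with equality constraints c_i(x) = 0, i ∈ E, and inequality constraints c_i(x) ≥ 0, i ∈ I, is:

```latex
\phi_1(x;\mu) \;=\; f(x)
\;+\; \mu \sum_{i\in\mathcal{E}} \lvert c_i(x)\rvert
\;+\; \mu \sum_{i\in\mathcal{I}} \max\{0,\,-c_i(x)\}
```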

We have some remarks about the exact penalty method. Why do we call it exact? Because it has a nice property, the exact recovery property: by choosing a proper penalty parameter μ, the resulting penalty problem has the same solution as the original problem. Recall that in the quadratic penalty method, we had to drive the parameter σ to infinity in order to recover the solution of the original problem. Here there is no need to drive the penalty parameter μ to infinity; we just choose a proper finite value, and then we get the exact solution of the original problem.

Now, let's take a look at an example. It is a very simple one-dimensional problem: we minimize x subject to the constraint that x must be greater than or equal to one. Comparing the constraint and the objective, we can easily see that the optimal solution is achieved at x* = 1.

Taking a closer look at the l1 penalty function, it has the following form: we multiply the violation of the inequality constraint by the penalty parameter μ. Since the max term is piecewise, we can write the l1 penalty function in two parts. If x is smaller than or equal to one, the constraint is violated and the penalty term is active, so the objective becomes x + μ(1 − x). If x satisfies the constraint, meaning x is greater than or equal to one, the function value is just the original objective, which is x.
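In formulas (reconstructing the expression from the description above), the penalty function for this example is:

```latex
\phi_1(x;\mu) \;=\; x + \mu\max\{0,\,1-x\}
\;=\;
\begin{cases}
(1-\mu)\,x + \mu, & x \le 1,\\
x, & x \ge 1.
\end{cases}
```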

In other words, for this one-dimensional example we can actually plot the l1 penalty function, and by minimizing it we can find the minimizer.

Below, we demonstrate the figure of the penalty function (the plot is shown on the slides). The left-hand branch is the penalized one, because there x violates the constraint; when x is greater than one, the function is just the original objective. If μ is greater than one, the left branch slopes downward toward x = 1, and the optimal solution is achieved exactly at x = 1. However, if we choose the penalty parameter μ strictly smaller than one, the penalty function takes a different shape, and x = 1 is not the minimizer of the problem: the left branch decreases without bound as x goes to −∞, so the penalty problem is unbounded below.

Therefore, for the l1 penalty function, the parameter μ has to be chosen carefully; however, it does not need to tend to infinity.
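As a quick numerical check (a minimal sketch, not part of the lecture; the grid and the values μ = 2 and μ = 0.5 are illustrative choices), we can evaluate the penalty function on a grid and see where its minimum lands:

```python
import numpy as np

def phi1(x, mu):
    # l1 penalty function for: minimize x subject to x >= 1
    return x + mu * np.maximum(0.0, 1.0 - x)

xs = np.linspace(-3.0, 3.0, 601)   # grid over [-3, 3]
for mu in (2.0, 0.5):
    k = np.argmin(phi1(xs, mu))
    print(f"mu = {mu}: grid minimizer at x = {xs[k]:.2f}")

# mu = 2.0 recovers x = 1.00 exactly; mu = 0.5 pushes the minimizer
# to the left edge of the grid, reflecting unboundedness below.
```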

Now let's take a look at how the exact penalty method proceeds. The framework of the algorithm is demonstrated here; it is actually very similar to the quadratic penalty method. At the start we choose an initial penalty parameter μ₀, a tolerance τ, and a starting point x₀ˢ. Then we iterate in the following way. In each iteration, we find an approximate minimizer xₖ of the l1 penalty function with starting point xₖˢ, stopping the inner iteration once the measure of constraint violation is smaller than or equal to τ, and returning the current iterate xₖ. We then check whether we need to stop the outer loop. If the stopping criterion is satisfied, we stop the outer loop; otherwise, we choose the next penalty parameter μₖ₊₁ to be greater than the current one, pick a new starting point xₖ₊₁ˢ for the next loop, and the iteration goes on until the final stopping criterion is satisfied.
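Putting this framework into code, here is a minimal sketch, under assumptions: minimize_phi1 is a hypothetical inner solver for the nonsmooth l1 penalty function, violation is a hypothetical measure of constraint infeasibility, the outer stopping test is taken to be feasibility within a final tolerance, and the tolerance-tightening schedule is one common choice; the lecture leaves these details generic.

```python
def exact_penalty_method(minimize_phi1, violation, x0,
                         mu0=1.0, tau=1e-2, tau_final=1e-8,
                         rho=10.0, max_outer=50):
    # minimize_phi1(mu, x_start, tau): approximate minimizer of
    # phi_1(x; mu), started at x_start with inner tolerance tau
    # (hypothetical helper; the inner problem needs a nonsmooth method).
    mu, x_start = mu0, x0
    for k in range(max_outer):
        xk = minimize_phi1(mu, x_start, tau)  # inner nonsmooth minimization
        if violation(xk) <= tau_final:        # outer stopping test
            return xk
        mu *= rho                             # increase the penalty parameter
        x_start = xk                          # warm-start the next outer loop
        tau *= 0.1                            # tighten the inner tolerance
    return xk
```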

Now let's make a brief summary of the exact penalty method. The exact penalty method basically means that we take the l1 penalty function as the penalty function in the optimization process, and the exact penalty function enjoys a good property, the exact recovery property. However, when we try to solve the inner problem, we may run into difficulty because the objective function is nonsmooth. We will leave this issue to a future lecture, where we will discuss how to solve nonsmooth problems. We stop this part here, and see you next time.
