
Modern Optimization Methods (现代优化方法(英文)) > Chapter 1. Introduction > 1.3 Preliminaries in convex analysis


Hi, everyone.

Today we will talk about the third part of chapter one,

which is the preliminaries in convex analysis.

In this part,

we will mainly

talk about the cone

and also several different kinds of cones,

including normal cone,

polar cone and dual cone.

Now let's first recall the definition of convex set.

So what is a convex set?

As we introduced in previous lecture,

a set is convex if

the line segment between any two points of the set

lies entirely inside the set.
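As a small numerical sketch of this definition (my own illustration, not part of the lecture; the helper name `in_disk` is hypothetical), we can sample pairs of points in the closed unit disk and check that every convex combination of them stays inside:

```python
import numpy as np

rng = np.random.default_rng(3)

def in_disk(x):                        # C = closed unit disk, a convex set
    return np.linalg.norm(x) <= 1.0 + 1e-12

for _ in range(200):
    # draw two points and project them into the disk
    x = rng.uniform(-1, 1, size=2); x = x / max(1.0, np.linalg.norm(x))
    y = rng.uniform(-1, 1, size=2); y = y / max(1.0, np.linalg.norm(y))
    lam = rng.random()                 # lam in [0, 1)
    # convexity: the whole segment between x and y stays inside
    assert in_disk(lam * x + (1 - lam) * y)
```

This only samples the definition rather than proving it, but it makes the "segment stays inside" condition concrete.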

Now let's introduce the definition of cone.

What is the cone?

So everyone,

I think you have all eaten ice cream.

So an ice cream cone is actually a kind of cone.

Now, mathematically,

what is a cone?

A cone should satisfy the following condition.

For any point x in C

and any nonnegative scalar λ,

λx must lie inside C;

if this holds, C will be called a cone.

Now I give two examples here.

So the first example is

that the cone consists

of this ray together with this other ray.

So the two rays actually form a cone.

The second example is similar to this one.

However,

it contains the inner part of the two rays.

So both of these two cases are cones.
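The first example, a single ray, can be checked against the definition numerically (this sketch is mine, not from the lecture; `in_ray` is a hypothetical helper):

```python
import numpy as np

def in_ray(x, d, tol=1e-9):
    """Membership test for the ray C = {t*d : t >= 0}."""
    d = np.asarray(d, dtype=float)
    x = np.asarray(x, dtype=float)
    t = np.dot(x, d) / np.dot(d, d)      # projection coefficient
    return t >= -tol and np.allclose(x, t * d, atol=tol)

# Cone property: x in C and lam >= 0 imply lam*x in C.
d = np.array([1.0, 2.0])
x = 3.0 * d                              # a point on the ray
for lam in [0.0, 0.5, 2.0, 10.0]:
    assert in_ray(lam * x, d)
```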

Now, there is a definition called convex cone.

What is a convex cone?

It is actually a set that is both a cone and a convex set.

So if a cone is a convex set,

then we call it a convex cone.

Now, for example here,

as we can see, this example

consists of the ray x1 and the ray x2

as well as the inner part.

It will be a convex cone.

Because, first of all, it is a cone.

And then this set is a convex set.

So it is a convex cone.

Notice that the origin is included here.
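A standard concrete instance of a convex cone is the nonnegative orthant; the sketch below (my own, not from the lecture) checks both properties at once, the cone property and convexity:

```python
import numpy as np

rng = np.random.default_rng(0)

def in_orthant(x):
    """Membership in the nonnegative orthant, a standard convex cone."""
    return bool(np.all(np.asarray(x) >= 0))

for _ in range(100):
    x = rng.random(3)          # random points in the orthant
    y = rng.random(3)
    lam = rng.random()         # lam in [0, 1)
    assert in_orthant(5.0 * lam * x)            # cone: nonnegative scaling stays inside
    assert in_orthant(lam * x + (1 - lam) * y)  # convex: segments stay inside
```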

Now we also have some

special cases of convex cones.

Let's first define

open half spaces and closed half spaces.

First, a closed half space

is defined as the set of points x

such that the inner product of x and b

is not greater than β,

that is, {x : ⟨x, b⟩ ≤ β}.

The corresponding half space is defined

with the inequality reversed: {x : ⟨x, b⟩ ≥ β}.

Both of these two sets

are called closed half spaces.

And for open half spaces,

the only difference is the inequality here.

So compared to closed half spaces,

the inequalities

are strict.

In this case,

the two sets are called open half spaces.
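The difference between the closed and the open half space only matters on the boundary hyperplane ⟨b, x⟩ = β, as this small check illustrates (my own sketch; the concrete b and β are arbitrary choices):

```python
import numpy as np

b, beta = np.array([1.0, -2.0]), 3.0

def in_closed_halfspace(x):
    # H = {x : <b, x> <= beta}
    return np.dot(b, x) <= beta

def in_open_halfspace(x):
    # H = {x : <b, x> < beta}  (strict inequality)
    return np.dot(b, x) < beta

boundary = np.array([3.0, 0.0])          # <b, x> = 3 = beta
assert in_closed_halfspace(boundary)     # the boundary belongs to the closed set
assert not in_open_halfspace(boundary)   # but not to the open one
```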

Now, the special cases of convex cones

include all the subspaces.

So all the subspaces of the n-dimensional space

are actually convex cones.

Also, if the closed half spaces

or the open half spaces

pass through the origin,

then they are also convex cones.

So for general closed half spaces,

they are not necessarily convex cones.

However, if the closed half spaces

pass through the origin,

then they are convex cones.

Now let's introduce

the normal cone.

Normal cone actually consists of normal vectors.

So we first introduce what a normal vector is.

So a vector is called a normal vector

to a convex set C at point a,

if the following condition holds.

So you can see that

this condition is actually

an inner product of two vectors.

The inner product of the two vectors is smaller

than or equal to zero.

It is equivalent to saying that

the angle formed by the two vectors

is greater than or equal to π/2.

So in other words,

a normal vector

must form an angle greater than or equal to π/2

with every vector x − a for x in C.

Now let's take a look at this example.

So you got a set AB.

This is a line segment.

And we consider the point a here.

So what is the normal vector

at this point a with respect to the set C?

So let's draw this vector x − a.

I draw the set AB here,

and the point a here.

So for any x,

for example, x is here,

x-a is the vector like this.

And also it is possible that x lies

on this side.

So the two vectors

actually represent the possible directions of x − a.

So we are looking for a vector x*

such that the angle

must be greater than or equal to π/2.

So in this case,

x* may be like this.

Also, it is possible that

x* will go down here.

And in both cases the angle is π/2.

So for the first example here,

x* has two boundary cases.

Of course, any positive scalar multiple of these vectors

is also a normal vector.

Similarly, we can consider the case

that if we move this point a to the end of this set,

you can similarly find out

that any vector

starting from direction BD

and turning right until direction BE

is a normal vector.

Now

the normal cone consists of normal vectors.

So we have the definition of normal cone here.

And if this point a is inside C,

then we have the normal cone defined like this;

it actually consists of the normal vectors at a.

However, if a is not a point in C,

then we define this normal cone

as an empty set.

This is the definition for the normal cone.

Similarly, for the two examples here,

we can actually

find out the normal cone of line segment AB

at point a

and also the normal cone of the segment AB at point B.

So let's take a look at the second example.

So for the second example,

this is the end point B and this is A.

All the vectors x − B will point in directions like this.

And all the vectors

in this region

will form the normal cone of C at point B.

So this is the definition for normal cone.
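The segment example from the lecture can be checked numerically. In the sketch below (my own; the coordinates of A and B and the helper `in_normal_cone` are assumptions for illustration), we verify ⟨x*, x − a⟩ ≤ 0 over sampled points x of the segment:

```python
import numpy as np

# Segment C = conv{A, B} in the plane; test whether x_star is in N_C(a).
A, B = np.array([0.0, 0.0]), np.array([2.0, 0.0])

def in_normal_cone(x_star, a, n_samples=101):
    """Checks <x_star, x - a> <= 0 for sampled x on the segment AB."""
    ts = np.linspace(0.0, 1.0, n_samples)
    return all(np.dot(x_star, (1 - t) * A + t * B - a) <= 1e-9 for t in ts)

# At the left endpoint A, vectors pointing away from the segment, or
# perpendicular to it (angle exactly pi/2), are normal:
assert in_normal_cone(np.array([-1.0, 0.0]), A)     # points away from the segment
assert in_normal_cone(np.array([0.0, 1.0]), A)      # perpendicular case
assert not in_normal_cone(np.array([1.0, 0.0]), A)  # points along the segment
```

This matches the picture in the lecture: at an endpoint, the normal cone is a half-plane's worth of directions bounded by the two perpendicular vectors.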

Now let's take a look at the second type of cone

called the polar cone.

The polar cone is actually defined

as a special case of the normal cone: the normal cone at the origin.

So in this definition,

let's take a look at the condition here.

So it's actually looking for the vector y

such that the angle between y and x

is greater than or equal to π/2.

Because the inner product is smaller than

or equal to zero.

This must hold for any x in C.

So given a set C,

we can find the polar cone of C.

Now let's take a look at this particular example.

We got a cone K here,

which is a ray actually.

And if you try to find the polar cone,

we just need to take the vectors

which form an angle of π/2 here.

So this is π/2 here.

And this is π/2 here.

So all the vectors below this line,

they actually form the polar cone.
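For a single ray K = {t·d : t ≥ 0}, the condition ⟨y, x⟩ ≤ 0 for all x in K reduces to ⟨y, d⟩ ≤ 0, so the polar cone is exactly the closed half-plane below the perpendicular line, as in the figure. A small check (my own sketch; the direction d is an arbitrary choice):

```python
import numpy as np

d = np.array([1.0, 1.0])   # K = {t*d : t >= 0}, a single ray

def in_polar(y):
    # For a ray, <y, x> <= 0 for all x in K reduces to <y, d> <= 0.
    return np.dot(y, d) <= 0

assert in_polar(np.array([-1.0, -1.0]))    # opposite direction to d
assert in_polar(np.array([1.0, -1.0]))     # exactly perpendicular (angle pi/2)
assert not in_polar(np.array([1.0, 0.0]))  # angle with d is smaller than pi/2
```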

Now let's take a look at the dual cone.

So the dual cone is defined similarly

to the polar cone.

However, let's take a look at the condition here.

The inequality is changed from smaller-than-or-equal-to

to greater-than-or-equal-to.

So in other words,

the angle between x and y

is smaller than or equal to π/2.

So the two definitions are, in a sense, swapped.

Now we still take a look at the example here.

So you have a cone K, shown as this dark area.

So this is a cone.

And we try to find out the dual cone.

So for dual cone,

we are looking for the vectors

that form an angle smaller than or equal to π/2 with every element of K.

So take the element in K

as this vector here.

Then you got a vector here.

And also if we take the vector in K here

and we take the angle as π/2,

then we got a vector there.

And all the vectors lying between these two vectors

will form the dual cone.
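When K is generated by two rays u and v, every x in K is a·u + b·v with a, b ≥ 0, so ⟨y, x⟩ ≥ 0 for all x in K is equivalent to ⟨y, u⟩ ≥ 0 and ⟨y, v⟩ ≥ 0. The sketch below (my own; u and v are arbitrary choices) uses that reduction:

```python
import numpy as np

# K = cone generated by two rays u and v (every x in K is a*u + b*v, a, b >= 0),
# so y is in the dual cone K* iff <y, u> >= 0 and <y, v> >= 0.
u, v = np.array([1.0, 0.0]), np.array([1.0, 1.0])

def in_dual(y):
    return np.dot(y, u) >= 0 and np.dot(y, v) >= 0

assert in_dual(np.array([0.0, 1.0]))       # <., u> = 0, <., v> = 1
assert in_dual(np.array([1.0, 0.0]))
assert not in_dual(np.array([0.0, -1.0]))  # <., v> = -1 < 0
```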

Similarly, we can flip the dual cone

by adding a minus sign;

then we will get the polar cone.

So the polar cone

and the dual cone are swaps of each other, in a sense.

Now this is a typical cone.

We call it an ice cream cone

or a second order cone.

If the dual cone of a cone is the cone itself,

then we call it a self-dual cone.

The ice cream cone or the second order cone,

it is actually a typical example of a self-dual cone.
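One half of self-duality, K ⊆ K*, says that any two members of the ice cream cone have a nonnegative inner product (which follows from the Cauchy-Schwarz inequality). The sampling sketch below (my own; `in_soc` and `sample_soc` are hypothetical helpers) tests exactly that:

```python
import numpy as np

rng = np.random.default_rng(1)

def in_soc(x):
    """Second-order (ice cream) cone: ||x[:-1]|| <= x[-1]."""
    return np.linalg.norm(x[:-1]) <= x[-1] + 1e-12

def sample_soc(n=3):
    u = rng.normal(size=n - 1)
    t = np.linalg.norm(u) + rng.random()   # last entry dominates the norm
    return np.append(u, t)

# K subset of K*: any two members of the cone have nonnegative inner product
# (by Cauchy-Schwarz), which is one half of self-duality.
for _ in range(200):
    x, y = sample_soc(), sample_soc()
    assert in_soc(x) and in_soc(y)
    assert np.dot(x, y) >= -1e-9
```

The reverse inclusion K* ⊆ K also holds for this cone, but it is not checked by this sampling argument.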

Now let's make a remark about

the relationship between the dual cone and the polar cone.

So this is the definition

for dual cone and polar cone.

So we summarize them here.

And let's take a look at the sign here.

So for dual cone,

it requires the inner product

to be greater than or equal to zero.

And the polar cone takes the other direction.

So if we add a minus to the dual cone,

we actually reach the polar cone.

And vice versa.

So in other words,

we can verify that

the polar cone and the dual cone,

they have the following relationship:

if you add a minus sign

to one of the cones,

then you reach the other.

So it is a very interesting case.
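The sign-flip relationship C° = −C* can be checked concretely for the nonnegative orthant, whose dual cone is the orthant itself and whose polar cone is the nonpositive orthant (this sketch is mine, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(2)

# For C = nonnegative orthant: dual cone C* = C, polar cone C^o = -C.
def in_dual_orthant(y):   # <x, y> >= 0 for all x >= 0  <=>  y >= 0
    return bool(np.all(y >= 0))

def in_polar_orthant(y):  # <x, y> <= 0 for all x >= 0  <=>  y <= 0
    return bool(np.all(y <= 0))

for _ in range(100):
    y = rng.normal(size=4)
    # Sign-flip relationship: y is in C^o exactly when -y is in C*.
    assert in_polar_orthant(y) == in_dual_orthant(-y)
```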

Now this is the end of the third part of chapter one.

And in this part,

we mainly introduce

the cone and three different cones,

including the normal cone,

the polar cone and dual cone.

So finally, let's make a brief summary

about chapter one.

So in this chapter,

which is the beginning of this course,

we mainly introduce

what optimization is

and different ways to classify

optimization problems.

Finally, we also give some preliminaries

in convex analysis,

basically different kinds of cones,

which we will use later in other chapters.

So we stop here for this lecture

and see you next time.
