Current course topic: Modern Optimization Methods (in English) > Chapter 1. Introduction > 1.2 Classification of optimization


1.2 Classification of optimization: lecture notes, key points, and subtitles

Hi, everyone.

Welcome to this class.

In this class,

we discuss the classification of optimization.

In this part,

we will mainly talk about

how to classify

optimization problems

and then

pay attention to convex optimization.

Finally, we will present what optimization algorithms are.

Now, actually,

there are different ways

to classify optimization problems.

Each way divides optimization problems along a different dimension.

For example,

we have constrained and unconstrained optimization.

And we got continuous and discrete optimization.

Also global optimization and local optimization.

And also

stochastic optimization and deterministic optimization.

Finally, there is convex and nonconvex optimization.

In the following,

we will discuss them one by one.

Now,

for constrained and unconstrained optimization problems,

the distinction actually depends on

whether there are restrictions in the problem.

So if there are no restrictions on the variables,

then we call it unconstrained optimization.

If there are restrictions on the variables,

then it is actually

a constrained optimization problem.

So if you look at this figure here,

and for figure (a)

you got no restrictions on x.

So this is the curve of function values.

And the variable x

ranges over the whole n-dimensional space.

And for figure (b)

we have some restrictions on variables.

So we require that x must lie in the interval between

a and b.

In other words,

for figure (b),

it is a constrained optimization problem.

So it is easy to see that

for unconstrained optimization problem,

it may be easier to solve,

because there are no restrictions on the variables.

However, for constrained optimization,

you may need to design more complicated algorithms,

because you have to take care of the restrictions.
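To make the contrast concrete, here is a minimal sketch (my own toy example, not from the lecture): minimizing f(x) = (x − 3)² with and without the restriction x ∈ [0, 2]. The constrained case uses projection (clipping) as one simple way of "taking care of the restrictions"; the choice of gradient descent and the numbers are assumptions for illustration.

```python
# f(x) = (x - 3)^2 has unconstrained minimizer x = 3; with the
# constraint 0 <= x <= 2, the minimizer moves to the boundary x = 2.

def grad(x):
    return 2.0 * (x - 3.0)          # derivative of (x - 3)^2

def gradient_descent(x, steps=200, lr=0.1, lo=None, hi=None):
    for _ in range(steps):
        x = x - lr * grad(x)
        if lo is not None:          # constrained case: project back into [lo, hi]
            x = max(lo, min(hi, x))
    return x

x_unc = gradient_descent(0.0)                   # unconstrained
x_con = gradient_descent(0.0, lo=0.0, hi=2.0)   # constrained to [0, 2]
print(x_unc, x_con)   # x_unc is approximately 3, x_con is exactly 2
```

Note how the constrained run needs the extra projection step: this is the simplest instance of the extra machinery constraints force on an algorithm.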

Let's take a look at continuous and discrete optimization.

For continuous optimization,

it actually means that

the feasible region is uncountably infinite.

In this case,

we call it continuous optimization.

For example,

you got a feasible region like this,

the dark part is the feasible region.

Of course,

there are uncountably many points in this region.

So for this case,

the problem is continuous optimization.

However,

if this is not the case,

that is, if the feasible region contains only a finite

or countably infinite number of points,

then we call it discrete optimization.

For example,

the feasible region consists of the dotted points.

So these dotted points

form the feasible region of this problem.

In this case,

it is called discrete optimization.

So what theoretical difference

do discrete and

continuous optimization make?

As we can see

for continuous optimization,

the benefit that you have

is that you may use local information

to estimate the behavior of the function near a point.

However,

for discrete optimization,

it is not that easy

to use neighboring information.

Because even if two points

are neighbors,

the function value may change greatly

from one point to the other.

So this brings extra difficulty

to solving discrete optimization problems.

So as we can see,

discrete optimization

may be more difficult

than continuous optimization.

In the following,

we give a very typical class

of discrete optimization,

which is called integer programming.

Integer programming

actually deals with variables

whose values can only be binary

or integer.

So if the variables

can take only integer values,

then it will be called

integer programming.

For example,

deciding whether or not to

put a factory in a certain city

is an integer programming problem.

We use a binary variable

to represent this decision.

In this case,

the problem is an integer program.
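The factory-placement idea can be sketched as a tiny integer program. The data below (costs, profits, budget) are hypothetical numbers I made up for illustration; with only a few binary variables, brute-force enumeration is enough, which also shows why discrete problems get hard as the number of variables grows (2ⁿ candidates).

```python
# Decide with binary variables x_i whether to build a factory in city i,
# maximizing total profit subject to a budget, by brute-force enumeration.
from itertools import product

cost   = [4, 3, 5]    # building cost per city (assumed data)
profit = [6, 5, 7]    # expected profit per city (assumed data)
budget = 8

best_x, best_profit = None, -1
for x in product([0, 1], repeat=3):       # all 2^3 binary decisions
    total_cost = sum(c * xi for c, xi in zip(cost, x))
    total_profit = sum(p * xi for p, xi in zip(profit, x))
    if total_cost <= budget and total_profit > best_profit:
        best_x, best_profit = x, total_profit

print(best_x, best_profit)   # best decision: build in cities 1 and 2
```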

Now let's take a look at the

third part of classification.

Actually,

it is global or local optimization.

So the difference

between global and local optimization

is actually the aim.

If you are seeking local minimizers,

then it is actually

a local optimization problem.

If you're trying to seek

the global minimizers,

then it is called global optimization.

For example,

I plot a figure here.

And this is the curve of the function values.

So you can see that

there are many valleys here.

And in each valley,

there will be a local minimizer.

And of course

we can find that zero

is the global minimizer.

So these are the local and global minimizers.

So roughly speaking,

what is local minimizer?

Local minimizer means that

at this point x,

the function value

reaches the lowest value

within a neighborhood of this x.

Then x will be a local minimizer.

In contrast,

if at point x the function value

reaches the lowest

value among all the feasible points,

then it is called the global minimizer.

So this is just a rough description;

we will discuss

local and global minimizers

in more detail in chapter two.
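The valleys picture can be reproduced numerically. Below is a sketch with an assumed function of my own choosing (not the lecture's figure) that has several valleys and its global minimizer at zero, as in the description above; grid points lower than both neighbors serve as approximate local minimizers.

```python
import math

def f(x):
    # many valleys; the global minimum value 0 is attained at x = 0
    return x * x + 3.0 * (1.0 - math.cos(2.0 * x))

xs = [i * 0.01 - 6.0 for i in range(1201)]   # grid on [-6, 6]
ys = [f(x) for x in xs]

# grid points lower than both neighbors: approximate local minimizers
local_mins = [xs[i] for i in range(1, len(xs) - 1)
              if ys[i] < ys[i - 1] and ys[i] < ys[i + 1]]
x_global = min(xs, key=f)                    # global minimizer on the grid

print(local_mins, x_global)   # several local minimizers, global near 0
```

A local method started in the wrong valley would stop at one of the nonzero entries of `local_mins`; only a global search over the whole grid recovers `x_global`.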

Now

let's take a look at

stochastic and deterministic optimization.

First let's take a look at this

optimization problem.

So we have an objective function.

And also we got decision variables

in n-dimensional space.

Finally, we got some restrictions,

which are equality constraints.

In this problem,

the vector c and the vector b

are parameters of the model.

In other words,

we need to know

c and b before we solve this problem.

However,

in some real applications,

c and b may not

be totally known

when we solve this problem.

In this case,

this will be called stochastic optimization.

Otherwise,

if c and b are totally known

before we solve this problem,

then we will call it a deterministic

optimization problem.
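One common way to handle an unknown parameter is to replace the expected objective by an average over samples (sample-average approximation). The toy model below is my own assumption, not from the lecture: the parameter c is random, and we minimize E[(x − c)²], whose true minimizer is the mean of c.

```python
# Sample-average approximation sketch: the random parameter c is observed
# through samples, and we minimize the averaged objective instead of the
# unknown expectation.
import random

random.seed(0)
samples = [random.gauss(5.0, 1.0) for _ in range(10_000)]  # observations of c

# the sample-average objective (1/N) * sum_i (x - c_i)^2 is minimized
# exactly at the sample mean, which approximates the true minimizer E[c] = 5
x_star = sum(samples) / len(samples)
print(x_star)   # close to 5
```

In the deterministic case, c would be known exactly before solving, and no sampling would be needed.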

Finally,

let's talk about convex

and nonconvex optimization.

So what is convex optimization?

Roughly speaking,

you have a convex objective function

for a minimization problem,

and also a convex feasible set.

If these two conditions are satisfied,

then the problem will be called

convex optimization problem.

Otherwise,

it is a nonconvex optimization problem.

Let's take a look at the two curves here.

And for this function,

we can actually

know that this function is a convex function.

And for this function here,

it is nonconvex

because it has many local minimizers.

We will talk about

convex optimization

in more detail in the following.

Now,

to talk about

a convex optimization problem,

the first definition we need to know

is convexity.

Let's first introduce

what a convex set is.

So what is a convex set?

If you take a look at the

four figures here,

can you identify

which of them

are convex sets?

The first one and the second one

are actually convex sets.

Why?

OK, so before

we introduce the mathematical definition,

let's take any two points

in the figure,

and we draw the line segment

between the two points.

If the line segment

entirely lies in this set,

then this set will be called a convex set.

Now, for the first two sets here,

every such line segment stays inside, so they are convex.

However, for the other two sets, if we pick two

points like these,

the line segment

may not lie entirely in the set.

So in this case,

these two sets are not convex sets.

So the mathematical definition

of a convex set is like this:

for any two points x and y in the set S,

the point λx + (1−λ)y, with λ ∈ [0,1], must also lie in S.

This point is actually called

a convex combination of the two points.

In other words,

if the

convex combination of the two points

still lies in this set,

then this set will be convex.

So notice that

for a convex combination,

the definition requires that

the two coefficients must be nonnegative

and at most 1.

And also the sum of the coefficients must be 1.

So this is the convex combination.
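The definition is easy to check numerically. As an assumed example (not from the lecture), the sketch below verifies that the unit disk is convex: every convex combination of two points of the disk stays in the disk.

```python
# Numeric check that S = {p : ||p|| <= 1} is convex: for random points
# a, b in S and lambda in [0, 1], the combination lam*a + (1-lam)*b is in S.
import math, random

def in_disk(p):
    return math.hypot(p[0], p[1]) <= 1.0 + 1e-12

random.seed(1)
for _ in range(1000):
    a = (random.uniform(-1, 1), random.uniform(-1, 1))
    b = (random.uniform(-1, 1), random.uniform(-1, 1))
    if not (in_disk(a) and in_disk(b)):
        continue                       # only test points inside the disk
    lam = random.random()              # coefficient lambda in [0, 1]
    z = (lam * a[0] + (1 - lam) * b[0], lam * a[1] + (1 - lam) * b[1])
    assert in_disk(z)                  # the convex combination stays in S
print("all convex combinations stayed in the set")
```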

After the convex set,

we can introduce

what a convex function is.

So what is a convex function?

A convex function, actually,

must be defined

on a convex set, capital S.

In other words,

the domain of the function must be convex.

Then

f must satisfy the following inequality:

f(λx + (1−λ)y) ≤ λf(x) + (1−λ)f(y), for all x, y in S and λ ∈ [0,1].

So what does the following inequality mean?

So if you still remember the convex combination,

then this condition actually means that

the function value of convex combination

is not greater than

the combination of function values.

It's not greater than

the convex combination of function values.

So this is

what the convex function means.

I also draw a figure here.

So basically it means that

the curve of the function must lie

below the line segment connecting

the two function values.

Related to convex functions,

we also have a similar definition,

the concave function.

So f is a concave function

means that −f is convex.

So the definition is very simple.
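Both definitions can be tested on a concrete function. The check below uses f(x) = x² as an assumed example: the defining inequality holds for f (convex), and the reversed inequality holds for −f (concave).

```python
# Check f(lam*x + (1-lam)*y) <= lam*f(x) + (1-lam)*f(y) for f(x) = x^2,
# and the reversed inequality for the concave function -x^2.
def f(x):
    return x * x

grid = range(-5, 6)
lams = (0.0, 0.25, 0.5, 0.75, 1.0)

ok_convex = all(
    f(lam * x + (1 - lam) * y) <= lam * f(x) + (1 - lam) * f(y) + 1e-12
    for x in grid for y in grid for lam in lams
)
ok_concave = all(
    -f(lam * x + (1 - lam) * y) >= lam * -f(x) + (1 - lam) * -f(y) - 1e-12
    for x in grid for y in grid for lam in lams
)
print(ok_convex, ok_concave)   # True True
```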

And we have several examples

of convex functions.

For example, the linear function.

A linear function is a very special function:

it is both convex and concave.

Secondly, consider the quadratic function,

defined as f(x) = xᵀHx.

If H is a

symmetric positive semidefinite matrix,

then f

is called a convex quadratic function.

Otherwise,

if H

is not positive semidefinite,

then it is not a convex function.
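For small matrices the semidefiniteness test is easy to write out by hand. The sketch below is my own check restricted to symmetric 2×2 matrices (an assumption for simplicity), using the fact that such a matrix is positive semidefinite exactly when both diagonal entries and the determinant are nonnegative.

```python
# For a symmetric 2x2 matrix H, H is positive semidefinite iff both
# diagonal entries are >= 0 and det(H) >= 0; then x^T H x is convex.
def is_psd_2x2(H):
    a, b = H[0][0], H[0][1]
    c, d = H[1][0], H[1][1]
    assert b == c, "H must be symmetric"
    return a >= 0 and d >= 0 and a * d - b * c >= 0

H_convex    = [[2, 1], [1, 1]]    # det = 1 >= 0  -> x^T H x is convex
H_nonconvex = [[1, 2], [2, 1]]    # det = -3 < 0  -> not convex

print(is_psd_2x2(H_convex), is_psd_2x2(H_nonconvex))   # True False
```

For larger matrices one would instead check that all eigenvalues are nonnegative.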

Now we also have some

properties of convex functions.

For example,

the sum of two convex functions

is also convex.

And a positive scalar multiple

of a convex function is also convex.

The two equivalent conditions here

actually characterize

what a convex function is.

For example,

if f is continuously differentiable,

in other words, we have the gradient,

then convexity is also

equivalent to the following condition:

f(y) ≥ f(x) + ∇f(x)ᵀ(y − x), for all x, y in S.

So this condition

characterizes convexity

by the gradient.

Moreover,

if f is twice continuously differentiable,

we can also use the second order information

to characterize the convexity.

The second-order

equivalent condition is that

the Hessian matrix ∇²f(x) is positive semidefinite for all x.

So these are the two equivalent conditions

for convex functions.

They will be used quite often in our class.
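The first-order characterization says a convex function lies above all of its tangent lines. As an assumed one-dimensional example, the check below verifies f(y) ≥ f(x) + f′(x)(y − x) for f(x) = exp(x), whose derivative is again exp(x).

```python
# Numeric check of the first-order condition for the convex function exp:
# exp(y) >= exp(x) + exp(x) * (y - x) at a grid of points.
import math

pts = [-2.0, -1.0, 0.0, 1.0, 2.0]
ok = all(
    math.exp(y) >= math.exp(x) + math.exp(x) * (y - x) - 1e-9
    for x in pts for y in pts
)
print(ok)   # True

# The second-order condition also holds here: f''(x) = exp(x) > 0 everywhere.
```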

Now let's take a look at

convex optimization.

So for convex optimization

as we mentioned before

the function must be convex,

and the feasible region must be convex.

So in terms of the equality constraints

and inequality constraints here,

the corresponding requirements become:

each equality constraint function ci must be linear.

And for the inequality constraints,

we must require each ci to be a concave function.

So please note that

here for inequality constraints,

it is greater than or equal to zero.

In this case,

this ci must be a concave function

to make the feasible region convex.

So this is the requirement

for convex optimization.

In this book,

we will mainly focus on convex optimization.

Now finally,

let's talk about numerical algorithms.

So what are optimization algorithms?

Optimization algorithms

are actually a kind of loop.

So it starts from an initial point x0.

And it generates the next iterate x1,

then x2 and so on.

So the key point, as you can see,

is how we generate the next iterate

from the current iterate xk.

After we figure this out,

then the algorithm

will start and we can run it.
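The loop structure described above can be sketched in a few lines. Gradient descent is just one possible rule for generating the next iterate, chosen here as an assumed example; the function, step size, and tolerance are likewise my own illustrative choices.

```python
# Generic iterative loop: from the current iterate x_k, generate x_{k+1},
# and stop when the iterates barely move.
def minimize(grad, x0, lr=0.2, tol=1e-8, max_iter=1000):
    x = x0                          # initial point x_0
    for k in range(max_iter):
        x_next = x - lr * grad(x)   # generate the next iterate x_{k+1}
        if abs(x_next - x) < tol:   # stopping test
            return x_next, k + 1
        x = x_next
    return x, max_iter

# minimize f(x) = (x - 1)^2, whose gradient is 2(x - 1)
x_star, iters = minimize(lambda x: 2.0 * (x - 1.0), x0=5.0)
print(x_star, iters)   # x_star close to 1
```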

And also there are several ways

to measure the algorithms.

The first one is the robustness.

Robustness actually means

whether the algorithm performs well

on a variety of problems

and for all reasonable choices of starting point.

If an algorithm

can be applied

to a large number of problems

and also it can be started

from many initial points,

then it can be viewed as robust.

Otherwise, it is not that robust.

And for efficiency,

it actually means that

the algorithm takes less CPU time

and also less memory,

because CPU time and memory

are valuable computational resources.

Now finally is the accuracy.

Accuracy means

whether the solution returned by the algorithm

is close enough to a true solution.

If it is exactly the true solution,

of course it is good.

However, real computations

always involve some arithmetic rounding errors.

So basically, the returned solution

is not the exact solution.

So we need to do some accuracy analysis.

And we actually assume,

or we hope, that the solution will be within

a reasonable accuracy tolerance.

So for this lecture, we stop here.

And let's make a brief summary.

We introduced different ways

to classify optimization problems.

And we

talked a little bit about what convex optimization is.

Finally,

we ended with a rough idea

of how numerical algorithms work.

We will talk more

about convex optimization

and algorithms in chapter two.

Okay.

So we stop here and see you next time.
