Current course topic: Algorithm Design and Analysis > 11 Randomized Algorithms > 11.2 Linearity of Expectation > Linearity of Expectation


Linearity of Expectation: lecture notes, key points, and transcript

To facilitate the discussion that follows, let us review some basic facts from probability theory.

For a discrete random variable X, its expectation is defined as E[X] = sum over j from 0 to infinity of j·Pr[X=j].

Suppose we flip a coin that comes up heads with probability p and tails with probability 1-p. How many flips does it take until the first heads appears? That number of flips is a random variable; call it X. Then E[X] = sum over j from 0 to infinity of j·Pr[X=j]. The event X = j means that the first heads appears on the j-th flip, i.e., the first j-1 flips come up tails and the j-th flip comes up heads; this happens with probability (1-p)^(j-1)·p. Summing this series and simplifying gives E[X] = 1/p.
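As a quick sanity check (a minimal Python sketch, not part of the original lecture), we can estimate E[X] empirically by simulating repeated coin flips and comparing the average to 1/p:

```python
import random

def flips_until_heads(p, rng):
    """Flip a biased coin (heads with probability p) until the first heads;
    return the number of flips. This is a geometric random variable."""
    flips = 1
    while rng.random() >= p:  # probability 1-p of tails, keep flipping
        flips += 1
    return flips

rng = random.Random(0)
p = 0.25
trials = 100_000
avg = sum(flips_until_heads(p, rng) for _ in range(trials)) / trials
print(avg)  # close to 1/p = 4
```

With 100,000 trials the sample mean concentrates tightly around 1/p, matching the series computation above.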

For a random variable X taking only the values 0 or 1, we have a useful property: E[X] = Pr[X=1].

Proof: E[X] = sum over j from 0 to infinity of j·Pr[X=j] = sum over j from 0 to 1 of j·Pr[X=j] = Pr[X=1].

We will frequently use the linearity of expectation: given two random variables X and Y defined on the same probability space, E[X+Y] = E[X] + E[Y]. We often use this property to decompose a complicated computation into a sum of simple cases.

Let us look at a game. A card is drawn from a deck of n cards; we guess its value, and then the card is turned over. The game ends once all n cards have been guessed. First suppose the guesser has no memory of past cards: each guess is chosen uniformly at random from all n possibilities.

Claim: with the memoryless guessing strategy, the expected number of correct guesses is 1.

Proof: let the random variable X_i = 1 if the i-th guess is correct, and X_i = 0 otherwise. Let X be the number of correct guesses when the game ends, so X = X_1 + X_2 + … + X_n. For the 0-1 variable X_i, E[X_i] = Pr[X_i=1] = 1/n. By linearity of expectation, E[X] = E[X_1] + … + E[X_n] = 1/n + … + 1/n = 1.
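The memoryless game is also easy to simulate (a hypothetical Python sketch, not from the lecture); the average number of correct guesses should hover around 1 regardless of n:

```python
import random

def memoryless_game(n, rng):
    """Play one round: guess n shuffled cards, each guess drawn uniformly
    from all n values, ignoring which cards have already appeared."""
    cards = list(range(n))
    rng.shuffle(cards)
    return sum(1 for card in cards if rng.randrange(n) == card)

rng = random.Random(0)
n, trials = 20, 50_000
avg = sum(memoryless_game(n, rng) for _ in range(trials)) / trials
print(avg)  # close to 1
```

Note the simulation never computes the distribution of X directly; linearity of expectation is what lets the analysis sidestep that, even though the X_i here are defined on one shared shuffle.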

Now consider the same game, but the guesser has memory: he remembers every card that has appeared, and each subsequent guess is chosen uniformly at random from the cards that have not yet appeared.

Claim: with the memory-based guessing strategy, the expected number of correct guesses is Θ(log n).

Proof: as in the previous proof, let X_i = 1 if the i-th guess is correct, and X_i = 0 otherwise. Let X be the number of correct guesses when the game ends, so X = X_1 + X_2 + … + X_n. For the 0-1 variable X_i, E[X_i] = Pr[X_i=1] = 1/(n-i+1), since i-1 cards have already been revealed and the guess is uniform over the remaining n-i+1 cards. By linearity of expectation, E[X] = E[X_1] + … + E[X_n] = 1/n + … + 1/2 + 1/1, the n-th harmonic number, which is Θ(log n).
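The harmonic-number bound can likewise be checked by simulation (again a hypothetical sketch under the assumption that the guesser remembers all revealed cards); the sample mean should track H(n) = 1 + 1/2 + … + 1/n:

```python
import random

def memory_game(n, rng):
    """Play one round with perfect memory: each guess is drawn uniformly
    from the cards that have not yet been revealed."""
    cards = list(range(n))
    rng.shuffle(cards)
    remaining = set(cards)
    correct = 0
    for card in cards:
        guess = rng.choice(tuple(remaining))  # uniform over unseen cards
        if guess == card:
            correct += 1
        remaining.remove(card)  # remember the revealed card
    return correct

rng = random.Random(0)
n, trials = 20, 50_000
avg = sum(memory_game(n, rng) for _ in range(trials)) / trials
harmonic = sum(1 / k for k in range(1, n + 1))  # H(20) is about 3.6
print(avg, harmonic)
```

For n = 20 the empirical average lands near H(20) ≈ 3.6, well below n, illustrating how slowly Θ(log n) grows.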

