
Recitation 1A: Probability Review

Hung-Bin Chang and Yu-Yu Lin


Electrical Engineering Department, University of California, Los Angeles (UCLA), USA
hungbin@seas.ucla.edu and skywoods2001@ucla.edu

EE 132B (Prof. Izhak Rubin, UCLA), Fall 2014

Outline

Course Information

Probability Review


Course Information

Administrative Stuff

Instructor: Prof. Izhak Rubin

E-mail: rubin@ee.ucla.edu
Office: Rm 58-115, Engineering IV
Office hours: MW 3:00 PM - 3:50 PM

TAs: Hung-Bin (Bing) Chang and Yu-Yu Lin

E-mail: hungbin@seas.ucla.edu and skywoods2001@ucla.edu
Office: Rm 67-112, Engineering IV
Office hours:
Bing: Tue and Thu 5:00 PM - 5:50 PM
Yu-Yu: Wed 6:00 PM - 6:50 PM


Course Information

Homework, Exam and Grading Policies


Homework Policy
Announcement: every Friday before 12 PM on the UCLA CCLE website
6 assignments and 2 computer workouts
Assignment 1 is announced on Friday, 10/3.
Note: HW 1 is due on Friday, 10/17.

Submission: Homework Box (Rm 67-112, Engineering IV)

NO LATE HOMEWORK!
Hard copy only

Exams: one midterm (2 hours) and one final (3 hours)


Grading Policy
HW assignments (including computer workouts): 25%
Midterm: 25%
Final: 49%
Course survey: 1%

Probability Review

Probability Space
In probability theory, a probability space is a mathematical construct that models a real-world process (or experiment) whose outcomes occur randomly.
A probability space is defined by a triple (V, E, P):
V is the sample space, the set of all possible outcomes ω.
E is the collection of subsets of V called events; an event is a set of outcomes.
P is a function that maps each event in E to the interval [0, 1], i.e., P assigns probabilities to the events in E.

For example, the probability space for tossing a coin:

V = {H, T}
E = {∅, {H}, {T}, {H, T}} (i.e., all possible combinations of outcomes in V)
P: P(∅) = 0, P({H}) = 1/2, P({T}) = 1/2, P({H, T}) = 1
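To make this concrete, here is a minimal Python sketch (an illustration, not part of the course material) that builds this probability space explicitly, enumerating every event as a subset of V:

```python
from itertools import combinations

# Sample space V for a single coin toss.
V = frozenset({"H", "T"})

# E: all subsets of V (the power set); each subset is an event.
E = [frozenset(s) for r in range(len(V) + 1) for s in combinations(V, r)]

# P assigns each outcome probability 1/2; an event's probability
# is the sum over the outcomes it contains.
outcome_prob = {"H": 0.5, "T": 0.5}

def P(event):
    return sum(outcome_prob[w] for w in event)

for event in E:
    print(sorted(event), P(event))
# The four events get probabilities 0, 0.5, 0.5, and 1.
```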


Probability Review

Random Variable

Definition: A random variable X is a function that associates a real number with each element of the sample space, i.e., (in more technical terms) a map V → R that assigns a value X(ω) ∈ R to each outcome ω.
An example of a random variable for tossing a coin:
X(H) = +1 (if you get a head, you win one dollar)
X(T) = −1 (if you get a tail, you lose one dollar)
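As a sketch (illustrative only), this mapping and a short simulation of the resulting winnings:

```python
import random

# X maps each outcome in V to a real number: the winnings per toss.
X = {"H": +1, "T": -1}

random.seed(0)  # arbitrary seed, for reproducibility
tosses = [random.choice(["H", "T"]) for _ in range(10)]
winnings = [X[outcome] for outcome in tosses]
print(tosses)
print("total winnings:", sum(winnings))
```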


Probability Review

Three Axioms of Probability

Any probability function must obey the following:

1. P(A) ≥ 0 for all A ∈ E.
2. P(A1 ∪ A2 ∪ ...) = Σ_i P(A_i) if A1, A2, ... are pairwise disjoint, where A_i ∈ E for all i.
3. P(V) = 1.

For example (toss a coin):

A1 = {H} and A2 = {T}.
P(A1) = 0.5, P(A2) = 0.5, and
P(A1 ∪ A2) = P(V) = P(A1) + P(A2) = 1.
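A quick self-contained check of the three axioms on the coin-toss space (a sketch, using the probabilities from the slide):

```python
# Event probabilities for the coin-toss space, as on the slide.
P = {frozenset(): 0.0,
     frozenset({"H"}): 0.5,
     frozenset({"T"}): 0.5,
     frozenset({"H", "T"}): 1.0}

V = frozenset({"H", "T"})
A1, A2 = frozenset({"H"}), frozenset({"T"})

assert all(p >= 0 for p in P.values())  # Axiom 1: P(A) >= 0
assert P[A1 | A2] == P[A1] + P[A2]      # Axiom 2: additivity for disjoint events
assert P[V] == 1.0                      # Axiom 3: P(V) = 1
print("All three axioms hold for this space.")
```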


Probability Review

Probability Distribution Function or Cumulative Distribution Function (c.d.f.)

F_X(x) = P(X ≤ x) (either discrete or continuous) has the following properties:

F_X is non-decreasing, i.e., F_X(y) ≥ F_X(x) if y > x.
lim_{x→−∞} F_X(x) = 0 and lim_{x→+∞} F_X(x) = 1.
F_X(x) is a right-continuous function.
Roughly speaking, a function is right-continuous if no jump occurs when the limit point is approached from the right:
lim_{x→a+} F_X(x) = F_X(a), ∀a ∈ R.
Figure: An example of a right-continuous function (a step function jumping from 0.5 to 1).
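As an illustration (a sketch, not from the slides), the c.d.f. of the ±1 coin-winnings variable is such a step function; evaluating it near a jump shows right-continuity numerically:

```python
def F(x):
    """c.d.f. of the coin-winnings variable: P(X = -1) = P(X = +1) = 0.5."""
    if x < -1:
        return 0.0
    elif x < 1:
        return 0.5
    return 1.0

# Approach the jump at x = -1 from the right: the limit equals F(-1).
for eps in (0.1, 0.01, 0.001):
    print(F(-1 + eps))   # 0.5 each time
print(F(-1))             # 0.5, so F is right-continuous at -1
print(F(-1 - 1e-9))      # 0.0: the left limit differs, i.e., a jump
```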



Probability Review

Probability Density Function (p.d.f.) and Probability Mass Function (p.m.f.)

Continuous distribution: f_X(x) = (d/dx) F_X(x), where f_X(x) is called the probability density function (p.d.f.).
Discrete distribution: F_X(x) = Σ_{y ≤ x} f_X(y), where f_X(y) = P(X = y) is called the probability mass function (p.m.f.).
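Both relationships can be checked numerically; a minimal sketch (assuming NumPy is available) differentiates the exponential c.d.f. and cumulatively sums a fair-die p.m.f.:

```python
import numpy as np

# Continuous case: F(x) = 1 - exp(-x) for x >= 0, so f(x) = exp(-x).
x = np.linspace(0.0, 5.0, 1001)
F = 1.0 - np.exp(-x)
f_numeric = np.gradient(F, x)  # numerical d/dx of the c.d.f.
print(np.max(np.abs(f_numeric - np.exp(-x))))  # small discretization error

# Discrete case: p.m.f. of a fair die; the c.d.f. is its running sum.
pmf = np.full(6, 1 / 6)
cdf = np.cumsum(pmf)
print(cdf)  # [1/6, 2/6, ..., 1.0]
```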


Probability Review

Joint Distribution Function

General form:
F_{X1,X2,...,Xn}(x1, x2, ..., xn) = P(X1 ≤ x1, X2 ≤ x2, ..., Xn ≤ xn).
If X1, X2, ..., Xn are mutually independent, then
F_{X1,X2,...,Xn}(x1, x2, ..., xn) = F_{X1}(x1) F_{X2}(x2) ... F_{Xn}(xn).
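A simulation sketch (assuming NumPy; the distributions are chosen arbitrarily for illustration): for independent uniforms, the empirical joint c.d.f. is close to the product of the marginal c.d.f.s:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x_samples = rng.uniform(size=n)  # X ~ Uniform(0, 1)
y_samples = rng.uniform(size=n)  # Y ~ Uniform(0, 1), independent of X

x0, y0 = 0.3, 0.7
joint = np.mean((x_samples <= x0) & (y_samples <= y0))       # F_{X,Y}(x0, y0)
product = np.mean(x_samples <= x0) * np.mean(y_samples <= y0)
print(joint, product)  # both close to 0.3 * 0.7 = 0.21
```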


Probability Review

Conditional Probability

Conditional probability: P(A | B) = P(A, B) / P(B).
Total probability theorem: P(A) = Σ_{i=1}^{K} P(A | B_i) P(B_i), where B_1, B_2, ..., B_K are disjoint and Σ_{i=1}^{K} P(B_i) = 1.
Bayes' theorem: P(B_i | A) = P(B_i) P(A | B_i) / P(A) = P(B_i) P(A | B_i) / Σ_{i=1}^{K} P(A | B_i) P(B_i).
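A small worked example (the numbers are hypothetical, not from the slides): one fair coin and one biased coin, each picked with probability 1/2; after observing heads, Bayes' theorem gives the posterior probability of each coin:

```python
# Hypothetical setup: B1 = fair coin, B2 = biased coin with P(H) = 0.9,
# each picked with prior probability 1/2; A = the event "heads observed".
priors = {"B1": 0.5, "B2": 0.5}
likelihood = {"B1": 0.5, "B2": 0.9}  # P(A | Bi)

# Total probability theorem: P(A) = sum_i P(A | Bi) P(Bi).
p_A = sum(likelihood[b] * priors[b] for b in priors)

# Bayes' theorem: P(Bi | A) = P(Bi) P(A | Bi) / P(A).
posterior = {b: priors[b] * likelihood[b] / p_A for b in priors}
print(p_A)        # 0.7
print(posterior)  # {'B1': 0.357..., 'B2': 0.642...}
```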


Probability Review

Marginal Probability
For a continuous distribution, given f(X = x, Y = y), then
f(X = x) = ∫_y f(X = x, Y = y) dy = ∫_y f_{X|Y}(x | y) f_Y(y) dy.

For a discrete distribution, given P(X = m, Y = n), then
P(X = m) = Σ_n P(X = m, Y = n).

If X and Y are independent, then:
For a continuous distribution, f(X = x, Y = y) = f(X = x) f(Y = y).
For a discrete distribution, P(X = m, Y = n) = P(X = m) P(Y = n).
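In the discrete case, marginalization is just a row (or column) sum of the joint p.m.f. table; a minimal sketch with made-up numbers (assuming NumPy):

```python
import numpy as np

# Hypothetical joint p.m.f. P(X = m, Y = n): rows indexed by m, columns by n.
joint = np.array([[0.10, 0.20],
                  [0.30, 0.40]])
assert np.isclose(joint.sum(), 1.0)

p_X = joint.sum(axis=1)  # P(X = m) = sum_n P(X = m, Y = n)
p_Y = joint.sum(axis=0)  # P(Y = n) = sum_m P(X = m, Y = n)
print(p_X)  # [0.3, 0.7]
print(p_Y)  # [0.4, 0.6]

# Independence would require joint == outer(p_X, p_Y); it fails here.
print(np.allclose(joint, np.outer(p_X, p_Y)))  # False
```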


Probability Review

Expectation and Variance of a Random Variable

Continuous distribution:
E[X] = ∫ x f_X(x) dx.
E[g(X)] = ∫ g(x) f_X(x) dx.
nth moment: E[X^n] = ∫ x^n f_X(x) dx.

Discrete distribution:
E[X] = Σ_x x P(X = x).
E[g(X)] = Σ_x g(x) P(X = x).
nth moment: E[X^n] = Σ_x x^n P(X = x).

Variance: Var[X] = E[(X − E[X])^2] = E[X^2] − E[X]^2.
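The discrete formulas apply directly; a short sketch computes the mean, second moment, and variance of a fair die:

```python
values = range(1, 7)  # faces of a fair die
p = 1 / 6             # P(X = x), the same for each face

mean = sum(x * p for x in values)              # E[X]
second_moment = sum(x**2 * p for x in values)  # E[X^2]
variance = second_moment - mean**2             # Var[X] = E[X^2] - E[X]^2
print(mean, second_moment, variance)           # 3.5, 15.1666..., 2.9166...
```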


Probability Review

Moment Generating Function (m.g.f .)


In probability theory and statistics, the moment generating function (m.g.f.) of a random variable X is defined as
Φ(t) = E[e^{tX}], t ∈ R.

Continuous distribution (Laplace transform, obtained by setting t = −s):
Φ(s) = E[e^{−sX}] = ∫ e^{−sx} f_X(x) dx.
Interesting facts:
−(d/ds) Φ(s) |_{s=0} = E[X]
(d^2/ds^2) Φ(s) |_{s=0} = E[X^2]

Discrete distribution (Z transform, obtained by setting e^t = z):
Φ(z) = E[z^X] = Σ_{n=−∞}^{∞} z^n P(X = n).
Interesting facts:
(d/dz) Φ(z) |_{z=1} = E[X]
(d^2/dz^2) Φ(z) |_{z=1} = E[X(X − 1)]
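The discrete-case facts can be checked symbolically; a sketch using SymPy (an added tool, not part of the course material) with the Z transform of a fair die:

```python
import sympy as sp

z = sp.symbols("z")
# Z transform (probability generating function) of a fair die:
# Phi(z) = sum_{n=1}^{6} z^n / 6.
Phi = sum(z**n for n in range(1, 7)) / 6

EX = sp.diff(Phi, z).subs(z, 1)          # d/dz Phi at z = 1 -> E[X]
EX_X_m1 = sp.diff(Phi, z, 2).subs(z, 1)  # d^2/dz^2 Phi at z = 1 -> E[X(X-1)]
print(EX)       # 7/2
print(EX_X_m1)  # 35/3

# Var[X] = E[X(X-1)] + E[X] - E[X]^2.
print(EX_X_m1 + EX - EX**2)  # 35/12
```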

Probability Review

Example: Poisson Distribution


The probability mass function (p.m.f.) of a Poisson random variable with parameter λ is given by
P(X = n) = e^{−λ} λ^n / n!, n ≥ 0.

E[X] = λ and Var[X] = λ.

Derivation (directly):
E[X] = Σ_{n=0}^{∞} n e^{−λ} λ^n / n! = λ Σ_{n=1}^{∞} e^{−λ} λ^{n−1} / (n−1)! = λ Σ_{m=0}^{∞} e^{−λ} λ^m / m! = λ,
where m = n − 1 and Σ_{m=0}^{∞} λ^m / m! = e^{λ}.

Derivation (by m.g.f.):
Φ(z) = E[z^X] = Σ_{n=0}^{∞} z^n e^{−λ} λ^n / n! = e^{−λ} Σ_{n=0}^{∞} (λz)^n / n! = e^{−λ} e^{λz} = e^{λ(z−1)}.
E[X] = (d/dz) Φ(z) |_{z=1} = λ.
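A quick numerical sanity check of E[X] = Var[X] = λ, using only the standard library (the infinite sum is truncated where the tail is negligible):

```python
import math

lam = 3.0  # the parameter lambda
N = 100    # truncation point; the tail beyond n = 100 is negligible here

pmf = [math.exp(-lam) * lam**n / math.factorial(n) for n in range(N)]
mean = sum(n * p for n, p in enumerate(pmf))
var = sum((n - mean)**2 * p for n, p in enumerate(pmf))
print(sum(pmf))  # ~1.0: the p.m.f. sums to one
print(mean)      # ~3.0 = lambda
print(var)       # ~3.0 = lambda
```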


Probability Review

Distribution of the Sum of Two Independent Random Variables

Given two independent random variables X and Y with respective p.d.f.s f_X(x) and f_Y(y) (or p.m.f.s P(X = x) and P(Y = y)), what is the p.d.f. (or p.m.f.) of W = X + Y?

Two methods:

(Directly) P(W = n) = P(X + Y = n) (most useful in HWs)
Continuous distribution:
f(W = n) = ∫ f(X + Y = n | Y = m) f_Y(m) dm = ∫ f(X = n − m) f_Y(m) dm.
Discrete distribution:
P(W = n) = Σ_m P(X + Y = n | Y = m) P(Y = m) = Σ_m P(X = n − m) P(Y = m).

(By m.g.f.) Φ_W(t) = Φ_X(t) Φ_Y(t), since for independent X and Y the m.g.f. of the sum is the product of the individual m.g.f.s; a sketch verifying the convolution follows below.
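The direct (convolution) method can be checked numerically; this sketch (assuming NumPy) convolves two truncated Poisson p.m.f.s and compares the result against the Poisson p.m.f. with parameter λ1 + λ2:

```python
import math
import numpy as np

def poisson_pmf(lam, N):
    """Truncated Poisson p.m.f. for n = 0, ..., N - 1."""
    return np.array([math.exp(-lam) * lam**n / math.factorial(n)
                     for n in range(N)])

lam1, lam2, N = 2.0, 3.0, 60
pX = poisson_pmf(lam1, N)
pY = poisson_pmf(lam2, N)

# P(W = n) = sum_m P(X = n - m) P(Y = m): a discrete convolution.
pW = np.convolve(pX, pY)[:N]
print(np.max(np.abs(pW - poisson_pmf(lam1 + lam2, N))))
# ~0: the sum of independent Poissons is Poisson(lam1 + lam2)
```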


Probability Review

Q&A

