Outcomes can be from a finite or countable sample space (set) Ω of events, or be tuples drawn over the reals R.
- Coin toss: Ω = {H, T}
- Packets to a URL per day: Ω = N (positive integers)
- Rain in cm/month in Prov.: Ω = R (real number)
Probability Space
Sample space Ω: all possible outcomes.
Events: a family F of subsets of the sample space Ω.
E.g. Ω = {H,T}³, F0 = {TTT, HHT, HTH, THH} (even no. of Hs), F1 = {HTT, THT, TTH, HHH} (odd no. of Hs).
Events are mutually exclusive if they are disjoint, e.g. F0 and F1 above.
A probability distribution is a function that assigns a probability 0 ≤ P(E) ≤ 1 to each event E.
Properties of Probability Function
1. For any event E in Ω, 0 ≤ P(E) ≤ 1.
2. P(Ω) = 1.
3. For any finite or countably infinite sequence of disjoint events E1, E2, …: P(∪i Ei) = ∑i P(Ei).
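These properties can be checked on a toy example. The sketch below (a hypothetical fair die, not from the notes) verifies all three properties with exact fractions:

```python
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}                  # sample space of a fair die
P = {s: Fraction(1, 6) for s in omega}      # uniform distribution over outcomes

def prob(event):
    # P(E) is the sum of the probabilities of the outcomes in E
    return sum(P[s] for s in event)

even = {2, 4, 6}
odd = {1, 3, 5}

assert 0 <= prob(even) <= 1                 # property 1
assert prob(omega) == 1                     # property 2
assert prob(even | odd) == prob(even) + prob(odd)  # additivity for disjoint events
```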
Probabilities of Events
2. Joint probability of A and B: the probability that both occur, written P(A ∩ B) or P(A, B). If the events Bj partition Ω, then P(A) = ∑j P(A, Bj).
3. Probability of a union: P(A ∪ B) = P(A) + P(B) − P(A ∩ B).
For a continuous random variable X with density f:
P{s ∈ S : X(s) ∈ B} = P{X ∈ B} = ∫_B f(x) dx
For a discrete random variable with pmf p: ∑_i p(x_i) = 1.
X is a binomial random variable with parameters n and p if
p(i) = P{X = i} = C(n, i) p^i (1 − p)^(n−i), i = 0, 1, …, n,
where C(n, i) = n! / (i!(n − i)!).
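As a quick sanity check, the sketch below (hypothetical values n = 10, p = 0.3) confirms that the binomial pmf sums to 1:

```python
from math import comb

def binomial_pmf(i, n, p):
    # p(i) = C(n, i) p^i (1 - p)^(n - i)
    return comb(n, i) * p**i * (1 - p)**(n - i)

n, p = 10, 0.3
total = sum(binomial_pmf(i, n, p) for i in range(n + 1))
assert abs(total - 1.0) < 1e-12    # the pmf sums to 1
```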
Geometric Random Variable
A sequence of independent Bernoulli trials is performed with p = P(success).
X is the number of trials up to and including the first success. Then X may equal 1, 2, … and
p(i) = P{X = i} = (1 − p)^(i−1) p, i = 1, 2, …
Use the geometric series ∑_{i=1}^{∞} r^(i−1) = 1/(1 − r) (for |r| < 1) to verify that ∑_{i=1}^{∞} p(i) = 1.
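The verification can also be done numerically; the sketch below (hypothetical p = 0.25) sums the pmf over a long but finite range:

```python
def geometric_pmf(i, p):
    # p(i) = (1 - p)^(i - 1) * p, i = 1, 2, ...
    return (1 - p)**(i - 1) * p

p = 0.25
partial = sum(geometric_pmf(i, p) for i in range(1, 200))
# partial sum of the geometric series: 1 - (1 - p)^199, already within
# floating-point tolerance of 1
assert abs(partial - 1.0) < 1e-12
```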
Poisson Random Variable
X is a Poisson random variable with parameter λ > 0 if
p(i) = P{X = i} = e^(−λ) λ^i / i!, i = 0, 1, …
Note: ∑_{i=0}^{∞} p(i) = 1 follows from e^λ = ∑_{i=0}^{∞} λ^i / i!.
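A numerical check of this normalization (hypothetical λ = 4):

```python
from math import exp, factorial

def poisson_pmf(i, lam):
    # p(i) = e^(-lam) * lam^i / i!
    return exp(-lam) * lam**i / factorial(i)

lam = 4.0
# the tail beyond i = 99 is negligibly small for lam = 4
total = sum(poisson_pmf(i, lam) for i in range(100))
assert abs(total - 1.0) < 1e-12
```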
Continuous Random Variables
A continuous random variable X has a pdf f with:
f(x) ≥ 0
∫_{−∞}^{∞} f(x) dx = 1
P{a ≤ X ≤ b} = ∫_a^b f(x) dx (note P{X = a} = 0)
The cdf is: F(a) = P{X ≤ a} = ∫_{−∞}^a f(x) dx, so f(x) = dF(x)/dx.
P{a − ε/2 ≤ X ≤ a + ε/2} ≈ ε f(a) means that f(a) measures how likely X is to be near a.
Uniform random variable
X is uniformly distributed over an interval (a, b) if its pdf is
f(x) = 1/(b − a) for a < x < b, and 0 otherwise
(all we know about X is that it takes a value between a and b).
Then its cdf is:
F(x) = 0 for x ≤ a; (x − a)/(b − a) for a < x < b; 1 for x ≥ b.
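The cdf can be cross-checked by numerically integrating the pdf; the sketch below uses hypothetical values a = 2, b = 5 and a midpoint rule:

```python
def uniform_pdf(x, a, b):
    return 1.0 / (b - a) if a < x < b else 0.0

def uniform_cdf(x, a, b):
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

# midpoint-rule integral of the pdf from below a up to x
a, b, x = 2.0, 5.0, 3.5
n = 100_000
lo = a - 1.0
h = (x - lo) / n
riemann = sum(uniform_pdf(lo + (k + 0.5) * h, a, b) * h for k in range(n))
assert abs(riemann - uniform_cdf(x, a, b)) < 1e-3
```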
Exponential random variable
X has an exponential distribution with parameter λ > 0 if its pdf is
f(x) = λ e^(−λx) for x ≥ 0, and 0 otherwise.
Then its cdf is:
F(x) = 0 for x < 0; 1 − e^(−λx) for x ≥ 0.
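The relation f(x) = dF(x)/dx can be checked on this pair; the sketch below (hypothetical λ = 2) differentiates the cdf numerically with a central difference:

```python
from math import exp

lam = 2.0   # hypothetical rate parameter

def f(x):   # exponential pdf
    return lam * exp(-lam * x) if x >= 0 else 0.0

def F(x):   # exponential cdf
    return 1.0 - exp(-lam * x) if x >= 0 else 0.0

# f(x) = dF(x)/dx, checked with a central difference
x, h = 0.7, 1e-6
deriv = (F(x + h) - F(x - h)) / (2 * h)
assert abs(deriv - f(x)) < 1e-5
```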
Gamma function: if α is an integer, then Γ(α) = (α − 1)!.
Normal random variable
X has a normal distribution with parameters µ and σ² if its pdf is
f(x) = (1 / (σ√(2π))) e^(−(x − µ)² / (2σ²)), −∞ < x < ∞.
Expectation
E[X] = ∑_i x_i p(x_i) (discrete), or ∫_{−∞}^{∞} x f(x) dx (continuous)
Binomial: E[X] = np
Gamma: E[X] = αβ
E[g(X)] = ∑_i g(x_i) p(x_i) (X discrete), or ∫_{−∞}^{∞} g(x) f(x) dx (X continuous)
If g(X) is a linear function of X:
E[aX + b] = aE[X] + b
Higher-order moments
The nth moment of X is E[X^n] = ∑_i x_i^n p(x_i) (discrete), or ∫_{−∞}^{∞} x^n f(x) dx (continuous).
The variance is
Var(X) = E[(X − E[X])²]
It is sometimes easier to calculate as
Var(X) = E[X²] − (E[X])²
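Both forms of the variance can be computed exactly for a small pmf; the sketch below uses a hypothetical fair die:

```python
from fractions import Fraction

xs = range(1, 7)                 # a fair die (hypothetical example)
p = Fraction(1, 6)

mean = sum(x * p for x in xs)                   # E[X] = 7/2
second = sum(x * x * p for x in xs)             # E[X^2] = 91/6
var_def = sum((x - mean)**2 * p for x in xs)    # E[(X - E[X])^2]
var_short = second - mean**2                    # E[X^2] - (E[X])^2
assert var_def == var_short == Fraction(35, 12)
```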
Variances of Discrete Random Variables
Bernoulli: E[X²] = 1·p + 0·(1 − p) = p; Var(X) = p − p² = p(1 − p)
Binomial: Var(X) = np(1 − p)
Jointly Distributed Random Variables
For definitions of joint cdf, pmf, pdf and marginal distributions, the main results that we can use:
E[X + Y] = E[X] + E[Y]
E[aX + bY] = aE[X] + bE[Y]
E[a1X1 + a2X2 + … + anXn] = a1E[X1] + … + anE[Xn]
If X and Y are independent:
p(x, y) = pX(x) pY(y) (discrete)
f(x, y) = fX(x) fY(y) (continuous)
Also, if X and Y are independent, then for any functions g and h,
E[g(X)h(Y)] = E[g(X)] E[h(Y)]
Covariance
The covariance of X and Y is:
Cov(X, Y) = E[(X − E[X])(Y − E[Y])] = E[XY] − E[X]E[Y]
If X1, …, Xn are independent:
Var(∑_{i=1}^n Xi) = ∑_{i=1}^n Var(Xi)
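A quick exact check of Var(X1 + X2) = Var(X1) + Var(X2) for two independent fair dice (a hypothetical example):

```python
from fractions import Fraction

die = range(1, 7)

def variance(dist):
    # dist: list of (value, probability) pairs
    m = sum(v * q for v, q in dist)
    return sum((v - m)**2 * q for v, q in dist)

single = [(a, Fraction(1, 6)) for a in die]
# joint distribution of the sum of two independent dice
joint = [(a + b, Fraction(1, 36)) for a in die for b in die]

assert variance(joint) == 2 * variance(single)  # Var(X1+X2) = Var(X1)+Var(X2)
```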
Moment generating function
The moment generating function of a r.v. X is
φ(t) = E[e^(tX)] = ∑_i e^(t x_i) p(x_i) (X discrete), or ∫_{−∞}^{∞} e^(tx) f(x) dx (X continuous)
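Since φ′(0) = E[X], a numerical sketch (with a hypothetical three-point distribution) can check the definition:

```python
from math import exp

xs = [0, 1, 2]           # hypothetical discrete distribution
ps = [0.2, 0.5, 0.3]

def phi(t):
    # phi(t) = E[e^(tX)] = sum of e^(t x_i) p(x_i)
    return sum(exp(t * x) * p for x, p in zip(xs, ps))

# phi'(0) = E[X], checked with a central difference
h = 1e-6
mean = sum(x * p for x, p in zip(xs, ps))    # E[X] = 1.1
assert abs((phi(h) - phi(-h)) / (2 * h) - mean) < 1e-6
```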
[Figure: a random process maps each sample point to a sample function, or realization (a deterministic function of time t); at each fixed time it is a random variable taking a real value.]
Random Process - Basic Concepts
Random processes, like any other type of signal, can be classified in a number of different ways:
- Continuous or discrete
- Analog or digital
- Deterministic or non-deterministic
- Stationary or non-stationary
- Ergodic or non-ergodic
Cross-correlation and Auto-correlation
Correlation determines the degree of similarity between two signals.
If the signals are identical, then the correlation coefficient (ρ) is 1;
if they are totally different, then ρ = 0;
and if they are identical except that the phase is shifted by exactly 180 degrees (i.e. mirrored), then ρ = −1.
When two independent signals are compared, the procedure is known as cross-correlation; when the same signal is compared to phase-shifted copies of itself, the procedure is known as autocorrelation.
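These cases can be illustrated numerically; the sketch below (a hypothetical sampled sine wave and its mirrored copy) computes the normalized correlation coefficient:

```python
from math import pi, sin, sqrt

N = 1000
x = [sin(2 * pi * k / N) for k in range(N)]
y = [-v for v in x]      # the same signal shifted by 180 degrees (mirrored)

def rho(a, b):
    # normalized correlation coefficient of two zero-mean signals
    return sum(u * v for u, v in zip(a, b)) / sqrt(
        sum(u * u for u in a) * sum(v * v for v in b))

assert abs(rho(x, x) - 1.0) < 1e-12   # identical signals: rho = 1
assert abs(rho(x, y) + 1.0) < 1e-12   # mirrored signals: rho = -1
```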
Autocorrelation
A process is first-order stationary if
p_x(x1, t1) = p_x(x1, t1 + Δ) for all t1, Δ ∈ ℝ
If x(t) is a first-order stationary process, then
E{x(t)} = x̄ = constant
and the autocorrelation depends only on the lag τ:
E{x(t1) x(t1 + τ)} = Rxx(τ)
For an ergodic process, ensemble averages equal time averages:
E{x(t)} = ∫_{−∞}^{∞} x p(x) dx = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) dt
Rxx(τ) = E{x(t) x(t + τ)} = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) x(t + τ) dt
The spectral density (auto-spectral density, power spectral density, spectrum) describes the average frequency content of a random process x(t), as a function of frequency n.
Spectral density:
S_x(n) = lim_{T→∞} (2/T) |X_T(n)|²
where X_T(n) is the Fourier transform of the process x(t) taken over the time interval −T/2 < t < +T/2.
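The idea can be illustrated with a discrete-time sketch: for a sampled sinusoid (hypothetical parameters), the squared magnitude of the Fourier transform peaks at the sinusoid's frequency. A naive DFT is used here for clarity, not efficiency:

```python
import cmath
from math import pi, sin

N, f0 = 256, 10          # N samples of a sinusoid at frequency bin f0
x = [sin(2 * pi * f0 * k / N) for k in range(N)]

def dft(signal, n):
    # X(n) = sum over k of x[k] e^(-i 2 pi n k / N)
    L = len(signal)
    return sum(xk * cmath.exp(-2j * pi * n * k / L)
               for k, xk in enumerate(signal))

# the power |X(n)|^2 peaks at the sinusoid's frequency bin
power = [abs(dft(x, n))**2 for n in range(N // 2)]
assert max(range(N // 2), key=lambda n: power[n]) == f0
```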
The cross-correlation function describes the general dependency of x(t) on another random process y(t + τ), delayed by a time delay τ:
c_xy(τ) = lim_{T→∞} (1/T) ∫_0^T [x(t) − x̄][y(t + τ) − ȳ] dt
Covariance:
The covariance is the cross-correlation function with the time delay τ set to zero:
c_xy(0) = mean{x′(t) y′(t)} = lim_{T→∞} (1/T) ∫_0^T [x(t) − x̄][y(t) − ȳ] dt
where x′ = x − x̄ and y′ = y − ȳ denote the fluctuations about the means.
For heights z1 and z2, the correlation coefficient of the fluctuations u′ is:
ρ(z1, z2) = mean{u′(z1) u′(z2)} / (σ_u(z1) σ_u(z2))
where mean{·} denotes the time average of the fluctuation product.
Cross spectral density:
By analogy with the spectral density:
S_xy(n) = 2 ∫_{−∞}^{∞} c_xy(τ) e^(−i 2π n τ) dτ
The cross spectral density is twice the Fourier transform of the cross-correlation function for the processes x(t) and y(t).