A Graduate Course in Probability
Howard G. Tucker
Ebook, 386 pages

About this ebook

Suitable for a graduate course in analytic probability theory, this text requires no previous knowledge of probability and only a limited background in real analysis. In addition to providing instruction for graduate students in mathematics and mathematical statistics, the book features detailed proofs that offer direct access to the basic theorems of probability theory for mathematicians of all interests.
The treatment strikes a balance between measure-theoretic aspects of probability and distribution aspects, presenting some of the basic theorems of analytic probability theory in a cohesive manner. Statements are rendered as simply as possible in order to make them easy to remember and to demonstrate the essential idea behind each proof. Topics include probability spaces and distributions, stochastic independence, basic limiting operations, strong limit theorems for independent random variables, the central limit theorem, conditional expectation and Martingale theory, and an introduction to stochastic processes, particularly Brownian motion. Each section concludes with problems that reinforce the preceding material.
Language: English
Release date: December 10, 2013
ISBN: 9780486782119

    CHAPTER 1

    Probability Spaces

    1.1 Sigma Fields

    The very beginning of our considerations deals with a space or set Ω. The set Ω consists of elements or points, each of which is an individual outcome of a game or experiment or other random phenomenon under consideration. If the game is a toss of a die, then Ω consists of six points. Sometimes Ω is the set of real numbers, sometimes it is a set of functions, and sometimes it is the set of all points in Euclidean n-dimensional space E(n). Each point or element ω in Ω will be referred to as an elementary event, and Ω will be called the fundamental probability set, or the sure event.

    Subsets of Ω will be denoted by capital Latin letters A, B, C, with or without subscripts. By ω ∈ A we mean that ω is an elementary event in A. As usual, A ∪ B denotes the union of A and B, and A ∩ B or AB denotes the intersection of A and B. The symbol ϕ denotes the empty set. The complement of A in Ω is denoted by Ac, and A\B denotes the set of points in A which are not in B, that is, A\B = ABc. If {Bn} is a collection of subsets, then ∪Bn denotes their union and ∩Bn denotes their intersection.

    Associated with a sure event Ω is a particular collection ℱ of subsets of Ω, called a sigma field (or sigma algebra) of subsets of Ω.

    Definition: A collection ℱ of subsets of Ω is called a sigma field if

    (a) if A ∈ ℱ, then also Ac ∈ ℱ,

    (b) if A1, A2, . . . all belong to ℱ, then ∪An ∈ ℱ, and

    (c) ϕ ∈ ℱ.
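    As a concrete finite illustration (a sketch, not part of the text), the short Python snippet below checks the three requirements for the collection {ϕ, A, Ac, Ω} on a six-point Ω. The helper name is_sigma_field is purely illustrative; for a finite collection, closure under pairwise unions is equivalent to requirement (b).

```python
# A minimal sketch (not from the text): for a finite collection of subsets,
# closure under complements and pairwise unions, together with containing
# the empty set, is equivalent to requirements (a)-(c), since a denumerable
# union of finitely many distinct sets reduces to a finite union.
from itertools import combinations

def is_sigma_field(omega, collection):
    """Check (a)-(c) for a finite collection of subsets of a finite omega."""
    sets = {frozenset(s) for s in collection}
    if frozenset() not in sets:                      # (c): the empty set belongs
        return False
    if any(omega - s not in sets for s in sets):     # (a): closed under complements
        return False
    for s, t in combinations(sets, 2):               # (b): closed under unions
        if s | t not in sets:
            return False
    return True

omega = frozenset(range(1, 7))        # the six faces of a die
A = frozenset({2, 4, 6})              # the event "an even face turns up"
print(is_sigma_field(omega, [frozenset(), A, omega - A, omega]))  # True
print(is_sigma_field(omega, [frozenset(), A, omega]))             # False: Ac is missing
```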

    The elements of ℱ, which are subsets of Ω, are called events, and much of the set-theoretic terminology is translated into the terminology of events. If the elementary event ω ∈ A, then we say that the event A occurs. If A ∈ ℱ and B ∈ ℱ, then AB or A ∩ B means the event that both the events A and B occur, and A ∪ B means the event that at least one of these two events occurs. The complement Ac of A means the event that A does not occur, and A\B means that A occurs and B does not occur. Since we call Ω the sure event, we shall refer to ϕ as the impossible event. If A and B are disjoint events, that is, if they have no elementary events in common or AB = ϕ, then we say that A and B are incompatible; that is, they cannot both occur. If A ⊂ B, that is, if every elementary event in A is also in B, we say that the occurrence of the event A implies the occurrence of the event B, or A implies B.

    Theorem 1. Ω ∈ ℱ.

    Proof: By (c) of the definition, ϕ ∈ ℱ, and hence by (a), ϕc = Ω ∈ ℱ.

    Theorem 2. If {An} is a denumerable sequence of events in ℱ, then ∩An ∈ ℱ.

    Proof: For every n, Anc ∈ ℱ, and thus ∪Anc ∈ ℱ and (∪Anc)c ∈ ℱ; by the DeMorgan formula, (∪Anc)c = ∩An, which completes the proof.

    Definition: If {An} is a (denumerable) sequence of sets, then we define lim sup An and lim inf An by

        lim sup An = ∩n≥1 (∪k≥n Ak)

    and

        lim inf An = ∪n≥1 (∩k≥n Ak).

    If lim sup An = lim inf An, then we refer to this set by lim An, and if we denote lim An by A, then we write An → A.

    Theorem 3. If {An} is a sequence of events, then lim sup An ∈ ℱ and lim inf An ∈ ℱ.

    Proof: This is an immediate consequence of the definition and Theorems 1 and 2.

    The event lim sup An means the event that infinitely many of the events An occur, or An occurs infinitely often. This is because a point (or elementary event) is in lim sup An if and only if it is in infinitely many of the An. The event lim inf An means the event that all but a finite number of the events in {An} occur, or An occurs almost always. This is because a point is in lim inf An if and only if it is in all but a finite number of the An.
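    The following short Python sketch (an illustration, not from the text) makes "occurs infinitely often" and "occurs almost always" concrete for an eventually periodic sequence of events on a finite Ω; the particular sequence A(n) is invented for the example, and periodicity lets membership in every tail be decided from one full period.

```python
# Illustrative sketch: after the first term, An alternates between two events,
# so a point lies in lim sup An iff it appears somewhere in the periodic part
# (it then occurs infinitely often), and in lim inf An iff it appears in both
# events of the periodic part (it then occurs almost always).
omega = set(range(1, 7))

def A(n):
    if n == 1:
        return {1, 2, 3}                      # a transient first event
    return {2, 4, 6} if n % 2 == 0 else {2, 3}

period = [A(2), A(3)]                         # the eventually periodic tail

lim_sup = {w for w in omega if any(w in E for E in period)}   # occurs infinitely often
lim_inf = {w for w in omega if all(w in E for E in period)}   # occurs almost always

print(sorted(lim_sup))   # [2, 3, 4, 6]
print(sorted(lim_inf))   # [2]
```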

    The following redundant definition is stated in order to avoid possible confusion.

    Definition: If {ℱλ, λ ∈ Λ} is any nonempty collection of sigma fields of subsets of Ω, then ∩λ∈Λ ℱλ denotes the collection of all subsets of Ω which belong to every ℱλ.

    Theorem 4. If {ℱλ, λ ∈ Λ} is any nonempty collection of sigma fields of subsets of Ω, then ∩λ∈Λ ℱλ is a sigma field.

    Proof : The proof is immediate upon verifying the three requirements of a sigma field.

    Definition: Let 𝒜 be a collection of subsets of Ω. By the smallest sigma field containing 𝒜, or the sigma field generated by 𝒜, denoted by σ{𝒜}, we mean a sigma field of subsets of Ω which contains 𝒜 and which is contained in every sigma field that contains 𝒜.

    We have just defined an object. We must now prove that it exists and is unique.

    Theorem 5. For every collection 𝒜 of subsets of Ω, σ{𝒜} exists and is unique.

    Proof: The sigma field of all subsets of Ω contains 𝒜, so there is at least one sigma field containing 𝒜. By Theorem 4, the intersection of all sigma fields which contain 𝒜 is a sigma field; it contains 𝒜, and it is contained in every sigma field which contains 𝒜, so it satisfies the definition of σ{𝒜}. If two sigma fields both satisfy the definition, then each is contained in the other, and hence they are equal, thus proving the theorem.
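    When Ω is finite, σ{𝒜} can be produced explicitly by closing 𝒜 under complements and pairwise unions until no new sets appear. The sketch below is only an illustration of this idea, valid for finite Ω; the helper name generate_sigma_field is an assumption of the example.

```python
# Illustrative sketch: close a collection of subsets of a finite Omega under
# complements and pairwise unions; for finite Omega the result is sigma{A}.
from itertools import combinations

def generate_sigma_field(omega, generators):
    omega = frozenset(omega)
    current = {frozenset(), omega} | {frozenset(g) for g in generators}
    while True:
        new = {omega - s for s in current}
        new |= {s | t for s, t in combinations(current, 2)}
        if new <= current:           # nothing new: the collection is closed
            return current
        current |= new

omega = range(1, 7)
# the sigma field generated by the single event {1, 2}
print(sorted(map(sorted, generate_sigma_field(omega, [{1, 2}]))))
# [[], [1, 2], [1, 2, 3, 4, 5, 6], [3, 4, 5, 6]]
```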

    One should not conclude that if ℱn is a sigma field for every n and ℱn ⊂ ℱn+1 for all n, then ∪n ℱn is a sigma field. In this case, requirement (b) of the definition cannot be verified.

    EXERCISES

    1. Let Ω be the real line, and for every positive integer n let ℱn be the sigma field generated by the subsets [0, 1), [1, 2), ..., [n − 1, n). Prove that ℱn ⊂ ℱn+1 for all n and that ∪n ℱn is not a sigma field.

    2. In 2 ?

    3. Let {An} be a sequence of events. Define Bm to be the event that the first among the events A1, A2, . . . that occurs is Am. (a) Express Bm in terms of A1, A2, ..., Am. (b) Prove that the events {Bm} are disjoint.

    4. Prove that lim inf An ⊂ lim sup An.

    5. Write in terms of set-theoretic operations: exactly two of the events A1, A2, A3, A4 occur.

    6. Prove: if An ⊂ An+1 for all n, then lim sup An = lim inf An = lim An.

    7. Prove in two ways that if An ⊃ An+1 for all n, then lim sup An = lim inf An = lim An.

    8. Prove Theorem 3.

    9. Prove Theorem 4.

    10. Let {An} be a sequence of events. Define B1 = A1 and Bn = An ∩ A1c ∩ ... ∩ An−1c for n ≥ 2, and prove that the events {Bn} are disjoint and that ∪n Bn = ∪n An.

    11. Let {an} be a sequence of real numbers, and let An = (−∞, an]. Prove that

    and

    12. }.

    1.2 Probability Measures

    Thus far we have dealt with a sure event Ω and a sigma field ℱ of events in Ω. In this section the notions of probability and conditional probability are introduced.

    Definition: A probability P is a function defined over (Ω, ℱ); that is, P is a real-valued function which assigns to every A ∈ ℱ a number P(A) such that

    (a) P(A) ≥ 0 for every A ∈ ℱ,

    (b) P(Ω) = 1, and

    (c) if {An} is any denumerable sequence of disjoint events, then

        P(∪n An) = Σn P(An).

    One refers to P(A) as "the probability of (the event) A." From here on, whenever we speak of events and their probabilities it should be understood that a silent reference is made to some fixed fundamental probability space, a sigma field of events, and a probability measure. There are a number of immediate consequences of the definition of a probability.
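    As an illustration (not part of the text), the definition can be realized on the fair-die space: Ω is the set of six faces, ℱ is the collection of all subsets, and P(A) = |A|/6. The Python sketch below spot-checks (a), (b), and (c); on a finite Ω, requirement (c) amounts to finite additivity on disjoint events.

```python
# Illustrative sketch: the fair-die probability space.  Omega has six points,
# F is the collection of all subsets, and P(A) = |A| / 6.
from fractions import Fraction
from itertools import chain, combinations

omega = frozenset(range(1, 7))

def P(A):
    return Fraction(len(A), len(omega))

# F = all subsets of omega (the power set)
F = [frozenset(c) for c in chain.from_iterable(
        combinations(sorted(omega), r) for r in range(len(omega) + 1))]

assert all(P(A) >= 0 for A in F)                 # (a) nonnegativity
assert P(omega) == 1                             # (b) normalization
A, B = frozenset({1, 2}), frozenset({5, 6})      # two disjoint events
assert P(A | B) == P(A) + P(B)                   # (c) additivity on disjoint events
print(P(frozenset({2, 4, 6})))                   # 1/2
```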

    Theorem 1. P(ϕ) = 0.

    Proof: Denote An = ϕ for n = 1, 2, . . . . These events are disjoint and their union is ϕ, so from (c) in the definition P(ϕ) = Σn P(ϕ); since P(ϕ) is a finite nonnegative number by (a), this forces P(ϕ) = 0.

    Theorem 2. If A1, ..., An are any n disjoint events, then

        P(A1 ∪ ... ∪ An) = P(A1) + ... + P(An).

    Proof: Let ϕ = An+1 = An+2 = ... . By (c) in the above definition and by Theorem 1,

        P(A1 ∪ ... ∪ An) = P(∪k≥1 Ak) = Σk≥1 P(Ak) = P(A1) + ... + P(An),

    which proves the assertion.

    Theorem 3. If A and B are events, and if A ⊂ B, then P(A) ≤ P(B).

    Proof: Since B = A ∪ AcB, and since A and AcB are disjoint, then by Theorem 2 and by (a) in the above definition we have

        P(B) = P(A) + P(AcB) ≥ P(A),

    which yields the desired inequality.

    Corollary to Theorem 3. For every A ∈ ℱ, P(A) ≤ 1.

    Proof: Since A ∈ ℱ implies that A ⊂ Ω and since P(Ω) = 1, then by Theorem 3, P(A) ≤ P(Ω) = 1.

    Theorem 4 (Boole’s Inequality). If {An} is a countable sequence of events, then

        P(∪n An) ≤ Σn P(An).

    Proof: By Exercise 10 of Section 1.1, ∪n An = ∪n Bn, where B1 = A1 and Bn = An ∩ A1c ∩ ... ∩ An−1c for n ≥ 2. Since the Bn are disjoint, then

        P(∪n An) = P(∪n Bn) = Σn P(Bn).

    However, Bn ⊂ An for every n, and so by Theorem 3, P(Bn) ≤ P(An), which yields the conclusion of the theorem.
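    A quick numerical illustration of Theorem 4 on the fair-die space (a sketch, not the author's): for overlapping events the inequality is strict.

```python
# Illustrative check of Boole's inequality on the fair-die space:
# P(union of An) <= sum of P(An), with equality only for disjoint An.
from fractions import Fraction
from functools import reduce

omega = frozenset(range(1, 7))
P = lambda A: Fraction(len(A), len(omega))

events = [frozenset({1, 2, 3}), frozenset({2, 4}), frozenset({4, 6})]
union = reduce(frozenset.union, events)

print(P(union))                       # 5/6
print(sum(P(A) for A in events))      # 7/6, so the inequality is strict here
assert P(union) <= sum(P(A) for A in events)
```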

    Theorem 5. For every event A, P(Ac) = 1 − P(A).

    Proof: Since Ω = A ∪ Ac, we obtain from Theorem 2 that 1 = P(Ω) = P(A) + P(Ac), which is equivalent to the conclusion.

    The triple (Ω, ℱ, P) will be referred to as a probability space. If Ω is a finite or denumerable set, then ℱ is usually the set of all subsets of Ω, and there is no difficulty in defining a probability measure P over such an ℱ. If Ω is nondenumerable, however, ℱ cannot in general be the set of all subsets.

    If one is given a probability space (Ω, ℱ, P), then one can define other probabilities that are called conditional probabilities.

    Definition: If A ∈ ℱ and B ∈ ℱ and if P(B) > 0, then the conditional probability of A given B, P(A | B), is defined by P(A | B) = P(AB)/P(B).

    An interpretation of P(A | B) is that it is the probability of the event A occurring if one knows that B occurs.
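    On the fair-die space, for example, if B is the event that an even face turns up and A is the event that the face exceeds 3, then P(A | B) = P(AB)/P(B) = (2/6)/(3/6) = 2/3. The sketch below (illustrative only; the helper name cond is an assumption of the example) computes this directly.

```python
# Illustrative sketch: conditional probability on the fair-die space.
from fractions import Fraction

omega = frozenset(range(1, 7))
P = lambda A: Fraction(len(A), len(omega))

def cond(A, B):
    """P(A | B) = P(AB) / P(B), defined only when P(B) > 0."""
    assert P(B) > 0
    return P(A & B) / P(B)

B = frozenset({2, 4, 6})      # an even face turns up
A = frozenset({4, 5, 6})      # the face exceeds 3
print(cond(A, B))             # 2/3
```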

    Theorem 6. If B ∈ ℱ and P(B) > 0, then P(· | B), considered as a function over ℱ, is a probability; that is,

    (a) P(A | B) ≥ 0 for every A ∈ ℱ,

    (b) P(Ω | B) = 1, and

    (c) P(∪n An | B) = Σn P(An | B)

    for every denumerable sequence of disjoint events {An} in ℱ.

    Proof: One can easily verify (a), (b), and (c) by direct application of the definitions of probability and conditional probability.

    Two very important and useful properties of conditional probabilities are the following two theorems.

    Theorem 7 (Multiplication Rule). For every n + 1 events A0, A1, . . ., An for which P(A0A1 ··· An−1) > 0, we have

        P(A0A1 ··· An) = P(A0)P(A1 | A0)P(A2 | A0A1) ··· P(An | A0A1 ··· An−1).

    Proof: Since

        A0 ⊃ A0A1 ⊃ A0A1A2 ⊃ ··· ⊃ A0A1 ··· An−1,

    then

        P(A0) ≥ P(A0A1) ≥ ··· ≥ P(A0A1 ··· An−1) > 0,

    and consequently all the conditional probabilities involved in the statement of the theorem are well defined. The conclusion is clearly true for n = 1 by direct application of the definition of conditional probability. The rest of the proof is an easy application of mathematical induction.
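    The multiplication rule is easy to check numerically; the sketch below (an illustration, not from the text) verifies it on the fair-die space for three nested events, so that every intersection used for conditioning has positive probability.

```python
# Illustrative check of the multiplication rule
#   P(A0 A1 A2) = P(A0) P(A1 | A0) P(A2 | A0 A1)
# on the fair-die space, with nested events A2 in A1 in A0.
from fractions import Fraction

omega = frozenset(range(1, 7))
P = lambda A: Fraction(len(A), len(omega))
cond = lambda A, B: P(A & B) / P(B)

A0, A1, A2 = frozenset(range(1, 6)), frozenset(range(1, 5)), frozenset({1, 2})

lhs = P(A0 & A1 & A2)
rhs = P(A0) * cond(A1, A0) * cond(A2, A0 & A1)
print(lhs, rhs)        # 1/3 1/3
assert lhs == rhs
```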

    Theorem 8 (Theorem of Total Probabilities). If Ω = ∪n Bn, where {Bn} is a finite or denumerable sequence of disjoint events, if P(Bn) > 0 for every n, and if A ∈ ℱ, then

        P(A) = Σn P(Bn)P(A | Bn).

    Proof: Since A = ∪n ABn, and since the events {ABn} are disjoint, then

        P(A) = Σn P(ABn) = Σn P(Bn)P(A | Bn),

    which concludes the proof.
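    The following sketch (illustrative only) checks Theorem 8 on the fair-die space, partitioning Ω into the even and the odd faces and recovering P(A) for A the event that the face exceeds 3.

```python
# Illustrative check of the theorem of total probabilities on the fair-die space:
#   P(A) = sum over n of P(Bn) P(A | Bn)  for a partition {Bn} of Omega.
from fractions import Fraction

omega = frozenset(range(1, 7))
P = lambda A: Fraction(len(A), len(omega))
cond = lambda A, B: P(A & B) / P(B)

B = [frozenset({2, 4, 6}), frozenset({1, 3, 5})]   # a partition of omega
A = frozenset({4, 5, 6})                           # the face exceeds 3

total = sum(P(Bn) * cond(A, Bn) for Bn in B)
print(total, P(A))     # 1/2 1/2
assert total == P(A)
```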

    A commonly used probability space for the construction of examples and counterexamples in probability theory is the unit-interval probability space, in which Ω = [0, 1], ℱ is the sigma field of all Lebesgue-measurable subsets of Ω, and P is the ordinary Lebesgue measure defined over [0, 1]. The experiment or game which gives rise to such a probability space is that of selecting a real number at random between 0 and 1.
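    Selecting a point of [0, 1] at random can be simulated directly: drawing many uniform points and counting how often they land in an interval approximates its Lebesgue measure. The sketch below is a Monte Carlo illustration only, not part of the text.

```python
# Illustrative sketch: estimate P(A) for A = [0.25, 0.75] on the unit-interval
# space by repeated random selection; Lebesgue measure gives P(A) = 0.5 exactly.
import random

random.seed(0)
trials = 100_000
hits = sum(1 for _ in range(trials) if 0.25 <= random.random() <= 0.75)
print(hits / trials)        # approximately 0.5
```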

    EXERCISES

    1. Prove that P(A ∪ B) = P(A) + P(B) − P(AB).

    2. Prove that if Ω = ∪n Bn, where {Bn} is a finite or a denumerable sequence of disjoint events such that P(Bn) > 0 for all n, and if A ∈ ℱ and P(A) > 0, then for every k

        P(Bk | A) = P(Bk)P(A | Bk)/Σn P(Bn)P(A | Bn).

    3. Prove that if An → A, then P(An) → P(A).

    4. For A, B ∈ ℱ, define ρ(A, B) = P(ABc) + P(AcB). Prove that (ℱ, ρ) is a pseudometric space.

    5. Complete the proofs of Theorems 6 and 7.

    6. Let ℱ be the set of all subsets of (−∞, +∞). For every A ∈ ℱ, define P(A) = . . . {n | n is a positive integer and n ∈ A}. Is P a probability?

    1.3 Random Variables

    In the first two sections we developed the concept of a probability space. This mathematical model completely describes the outcome of an experiment or game. However, in order to answer questions concerning the experiment one would have to be able to observe an ω selected at random according to the probability measure P. Usually all that is needed is that a function of ω be observed. Such a function is sometimes called a random variable. This section is an introduction to the concept of random variable.

    Definition: A random variable X is a real-valued function defined over Ω which is ℱ-measurable, that is, for every real number x, {ω ∈ Ω | X(ω) ≤ x} ∈ ℱ.

    The expression in the curly brackets above denotes the set of elementary events ω in Ω for which X(ω) ≤ x. We shorten this notation to [X ≤ x]; that is, we denote

        [X ≤ x] = {ω ∈ Ω | X(ω) ≤ x}.

    Similarly we denote

        [X < x] = {ω ∈ Ω | X(ω) < x} and [X = x] = {ω ∈ Ω | X(ω) = x},

    and, in general, for any Borel set B we denote

        [X ∈ B] = {ω ∈ Ω | X(ω) ∈ B}.
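    On a finite Ω with ℱ the collection of all subsets, every real-valued function is ℱ-measurable, and the events [X ≤ x] can be listed outright. The sketch below (illustrative; the helper name event_le is an assumption of the example) does this for the random variable "square of the face" on the die space.

```python
# Illustrative sketch: a random variable on the fair-die space and the
# events [X <= x] = {w in Omega : X(w) <= x}.
omega = frozenset(range(1, 7))

def X(w):
    return w * w          # the square of the face that turns up

def event_le(x):
    """The event [X <= x] as a subset of omega."""
    return frozenset(w for w in omega if X(w) <= x)

print(sorted(event_le(10)))    # [1, 2, 3]  since 1, 4, 9 <= 10 < 16
print(sorted(event_le(0.5)))   # []         the impossible event
```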

    Definition: If f is a function with domain Ω and with range in a space Ω′, and if D ⊂ Ω′, then we define

        f−1(D) = {ω ∈ Ω | f(ω) ∈ D}.

    If 𝒟 is a collection of subsets {D} of Ω′, then we define

        f−1(𝒟) = {f−1(D) | D ∈ 𝒟}.

    Proposition 1. If f is a function with domain Ω and with range in a space Ω′, and if 𝒟 is a collection of subsets of Ω′, then

        σ{f−1(𝒟)} = f−1(σ{𝒟}).

    (Here σ{𝒟} denotes the sigma field of subsets of Ω′ generated by 𝒟, and σ{f−1(𝒟)} denotes the sigma field of subsets of Ω generated by f−1(𝒟).)

    Proof: We repeatedly use the fact that for any collection of subsets {Dλ} of Ω′,

        f−1(∪λ Dλ) = ∪λ f−1(Dλ), f−1(∩λ Dλ) = ∩λ f−1(Dλ), and f−1(Dλc) = (f−1(Dλ))c.

    Let 𝒞 = {C ⊂ Ω′ | f−1(C) ∈ σ{f−1(𝒟)}}. By the above remark, 𝒞 is a sigma field of subsets of Ω′; it contains 𝒟, and hence σ{𝒟} ⊂ 𝒞, which means that f−1(σ{𝒟}) ⊂ σ{f−1(𝒟)}. Again by the above remark, f−1(σ{𝒟}) is a sigma field, and in fact it is the sigma field generated by f−1(𝒟), since it contains f−1(𝒟) and is contained in σ{f−1(𝒟)}. Hence f−1(σ{𝒟}) = σ{f−1(𝒟)}, which is the conclusion.
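    Proposition 1 can also be verified mechanically on a small finite example by generating both sigma fields and comparing them. The sketch below is only an illustration; it repeats the generate_sigma_field helper from the earlier Section 1.1 sketch so that it runs on its own, and the map f and the collection 𝒟 are invented for the example.

```python
# Illustrative check of sigma{ f^{-1}(D) } = f^{-1}( sigma{D} ) on a finite example.
from itertools import combinations

def generate_sigma_field(omega, generators):
    omega = frozenset(omega)
    current = {frozenset(), omega} | {frozenset(g) for g in generators}
    while True:
        new = {omega - s for s in current}
        new |= {s | t for s, t in combinations(current, 2)}
        if new <= current:
            return current
        current |= new

def preimage(f, D, domain):
    """f^{-1}(D) = {w in domain : f(w) in D}."""
    return frozenset(w for w in domain if f(w) in D)

omega = frozenset(range(1, 7))            # the die space
omega_prime = frozenset({"low", "high"})  # the range space
f = lambda w: "low" if w <= 3 else "high"
D = [{"low"}]                             # a collection of subsets of omega_prime

lhs = generate_sigma_field(omega, [preimage(f, d, omega) for d in D])
rhs = {preimage(f, C, omega) for C in generate_sigma_field(omega_prime, D)}
print(lhs == rhs)     # True
```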

    Definition: An n-dimensional random variable, or n-dimensional random vector, or vector random variable, X = (X1, . . ., Xn), is a function whose domain is Ω, whose range is in Euclidean n-space E(n), and such that for every Borel-measurable subset B ⊂ E(n),

        [X ∈ B] = {ω ∈ Ω | X(ω) ∈ B} ∈ ℱ.
