
PART 1

HISTORY OF PROBABILITY

Probability has a dual aspect: on the one hand the probability or likelihood of hypotheses given
the evidence for them, and on the other hand the behavior of stochastic processes such as the
throwing of dice or coins. The study of the former is historically older in, for example, the law of
evidence, while the mathematical treatment of dice began with the work of Pascal and Fermat in
the 1650s.

18th century

Jacob Bernoulli's Ars Conjectandi (posthumous, 1713) and Abraham de Moivre's The Doctrine of
Chances (1718) put probability on a sound mathematical footing, showing how to calculate a wide
range of complex probabilities. Bernoulli proved a version of the fundamental law of large
numbers, which states that in a large number of trials, the average of the outcomes is likely to be
very close to the expected value; for example, in 1000 throws of a fair coin, it is likely that there
are close to 500 heads (and the larger the number of throws, the closer to half-and-half the
proportion is likely to be).
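Bernoulli's theorem can be illustrated with a short simulation (a sketch in Python; the seed and the numbers of throws are arbitrary choices, not part of Bernoulli's argument):

```python
import random

def heads_proportion(n_throws, seed=0):
    """Toss a fair coin n_throws times and return the proportion of heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_throws))
    return heads / n_throws

# The proportion of heads drifts toward one half as the number of
# throws grows, just as Bernoulli's theorem predicts.
for n in (10, 1000, 100_000):
    print(n, heads_proportion(n))
```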

19th century

The power of probabilistic methods in dealing with uncertainty was shown by Gauss's
determination of the orbit of Ceres from a few observations. The theory of errors used the method
of least squares to correct error-prone observations, especially in astronomy, based on the
assumption of a normal distribution of errors to determine the most likely true value.

Towards the end of the nineteenth century, a major success of explanation in terms of
probabilities was the statistical mechanics of Ludwig Boltzmann and J. Willard Gibbs, which
explained properties of gases such as temperature in terms of the random motions of large
numbers of particles.

The field of the history of probability itself was established by Isaac Todhunter's monumental
History of the Mathematical Theory of Probability from the Time of Pascal to that of Lagrange
(1865).

20th century

Probability and statistics became closely connected through the work on hypothesis testing of R.
A. Fisher and Jerzy Neyman, which is now widely applied in biological and psychological
experiments and in clinical trials of drugs. A hypothesis, for example that a drug is usually
effective, gives rise to a probability distribution that would be observed if the hypothesis is true. If
observations approximately agree with the hypothesis, it is confirmed; if not, the hypothesis is
rejected.[5]
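As an illustration of the idea (a sketch, not Fisher's or Neyman's actual machinery), an exact binomial tail probability measures how surprising an observed success count would be if the hypothesis were true; all numbers here are invented:

```python
from math import comb

def binomial_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of at least k successes."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# If the drug really helped only half of patients, 16 or more successes
# out of 20 would occur with probability below 1%, so the observation
# would lead to rejecting that hypothesis at conventional levels.
p_value = binomial_tail(20, 16)
print(round(p_value, 4))  # 0.0059
```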

The theory of stochastic processes broadened into such areas as Markov processes and
Brownian motion, the random movement of tiny particles suspended in a fluid. That provided a
model for the study of random fluctuations in stock markets, leading to the use of sophisticated
probability models in mathematical finance, including such successes as the widely used Black-
Scholes formula for the valuation of options.[6]
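The role of Brownian motion in finance can be sketched by simulating the lognormal price model that underlies the Black-Scholes formula (a minimal illustration; the parameters and seed are invented):

```python
import math
import random

def gbm_path(s0, mu, sigma, n_steps, dt, seed=0):
    """Simulate one price path of geometric Brownian motion,
    the lognormal model underlying the Black-Scholes formula."""
    rng = random.Random(seed)
    prices = [s0]
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)  # one standard-normal Brownian increment
        step = math.exp((mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z)
        prices.append(prices[-1] * step)
    return prices

# Invented parameters: one year of daily steps, 5% drift, 20% volatility.
path = gbm_path(100.0, mu=0.05, sigma=0.2, n_steps=252, dt=1 / 252)
print(len(path), min(path) > 0)  # prices stay strictly positive
```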

The twentieth century also saw long-running disputes on the interpretations of probability. In the
mid-century frequentism was dominant, holding that probability means long-run relative frequency
in a large number of trials. At the end of the century there was some revival of the Bayesian view,
according to which the fundamental notion of probability is how well a proposition is supported by
the evidence for it.
The mathematical treatment of probabilities, especially when there are infinitely many possible
outcomes, was facilitated by Kolmogorov's axioms (1933).
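On a finite sample space the axioms amount to simple checks, as a small sketch shows (the function name is our own):

```python
def satisfies_kolmogorov_axioms(probs, tol=1e-9):
    """Check the axioms on a finite sample space: every probability is
    non-negative and the probabilities of the outcomes sum to one
    (additivity over disjoint events then follows by summation)."""
    values = list(probs.values())
    return all(p >= 0 for p in values) and abs(sum(values) - 1.0) < tol

fair_die = {face: 1 / 6 for face in range(1, 7)}
print(satisfies_kolmogorov_axioms(fair_die))   # True
broken = {"rain": 0.7, "no rain": 0.6}         # sums to 1.3: not a measure
print(satisfies_kolmogorov_axioms(broken))     # False
```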

INTRODUCTION

An experiment is a situation involving chance that leads to results called outcomes; an outcome is
the result of a single trial of an experiment; an event is one or more outcomes of an experiment;
and probability is the measure of how likely an event is.

Probability in daily life

1) Business
Probability is used throughout business to evaluate financial and decision-making risks. Every
decision made by management carries some chance of failure, so probability analysis is
conducted both formally ("math") and informally (i.e. "I hope"). Math is the preferred method but
requires some advanced training, such as college courses. For everyone else, there's "I hope I
guess right".

Probability is "a number expressing the likelihood of occurrence of a specific event" (Shao, 1994,
p. 217). For use in inferential statistics, this probability must be statistically independent (Peebles,
2003).

The central limit theorem is relevant to probability analysis, and it is especially relevant to the use
of probability in business. The central limit theorem holds that the totals (and therefore the
means) of random samples will be normally distributed no matter what the distribution in the
population is like, provided only that the samples are large enough. In most instances where
inferential statistics are applied in hypothesis testing, population distributions are unknown.
Problems in hypothesis testing related to the central limit theorem most often occur when the
sample data apply to perceptions and subjective evaluations by individuals, as opposed to
objective data (Peebles, 2003).
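The central limit theorem is easy to see in a simulation: means of samples drawn from a strongly skewed (exponential) population still cluster around the population mean with the spread the theorem predicts. A sketch, with arbitrary sample sizes and seed:

```python
import random
import statistics

def sample_means(n_samples, sample_size, seed=1):
    """Means of repeated samples drawn from an exponential population
    (mean 1, strongly skewed, very far from normal)."""
    rng = random.Random(seed)
    return [statistics.fmean(rng.expovariate(1.0) for _ in range(sample_size))
            for _ in range(n_samples)]

means = sample_means(2000, 50)
# Despite the skewed population, the 2000 sample means cluster around
# the population mean 1 with spread close to the theoretical
# sigma / sqrt(n) = 1 / sqrt(50), roughly 0.14.
print(round(statistics.fmean(means), 2), round(statistics.stdev(means), 2))
```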

In classical statistical analysis, probability is predicated on the condition that the outcomes of an
experiment are equally likely to occur. The approach in classical statistical analysis to probability
is that the lack of knowledge implies that all possibilities are equally likely. The classical
conception of probability applies when the events have the same chance of occurring and the set
of events is mutually exclusive and collectively exhaustive. This approach allows business
researchers to project future outcomes with some degree of confidence.
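Under the classical conception, a probability is a count of favorable outcomes divided by a count of all equally likely, mutually exclusive, collectively exhaustive outcomes. A small sketch with two fair dice:

```python
from itertools import product

# All 36 equally likely, mutually exclusive, collectively exhaustive
# outcomes of rolling two fair dice.
outcomes = list(product(range(1, 7), repeat=2))

def classical_probability(event):
    """Classical probability: favorable outcomes / total outcomes."""
    favorable = [o for o in outcomes if event(o)]
    return len(favorable) / len(outcomes)

print(classical_probability(lambda roll: sum(roll) == 7))  # 6/36, about 0.167
```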

2) Transportation
In 1989 the Dutch government started the project “Safety in Inland Waterway Transport” to
establish a minimum safety level and to develop a model to assess the effect and effectiveness of
new safety measures. This model, called the Risk Effect Model, calculates the integral
impacts of safety measures for the entire waterway system including the risks of transporting
dangerous goods. The final result of the project is a framework for evaluation, which supports
cost-benefit analysis by weighing negative economic effects against achievements in safety, for
different measures.

In this paper we will present the methods which have been used to calculate the probability of an
accident, using casuistry (records of past accident cases). In this project the probability of an
accident is modelled per elementary
traffic situation (a combination of several ships carrying out a ship motion produces a traffic
situation). The number of accidents can be estimated by the number of elementary traffic
situations multiplied by the probability of an accident per elementary traffic situation.
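The estimate described above is a simple product; a sketch with invented figures (the real model obtains the per-situation probability from fitted data):

```python
def expected_accidents(n_situations, p_per_situation):
    """Expected number of accidents: the number of elementary traffic
    situations times the accident probability per situation."""
    return n_situations * p_per_situation

# Hypothetical figures: 200,000 elementary situations per year on a
# stretch of waterway, each with a one-in-a-million accident probability.
print(expected_accidents(200_000, 1e-6))  # about 0.2 expected accidents
```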

In the paper we describe fitting procedures used to obtain the model that "forecasts" the
probability of accidents as a function of waterway attributes and circumstances. We have used
Generalized Linear Models (GLM), which do not need the assumption that the accident
probability is normally distributed. We have used the binomial approach in the GLM models.
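A binomial GLM with the canonical logit link is ordinary logistic regression. The idea can be sketched as follows (this is not the paper's actual model; the data, the single predictor, and the plain gradient-descent fitting routine are invented for illustration):

```python
import math
import random

def fit_logistic(xs, ys, lr=0.1, epochs=3000):
    """Fit p(accident) = 1 / (1 + exp(-(a + b*x))) by gradient descent,
    i.e. a one-predictor binomial GLM with the canonical logit link."""
    a = b = 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_a = grad_b = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            grad_a += p - y
            grad_b += (p - y) * x
        a -= lr * grad_a / n
        b -= lr * grad_b / n
    return a, b

# Invented data: x stands for (say) poor visibility on a 0..1 scale, and
# accidents are generated so that worse visibility means more accidents.
rng = random.Random(0)
xs = [rng.random() for _ in range(300)]
ys = [1 if rng.random() < 1.0 / (1.0 + math.exp(3.0 - 4.0 * x)) else 0
      for x in xs]
a, b = fit_logistic(xs, ys)
print(b > 0)  # the fitted slope recovers the positive relationship
```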

We present the results of the fitting procedures for one group of elementary traffic situations
(encounters between through-going vessels). The primary governing variables appear to be visibility, wind
speed, the ratio of the navigable width and the necessary width for an elementary traffic situation,
and the bend radius of the waterway. The circumstances (visibility and wind speed) are more
explanatory with respect to the probability of accidents than the waterway characteristics are.

THE DIFFERENCES BETWEEN THEORETICAL PROBABILITY AND EMPIRICAL PROBABILITY

Theoretical Probability

Definition of Theoretical Probability

Probability is a likelihood that an event will happen.

We can find the theoretical probability of an event using the following ratio:

P(event) = (number of favorable outcomes) / (total number of possible outcomes)

Let’s do a couple of examples.

Solved Examples on Theoretical Probability

Example 1

If we toss a fair coin, what is the probability that a tail will show up?

Solution:

Tossing a tail is the favorable outcome here.

When you toss a coin there are only 2 possible outcomes: a Head or a Tail

So the probability of tossing a tail is 1 out of 2, that is, 1/2.

We can also represent this probability as a decimal (0.5) or as a percent (50%).
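The example above can be sketched in code, computing the ratio of favorable to possible outcomes and showing the fraction, decimal, and percent forms:

```python
from fractions import Fraction

def theoretical_probability(favorable, possible):
    """Theoretical probability as the ratio favorable / possible outcomes."""
    return Fraction(favorable, possible)

# Tossing a fair coin: one favorable outcome (a tail) out of two.
p_tail = theoretical_probability(1, 2)
print(p_tail, float(p_tail), f"{float(p_tail):.0%}")  # 1/2 0.5 50%
```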

Empirical Probability

Empirical probability is a form of probability based on events that have actually occurred,
calculated from collected empirical evidence. An empirical probability is closely related to the
relative frequency in a given probability distribution.
In order for a theory to be proved or disproved, empirical evidence must be collected. In finance,
for example, an empirical study would be performed using actual market data; many such studies
have been conducted on the capital asset pricing model (CAPM), and the results are somewhat
mixed.
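An empirical probability is simply a relative frequency computed from observed data. A sketch with an invented record of die rolls (a real empirical study would use market or experimental data instead):

```python
from collections import Counter

# Invented "collected" data: twenty recorded rolls of a die.
rolls = [3, 6, 1, 6, 2, 5, 4, 6, 3, 1, 2, 6, 5, 4, 3, 6, 1, 2, 4, 5]

def empirical_probability(value, observations):
    """Relative frequency: occurrences of the value / number of observations."""
    return Counter(observations)[value] / len(observations)

# Five sixes in twenty rolls gives 0.25, which differs from the
# theoretical 1/6 simply because the sample is small.
print(empirical_probability(6, rolls))  # 0.25
```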
