
Q.1 Functions of Statistics

Statistics is used for various purposes. It is used to simplify mass data and to make
comparisons easier. It is also used to bring out trends and tendencies in the data
as well as the hidden relations between variables. All this helps to make decision
making much easier. Let us look at each function of Statistics in detail.

1. Statistics simplifies mass data

Statistical concepts help to simplify complex data. Statistical methods reduce the
complexity of the data and thereby make any huge mass of data easier to understand,
so that managers can make decisions more easily.
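
As a small, hypothetical illustration of this point (the figures and the Python code below
are illustrative only and not taken from the text), a handful of summary statistics can
stand in for thousands of raw values:

import random
import statistics

# Hypothetical example: 10,000 daily sales figures standing in for "mass data".
random.seed(42)
daily_sales = [random.gauss(500, 80) for _ in range(10_000)]

# A few summary statistics convey the essential picture of the whole mass of data.
print("mean   :", round(statistics.mean(daily_sales), 2))
print("median :", round(statistics.median(daily_sales), 2))
print("std dev:", round(statistics.stdev(daily_sales), 2))
print("range  :", round(min(daily_sales), 2), "to", round(max(daily_sales), 2))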

2. Statistics makes comparison easier

Without statistical methods and concepts, data cannot be collected and compared
easily. Statistics helps us to compare data collected from different sources: grand
totals, measures of central tendency, measures of dispersion, graphs and diagrams, and
the coefficient of correlation all provide ample scope for comparison.

Hence, visual representation of numerical data helps you to compare the data with
less effort and to make effective decisions. The graphical curves in figure 1.7 and
figure 1.8 show the profits of CBA Company and ZYX Company respectively for the
years 1998 to 2008, with the profits plotted on the Y-axis and the timeline in years on
the X-axis. From the graphs we can compare the profits of the two companies and
conclude that the profits of CBA Company in 2008 are higher than those of ZYX
Company. The curve in figure 1.7 shows that the profits of CBA Company are
increasing, whereas the curve in figure 1.8 is flat for ZYX Company from the middle of
the decade (1998-2008).
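
A minimal sketch of this kind of graphical comparison is given below. The profit figures
are hypothetical stand-ins, since the actual data behind figures 1.7 and 1.8 is not
reproduced here, and the matplotlib library is assumed to be available:

import matplotlib.pyplot as plt

# Hypothetical profit figures; illustrative only, not the data behind figures 1.7 and 1.8.
years = list(range(1998, 2009))
cba_profits = [10, 12, 15, 18, 22, 27, 33, 40, 48, 57, 67]   # steadily increasing
zyx_profits = [12, 15, 19, 24, 28, 30, 30, 30, 30, 30, 30]   # flat from mid-decade

plt.plot(years, cba_profits, marker="o", label="CBA Company")
plt.plot(years, zyx_profits, marker="s", label="ZYX Company")
plt.xlabel("Year")
plt.ylabel("Profit")
plt.title("Profit comparison, 1998-2008")
plt.legend()
plt.show()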

3. Statistics brings out trends and tendencies in the data

Once data has been collected, the trends and tendencies in it can easily be analysed
using the various concepts of Statistics.

4. Statistics brings out the hidden relations between variables

Statistical analysis helps in drawing inferences from data and brings out the hidden
relations between variables, such as the correlation between two quantities.
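
For example, the coefficient of correlation mentioned earlier quantifies such a hidden
relation. The following is a minimal sketch with hypothetical advertising-spend and
sales figures (statistics.correlation requires Python 3.10 or later):

import statistics

# Hypothetical paired observations: advertising spend vs. sales (illustrative only).
ad_spend = [10, 12, 15, 17, 20, 22, 25, 28]
sales = [40, 44, 52, 55, 63, 66, 72, 80]

# Pearson's coefficient of correlation; a value near +1 signals a strong
# positive linear relation that may not be obvious from the raw figures.
r = statistics.correlation(ad_spend, sales)
print(f"coefficient of correlation r = {r:.3f}")
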
5. Statistics makes decision making easier

With the proper application of Statistics and statistical software packages to the
collected data, managers can take effective decisions, which can increase the profits
of a business.

Q.2 Methods of Statistical Survey

There are several ways of administering a survey, including:

Telephone

• use of interviewers encourages sample persons to respond, leading to higher response rates.[1]
• interviewers can increase comprehension of questions by answering respondents' questions
• fairly cost efficient, depending on local call charge structure
• good for large national (or international) sampling frames
• some potential for interviewer bias (e.g. some people may be more willing to discuss a sensitive issue with a female interviewer than with a male one)
• cannot be used for non-audio information (graphics, demonstrations, taste/smell samples)
• unreliable for consumer surveys in rural areas where telephone penetration is low.[2]
• three types:
   o traditional telephone interviews
   o computer assisted telephone dialing
   o computer assisted telephone interviewing (CATI)

Mail

• the questionnaire may be handed to the respondents or mailed to them, but in all cases they are returned to the researcher via mail
• cost is very low, since bulk postage is cheap in most countries
• long time delays, often several months, before the surveys are returned and statistical analysis can begin
• not suitable for issues that may require clarification
• respondents can answer at their own convenience (allowing them to break up long surveys; also useful if they need to check records to answer a question)
• no interviewer bias introduced
• large amount of information can be obtained: some mail surveys are as long as 50 pages
• response rates can be improved by using mail panels
   o members of the panel have agreed to participate
   o panels can be used in longitudinal designs where the same respondents are surveyed several times

Online surveys

• can use web or e-mail
• web is preferred over e-mail because interactive HTML forms can be used
• often inexpensive to administer
• very fast results
• easy to modify
• response rates can be improved by using online panels - members of the panel have agreed to participate
• if not password-protected, easy to manipulate by completing multiple times to skew results
• data creation, manipulation and reporting can be automated and/or easily exported into a format which can be read by PSPP, DAP or other statistical analysis software
• data sets created in real time
• some are incentive based (such as Survey Vault or YouGov)
• may skew the sample towards a younger demographic compared with CATI
• often difficult to determine/control selection probabilities, hindering quantitative analysis of data
• used in large-scale industries

Personal in-home survey

• respondents are interviewed in person, in their homes (or at the front door)
• very high cost
• suitable when graphic representations, smells, or demonstrations are involved
• often suitable for long surveys (but some respondents object to allowing strangers into their home for extended periods)
• suitable for locations where telephone or mail are not developed
• skilled interviewers can persuade respondents to cooperate, improving response rates
• potential for interviewer bias

Personal mall intercept survey

• shoppers at malls are intercepted - they are either interviewed on the spot, taken to a room and interviewed, or taken to a room and given a self-administered questionnaire
• socially acceptable - people feel that a mall is a more appropriate place to do research than their home
• potential for interviewer bias
• fast
• easy to manipulate by completing multiple times to skew results

Q.5 Conditional Probability

Conditional probability is the probability of some event A, given the occurrence
of some other event B. Conditional probability is written P(A|B), and is read "the
(conditional) probability of A, given B" or "the probability of A under the condition B".
When in a random experiment the event B is known to have occurred, the possible
outcomes of the experiment are reduced to B, and hence the probability of the
occurrence of A is changed from the unconditional probability into the conditional
probability given B.

Joint probability is the probability of two events in conjunction. That is, it is the
probability of both events together. The joint probability of A and B is written
P(A ∩ B) or P(A, B).
Marginal probability is then the unconditional probability P(A) of the event A; that is,
the probability of A, regardless of whether event B did or did not occur. If B can be
thought of as the event of a random variable X having a given outcome, the marginal
probability of A can be obtained by summing (or integrating, more generally) the joint
probabilities over all outcomes for X. For example, if there are two possible outcomes
for X with corresponding events B and B', this means that P(A) = P(A ∩ B) + P(A ∩ B').
This is called marginalization.
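
A small numeric sketch of marginalization, with hypothetical joint probabilities chosen
only for illustration:

from fractions import Fraction

# Hypothetical joint probabilities, chosen only to illustrate marginalization.
p_A_and_B = Fraction(12, 100)      # P(A ∩ B)
p_A_and_not_B = Fraction(28, 100)  # P(A ∩ B')

# Marginal (unconditional) probability of A: sum the joint probabilities
# over all possible outcomes of the other event.
p_A = p_A_and_B + p_A_and_not_B
print("P(A) =", p_A)  # 2/5, i.e. 0.4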

In these definitions, note that there need not be a causal or temporal relation between
A and B. A may precede B or vice versa or they may happen at the same time. A may
cause B or vice versa or they may have no causal relation at all. Notice, however, that
causal and temporal relations are informal notions, not belonging to the probabilistic
framework. They may apply in some examples, depending on the interpretation given to
events.

Conditioning of probabilities, i.e. updating them to take account of (possibly new)
information, may be achieved through Bayes' theorem. In such conditioning, the
probability of A given only initial information I, P(A|I), is known as the prior probability.
The updated conditional probability of A, given I and the outcome of the event B, is
known as the posterior probability, P(A|B,I).
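
A minimal sketch of such a Bayesian update, using hypothetical values for the prior and
the likelihoods:

# Hypothetical values, for illustration only.
p_A = 0.30             # prior P(A | I)
p_B_given_A = 0.80     # likelihood P(B | A, I)
p_B_given_notA = 0.20  # likelihood P(B | not A, I)

# Law of total probability for B, then Bayes' theorem for the posterior.
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)
p_A_given_B = p_B_given_A * p_A / p_B

print(f"posterior P(A | B, I) = {p_A_given_B:.3f}")  # approximately 0.632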

Introduction

Consider the simple scenario of rolling two fair six-sided dice, labelled die 1 and die 2.
Define the following three events (not assumed to occur simultaneously):

A: Die 1 lands on 3.

B: Die 2 lands on 1.

C: The dice sum to 8.

The prior probability of each event describes how likely the outcome is before the dice
are rolled, without any knowledge of the roll's outcome. For example, die 1 is equally
likely to fall on each of its 6 sides, so P(A) = 1/6. Similarly P(B) = 1/6. Likewise, of the
6 × 6 = 36 possible ways that a pair of dice can land, just 5 result in a sum of 8 (namely 2
and 6, 3 and 5, 4 and 4, 5 and 3, and 6 and 2), so P(C) = 5/36.

Some of these events can both occur at the same time; for example events A and C can
happen at the same time, in the case where die 1 lands on 3 and die 2 lands on 5. This is
the only one of the 36 outcomes where both A and C occur, so its probability is 1/36.
The probability of both A and C occurring is called the joint probability of A and C and is
written P(A ∩ C), so P(A ∩ C) = 1/36. On the other hand, if die 2 lands on 1, the
dice cannot sum to 8, so P(B ∩ C) = 0.

Now suppose we roll the dice and cover up die 2, so we can only see die 1, and observe
that die 1 landed on 3. Given this partial information, the probability that the dice sum
to 8 is no longer 5/36; instead it is 1/6, since die 2 must land on 5 to achieve this result.
This is called the conditional probability, because it is the probability of C under the
condition that A is observed, and is written P(C | A), which is read "the probability of C
given A." Similarly, P(C | B) = 0, since if we observe die 2 landed on 1, we already know
the dice can't sum to 8, regardless of what the other die landed on.
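
These conditional probabilities can be checked by enumerating the 36 equally likely
outcomes; the following is a small Python sketch of that check:

from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes for (die 1, die 2).
outcomes = list(product(range(1, 7), repeat=2))

def prob(event):
    # Probability of an event = favourable outcomes / total outcomes.
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

A = lambda o: o[0] == 3           # die 1 lands on 3
B = lambda o: o[1] == 1           # die 2 lands on 1
C = lambda o: o[0] + o[1] == 8    # the dice sum to 8

# Conditioning on an event restricts the sample space to that event.
p_C_given_A = prob(lambda o: A(o) and C(o)) / prob(A)
p_C_given_B = prob(lambda o: B(o) and C(o)) / prob(B)

print("P(C | A) =", p_C_given_A)  # 1/6
print("P(C | B) =", p_C_given_B)  # 0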

On the other hand, if we roll the dice and cover up die 2, and observe die 1, this has no
impact on the probability of event B, which only depends on die 2. We say events A and
B are statistically independent or just independent, and in this case

P(B | A) = P(B) = 1/6.

In other words, the probability of B occurring after observing that die 1 landed on 3 is
the same as before we observed die 1.

Intersection events and conditional events are related by the formula:

P(C | A) = P(C ∩ A) / P(A)

In this example, we have:

P(C | A) = P(C ∩ A) / P(A) = (1/36) / (1/6) = 1/6

As noted above, P(B | A) = P(B), so by this formula:

P(B | A) = P(A ∩ B) / P(A) = P(B)

On multiplying across by P(A),

P(A ∩ B) = P(A) P(B)

In other words, if two events are independent, their joint probability is the product of
the prior probabilities of each event occurring by itself.
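
A quick arithmetic check of this product rule for the dice example, using Python's
fractions module to keep the probabilities exact:

from fractions import Fraction

p_A = Fraction(1, 6)         # die 1 lands on 3
p_B = Fraction(1, 6)         # die 2 lands on 1
p_A_and_B = Fraction(1, 36)  # only the outcome (3, 1) realises both events

# For independent events, the joint probability equals the product of the priors.
assert p_A_and_B == p_A * p_B
print("P(A ∩ B) =", p_A_and_B, "= P(A) P(B) =", p_A * p_B)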

Definition

Given a probability space (Ω, F, P) and two events A, B ∈ F with P(B) > 0, the conditional
probability of A given B is defined by

P(A | B) = P(A ∩ B) / P(B).

If P(B) = 0 then P(A | B) is undefined (see Borel–Kolmogorov paradox for an
explanation). However, it is possible to define a conditional probability with respect to a
σ-algebra of such events (such as those arising from a continuous random variable).

For example, if X and Y are non-degenerate and jointly continuous random variables
with density ƒX,Y(x, y) then, if B has positive measure,

P(X ∈ A | Y ∈ B) = (∫_B ∫_A ƒX,Y(x, y) dx dy) / (∫_B ∫ ƒX,Y(x, y) dx dy),

where the inner integral in the denominator runs over the whole range of X. The case
where B has zero measure can only be dealt with directly in the case that B = {y0},
representing a single point, in which case

P(X ∈ A | Y = y0) = (∫_A ƒX,Y(x, y0) dx) / (∫ ƒX,Y(x, y0) dx).

If A has measure zero then the conditional probability is zero. An indication of why the
more general case of zero measure cannot be dealt with in a similar way can be seen by
noting that the limit, as all δyi approach zero, of

P(X ∈ A | Y ∈ ⋃i [yi, yi + δyi])

depends on their relationship as they approach zero. See conditional expectation for
more information.
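
As a rough numerical sketch of the single-point case B = {y0}, the density ratio above
can be approximated on a grid. The bivariate normal density and the event A used
below are assumptions made purely for illustration:

import numpy as np

# Assumed joint density for illustration: standard bivariate normal with correlation rho.
rho = 0.5

def f_xy(x, y):
    norm = 1.0 / (2.0 * np.pi * np.sqrt(1.0 - rho**2))
    return norm * np.exp(-(x**2 - 2.0 * rho * x * y + y**2) / (2.0 * (1.0 - rho**2)))

# P(X in A | Y = y0) = (integral of f(x, y0) over A) / (integral of f(x, y0) over the real line)
y0 = 1.0
x = np.linspace(-10.0, 10.0, 20001)  # wide grid standing in for the real line
dx = x[1] - x[0]

in_A = (x >= 0.0) & (x <= 1.0)       # example event A = {0 <= X <= 1}

numerator = np.sum(f_xy(x[in_A], y0)) * dx
denominator = np.sum(f_xy(x, y0)) * dx
print("P(X in A | Y = y0) ≈", numerator / denominator)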
