
Descriptive Statistics

David Ramsey
e-mail: david.ramsey@ul.ie website: www.ul.ie/ramsey

May 14, 2011

1 / 67

Brief Course Outline

1. Data Collection and Descriptive Statistics.
2. Probability Theory.
3. Statistical Methods of Estimation.

A more detailed description of the course (lectures and tutorials), together with suggested reading, is available on my website.

2 / 67

1 Data Collection and Descriptive Statistics

Populations of objects and individuals show variation with respect to various traits (e.g. height, political preferences, the working life of a light bulb). It is impractical to observe all the members of the population. In order to describe the distribution of a trait in the population, we select a sample. On the basis of the sample we gain information on the population as a whole.

3 / 67

1.1 Types of Variables


Qualitative variables: These are normally categorical (non-numerical) variables. We distinguish between two types of qualitative variables:
a) Nominal: these variables are not naturally ordered in any way (e.g. i. department - mechanical engineering, mathematics, economics; ii. industrial sector).
b) Ordinal: there is a natural order for such categorisations, e.g. with respect to smoking, people may be categorised as 1: non-smokers, 2: light smokers and 3: heavy smokers. It can be seen that the higher the category number, the more an individual smokes. Exam grades are ordinal variables.
4 / 67

Quantitative Variables

These are variables which naturally take numerical values (e.g. age, height, number of children). Such variables can be measured or counted. As before, we distinguish between two types of quantitative variables.
a) Discrete variables: these are variables that take values from a set that can be listed (most commonly integer values), i.e. they are variables that can be counted. For example, the number of children or the results of die rolls.

5 / 67

b) Continuous variables

These are variables that can take values in a given range to any number of decimal places (such variables are measured, normally according to some unit, e.g. height, age, weight). It should be noted that such variables are only measured to a given accuracy (i.e. height is measured to the nearest centimetre, age is normally given to the nearest year). If a discrete random variable takes many values (e.g. the population of a town), then for practical purposes it is treated as a continuous variable.

6 / 67

1.2 Collection of data


Since it is impractical to survey all the individuals in a population, we need to base our analysis on a sample.
A population is the entire collection of individuals or objects we wish to describe.
A sample is the subset of the population chosen for data collection.
A sampling frame is a list that is used to choose a sample (e.g. electoral register, telephone book, list of addresses).
A unit is a member of the population.
A variable is any trait that varies from unit to unit.
The sample size is the number of individuals in a sample and is denoted by n.
7 / 67

1.2.1 Parameters and Statistics

A parameter is an unknown number describing a population. For example, it may be that 9% of the population of eligible voters (the electorate) wish to vote for the Green Party (we do not, however, observe this population proportion). A statistic is a number describing a sample. For example, 8% of a sample may wish to vote for the Green Party. This is the sample proportion. Statistics may be used to describe a population, but they only estimate the real parameters of the population.

8 / 67

Parameters and Statistics - Precision of Statistics

Naturally, the statistics from a sample will show some variation around the appropriate parameters, e.g. 9% of the population wish to vote for the Green Party, but only 8% in the sample. The greater the sample size, the more precise the results (suppose we take a large number of samples of size n; the larger n is, the less variable the sample proportion across the various samples, i.e. the more replicable the results).

9 / 67

Parameters and Statistics - Bias


However, there may be intrinsic bias from two possible sources:
a) Sampling bias - when a sample is chosen in such a way that some members of the population are more likely to be chosen than others. e.g. Suppose that the Labour party is most popular in Dublin. If we used samples of voters from Dublin to estimate support in the whole of Ireland, we would systematically overestimate the support of the Labour party.
b) Non-sampling bias - this results from mistakes in data entry and/or how interviewees react to being questioned. For example, supporters of the government may be more likely to hide their preference than other individuals. In this case, we would systematically underestimate the support of the government.
10 / 67

An Example of Sampling Bias - Estimation of the Population Mean


Sampling bias can be eliminated by choosing a sample in a more appropriate way, but non-sampling bias cannot. e.g. Suppose the population of interest is the Irish population as a whole and the variable of interest is height. Suppose I base my estimate of the mean height of the population on the mean height of a sample of students (i.e. the sampling frame or means of selecting a sample is inappropriate). Since students tend to be on average taller than the population as a whole, I will systematically overestimate the mean height in the population. That is to say, if I consider many samples of students of, say, size 100, a large majority of such samples would give me an overestimate of the mean height of the population as a whole.
11 / 67

Non-Sampling Bias

Other sources of non-sampling bias may be:
1. Lack of anonymity.
2. The wording of a question.
3. The desire to give an answer that would please the interviewer.
For example, surveys may systematically overestimate the willingness of individuals to pay extra for environmentally friendly goods, as stating that you are prepared to pay more is seen to be the politically correct answer.

12 / 67

Precision and Bias

It should be noted that bias is a characteristic of the way in which data are collected, not of a single sample. Increasing the sample size will improve the precision of an estimate, but will not affect the bias. Returning to the example of height: as the sample size increases, the sample mean becomes more replicable. However, if we are estimating the mean height of the entire population based on samples of students, there will always be a tendency to overestimate the mean height of the population.

13 / 67

1.3 Descriptive Statistics - 1.3.1 Qualitative (Categorical) Data

We may describe qualitative data using:
a) Frequency tables.
b) Bar charts.
c) Pie charts.

14 / 67

Frequency tables
Frequency tables display how many observations fall into each category (the frequency column), as well as the relative frequency of each category (the proportion of observations falling into each category). Let n_i denote the number of observations in category i. The relative frequency of category i is f_i, where

f_i = n_i / n.

Multiplying by 100, we obtain the relative frequency as a percentage. If there are missing data, we may also give the relative frequencies in terms of the actual number of observations, n', i.e.

f_i' = n_i / n'.
15 / 67

Frequency tables

For example, 200 students were asked which of the following bands they preferred: Franz Ferdinand, Radiohead or Coldplay. The answers may be presented in the following frequency table:

Band      | Frequency | Relative Frequency (%)
Coldplay  | 62        | 62 × 100/200 = 31
Franz F.  | 66        | 66 × 100/200 = 33
Radiohead | 72        | 72 × 100/200 = 36
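As an illustration (not part of the original slides), a minimal Python sketch of the relative frequency calculation f_i = n_i / n, using the counts from the table above:

```python
# Relative frequencies f_i = n_i / n for the band preference example.
counts = {"Coldplay": 62, "Franz Ferdinand": 66, "Radiohead": 72}

n = sum(counts.values())  # total number of observations (200)
for band, n_i in counts.items():
    f_i = n_i / n
    print(f"{band}: n_i = {n_i}, f_i = {f_i:.2f} ({100 * f_i:.0f}%)")
```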

16 / 67

Bar chart
In a bar chart the height of a bar represents the relative frequency of a given category (or the number of observations in that category).

17 / 67

Pie chart
The size of a slice in a pie chart represents the relative frequency of a category. Hence, the angle made by the slice representing category i is given (in degrees) by θ_i, where

θ_i = 360 f_i = 360 n_i / n

(i.e. we multiply the relative frequency by the number of degrees in a full revolution). If the relative frequency of observations in group i is given in percentage terms, denoted p_i, then 1% of the observations in the sample corresponds to an angle of 3.6 degrees. Thus, θ_i = 3.6 p_i.
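A short sketch (my own illustration, not from the slides) of the angle calculation θ_i = 360 n_i / n, reusing the band counts from the previous example:

```python
# Pie chart angles: theta_i = 360 * f_i = 360 * n_i / n.
counts = {"Coldplay": 62, "Franz Ferdinand": 66, "Radiohead": 72}
n = sum(counts.values())

for band, n_i in counts.items():
    theta_i = 360 * n_i / n  # angle of the slice in degrees
    print(f"{band}: {theta_i:.1f} degrees")

# The three angles sum to 360 degrees.
```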
18 / 67

Pie chart

These graphs were obtained using the SPSS (PASW) package.


19 / 67

1.3.2 Graphical Presentation of Quantitative Data

Discrete data can be presented in the form of frequency tables and/or bar charts (as above). The distribution of continuous data can be presented using a histogram. The histogram estimates the probability density function of a continuous random variable (see later).

20 / 67

Histograms for continuous variables

In order to draw a histogram for a continuous variable, we need to categorise the data into intervals of equal length. The end points of these intervals should be round numbers. The number of categories used should be approximately √n (normally between 5 and 20 categories are used). For example, if we have 30 observations, then we should use about √30 ≈ 5.5 categories. Hence, 5 and 6 are sensible choices for the number of categories. Let k be the number of categories.

21 / 67

Histograms

In order to choose the length of each interval, L, we use

L = (x_max − x_min) / k,

where x_max is the smallest round number larger than all the observations and x_min is the largest round number smaller than all the observations. If necessary, L is rounded upwards, so that the intervals are of a convenient length and the whole range of the data is covered.

22 / 67

Histograms

The intervals used are [x_min, x_min + L], (x_min + L, x_min + 2L], ..., (x_max − L, x_max]. In general, the lower end-point of an interval is assumed not to belong to that interval (to avoid a number belonging to two classes).

23 / 67

Histograms

A histogram is very similar to a bar chart. The height of the block corresponding to an interval is the relative frequency of observations in that interval. Thus, the height of a block is the number of observations in that interval divided by the total number of observations.

24 / 67

Example 1.1

We observe the height of 20 individuals (in cm). The data are given below 172, 165, 188, 162, 178, 183, 171, 158, 174, 184, 167, 175, 192, 170, 179, 187, 163, 156, 178, 182. Draw a histogram representing these data.

25 / 67

Example 1.1

We first consider the histogram. We begin by choosing the number of classes and the corresponding intervals. √20 ≈ 4.5, thus we should choose 4 or 5 intervals.

26 / 67

Example 1.1
The tallest individual is 192cm tall and the shortest 156cm. 200cm is the smallest round number larger than all the observations and 150cm is the largest round number smaller than all the observations. To calculate the length of the intervals:

L = (200 − 150) / k.

Taking k to be 4, L = 12.5. Taking k = 5, L = 10 (a nicer length). Hence, it seems reasonable to use 5 intervals of length 10, starting at 150.
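A rough sketch (not from the original slides) of this choice of the number of classes and the interval length; x_min and x_max here are the round numbers chosen by hand (150 and 200):

```python
import math

heights = [172, 165, 188, 162, 178, 183, 171, 158, 174, 184,
           167, 175, 192, 170, 179, 187, 163, 156, 178, 182]

n = len(heights)
print("sqrt(n) =", round(math.sqrt(n), 2))   # about 4.5, so 4 or 5 classes

x_min, x_max = 150, 200   # round numbers below/above all observations
for k in (4, 5):
    L = (x_max - x_min) / k
    print(f"k = {k}: interval length L = {L}")
# k = 5 gives the nicer length L = 10.
```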
27 / 67

Example 1.1

If we assume that the upper endpoint of an interval belongs to that interval, then we have the intervals [150,160], (160, 170], (170,180], (180,190], (190,200]. Now we count how many observations fall into each interval and hence the relative frequency of observations in each interval.

28 / 67

Example 1.1

Height (x)    | No. of Observations | Rel. Frequency
150 ≤ x ≤ 160 | 2                   | 2/20 = 0.1
160 < x ≤ 170 | 5                   | 5/20 = 0.25
170 < x ≤ 180 | 7                   | 7/20 = 0.35
180 < x ≤ 190 | 5                   | 5/20 = 0.25
190 < x ≤ 200 | 1                   | 1/20 = 0.05
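The counts in this table can be reproduced with a short script (an illustrative sketch, not part of the slides); each interval (a, b] contains the observations x with a < x ≤ b, except the first, which also contains its lower end-point:

```python
heights = [172, 165, 188, 162, 178, 183, 171, 158, 174, 184,
           167, 175, 192, 170, 179, 187, 163, 156, 178, 182]
n = len(heights)

edges = [150, 160, 170, 180, 190, 200]
for lower, upper in zip(edges[:-1], edges[1:]):
    # First interval is closed on the left, the others are open on the left.
    count = sum(1 for x in heights
                if (lower < x <= upper) or (lower == edges[0] and x == lower))
    print(f"({lower}, {upper}]: {count} observations, "
          f"relative frequency {count / n:.2f}")
```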

29 / 67

Example 1.1
The histogram is given below:

30 / 67

Interpretation of the histogram of a continuous variable

A histogram is an estimator of the density function of a variable (see the chapter on the distribution of random variables in Section 2). The distribution of height seems to be reasonably symmetrical around 175cm.

31 / 67

1.3.3 Symmetry and Skewness of Distributions

From a histogram we may infer whether the distribution of a random variable is symmetric or not. The histogram of height shows that the distribution is reasonably symmetric (even if the distribution of height in the population were symmetric, we would normally observe some small deviation from symmetry in the histogram as we observe only a sample).

32 / 67

Right-Skewed distributions

A distribution is said to be right-skewed if there are observations a long way to the right of the centre of the distribution, but not a long way to the left. The distribution of wages is right-skewed, since a small proportion of individuals will earn several times more than the mean wage.

33 / 67

A right-skewed distribution

34 / 67

Left-skewed distributions

A distribution is said to be left-skewed if there are observations a long way to the left of the centre of the distribution, but not a long way to the right. For example, the distribution of weight of participants in the coxed boat races will have a left-skewed distribution. This is due to the fact that the majority of participants will be heavy rowers, while a minority will be very light coxes.

35 / 67

A Left-skewed Distribution

36 / 67

1.4 Numerical Methods of Describing Quantitative Data

We consider two types of measure:
1. Measures of centrality - give information regarding the location of the centre of the distribution (the mean, median).
2. Measures of variability (dispersion) - give information regarding the level of variation (the range, variance, standard deviation, interquartile range).

37 / 67

1.4.1 Measures of centrality

1. The Sample Mean, x̄. Suppose we have a sample of n observations. The mean is given by the sum of the observations divided by the number of observations:

x̄ = (1/n) Σ_{i=1}^{n} x_i,

where x_i is the value of the i-th observation.

38 / 67

The Population Mean

μ denotes the population mean. If there are N units in the population, then

μ = (Σ_{i=1}^{N} x_i) / N,

where x_i is the value of the trait for individual i in the population. μ is normally unknown. The sample mean x̄ (a statistic) is an estimator of the population mean μ (a parameter).

39 / 67

2. The sample median Q2

In order to calculate the sample median, we first order the observations from the smallest to the largest. The order statistic x_(i) is the i-th smallest observation in a sample (i.e. x_(1) is the smallest observation and x_(n) is the largest observation). The notation for the median comes from the fact that the median is the second quartile (see quartiles in the section on measures of dispersion).

40 / 67

The sample median Q2

If n is odd, then the median is the observation which appears in the centre of the ordered list of observations. Hence, Q2 = x_(0.5[n+1]).
If n is even, then the median is the average of the two observations which appear in the centre of the ordered list of observations. Hence, Q2 = 0.5[x_(0.5n) + x_(0.5n+1)].
One half of the observations are smaller than the median and one half are greater.
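A brief sketch (my own addition, using the height data from the earlier example) of the sample mean and the order-statistic rule for the median:

```python
heights = [172, 165, 188, 162, 178, 183, 171, 158, 174, 184,
           167, 175, 192, 170, 179, 187, 163, 156, 178, 182]

n = len(heights)
mean = sum(heights) / n
print("sample mean:", mean)

ordered = sorted(heights)          # order statistics x_(1), ..., x_(n)
if n % 2 == 1:
    median = ordered[(n + 1) // 2 - 1]                       # x_(0.5[n+1])
else:
    median = 0.5 * (ordered[n // 2 - 1] + ordered[n // 2])   # average of middle two
print("sample median:", median)
```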

41 / 67

The sample median


One advantage of the median as a measure of centrality is that it is less sensitive to extreme observations (which may be errors) than the mean. When the distribution is skewed, it is preferable to use the median as a measure of centrality, e.g. the median wage rather than the average wage should be used as a measure of what the average man on the street earns. The distribution of wages is right-skewed and the small proportion of people who earn very high wages will have a significant effect on the mean. The mean is greater than the median. For left-skewed distributions, the mean is less than the median.
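As an illustration of this point (not from the original slides), the following sketch simulates a right-skewed sample and compares its mean and median; the exponential distribution is used purely as a stand-in for a wage-like distribution:

```python
import random
import statistics

random.seed(1)

# Simulate a right-skewed, wage-like sample (exponential distribution,
# mean roughly 40000); purely illustrative.
wages = [random.expovariate(1 / 40000) for _ in range(10000)]

print("mean  :", round(statistics.mean(wages)))
print("median:", round(statistics.median(wages)))
# For a right-skewed sample the mean typically exceeds the median.
```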

42 / 67

1.4.2 Measures of Dispersion - 1. The Range

The range is defined to be the largest observation minus the smallest observation. Since the range is only based on 2 observations, it conveys little information and is sensitive to extreme values (errors).

43 / 67

2. The sample variance s²_{n−1}

The sample variance is a measure of the average square distance from the mean.
The formula for the sample variance s²_{n−1} is given by

s²_{n−1} = (1/(n−1)) Σ_{i=1}^{n} (x_i − x̄)².

s²_{n−1} ≥ 0 and s²_{n−1} = 0 if and only if all the observations are equal to each other.

44 / 67

3. The sample standard deviation s

The sample standard deviation is given by the square root of the variance. It (and hence the sample variance) can be calculated on a scientific calculator by using the σ_{n−1} or s_{n−1} function, as appropriate. In simple terms, the standard deviation is a measure of the average distance of an observation from the mean. It cannot be greater than the maximum deviation from the mean.
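A minimal sketch (my own addition) of the s²_{n−1} formula, using the height data above; Python's statistics module uses the same n − 1 divisor, so it is shown as a cross-check:

```python
import math
import statistics

heights = [172, 165, 188, 162, 178, 183, 171, 158, 174, 184,
           167, 175, 192, 170, 179, 187, 163, 156, 178, 182]

n = len(heights)
mean = sum(heights) / n

# Sample variance with the n - 1 divisor.
var = sum((x - mean) ** 2 for x in heights) / (n - 1)
sd = math.sqrt(var)

print("variance:", round(var, 2), "check:", round(statistics.variance(heights), 2))
print("standard deviation:", round(sd, 2), "check:", round(statistics.stdev(heights), 2))
```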

45 / 67

4. The interquartile range


The i-th quartile, Q_i, is taken to be the value such that i quarters of the observations are less than Q_i. Thus, Q2 is the sample median.

If (n+1)/4 is an integer, then the lower quartile Q1 is given by

Q1 = x_((n+1)/4).

Otherwise, if a is the integer part of (n+1)/4 [this is obtained by simply removing everything after the decimal point], then

Q1 = 0.5[x_(a) + x_(a+1)].

46 / 67

The interquartile range


If (3n+3)/4 is an integer, then the upper quartile Q3 is given by

Q3 = x_((3n+3)/4).

Otherwise, if b is the integer part of (3n+3)/4, then

Q3 = 0.5[x_(b) + x_(b+1)].

The interquartile range (IQR) is the difference between the upper and lower quartiles:

IQR = Q3 − Q1.
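The quartile rule above can be written out directly; this sketch (my own, not from the slides) follows the (n+1)/4 and (3n+3)/4 convention exactly, so it may differ slightly from library functions that use other conventions:

```python
def quartiles(data):
    """Lower and upper quartiles using the (n+1)/4 and (3n+3)/4 rule."""
    x = sorted(data)                       # x[0] is x_(1), x[n-1] is x_(n)
    n = len(x)

    def order_stat(pos):
        # pos is a (possibly fractional) position; a is its integer part.
        if pos == int(pos):
            return x[int(pos) - 1]         # exactly x_(pos)
        a = int(pos)
        return 0.5 * (x[a - 1] + x[a])     # 0.5[x_(a) + x_(a+1)]

    q1 = order_stat((n + 1) / 4)
    q3 = order_stat((3 * n + 3) / 4)
    return q1, q3

heights = [172, 165, 188, 162, 178, 183, 171, 158, 174, 184,
           167, 175, 192, 170, 179, 187, 163, 156, 178, 182]
q1, q3 = quartiles(heights)
print("Q1 =", q1, "Q3 =", q3, "IQR =", q3 - q1)   # Q1 = 166.0, Q3 = 182.5
```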

47 / 67

Choice of the measure of dispersion


The units of all the measures used so far (except for the variance) are the same units as those used for the measurement of observations. The units of variance are the square of the units of measurement. For example, if we observe velocity in metres per second, the variance is measured in metres squared per second squared. For this reason the standard deviation is generally preferred to the variance as a measure of dispersion. If a distribution is skewed then the interquartile range is a more reliable measure of the dispersion of a random variable than the standard deviation.

48 / 67

Comparison of the dispersion of two variables

Sometimes we wish to compare the dispersion of two variables. In cases where different units are used to measure the two variables, or the means of the two variables are very different, it may be useful to use a measure of dispersion which does not depend on the units in which it is measured. The coefficient of variation, C.V., does not depend on the units of measurement. It is the standard deviation divided by the sample mean:

C.V. = s_{n−1} / x̄.

49 / 67

Example 1.2 - The sample mean

Calculate the measures of centrality and dispersion defined above for the following data:

6, 9, 12, 9, 8, 10

There are 6 items of data, hence

x̄ = (Σ_{i=1}^{6} x_i)/6 = (6 + 9 + 12 + 9 + 8 + 10)/6 = 9.

50 / 67

Example 1.2 - The sample median


In order to calculate the median, we first order the data. If an observation occurs k times, then it must appear k times in the list of ordered data. The ordered list of data is 6, 8, 9, 9, 10, 12. Since there is an even number of data (n = 6), the median is the average of the two observations in the middle of this ordered list. Hence,

Q2 = 0.5[x_(n/2) + x_(n/2+1)] = 0.5[x_(3) + x_(4)] = (9 + 9)/2 = 9.

51 / 67

Example 1.2 - The range

The range is the difference between the largest and the smallest observations:

Range = 12 − 6 = 6.

52 / 67

Example 1.2 - The variance and standard deviation

The variance is given by


s²_{n−1} = (1/(n−1)) Σ_{i=1}^{n} (x_i − x̄)²
         = [(6 − 9)² + (9 − 9)² + (12 − 9)² + (9 − 9)² + (8 − 9)² + (10 − 9)²]/5
         = 20/5 = 4.

The standard deviation is given by s_{n−1} = √(s²_{n−1}) = 2.

53 / 67

Example 1.2 - The interquartile range


In order to calculate the interquartile range, we first calculate the lower and upper quartiles. n = 6, hence (n+1)/4 = 1.75. The integer part of this number is 1. Hence, the lower quartile is

Q1 = 0.5[x_(1) + x_(2)] = 0.5(6 + 8) = 7.

Similarly, (3n+3)/4 = 5.25. The integer part of this number is 5. Hence, the upper quartile is

Q3 = 0.5[x_(5) + x_(6)] = 0.5(10 + 12) = 11.

Hence, IQR = 11 − 7 = 4.


54 / 67

Example 1.2 - The coefficient of variation

C.V. = s_{n−1} / x̄ = 2/9 ≈ 0.22.

Suppose a variable is by definition positive, e.g. height, weight. A coefficient of variation above 1 is accepted to be very large (such variation may occur in the case of wages when wage inequality is high). With regard to the physical traits of people, values for the coefficient of variation of around 0.1 to 0.3 are common (in humans the coefficient of variation of height is around 0.1, while the coefficient of variation for weight is somewhat bigger).
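The whole of Example 1.2 can be checked with a few lines (an illustrative sketch of the calculations above, not part of the slides):

```python
import math

data = [6, 9, 12, 9, 8, 10]
n = len(data)
x = sorted(data)                                        # 6, 8, 9, 9, 10, 12

mean = sum(data) / n                                    # 9
median = 0.5 * (x[n // 2 - 1] + x[n // 2])              # 9 (n is even)
rng = max(data) - min(data)                             # 6
var = sum((v - mean) ** 2 for v in data) / (n - 1)      # 4
sd = math.sqrt(var)                                     # 2
q1 = 0.5 * (x[0] + x[1])                                # 7, since (n+1)/4 = 1.75
q3 = 0.5 * (x[4] + x[5])                                # 11, since (3n+3)/4 = 5.25
cv = sd / mean                                          # 2/9

print(mean, median, rng, var, sd, q1, q3, q3 - q1, round(cv, 3))
```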

55 / 67

1.5 Measures of Location and Dispersion for Grouped Data - a) Discrete Random Variables
A die was rolled 100 times and the following data were obtained:

Result              | 1  | 2  | 3  | 4  | 5  | 6
No. of observations | 15 | 18 | 20 | 14 | 15 | 18

56 / 67

Grouped discrete data


Suppose the possible results are {x_1, x_2, ..., x_k} and the result x_i occurs f_i times. The total number of observations is

n = Σ_{i=1}^{k} f_i.

The sum of the observations is given by

Σ_{i=1}^{k} x_i f_i.

57 / 67

Grouped discrete data

It follows that the sample mean is given by

x̄ = (Σ_{i=1}^{k} f_i x_i) / n.

The variance of the observations is given by

s²_{n−1} = (1/(n−1)) Σ_{i=1}^{k} f_i (x_i − x̄)².

58 / 67

Grouped discrete data


The following table is useful in calculating the sample mean:

x_i   | f_i | f_i x_i
1     | 15  | 15
2     | 18  | 36
3     | 20  | 60
4     | 14  | 56
5     | 15  | 75
6     | 18  | 108
Total | 100 | 350

Hence, the sample mean is x̄ = 350/100 = 3.5.

59 / 67

Grouped discrete data


Once the mean has been calculated, we can add two columns for (x_i − x̄)² and f_i (x_i − x̄)²:

x_i   | f_i | f_i x_i | (x_i − x̄)² | f_i (x_i − x̄)²
1     | 15  | 15      | 2.5²       | 15 × 2.5² = 93.75
2     | 18  | 36      | 1.5²       | 18 × 1.5² = 40.5
3     | 20  | 60      | 0.5²       | 20 × 0.5² = 5
4     | 14  | 56      | 0.5²       | 14 × 0.5² = 3.5
5     | 15  | 75      | 1.5²       | 15 × 1.5² = 33.75
6     | 18  | 108     | 2.5²       | 18 × 2.5² = 112.5
Total | 100 | 350     |            | 289

60 / 67

Grouped discrete data

The sample variance is given by


s²_{n−1} = (1/(n−1)) Σ_{i=1}^{k} f_i (x_i − x̄)² = 289/99 ≈ 2.92.
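A small sketch (my own addition) reproducing the grouped-data calculations for the die example:

```python
values = [1, 2, 3, 4, 5, 6]          # x_i
freqs = [15, 18, 20, 14, 15, 18]     # f_i

n = sum(freqs)                                            # 100
mean = sum(f * x for x, f in zip(values, freqs)) / n      # 350/100 = 3.5
var = sum(f * (x - mean) ** 2
          for x, f in zip(values, freqs)) / (n - 1)       # 289/99 = 2.92

print("n =", n, "mean =", mean, "variance =", round(var, 2))
```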

61 / 67

Calculation of the sample median for grouped discrete data

In this case we know the exact values of the observations and hence we can order the data. In this way we can calculate the median. Since there are 100 observations, the median is Q2 = 0.5[x_(50) + x_(51)].

62 / 67

Calculation of the sample median for grouped discrete data

The 15 smallest observations are equal to 1, i.e. x_(1) = x_(2) = ... = x_(15) = 1. The next 18 smallest observations are equal to 2, i.e. x_(16) = x_(17) = ... = x_(33) = 2. The next 20 smallest observations are all equal to 3, i.e. x_(34) = x_(35) = ... = x_(53) = 3.

63 / 67

Calculation of the sample median for grouped discrete data

It follows that x_(50) = x_(51) = 3. Hence, Q2 = 0.5[x_(50) + x_(51)] = 3.
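The same argument can be sketched with cumulative frequencies (an illustration, not from the slides); the helper finds the k-th smallest observation by walking through the frequency table:

```python
values = [1, 2, 3, 4, 5, 6]
freqs = [15, 18, 20, 14, 15, 18]

def order_stat(k):
    """Return x_(k) for grouped discrete data by accumulating frequencies."""
    cumulative = 0
    for x, f in zip(values, freqs):
        cumulative += f
        if k <= cumulative:
            return x

n = sum(freqs)                         # 100 observations, so n is even
median = 0.5 * (order_stat(n // 2) + order_stat(n // 2 + 1))
print("median =", median)              # x_(50) = x_(51) = 3, so median = 3
```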

64 / 67

1.5 Measures of Location and Dispersion for Grouped Data - b) Continuous Random Variables

In such cases we have data grouped into intervals. Let x_i be the centre of the i-th interval and f_i the number of observations in the i-th interval. The approach to calculating the sample mean and variance is the same as in the case of discrete data. In order to carry out the calculations, we assume that each observation is in the middle of the appropriate interval.
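As an illustration (not from the slides), the grouped heights from the earlier example can be treated this way, taking each x_i to be the midpoint of its interval; the results only approximate the mean and variance computed from the raw data:

```python
midpoints = [155, 165, 175, 185, 195]    # centres of (150,160], ..., (190,200]
freqs = [2, 5, 7, 5, 1]                  # observations per interval

n = sum(freqs)
mean = sum(f * x for x, f in zip(midpoints, freqs)) / n
var = sum(f * (x - mean) ** 2
          for x, f in zip(midpoints, freqs)) / (n - 1)

print("approximate mean =", mean)
print("approximate variance =", round(var, 2))
```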

65 / 67
