
Statistics for Management

MB 0040 Assignment Q.1 SET-2

(a) What are the characteristics of a good measure of central tendency?

Solution :
The characteristics of a good measure of central tendency are:

- Presents mass data in a concise form: mass data is condensed to make it readable and usable for further analysis.
- Facilitates comparison: it is difficult to compare two different sets of mass data directly, but we can compare them after computing the averages of the individual data sets. While comparing, the same measure of average should be used; it leads to incorrect conclusions when, for example, the mean salary of one group of employees is compared with the median salary of another.
- Establishes relationships between data sets: averages can be used to draw inferences about unknown relationships between data sets, and computing the averages of sample data sets helps in estimating the average of the population.
- Provides a basis for decision-making: in many fields, such as business, finance and insurance, managers compute averages and draw useful inferences or conclusions from them for taking effective decisions.

The following are the requisites of a good measure of central tendency:

- It should be simple to calculate and easy to understand
- It should be based on all values
- It should not be unduly affected by extreme values
- It should not be affected by sampling fluctuations
- It should be rigidly defined
- It should be capable of further algebraic treatment


(b) What are the uses of averages?

Solution :
Appropriate situations for the use of the various averages are:

1. Arithmetic mean is used when:
- An in-depth study of the variable is needed
- The variable is continuous and additive in nature
- The data are on the interval or ratio scale
- The distribution is symmetrical

2. Median is used when:
- The variable is discrete
- Abnormal (extreme) values exist
- The distribution is skewed
- The extreme values are missing
- The characteristic studied is qualitative
- The data are on the ordinal scale

3. Mode is used when:
- The variable is discrete
- Abnormal (extreme) values exist
- The distribution is skewed
- The extreme values are missing
- The characteristic studied is qualitative

4. Geometric mean is used when:
- Rates of growth, ratios and percentages are to be studied
- The variable is multiplicative in nature

5. Harmonic mean is used when:
- The study relates to speed or time
- An average of rates which produce equal effects has to be found
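As an illustration of the averages discussed above, here is a minimal Python sketch on a small invented data set, using the standard-library statistics module (requires Python 3.8+ for geometric_mean):

```python
import statistics

# A small data set invented purely for illustration.
data = [2, 4, 4, 8]

print(statistics.mean(data))            # arithmetic mean -> 4.5
print(statistics.median(data))          # median -> 4.0
print(statistics.mode(data))            # mode -> 4
print(statistics.geometric_mean(data))  # geometric mean -> ~4.0
print(statistics.harmonic_mean(data))   # harmonic mean -> ~3.56
```

Note how the harmonic mean is at most the geometric mean, which in turn is at most the arithmetic mean; this ordering always holds for positive data.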


Calculate the 3 yearly and 5 yearly averages of the data in the table below.

Table 1: Production data from 1988 to 1997

Year                       1988 1989 1990 1991 1992 1993 1994 1995 1996 1997
Production (in lakh tons)    15   18   16   22   19   24   20   28   22   30

Solution :
The tables below display the calculated 3 yearly and 5 yearly moving averages.

Year   Value   3 Yearly Total   3 Yearly Moving Average
1988     15          -                    -
1989     18         49                  16.33
1990     16         56                  18.67
1991     22         57                  19.00
1992     19         65                  21.67
1993     24         63                  21.00
1994     20         72                  24.00
1995     28         70                  23.33
1996     22         80                  26.67
1997     30          -                    -

Year   Value   5 Yearly Total   5 Yearly Moving Average
1988     15          -                    -
1989     18          -                    -
1990     16         90                  18.0
1991     22         99                  19.8
1992     19        101                  20.2
1993     24        113                  22.6
1994     20        113                  22.6
1995     28        124                  24.8
1996     22          -                    -
1997     30          -                    -
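The moving averages above can be reproduced with a short Python sketch; the helper function name is my own, and the production figures come from Table 1:

```python
def moving_average(values, window):
    """Return simple moving averages over the given window size."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

# Production figures for 1988-1997 from Table 1.
production = [15, 18, 16, 22, 19, 24, 20, 28, 22, 30]

print([round(x, 2) for x in moving_average(production, 3)])
# [16.33, 18.67, 19.0, 21.67, 21.0, 24.0, 23.33, 26.67]
print([round(x, 1) for x in moving_average(production, 5)])
# [18.0, 19.8, 20.2, 22.6, 22.6, 24.8]
```

Each 3 yearly average is centred on the middle of its three years, which is why the first and last years have no entry; the 5 yearly average loses two years at each end.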


Q.3 (a) What is meant by secular trend? Discuss any two methods of isolating trend values in a time series.

Solution :
A secular trend is one that is sustained (or is expected to be sustained) over the long term. The term is most often used to distinguish underlying long-term trends from seasonal variations and the effects of economic cycles. There are many techniques for separating secular trends from seasonal and other variations in historical data, ranging from simple year-on-year comparisons to complex econometric models. Two widely used methods of isolating trend values are:

- Moving average method: the series is smoothed by replacing each value with the average of itself and its neighbours over a fixed period (e.g. 3 or 5 years), which cancels out short-term fluctuations and leaves the underlying trend.
- Method of least squares: a trend line (usually linear, y = a + bt) is fitted to the data so that the sum of squared deviations of the observations from the line is minimised; the fitted values are the trend values.

Secular trends are what matter most to investors, but investors need to look beyond historical trends, or even short-term forecasts, and consider the long-term sustainability of trends.
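One common technique for isolating trend values is the method of least squares. The sketch below, on a short hypothetical series, fits a straight-line trend y = a + bt using the usual closed-form least-squares coefficients; the series and function name are invented for illustration:

```python
def linear_trend(values):
    """Fit y = a + b*t by least squares and return the fitted trend values."""
    n = len(values)
    t = range(n)
    t_mean = sum(t) / n
    y_mean = sum(values) / n
    # Slope: covariance of (t, y) over variance of t.
    b = (sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, values))
         / sum((ti - t_mean) ** 2 for ti in t))
    a = y_mean - b * t_mean
    return [a + b * ti for ti in t]

# A hypothetical six-period series.
series = [15, 18, 16, 22, 19, 24]
print([round(v, 2) for v in linear_trend(series)])
```

A useful property of the least-squares trend is that the fitted values have the same total (and hence the same mean) as the original series, so the trend passes through the centre of the data.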

Q.3 (b) What is seasonal variation of a time series? Describe the various methods you know to evaluate it and examine their relative merits.

Solution :
Seasonal variation is the component of a time series that repeats in a regular pattern within a period of one year or less, caused by factors such as weather, festivals, holidays and business customs. The common methods of measuring it are:

- Simple averages method: average the values for each season (month or quarter) across the years, and express each seasonal average as a percentage of the overall average. It is the easiest method to compute but assumes the series has no pronounced trend.
- Ratio-to-trend method: fit a trend line, express each observation as a percentage of its trend value, and average these ratios season by season. It allows for trend but not for cyclical movements.
- Ratio-to-moving-average method: express each observation as a percentage of a centred 12-month (or 4-quarter) moving average, and average the ratios by season. It is the most widely used method, since the moving average removes both trend and most cyclical influence, though some observations are lost at the ends of the series.
- Link relatives method: express each value as a percentage of the preceding value, average the link relatives for each season, and chain them together. It uses all the data but is computationally heavier and less commonly applied.
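As a concrete illustration of measuring seasonal variation, the simple-averages method can be sketched as follows, on invented quarterly data (two years of four quarters each); the function name and figures are assumptions for the example:

```python
def seasonal_indices(years):
    """years: list of [Q1, Q2, Q3, Q4] rows, one row per year.

    Average each quarter across years, then express each quarterly
    average as a percentage of the overall quarterly average.
    """
    quarter_means = [sum(q) / len(q) for q in zip(*years)]
    overall = sum(quarter_means) / len(quarter_means)
    return [100 * m / overall for m in quarter_means]

# Hypothetical quarterly observations for two years.
data = [[20, 30, 40, 30],
        [24, 34, 44, 34]]
print(seasonal_indices(data))  # [68.75, 100.0, 131.25, 100.0]
```

An index above 100 marks a quarter that runs seasonally high (here Q3), and one below 100 a quarter that runs low (here Q1); the indices average to 100 by construction.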

Q.4 The probability that a contractor will get an electrical job is 0.8, that he will get a plumbing job is 0.6, and that he will get both is 0.48. What is the probability that he gets at least one? Are the probabilities of getting the electrical and plumbing jobs independent?

Solution :
Probability of getting the electrical job: P(A) = 0.8
Probability of getting the plumbing job: P(B) = 0.6
Probability of getting both: P(A ∩ B) = 0.48
Since P(A) × P(B) = 0.8 × 0.6 = 0.48 = P(A ∩ B), the multiplication rule for independent events holds.


The probability of getting at least one job is P(A ∪ B) = P(A) + P(B) − P(A ∩ B) = 0.8 + 0.6 − 0.48 = 0.92. Since P(A ∩ B) = P(A) × P(B), the probabilities of getting the electrical and plumbing jobs are independent.
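The arithmetic above can be checked with a few lines of Python:

```python
# Given probabilities from the problem statement.
p_a, p_b, p_both = 0.8, 0.6, 0.48

# Addition rule: P(A or B) = P(A) + P(B) - P(A and B).
p_at_least_one = p_a + p_b - p_both

# Multiplication rule for independence: P(A and B) = P(A) * P(B)
# (compared with a tolerance because of floating-point rounding).
independent = abs(p_both - p_a * p_b) < 1e-12

print(round(p_at_least_one, 2))  # 0.92
print(independent)               # True
```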

Q.5

(a) Discuss the errors that arise in a statistical survey.

Solution :
Disability surveys and questionnaires are subject to the same general rules about surveys one would find in any standard textbook on epidemiology or survey methodology. The two familiar textbook requirements of good survey data are that it should be:

- Valid (measures what it is intended to measure), and
- Reliable (gives consistent results over repeated measurements).

Of course, assessing data is not a simple matter. Though reliability is relatively easy to assess, validity can only be definitively determined if there is a 'gold standard' against which the data can be measured. Yet in the case of disability data, other than impairment information, it is doubtful whether a suitable gold standard exists. This is in part why there are various, less demanding standards of validity (construct validity being the most prominent) for assessing the quality of data.

Surveys, by their nature, attract several varieties of potential error that affect both validity and reliability. There are two sources of error in survey data: sampling error and non-sampling error.

- Sampling error arises because survey estimates are based on a sample rather than a complete enumeration of the population, and the sample may not be, for a variety of reasons, representative of the whole population. Sampling error is minimised by increasing the sample size and improving the sample design.
- Non-sampling error is bias in survey estimates that is not traceable to features of the sample but still affects the validity of the data collected. Non-sampling error is very difficult to measure, and can only be minimised by paying close attention to every step in the process, from survey development and question design to data collection and processing. Since a census has no sampling error, all of its errors are attributed to non-sampling error.

Diagram 5.3 sets out some major sources of non-sampling error, grouped by problem area. We do not have space in this manual to discuss all of these potential sources of error. Specialists in survey methodology are the best people to guard against errors associated with the frame, non-response and processing. Non-sampling errors associated with the specification of the underlying concepts, objectives and data elements are problems that arise in the early development phase, some of which we have already discussed. Finally, measurement errors linked to respondents' characteristics, interviewers and instruments are all familiar problems to statisticians, and we will mention these only in passing in what follows.

Q.5

(b) What is quota sampling and when do we use it?

Solution :
Quota sampling is a method for selecting survey participants. In quota sampling, a population is first segmented into mutually exclusive sub-groups, just as in stratified sampling. Then judgment is used to select the subjects or units from each segment based on a specified proportion. For example, an interviewer may be told to sample 200 females and 300 males between the ages of 45 and 60. This means that researchers can target exactly whom they want to sample. It is this second step that makes the technique a non-probability sampling method.

In quota sampling, the selection of the sample is non-random and can be unreliable. For example, interviewers might be tempted to interview those people in the street who look most helpful, or may choose accidental sampling and question those closest to them for the sake of time-keeping. The problem is that these samples may be biased because not everyone gets a chance of selection. This non-random element is a source of uncertainty about the nature of the actual sample, and quota versus probability sampling has been a matter of controversy for many years.

Quota sampling is useful when time is limited, a sampling frame is not available, the research budget is very tight, or detailed accuracy is not important. Subsets are chosen and then either convenience or judgment sampling is used to choose people from each subset; the researcher decides how many of each category are selected. Quota sampling is thus the non-probability version of stratified sampling: in stratified sampling, subsets of the population are created so that each subset shares a common characteristic, such as gender, and random sampling then chooses a number of subjects from each subset with, unlike a quota sample, each potential subject having a known probability of being selected.
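To make the mechanics concrete, here is a hedged sketch of the quota-filling step: walk through an available pool of respondents in order and stop accepting members of a group once its quota is met. The group names, quotas and pool are all invented for the example:

```python
def quota_sample(pool, quotas):
    """pool: list of (respondent_id, group) pairs; quotas: {group: count}.

    Take respondents in the order they appear until each group's
    quota is filled. The order-dependence is exactly the non-random
    element that makes quota sampling a non-probability method.
    """
    taken = {group: [] for group in quotas}
    for respondent, group in pool:
        if group in taken and len(taken[group]) < quotas[group]:
            taken[group].append(respondent)
    return taken

# A hypothetical pool: even ids male, odd ids female.
pool = [(i, "female" if i % 2 else "male") for i in range(20)]
sample = quota_sample(pool, {"female": 3, "male": 2})
print(sample)  # {'female': [1, 3, 5], 'male': [0, 2]}
```

Note that only the earliest respondents in the pool are ever chosen, which mirrors the bias described above: whoever is easiest to reach fills the quota first.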

Q.6

(a) Why do we use a chi-square test?

Solution :
A chi-square test is a non-parametric test which can be applied to categorical (qualitative) data. The test can be applied when we have few or no assumptions about the population. Chi-square tests actually allow us to do much more than just test for the equality of several proportions. If we classify a population into several categories with respect to two attributes (such as age and job performance), we can then use a chi-square test to determine whether the two attributes are independent of each other; chi-square tests can therefore be applied to contingency tables. The χ² test is used broadly to:

- Test goodness of fit for a one-way classification, i.e. for one variable only
- Test independence (or interaction) between attributes arranged in the rows and columns of a contingency table
- Test a population variance σ² through confidence intervals suggested by the χ² test
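The test of independence can be sketched directly: for each cell of a contingency table, compare the observed count with the count expected under independence, (row total × column total) / grand total, and sum (O − E)²/E. The 2×2 table below is invented for illustration:

```python
def chi_square_statistic(table):
    """Compute sum((O - E)^2 / E) over a two-way contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under the independence hypothesis.
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# A hypothetical 2x2 table: attribute A (rows) vs attribute B (columns).
table = [[30, 20],
         [20, 30]]
print(chi_square_statistic(table))  # 4.0
```

With (2−1)(2−1) = 1 degree of freedom, the statistic 4.0 exceeds the 5% critical value of about 3.84, so the two attributes would be judged dependent at the 5% level.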

Q.6

(b) Why do we use analysis of variance?

Solution :
Analysis of variance is useful in situations such as comparing the mileage achieved by five different brands of gasoline, testing which of four different training methods produces the fastest learning record, or comparing the first-year earnings of the graduates of half a dozen different business schools. In each of these cases, we compare the means of more than two samples. Hence, in most fields, such as agriculture, medicine, finance, banking, insurance and education, the concept of Analysis of Variance (ANOVA) is used. When two or more sets of data are compared for any practical purpose, the differences among them are studied through the techniques of ANOVA. With the analysis of variance technique, we can test the null hypothesis against the alternative hypothesis. Null hypothesis, H0: all sample means are equal. Alternative hypothesis, H1: not all sample means are equal, i.e. at least one sample mean differs.


Initially the technique was applied in the fields of zoology and agriculture, but at a later stage it was applied to other fields as well. In analysis of variance, the degree of variance between two or more data sets, as well as the factors contributing to the variance, is studied. In fact, analysis of variance is the classification and cross-classification of statistical data with a view to testing whether the means of specific classifications differ significantly or whether they are homogeneous. Analysis of variance is a method of splitting the total variation of the data into constituent parts which measure different sources of variation. The total variation is split into the following two components:

- Variation within the subgroups of samples
- Variation between the subgroups of samples

Hence, the total variance is the sum of the variance between the samples and the variance within the samples. After obtaining these two variations, they are tested for significance by the F-test, which is also known as the variance ratio test.
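The split into between-group and within-group variation, and the resulting F ratio, can be sketched compactly on three invented samples; the function name is illustrative:

```python
def one_way_anova_f(groups):
    """Return the one-way ANOVA F ratio for a list of sample groups."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total number of observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Variation between the subgroups of samples.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Variation within the subgroups of samples.
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups for x in g)
    # F = (between mean square) / (within mean square).
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Three hypothetical samples with clearly different means.
groups = [[5, 6, 7], [8, 9, 10], [11, 12, 13]]
print(one_way_anova_f(groups))  # 27.0
```

Here the between-group mean square is 27 and the within-group mean square is 1, so F = 27, well above the 5% critical value of F(2, 6) ≈ 5.14; the null hypothesis of equal means would be rejected.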
