
Unit-IV

FORECASTING METHODS
Dr. D. Joseph Anbarasu
After completion of this course, students will be able to:
- define a forecasting problem in a business decision-making context,
- select appropriate techniques and data,
- prepare and communicate forecasts, and
- evaluate and monitor forecasts.

SECTIONS
Introduction to Forecasting
Judgmental Methods
Smoothing Methods
Trend Projections
Trend and Seasonal Components

Introduction to Forecasting
Fore... an ancient term of warning, bearing the threat of harm at worst, and uncertainty at best, to those within potential range... + Cast... serving up a projectile to the unseen and usually unknown beneath the deceptive surface = Forecast... a warning to those who use it... a confession of uncertainty (or deception) by those who create it... a threat of harm to those in its path.

The quantitative forecasting methods can be studied as independent modules, but they become more meaningful when studied together with other tools such as averages and measures of dispersion. Predicting, with some measure of accuracy or reliability, what future levels of demand will be is our subject. Forecasts are more than simple extrapolations of past data into the future using mathematical formulas, or trends gathered from experts. Forecasts are mechanisms for arriving at measures for planning the future. When done correctly, they provide an audit trail and a measure of their accuracy. Not only do forecasts help us plan, they help us save money. For example, one company reduced its investment in inventory from Rs. 28 million to Rs. 22 million by adopting a formal forecasting method that reduced forecast error by 10%. This is an example of forecasts helping product companies replace inventory with information, which not only saves money but also improves customer response and service.

Quantitative time series forecasting methods constitute the core of forecasting in a quantitative methods course. They rest on some assumptions: 1) past information about the variable being forecast is available, 2) the information can be quantified, and 3) patterns in the historical data are assumed to continue into the future. If the historical data is restricted to past values of the response variable of interest, the forecasting procedure is called a time series method. For example, many sales forecasts rely on the classic time series methods that we will cover in this module. When the forecast is based on past sales, we have a time series forecast.

A side note: although it says "sales" above, whenever possible one should try to forecast sales based on past demand rather than past sales. Why? Suppose you own a book shop in the downtown area of Chennai. You stock 100 copies of "Management of Christian Services in India" getting ready for sales. Further suppose that 110 customers enter your store to buy the book. What are your sales? That's right, 100. But what is your demand? Right again, 110. You would want to use the demand figure, rather than the sales figure, in preparing for next year, because the sales figures do not capture your stockouts. So why do many companies make sales forecasts based on past sales and not demand? The chief reason is cost - sales are easily captured at the checkout station, but you need some additional feature in your management information system to capture demand.

Let us go back to the introduction. The other major category of forecasting methods that rely on past data are regression models, often referred to as "causal" models as in our text. These models base their prediction of future values of the response variable, sales for example, on related variables such as disposable personal income, gender, and may be age of the consumer. However, we should use the term "causal" with caution, as age, gender, or disposable personal income may be highly related to sales, but age, gender or disposable personal income may not cause sales. We can only prove causation in an experiment. The final major category of forecasting models includes qualitative methods which generally involve the use of expert judgment to develop the forecast. These methods are useful when we do not have historical data, such as the case when we are launching a new product line without past experience. These methods are also useful when we are making projections into the far distant future. We will cover one of the qualitative models in this introduction. First, let us examine a simple classification scheme for general guidelines in selecting a forecasting method, and then cover some basic principles of forecasting.

Selecting a Forecasting Method


Table 1 illustrates general guidelines for selecting a forecasting method based on time span and purpose.

Table 1
Time Span                       Purpose               Forecasting Method
Long Range (3 or more years)    Capital Budgets       Delphi
                                Product Selection     Expert Judgment
                                Plant Location        Sales Force Composite
Intermediate (1 to 3 years)     Capacity Planning     Regression
                                Sales Planning        Time Series Decomposition
Short Range (1 year or less)    Sales Forecasting     Trend Projection
                                Scheduling            Moving Average
                                Inventory Control     Exponential Smoothing

These are general guidelines. You may find a missionary organisation using trend projection to make reliable forecasts for missionaries required for 3 years into the future. It should also be noted that, since companies use computer software time series forecasting packages rather than hand computations, they may try several different techniques and select the technique with the best measure of accuracy (lowest error). As we discuss the various techniques, and their properties, assumptions and limitations, you will surely gain an appreciation for the above classification scheme.

Forecasting Principles
Classification schemes such as the one above are useful in helping select forecasting methods appropriate to the time span and purpose at hand. There are also some general principles that should be considered when we prepare and use forecasts, especially those based on time series methods. Oliver W. Wight in Production and Inventory Control in the Computer Age, and Thomas H. Fuller in Microcomputers in Production and Inventory Management, developed a set of principles for the production and inventory control community a while back that have universal application.
1. Unless the method is 100% accurate, it must be simple enough so the people who use it know how to use it intelligently (understand it, explain it, and replicate it).
2. Every forecast should be accompanied by an estimate of the error (the measure of its accuracy).
3. Long-term forecasts should cover the largest possible group of items; restrict individual item forecasts to the short term.
4. The most important element of any forecast scheme is that thing between the keyboard and the chair.
The first principle suggests that you can get by with treating a forecast method as a "black box" only as long as it is 100% accurate. That is, if an analyst simply feeds historical data into the computer and accepts and implements the forecast output without any idea how the computations were made, that analyst is treating the forecast method as a black box. This is all right as long as the forecast error (actual observation - forecast observation) is zero. If the forecast is not reliable (high error), the analyst should be, at the least, highly embarrassed at not being able to explain what went wrong. There may be much worse ramifications than embarrassment if budgets and other planning events relied heavily on the erroneous forecast. The second principle is really important.
In another section, a simple way to measure forecast error is introduced: the difference between what actually occurs and what was predicted to occur for each forecast time period. Suppose an auto company predicts sales of 30 cars next month using Method A. Method B also comes up with a prediction of 30 cars. Without knowing the measure of accuracy of the two methods, we would be indifferent as to their selection. However, if we knew that the composite error for Method A is +/- 2 cars over a relevant time horizon, and the composite error for Method B is +/- 10 cars, we would definitely select Method A over Method B. Why would one method have so much more error than another? That will be one of our learning objectives in this module. It may be because we used a smoothing method when we should not have - such as when the data exhibits a growth trend that calls for a method incorporating trend projection. Smoothing methods such as exponential smoothing always lag trends, which results in forecast error.

The third principle might best be illustrated by an example. Suppose you are Medical Superintendent for a hospital, and you are responsible for forecasting demand for patient beds. If your forecast was going to be used for capacity planning three years from now, you might want to forecast total patient beds for the year 2003. On the other hand, if you were going to forecast demand for patient beds for April 2000, for scheduling purposes, then you would need to make separate forecasts for emergency room patient beds, surgery recovery patient beds, OB patient beds, and so forth. When much detail is required, stick to a short-term forecast horizon; aggregate your product lines/types of patients/etc. when making long-term forecasts. This generally reduces the forecast error in both situations. One should apply the last principle to any quantitative method. There is always room for judgmental adjustments to our quantitative forecasts. How can we improve the application of judgment? That is our next subject.

1. JUDGMENTAL FORECASTING

Short-term forecasting with Judgmental forecasting methods


The forecasting techniques covered thus far all involve the manipulation of historical data to produce forecasts of important variables of interest. It must be emphasized that good judgment is an essential component of all good forecasting techniques. Good judgment is required in deciding on the data that are relevant to the problem and in interpreting the results of the data analysis process, and it sometimes constitutes a major portion of the analysis itself. To make a long story short: all forecasts involve judgment. Despite the range of software available and enormous technical advances, most businesses still forecast judgmentally, with computers merely providing historical information. The role of judgment therefore goes much further, in that it is often applied directly to the quantity to be forecast. In fact, in studies of forecasting practices, researchers have found that only around 10% of firms use quantitative forecasting methods, with most practitioners favoring judgmental methods. There appear to be a number of reasons for the predominance of "direct" judgment in forecasting. These relate to the nature of statistical forecasting methods and to the nature and attitude of the personnel involved in producing and using the forecasts.

WHY predominance of direct Judgment?


1) Quantitative forecasting methods extrapolate established patterns and/or existing relationships in order to predict their continuation, assuming that such patterns/relationships will not change during the forecasting phase. When changes occur, unless detected early, large and costly forecast errors are unavoidable. Thus, on their own, statistical methods are perceived as being slow to react to change, which is a reality in the dynamic environment we live in.
2) There may be little or no past data relevant to the current forecasting problem. When available, past data may contain the effects of unusual events, like strikes, so the data have to be massaged to remove these effects before statistical methods can be applied. Moreover, statistical methods may have difficulty taking into account special events that are known to be occurring in the future, although some statistical packages now include a facility for modeling events like sales promotions.
3) Many organizations lack personnel who are skilled in the use of statistical methods.
4) Even if forecasting staff are available, a number of behavioral factors favor judgment. Managers who are well regarded for their knowledge of their products or markets may feel a loss of control and ownership if forecasts are delegated to a statistical model. Indeed, the variable to forecast may be partly controllable by the manager who is making the forecasts. In addition, forecasting meetings, where managers combine their judgments, may be seen as useful events in their own right, so there would be opposition to any replacement of these by statistical methods.
5) The processes underlying complex statistical models may not be transparent to forecast users, and the outputs of these methods therefore attract skepticism.

Disadvantages of direct judgments


The human mind has limited information-processing capacity, and the prevailing view is that people use simplifying mental strategies, called heuristics, to cope with the complexities of the forecasting task. A typical heuristic is to use this week's sales figure as a starting point for estimating next week's sales; this is known as the anchor-and-adjust heuristic. Similarly, people may judge the amount of systematic variation in a time series by assessing how representative it appears to be of their stereotypical view of a random pattern. In this case they are using what is known as the representativeness heuristic. While heuristics like these sometimes provide people with efficient ways of tackling problems, they can also lead to systematic biases in judgment. A large number of judgmental biases are documented:
1) When judgment is used to extrapolate time-series patterns, people tend to underestimate the amount of growth or decay present in the series.
2) People also tend to see systematic patterns in randomness, and, possibly as a consequence, they tend to overreact to the most recent observation.
3) When judgmental forecasters have access to non-time-series information (e.g. information about promotion campaigns), they tend to use it inconsistently or to see preconceived relationships that do not exist.

What should be done? integration


Why is the integration of judgmental forecasts with statistical methods worth considering? Maintaining a role for judgment in forecasting should help to mitigate some of the behavioral objections to "pure" statistical forecasting methods, while also reducing the effects of judgmental bias. Human judges are adaptable and can take into account one-off events, but they are inconsistent, can only take into account small amounts of data, and suffer from cognitive biases. In contrast, statistical methods are rigid, but consistent, and can make optimal use of large volumes of data. In light of the above, it is not surprising that there is much evidence to show that more accurate forecasts can be obtained by integrating judgment and statistical methods in appropriate ways. There are two methods of integration: voluntary integration and mechanical integration.

Voluntary Integration
The judgmental forecaster is supplied with details of the statistical forecast and decides how to use this in forming his or her judgment. The forecaster is therefore free to completely ignore or completely accept the statistical forecast, or merely to take some account of it in making the judgmental forecast. Usually, voluntary integration involves the application of judgmental adjustments to statistical forecasts, but it might also involve the forecaster modifying "prior" judgmental forecasts in light of a newly arrived statistical forecast.

Mechanical Integration
Here the integrated forecast is obtained through the application of a statistical method to the judgmental forecast. More details, along with the problems of mechanical integration, will be given in class.

Long-term / Medium-term Forecasting with Judgmental Methods
There are situations which suggest the use of imagination and brainstorming rather than complete reliance on the collection and manipulation of historical data.
1) Jury of executive opinion / The Delphi method
2) Scenario writing

The Delphi method


When experts are gathered in a single meeting location and asked about the future, group dynamics can sometimes distort the process and result in a consensus that may not be carefully thought out by all participants. The presence of dominating individuals in the group and pressures to conform can lead to judgments being formed without sufficient exchange of information and views. Problems like these have led to the development of structured group techniques like the Delphi method. The Delphi method was first used by an Air Force-funded RAND Corporation project in the 1950s.

How does Delphi work?


In the first round of the method, the experts reply in writing to the questions posed by the investigating team. The team then summarizes the comments of the participants and mails them back. Participants are then able to read the reactions of the others and to either defend their original views or modify them based on the views of others. This process continues through two or three rounds until the investigators are satisfied that many of the viewpoints have been developed and carefully considered. Participants may then be invited to meet to share and debate their viewpoints. At the conclusion of this process, the investigating team should have good insight into the future and can begin to plan their organization's posture accordingly. Any Delphi procedure has four key features: anonymity, iteration, controlled feedback, and aggregation of group response.
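The controlled feedback step of a Delphi round can be sketched in a few lines. The text does not prescribe a formula, so the sketch below uses one common convention - feeding back the median and interquartile range of the anonymous replies; the function name and the numbers are purely illustrative:

```python
from statistics import quantiles

def delphi_feedback(estimates):
    """Summarize one Delphi round: anonymous estimates in, controlled feedback out."""
    q1, q2, q3 = quantiles(estimates, n=4)  # quartiles of the group response
    return {"median": q2, "iqr": (q1, q3)}

# Round 1: experts' anonymous written replies (illustrative numbers)
round1 = [5200, 5600, 4800, 6100, 5400]
feedback = delphi_feedback(round1)
print(feedback)

# Round 2: experts defend or revise their views after seeing the feedback;
# iteration continues for two or three rounds
round2 = [5300, 5500, 5100, 5700, 5400]
print(delphi_feedback(round2))
```

The four key features map directly onto the sketch: anonymity (only numbers are collected), iteration (repeated rounds), controlled feedback (the summary dictionary), and aggregation of the group response (the median).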

Example
A firm that does most of its business in the US is considering expanding its presence in Europe, Japan and Australia, where it currently has relatively small percentages of its product market. Before committing to the capital expenditures associated with global expansion, the company is interested in developing forecasts of sales growth in Europe, Japan, and Australia for the next 10 years.

Scenario Writing
Scenario writing involves defining the particulars of an uncertain future by writing a "script" for the environment of an organization over many years in the future. New technology, population shifts, and changing consumer demands are among the factors that are considered and woven into this speculation to provoke the thinking of top management. A most likely scenario is usually written, along with one or more less likely, but possible, scenarios. By considering the posture of the company for each of these possible future environments, top management is in a better position to react to actual business environment changes as they occur and to recognize the long-range implications of subtle changes that might otherwise go unnoticed. The scenario writing process is often followed by a discussion phase, sometimes by a group other than the one that developed the scenarios. Discussions among the groups can then be used to defend and modify viewpoints so that a solid consensus and alternative scenarios are developed. For example, scenarios might be developed by a company's planning staff and then discussed by the top management team. Even if none of the scenarios subsequently prove to be totally true, this process encourages the long-range thinking of the top management team and better prepares it to recognize and react to important environmental changes.

Example
A company that manufactures industrial telephone and television cables decides to conduct a scenario-writing exercise prior to its annual weekend retreat. Each member of the retreat group is asked to write three scenarios that might face the company 5 years from now: a worst-case, a most-likely, and a best-case scenario. After these writing assignments are completed, and just before the weekend retreat, the president and his senior vice president summarize the contributions into three scenarios on which they intend to focus the group's discussion during the two-day retreat.

Let's do a "for fun" (not graded and purely voluntary) Delphi exercise. Suppose you are a market expert and wish to join the other experts in our class in predicting what the BSE Share Price Index will be on April 16, 2005 (as close to the tax due date as possible). I will post a conference topic called "BSE Predictions" on the course. Please reply to that conference topic by simply stating what you think the BSE will close at on April 16, 2005. Please respond by January 27, 2005, so I can post the summary statistics before we leave the forecasting material on February 3rd. We will now begin our discussion of quantitative time series forecasting methods.

2. Estimating a Model: Least Squares Regression
In order to estimate the parameters we will need data on the two variables, sales and ads (or, in general, Y and X). We can then apply a suitable technique to fit the model (i.e. to estimate the parameters). The usual method is called least squares estimation. It ensures that the sum of squares of the differences between the actual and fitted values over the sample is as small as possible. Other methods could be used to fit the model, but the properties of the resulting estimates may not be as suitable (or even known with any certainty). Least squares principles mean that we can produce formulae for the two parameter estimates in terms of the data on X and Y. In fact, by rearranging these formulae, several alternative versions can be derived. One of the simplest forms to remember is in terms of the deviations of the observations from their sample means. Using lower-case letters to denote deviations from sample means (x = X - X̄, y = Y - Ȳ), and putting "hats" over the parameter letters to indicate that they are estimators rather than true parameter values, we can write down the following formulae:

b̂ = Σxy / Σx²        â = Ȳ - b̂X̄

Notice that the equation for â emphasizes that the mean point (X̄, Ȳ) lies on the estimated regression line. Suppose we have 12 observations on a quarterly basis for the last three years, as follows:

Table 1
Quarter          1     2     3     4     5     6     7     8     9    10    11    12
Sales         1420  1500  1640  1560  1600  1650  1690  1800  1760  1790  1850  2000
Advertisements 700   750   800   770   790   810   840   900   875   880   920  1000

NB: All figures are in value terms, measured in Rs. 1000's. This is not really a very large sample (n = 12), but it will serve for expository purposes. Before we fit the regression model, let us examine the time series plot and scatter diagram to confirm that a linear model seems appropriate. Having satisfied ourselves that a linear model will be suitable, we can now go ahead and calculate the mean values of X and Y, then the deviations from the mean values and the relevant totals for the formulae, eventually arriving at our estimates for the parameters a and b. It makes sense to lay out the calculations in a table, as shown below. (In fact, these results were obtained using the spreadsheet package Excel, which has the advantage of providing accurate results - less susceptible to errors than a calculator, because the deviations formula can be copied down the columns - as well as neat, accurate graphs. Such packages also typically incorporate built-in regression commands which produce additional statistical information.) Why don't you try it with your calculator?

Quarter   Sales (Y)   Ads (X)   y = Y - Ȳ   x = X - X̄        x²          xy
1          1420        700      -268.33     -136.25      18564.06    36560.42
2          1500        750      -188.33      -86.25       7439.06    16243.75
3          1640        800       -48.33      -36.25       1314.06     1752.08
4          1560        770      -128.33      -66.25       4389.06     8502.08
5          1600        790       -88.33      -46.25       2139.06     4085.42
6          1650        810       -38.33      -26.25        689.06     1006.25
7          1690        840         1.67        3.75         14.06        6.25
8          1800        900       111.67       63.75       4064.06     7118.75
9          1760        875        71.67       38.75       1501.56     2777.08
10         1790        880       101.67       43.75       1914.06     4447.92
11         1850        920       161.67       83.75       7014.06    13539.58
12         2000       1000       311.67      163.75      26814.06    51035.42
Total     20260      10035         0.00        0.00      75856.25   147075.00

Ȳ = ΣY / n = 20260 / 12 = 1688.333
X̄ = ΣX / n = 10035 / 12 = 836.25

b̂ = Σxy / Σx² = 147075.00 / 75856.25 = 1.9389

â = Ȳ - b̂X̄ = 1688.333 - (1.9389)(836.25) = 66.96

The a priori restrictions are satisfied and both estimates are plausible. The results suggest that every rupee extra spent on advertising raises sales by Rs. 1.9389, and that there is a "core" level of sales of about Rs. 66.96 (thousand) which can be expected whatever the level of advertising. A figure (not reproduced here) shows the scatter diagram with the fitted equation plotted on it.
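The least squares calculations can be checked with a few lines of code. The sketch below recomputes b̂ and â from the Table 1 data using the deviations-from-means formulae (plain Python; no forecasting library is assumed):

```python
sales = [1420, 1500, 1640, 1560, 1600, 1650, 1690, 1800, 1760, 1790, 1850, 2000]
ads   = [700, 750, 800, 770, 790, 810, 840, 900, 875, 880, 920, 1000]

n = len(sales)
y_bar = sum(sales) / n          # mean of Y: 1688.333
x_bar = sum(ads) / n            # mean of X: 836.25

# Deviations from the sample means
x = [xi - x_bar for xi in ads]
y = [yi - y_bar for yi in sales]

sum_xy = sum(xi * yi for xi, yi in zip(x, y))   # 147075.00
sum_xx = sum(xi * xi for xi in x)               # 75856.25

b = sum_xy / sum_xx             # slope: about 1.9389
a = y_bar - b * x_bar           # intercept: about 66.96

print(f"b = {b:.4f}, a = {a:.2f}")
```

Feeding the same 12 observations into a spreadsheet's built-in regression command should reproduce the same slope and intercept.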

3. Smoothing Methods
In this section we want to cover the components of a time series; the naive, moving average and exponential smoothing methods of forecasting; and measuring forecast accuracy for each of the methods introduced.

Pause and Reflect


Recall that there are three general classes of forecasting or prediction models. Qualitative methods, including the Delphi, rely on expert judgment and opinion, not historical data. Regression models rely on historical information about both predictor variables and the response variable of interest. Quantitative time series forecasting methods rely on historical numerical information about the variable of interest and assume patterns in the past will continue into the future. This section begins our study of the time series models, beginning with patterns or components of time series.

Components of a Time Series


The patterns that we may find in a time series of historical data include the average, trend, seasonal, cyclical and irregular components. The average is simply the mean of the historical data. Trend describes real growth or decline in average demand or another variable of interest, and represents a shift in the average.

The seasonal component reflects a pattern that repeats within the total time frame of interest. For example, 10 years ago in the Nilgiri Hills, tourist inflow was much higher in March through May, peaking in May; October was the low month. This seasonal pattern repeated through 2001. Between 2001 and 2003, March through May continued to repeat each year as high months, but the peaks were not as high as before, nor was the off-season inflow as low as before, much to the delight of the hotel and tourism industries in Ooty and the surrounding areas. The point is, seasonal peaks repeat within the time frame of interest, usually monthly or quarterly seasons within a year, although there can be daily seasonality, as in the stock markets in Mumbai or Chennai.

The cyclical component shows recurring values of the variable of interest above or below the average or long-run trend line over a multiyear planning horizon. The length of cycles is not constant, as it is with seasonal peaks and valleys, making economic cycles much tougher to predict. Since the patterns are not constant, multiple-variable models such as econometric and multiple regression models are better suited to predicting cyclical turning points than time series models.

The last component is the irregular component: the random variation in demand that is unexplained by the average, trend, seasonal and/or cyclical components of a time series. As in regression models, we try to make the random variation as small as possible.

Quantitative models are designed to address the various components covered above. Obviously, the trend projection technique will work best with time series that exhibit a historical trend pattern. Time series decomposition, which decomposes the trend and seasonal components of a time series, works best with time series having trend and seasonal patterns.
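To make the components concrete, the short sketch below builds an artificial quarterly series as average + trend + seasonal + irregular. The specific numbers are invented purely for illustration, not taken from the text:

```python
import random

random.seed(1)

average = 400                 # base level of demand
trend = 6                     # steady growth per quarter (a shift in the average)
seasonal = [0, -5, -30, 10]   # repeating quarterly pattern (third quarter is the low season)

series = []
for t in range(12):           # three years of quarterly data
    irregular = random.gauss(0, 8)   # random variation left unexplained
    series.append(average + trend * t + seasonal[t % 4] + irregular)

print([round(v, 1) for v in series])
```

Removing the `trend * t` term leaves a series with only average, seasonal and irregular components - the kind of data the smoothing methods below handle best.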

Where does that leave our first set of techniques, smoothing methods? Actually, smoothing methods work well in the presence of average and irregular components. We start with them next. Before we start, let us get some data. This time series consists of quarterly demand for a product. Historical data is available for 12 quarters, or three years. Table 2 provides the history.

Table 2
Time   Quarter   Actual Demand
1      1         398
2      2         395
3      3         361
4      4         400
5      1         410
6      2         402
7      3         378
8      4         440
9      1         465
10     2         460
11     3         430
12     4         473
13     1
14     2
15     3
16     4

Moving Average Method


A simple technique which works well with data that has no trend, seasonal, or cyclical components is the moving average method. Admittedly, this example data set has trend (note the overall growth from period 1 to 12) and seasonality (note that every third quarter reflects a decrease in historical demand). But let us apply the moving average technique to this data so we will have a basis for comparison with other methods later on. A three-period moving average forecast takes three periods of data and creates an average; that average is the forecast for the next period. For this data set, the first forecast we can compute is for Period 4, using actual historical data from Periods 1, 2 and 3 (since it is a three-period moving average). Then, after Period 4 occurs, we can make a forecast for Period 5, using historical data from Periods 2, 3, and 4. Note that Period 1 dropped off, hence the term moving average. This technique thus assumes that actual historical data from the far distant past is not as useful as more current historical data in making forecasts. Before showing the formulas and illustrating this example, let me introduce some symbols. In this module, I will use the symbol Ft to represent a forecast for period t. Thus, the forecast for period 4 would be shown as F4. I will use the symbol Yt to represent the actual historical value of the variable of interest, such as demand, in period t. Thus, the actual demand for period 1 would be shown as Y1.

The forecast for period four is:
F4 = (Y1 + Y2 + Y3) / 3 = (398 + 395 + 361) / 3 = 384.7
To generate the forecast for period five:
F5 = (Y2 + Y3 + Y4) / 3 = (395 + 361 + 400) / 3 = 385.3
We continue through the historical data until we get to the end of Period 12 and make our forecast for Period 13 based on actual demand from Periods 10, 11 and 12. Since Period 12 is the last period for which we have data, this ends our computations. If someone were interested in making forecasts for Periods 14, 15, and 16, as well as Period 13, the best that could be done with the moving average method would be to make the "out period" forecasts the same as the most current forecast. This is true because moving average methods cannot grow or respond to trend. This is the chief reason these types of methods are limited to short-term applications, such as forecasting demand for the next period. The forecast calculations are summarized in Table 3.

Table 3
Time   Quarter   Actual Demand (Yt)   Forecast (Ft)
1      1         398
2      2         395
3      3         361
4      4         400                  384.7
5      1         410                  385.3
6      2         402                  390.3
7      3         378                  404.0
8      4         440                  396.7
9      1         465                  406.7
10     2         460                  427.7
11     3         430                  455.0
12     4         473                  451.7
13     1                              454.3
14     2                              454.3
15     3                              454.3
16     4                              454.3
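The three-period moving average computed above is easy to express in code. A minimal sketch in plain Python (the function name is my own; nothing beyond the standard library is assumed):

```python
demand = [398, 395, 361, 400, 410, 402, 378, 440, 465, 460, 430, 473]

def moving_average_forecasts(y, k=3):
    """Forecast for period t+1 is the mean of the k most recent actuals."""
    return [sum(y[t - k:t]) / k for t in range(k, len(y) + 1)]

forecasts = moving_average_forecasts(demand)   # F4 through F13
print(round(forecasts[0], 1))    # F4
print(round(forecasts[-1], 1))   # F13; also used as the out-period forecast for 14-16
```

Note that the method produces nothing for the first k periods - there is not yet enough history to average - and that the final forecast is simply repeated for any further out periods.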

Thus finishes our first time series forecast...but wait a minute...is it any good? To answer that question, we need to measure the accuracy of the forecast. Then, for all other forecasts presented, we will include that method's measure of accuracy.

Measuring the Error: Forecast Accuracy


The criterion for selecting between forecasting models, and for keeping tabs on how well a forecast is doing once it is implemented, is the measure of the accuracy, or error, of the forecast. To obtain it, we simply compute the average error of a forecast over an appropriate period of time. Typically, the appropriate period of time is the period from which data was gathered and to which forecasts were applied.


Forecast error in time period t (Et) is the actual value of the time series minus the forecasted value in time period t:
Error in time t = Et = (Yt - Ft)
Table 4 illustrates the error computations for the three-period moving average model.

Table 4
Time   Quarter   Actual Demand (Yt)   Forecast (Ft)   Error (Et)   Error² (Et²)
1      1         398
2      2         395
3      3         361
4      4         400                  384.7            15.3          235.1
5      1         410                  385.3            24.7          608.4
6      2         402                  390.3            11.7          136.1
7      3         378                  404.0           -26.0          676.0
8      4         440                  396.7            43.3         1877.8
9      1         465                  406.7            58.3         3402.8
10     2         460                  427.7            32.3         1045.4
11     3         430                  455.0           -25.0          625.0
12     4         473                  451.7            21.3          455.1
13     1                              454.3
14     2                              454.3
15     3                              454.3
16     4                              454.3

SSE  = 9061.78
MSE  = 1006.86
RMSE = 31.73

Since we are interested in measuring the magnitude of the error to determine forecast accuracy, note that I square the error to remove the plus and minus signs. Then we simply average the squared errors. To compute the average, first sum the squared errors (SSE), then divide by the number of errors to get the mean squared error (MSE), then take the square root of the MSE to get the root mean square error (RMSE).

SSE = (235.1 + 608.4 + ... + 625.0 + 455.1) = 9061.78
MSE = 9061.78 / 9 = 1006.86
RMSE = Square Root (1006.86) = 31.73

From your statistics course(s), you will recognize the RMSE as simply the standard deviation of the forecast errors, and the MSE as their variance. Like the standard deviation, the lower the RMSE, the more accurate the forecast. Thus, the RMSE can be very helpful in choosing between forecast models. We can also use the RMSE to do some probability analysis. Since the RMSE is the standard deviation of the forecast error, we can treat the forecast as the mean of a distribution, and

apply the important empirical rule, assuming that forecast errors are normally distributed. I will bet that some of you remember this rule:

68% of the observations in a bell-shaped symmetric distribution lie within: mean +/- 1 standard deviation
95% of the observations lie within: mean +/- 2 standard deviations
99.7% (almost all) of the observations lie within: mean +/- 3 standard deviations

Since the mean is the forecast, and the standard deviation is the RMSE, we can express the empirical rule as follows:

68% of actual values are expected to fall within: Forecast +/- 1 RMSE = 454.3 +/- 31.73 = 423 to 486
95% of actual values are expected to fall within: Forecast +/- 2 RMSE = 454.3 +/- (2 * 31.73) = 391 to 518
99.7% of actual values are expected to fall within: Forecast +/- 3 RMSE = 454.3 +/- (3 * 31.73) = 359 to 549

As in studying the mean and standard deviation in descriptive statistics, this is very important and has similar applications. One thing we can do is use the 3 RMSE limits to determine if we have any outliers in our data that need to be replaced. Any actual value that is more than 3 RMSE's from its forecast (an absolute error greater than 3 * 31.73, or about 95) is an outlier. That value should be removed since it inflates the RMSE. The simplest way to remove an outlier in a time series is to replace it by the average of the value just before the outlier and the value just after it.
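The error measures and the 2 RMSE interval can be sketched in Python as follows (reusing the three-period moving average forecasts from above):

```python
from math import sqrt

demand = [398, 395, 361, 400, 410, 402, 378, 440, 465, 460, 430, 473]
k = 3

# In-sample forecasts F4..F12 paired with the actuals they predict.
forecasts = [sum(demand[t - k:t]) / k for t in range(k, len(demand))]
errors = [y - f for y, f in zip(demand[k:], forecasts)]

sse = sum(e ** 2 for e in errors)   # sum of squared errors
mse = sse / len(errors)             # mean squared error
rmse = sqrt(mse)                    # root mean square error
print(round(sse, 2), round(mse, 2), round(rmse, 2))  # 9061.78 1006.86 31.73

# Empirical rule: ~95% of actuals are expected within forecast +/- 2 RMSE.
f13 = sum(demand[-k:]) / k
low, high = f13 - 2 * rmse, f13 + 2 * rmse
print(round(low), round(high))      # 391 518
```

Stocking the upper limit (here, 518 units) is exactly the 97.5% customer service level idea discussed next.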

Another very handy use for the RMSE is in setting safety stocks in inventory situations. Let's draw out the 2 RMSE region of the empirical rule for this forecast:

|  2.5%  |                       95%                       |  2.5%  |
359      391                    454.3                    518      549

Since the middle 95% of the observations fall between 391 and 518, 5% of the observations fall below 391 or above 518. Assuming the distribution is bell shaped, 2.5% of the observations fall below 391 and 2.5% fall above 518. Another way of stating this is that 97.5% of the observations fall below 518 (measuring down to negative infinity, although the actual data should stop around 359). Bottom line: if the firm stocks an inventory of 518 (2 RMSE's above the forecast), they will cover 97.5% of the actual demands that theoretically could occur. That is, they are operating at a 97.5% customer service level; in only 2.5% of the demand cases should they expect a stockout. That's really slick, isn't it? Following the same methodology, if the firm stocks 549 items, or 3 RMSE's above the forecast, they are virtually assured they will not have a stockout unless something really unusual occurs (we call that an outlier in statistics). Finally, if the firm stocks 486 items (1 RMSE above the forecast), they will have a stockout in 16% of the cases, or cover 84% of

the demands that should occur (100% - 16%). In this case, they are operating at an 84% customer service level.

|  16%  |                      68%                      |  16%  |
359     423                   454.3                   486     549

We could compute other probabilities associated with other areas under the curve by finding the cumulative probability for z scores, z = (observation - forecast) / RMSE (do you remember that from the stat course(s)?). For our purposes here, it is only important to illustrate the application from the statistics course.

FORECASTING WITH MOVING AVERAGES
THE MOVING AVERAGE USES 3 TIME PERIODS

TIME PERIOD  TIME SERIES VALUE  FORECAST  FORECAST ERROR
     1             398
     2             395
     3             361
     4             400            384.67       15.33
     5             410            385.33       24.67
     6             402            390.33       11.67
     7             378            404.00      -26.00
     8             440            396.67       43.33
     9             465            406.67       58.33
    10             460            427.67       32.33
    11             430            455.00      -25.00
    12             473            451.67       21.33

THE MEAN SQUARE ERROR        1,006.86
THE FORECAST FOR PERIOD 13     454.33

Before we do one more moving average example, take a look at the forecast error column. Note that most of the errors are positive. Since error equals the actual time series value minus the forecasted value, positive errors mean that actual demand is generally greater than forecasted demand - we are under-forecasting. In this case, we are missing a growth trend in the data. As pointed out earlier, moving average techniques do not work well with time series data that exhibit trends.

Five Periods Moving Average Forecast


FORECASTING WITH MOVING AVERAGES
THE MOVING AVERAGE USES 5 TIME PERIODS

TIME PERIOD  TIME SERIES VALUE  FORECAST  FORECAST ERROR
     1             398
     2             395
     3             361
     4             400
     5             410
     6             402            392.80        9.20
     7             378            393.60      -15.60
     8             440            390.20       49.80
     9             465            406.00       59.00
    10             460            419.00       41.00
    11             430            429.00        1.00
    12             473            434.60       38.40

THE MEAN SQUARE ERROR        1,349.37
THE FORECAST FOR PERIOD 13     453.60

The RMSE for the five-period moving average forecast is 36.7, about 16% worse than the error of the three-period model. The reason is the growth trend in this data: as we increase the number of periods in the moving average, the average begins to lag the growth trend by greater amounts. The same would be true if the historical data exhibited a downward trend - the moving average would lag the trend and provide forecasts above the actual values.

Pause and Reflect


The moving average forecasting method is simple to use and understand, and it works well with time series that do not have trend, seasonal or cyclical components. The technique requires little data: only enough past observations to match the number of time periods in the moving average. Forecasts are usually limited to one period ahead. The technique does not work well with data that are not stationary, that is, data that exhibit trend, seasonality and/or cyclic patterns.

One-Period Moving Average Forecast or the "Naive Forecast"


A naive forecast is one where the number of periods in the moving average is set equal to one. That is, the next forecast equals the last actual demand. Don't laugh! This technique might be useful in the case of a rapid growth trend; the forecast would lag the actual by only one quarter or one month, whatever the time period of interest. Of course, it would be much better to use a model that can make a trend projection if the trend represents a real move from a prior stationary pattern - we will get to that a bit later.

FORECASTING WITH MOVING AVERAGES
THE MOVING AVERAGE USES 1 TIME PERIOD

TIME PERIOD  TIME SERIES VALUE  FORECAST  FORECAST ERROR
     1             398
     2             395            398.00       -3.00
     3             361            395.00      -34.00
     4             400            361.00       39.00
     5             410            400.00       10.00
     6             402            410.00       -8.00
     7             378            402.00      -24.00
     8             440            378.00       62.00
     9             465            440.00       25.00
    10             460            465.00       -5.00
    11             430            460.00      -30.00
    12             473            430.00       43.00

THE MEAN SQUARE ERROR          969.91
THE FORECAST FOR PERIOD 13     473.00

This printout reflects a slightly lower RMSE than the three-period moving average. That concludes our introduction to the class of smoothing methods called moving averages. The last smoothing method we will examine is exponential smoothing, which is a form of weighted moving average.
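Before moving on, the naive model's bookkeeping is short enough to sketch in full:

```python
# Naive forecast: next period's forecast equals this period's actual demand.
demand = [398, 395, 361, 400, 410, 402, 378, 440, 465, 460, 430, 473]

naive_forecasts = demand[:-1]                        # F2..F12
errors = [y - f for y, f in zip(demand[1:], naive_forecasts)]
mse = sum(e ** 2 for e in errors) / len(errors)

print(round(mse, 2))   # 969.91
print(demand[-1])      # forecast for Period 13: 473
```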

Exponential Smoothing
This smoothing model became very popular with the production and inventory control community in the early days of computer applications because it did not need much memory and it allowed the manager some judgment input. That is, exponential smoothing includes a smoothing parameter that can be set to weight either past forecasts (placing emphasis on the average component) or the last observation (placing emphasis on a rapid growth or decline). The exponential smoothing model is:

Ft+1 = a Yt + (1 - a) Ft

where
Ft+1 = forecast of the time series for period t + 1
Yt = actual value of the time series in period t
Ft = forecast of the time series for period t
a = smoothing constant or parameter (0 < a < 1)

The smoothing constant or parameter, a, is usually written as the Greek letter alpha; I am limited to plain characters here. In any case, if the smoothing constant is set at 1, the formula becomes the naive model we already studied:

Ft+1 = Yt

If the smoothing constant is set at 0, the forecast never updates: Ft+1 = Ft. For values of a between 0 and 1, the model is a weighted average of all past observations, giving the most weight to the most recent observation, with weights diminishing the farther back we go in the time series.

Setting a can be done by trial and error: try, say, 0.1, 0.5 and 0.9, record the RMSE for each run, then choose the value of a that gives forecasts with the lowest RMSE. Some guidelines: set a relatively high when there is a trend and you want the model to be responsive; set a relatively low when there is just the irregular component, so the model will not respond to random movements. Let's do some exponential smoothing forecasts with a set at 0.6, relatively high. To get the model started, we make the forecast for Period 2 equal to the actual demand for Period 1:

F2 = Y1 = 398

The first true exponential smoothing forecast is made for Period 3, using information from Period 2. Thus t = 2, t+1 = 3, and Ft+1 = F2+1 = F3. For this forecast, we need the actual demand for Period 2 (Y2 = 395) and the forecast for Period 2 (F2 = 398). The result is:

F3 = a Y2 + (1 - a) F2 = 0.6 (395) + (1 - 0.6) (398) = 396.2

The next forecast is for Period 4:

F4 = a Y3 + (1 - a) F3 = 0.6 (361) + (1 - 0.6) (396.2) = 375.08

This continues through the data until we reach the end of Period 12 and are ready to make our last forecast, for Period 13. Note that all we have to maintain is the last forecast, the last actual demand, and the value of the smoothing parameter - that is why the technique was so popular: it did not take much data. However, I do not subscribe to "throwing away" data files today - they should be archived for audit trail purposes. Anyway, the forecast for Period 13:

F13 = a Y12 + (1 - a) F12 = 0.6 (473) + (1 - 0.6) (439.86) = 459.74

FORECASTING WITH EXPONENTIAL SMOOTHING
THE SMOOTHING CONSTANT IS 0.6

TIME PERIOD  TIME SERIES VALUE  FORECAST  FORECAST ERROR
     1             398
     2             395            398.00       -3.00
     3             361            396.20      -35.20
     4             400            375.08       24.92
     5             410            390.03       19.97
     6             402            402.01       -0.01
     7             378            402.01      -24.01
     8             440            387.60       52.40
     9             465            419.04       45.96
    10             460            446.62       13.38
    11             430            454.65      -24.65
    12             473            439.86       33.14

THE MEAN SQUARE ERROR          871.52
THE FORECAST FOR PERIOD 13     459.74

This model provides a single forecast since, like the moving average techniques, it cannot address the trend component. The root mean square error is 29.52 (the square root of the mean square error), slightly better than the best results of the moving average and naive techniques. However, since the time series shows trend, we should be able to do much better with the trend projection model demonstrated next.
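The recursion above takes only a few lines to implement; here is a minimal sketch (the function name is illustrative):

```python
# Exponential smoothing with smoothing constant a = 0.6.
demand = [398, 395, 361, 400, 410, 402, 378, 440, 465, 460, 430, 473]
a = 0.6

def exp_smooth_forecasts(series, a):
    """Return forecasts F2..F13; F2 is seeded with the first actual value."""
    forecasts = [series[0]]                        # F2 = Y1
    for y in series[1:]:
        forecasts.append(a * y + (1 - a) * forecasts[-1])
    return forecasts

forecasts = exp_smooth_forecasts(demand, a)
print(round(forecasts[1], 1))    # F3  = 396.2
print(round(forecasts[-1], 2))   # F13 = 459.74
```

Note that only the last forecast and the last actual are needed at each step, which is exactly why the method was attractive when memory was scarce.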

Pause and Reflect


The exponential smoothing technique is a simple technique that requires only five to ten historical observations to set the value of the smoothing parameter, and then only the most recent actual observation and forecast value. Forecasts are usually limited to one period ahead. The technique works best for time series that are stationary, that is, that do not exhibit trend, seasonality and/or cyclic components. While historical data is generally used to "fit the model" - that is, to set the value of a - analysts may adjust that value in light of information reflecting changes to time series patterns.

Trend Projections
When a time series reflects a shift from a stationary pattern to real growth or decline in the variable of interest (e.g., product demand or student enrollment at the university), that time series is demonstrating the trend component. The trend projection method of time series forecasting is based on the simple linear regression model. However, we generally do not require the rigid assumptions of linear regression (normal distribution of the error component, constant variance of the error component, and so forth), only that the past linear trend pattern will continue into the future. Note that if the trend pattern reflects a curve, we would have to rely on the more sophisticated features of multiple regression. The trend projection model is:

Tt = b0 + b1 t

where
Tt = trend value for the variable of interest in Period t
b0 = intercept of the trend projection line
b1 = slope, or rate of change, for the trend projection line

FORECASTING WITH LINEAR TREND
THE LINEAR TREND EQUATION: T = 367.121 + 7.776 t
where T = trend value of the time series in period t

TIME PERIOD  TIME SERIES VALUE  FORECAST  FORECAST ERROR
     1             398            374.90       23.10
     2             395            382.67       12.33
     3             361            390.45      -29.45
     4             400            398.23        1.78
     5             410            406.00        4.00
     6             402            413.78      -11.78
     7             378            421.55      -43.55
     8             440            429.33       10.67
     9             465            437.11       27.90
    10             460            444.88       15.12
    11             430            452.66      -22.66
    12             473            460.43       12.57

THE MEAN SQUARE ERROR          449.96
THE FORECAST FOR PERIOD 13     468.21
THE FORECAST FOR PERIOD 14     475.99
THE FORECAST FOR PERIOD 15     483.76
THE FORECAST FOR PERIOD 16     491.54


Now we are getting somewhere with a forecast! Note the mean square error is down to 449.96, giving a root mean square error of 21.2. Compared to the three period moving average RMSE of 31.7, we have a 33% improvement in the accuracy of the forecast over the relevant period. Now, if this were products such as automobiles, to achieve a customer service level of 97.5%, we would create a safety stock of 2 times the RMSE above the forecast. So, for Period 13, the forecast plus 2 times the RMSE is 468.21 + (2 * 21.2) or 511 cars. With the three period moving average method, the same customer service level inventory position would be: 454.3 + (2 * 31.7) or 518. The safety stocks are 2 times 21 (42 for the trend projection) compared to 2 times 31.7 (63 for the three period moving average). This is a difference of 21 cars which could represent significant inventory carrying cost that could be avoided with the better forecasting method.
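The coefficients of the trend line come from ordinary least squares on the pairs (t, Yt); a minimal sketch:

```python
# Fit Tt = b0 + b1*t by ordinary least squares on the 12 observations.
demand = [398, 395, 361, 400, 410, 402, 378, 440, 465, 460, 430, 473]
n = len(demand)
periods = list(range(1, n + 1))

t_bar = sum(periods) / n
y_bar = sum(demand) / n
b1 = sum((t - t_bar) * (y - y_bar) for t, y in zip(periods, demand)) \
     / sum((t - t_bar) ** 2 for t in periods)
b0 = y_bar - b1 * t_bar

print(round(b0, 3), round(b1, 3))   # 367.121 7.776
print(round(b0 + b1 * 13, 2))       # forecast for Period 13: 468.21
```

Unlike the smoothing methods, this model can make the multi-period forecasts shown in the printout simply by plugging in t = 14, 15, 16.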

Pause and Reflect


The trend projection model is appropriate when the time series exhibits a linear trend component that is assumed to continue into the future. While rules of thumb suggest 20 observations to compute and test parameters of linear regression models, the simple trend projection model can be created with a minimum of 10 observations. The trend projection model is generally used to make multiple period forecasts for the short range, although some firms use it for the intermediate range as well.

Trend and Seasonal Components


The last time series forecasting method that we examine is very powerful in that it can be used to make forecasts with time series that exhibit trend and seasonal components. The method is most often referred to as Time Series Decomposition, since the technique involves breaking down and analyzing a time series to identify the seasonal component in what are called seasonal indexes. The seasonal indexes are used to "deseasonalize" the time series. The deseasonalized time series is then used to identify the trend projection line used to make a deseasonalized projection. Lastly, seasonal indexes are used to seasonalize the trend


projection. Let's illustrate how this works. As usual, we will use The Management Scientist to do our work after the illustration.

The Seasonal Component


The seasonal component may be found by using the centered moving average approach as presented in the text, or by using the season-average-to-grand-average approach described here. The latter is a simpler technique to understand, and it comes very close to the centered moving average approach for most time series. The first step is to gather observations from the same quarter and find their average. Table 5 repeats the demand data so we can easily find it:

Table 5
Time  Quarter  Actual Demand
  1      1          398
  2      2          395
  3      3          361
  4      4          400
  5      1          410
  6      2          402
  7      3          378
  8      4          440
  9      1          465
 10      2          460
 11      3          430
 12      4          473
 13      1
 14      2
 15      3
 16      4

To compute the average demand for Quarter 1, we gather all observations for Quarter 1 and find their average, then repeat for Quarters 2, 3 and 4:

Quarter 1 Average = (398 + 410 + 465) / 3 = 424.3
Quarter 2 Average = (395 + 402 + 460) / 3 = 419.0
Quarter 3 Average = (361 + 378 + 430) / 3 = 389.7
Quarter 4 Average = (400 + 440 + 473) / 3 = 437.7

The next step is to find the seasonal index for each quarter. This is done by dividing each quarterly average by the grand average of all observations.

Grand Average = (398 + 395 + 361 + 400 + 410 + 402 + 378 + 440 + 465 + 460 + 430 + 473) / 12 = 417.7

Seasonal Index, Quarter 1 = 424.3 / 417.7 = 1.016
Seasonal Index, Quarter 2 = 419.0 / 417.7 = 1.003
Seasonal Index, Quarter 3 = 389.7 / 417.7 = 0.933
Seasonal Index, Quarter 4 = 437.7 / 417.7 = 1.048
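The season-average-to-grand-average computation can be sketched in a few lines:

```python
# Seasonal indexes: each quarter's average divided by the grand average.
demand = [398, 395, 361, 400, 410, 402, 378, 440, 465, 460, 430, 473]
n_seasons = 4  # quarterly data; would be 12 for monthly data

grand_avg = sum(demand) / len(demand)
indexes = []
for q in range(n_seasons):
    quarter_obs = demand[q::n_seasons]   # all observations for quarter q+1
    indexes.append((sum(quarter_obs) / len(quarter_obs)) / grand_avg)

print([round(s, 3) for s in indexes])    # [1.016, 1.003, 0.933, 1.048]
```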

These indexes are interpreted as follows. The overall demand for Quarter 4 is 4.8 percent above the average demand, making Quarter 4 a "peak" quarter. The overall demand for Quarter 3 is 6.7 percent below the average demand, making Quarter 3 an "off peak" quarter. This confirms our suspicion that demand is seasonal, and we have quantified the nature of the seasonality for planning purposes. We will let the computer program do the next steps, but I will illustrate with a couple of examples. The next task is to "deseasonalize" the data. We do this by dividing each actual observation by the appropriate seasonal index. The first observation, where actual demand was 398, is a first quarter observation, so its deseasonalized value is:

Deseasonalized Y1 = 398 / 1.016 = 391.7

Actual demand would have been 391.7 if there were no seasonal effects. Let's do four more:

Deseasonalized Y2 = 395 / 1.003 = 393.8
Deseasonalized Y3 = 361 / 0.933 = 386.9
Deseasonalized Y4 = 400 / 1.048 = 381.7
Deseasonalized Y5 = 410 / 1.016 = 403.6

You have seen "deseasonalized" numbers in articles in the financial and popular business press; this is how they are computed. The next step is to find the trend line projection based on the deseasonalized observations. This trend line is a bit more accurate than the trend line projection based on the actual observations, since that line contains seasonal variation.

Tt = 363 + 8.4 t

This trend line is close to the line we computed in the Trend Projections section, when the line was fit to the actual, rather than the deseasonalized, data: Tt = 367 + 7.8 t. Once we have the trend line, making a forecast is easy. Say we want to make a forecast for time period 2:

F2 = T2 * S2 = [363 + 8.4 (2)] * 1.009 = 379.8 * 1.009 = 383.2

Note that the number of seasons is 4 for quarterly data, 12 for monthly data, and so forth. Here is the printout. (The printout's seasonal indexes come from the centered moving average method, so they differ slightly from the season-average indexes computed above.)

FORECASTING WITH TREND AND SEASONAL COMPONENTS

SEASON  SEASONAL INDEX
   1        1.046
   2        1.009
   3        0.920
   4        1.025

TIME PERIOD  TIME SERIES VALUE  FORECAST  FORECAST ERROR
     1             398            388.49        9.51
     2             395            383.25       11.75
     3             361            357.42        3.58
     4             400            406.60       -6.60
     5             410            423.81      -13.81
     6             402            417.32      -15.32
     7             378            388.49      -10.49
     8             440            441.20       -1.20
     9             465            459.12        5.88
    10             460            451.38        8.62
    11             430            419.57       10.43
    12             473            475.80       -2.80

THE MEAN SQUARE ERROR           87.25
THE FORECAST FOR PERIOD 13     494.43
THE FORECAST FOR PERIOD 14     485.44
THE FORECAST FOR PERIOD 15     450.64
THE FORECAST FOR PERIOD 16     510.40

The mean square error of 87.25 gives a root mean square error of 9.3, a spectacular improvement over the other techniques. A sketch of the actual and forecast data shows how well the trend and seasonal model responds to the trend and to the seasonal turning points. Note how the four-periods-out forecast continues the response to both components.
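Putting the steps together - deseasonalize, fit the trend, reseasonalize the projection - can be sketched as follows. To stay close to the printout, this sketch takes the printout's centered-moving-average indexes as given rather than recomputing them, so the results match the printed figures only approximately:

```python
# Time series decomposition: deseasonalize, fit a trend, reseasonalize.
demand = [398, 395, 361, 400, 410, 402, 378, 440, 465, 460, 430, 473]
indexes = [1.046, 1.009, 0.920, 1.025]  # seasonal indexes from the printout

# Step 1: deseasonalize each observation by its quarter's index.
deseason = [y / indexes[t % 4] for t, y in enumerate(demand)]

# Step 2: fit Tt = b0 + b1*t to the deseasonalized series by least squares.
n = len(deseason)
t_bar = (n + 1) / 2
y_bar = sum(deseason) / n
b1 = sum((t - t_bar) * (y - y_bar) for t, y in enumerate(deseason, start=1)) \
     / sum((t - t_bar) ** 2 for t in range(1, n + 1))
b0 = y_bar - b1 * t_bar
print(round(b0), round(b1, 1))  # close to the printed line Tt = 363 + 8.4 t

# Step 3: reseasonalize the trend projection to forecast Periods 13-16.
forecasts = [(b0 + b1 * t) * indexes[(t - 1) % 4] for t in range(13, 17)]
print([round(f, 1) for f in forecasts])
```

Small differences from the printout (a unit or so) come from the rounding of the published indexes.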

Pause and Reflect


The trend and seasonal components method is appropriate when the time series exhibits a linear trend and seasonality. This model, compared to the others, does require significantly more historical data. You should have enough data to see at least four or five repetitions of the seasonal peaks and off-peaks (with quarterly data, there should be 16 to 20 observations; with monthly data, 48 to 60 observations). Well, that is it for the introduction to time series forecasting. Texts devoted entirely to this subject go into much more detail, of course. For example, there are exponential smoothing models that incorporate trend, and time series decomposition models that incorporate the cyclic component. Two parting thoughts are as follows. First, in each of the "Pause and Reflect" paragraphs, suggestions are given for the number of observations in the historical database. There is always some judgment required here. While we need a lot of data to fit the trend and the trend-and-seasonal models, "a lot of data" may mean going far into the past. When we go far into the past, the patterns in the data may be different, and time series forecasting models assume that past patterns (not the past values themselves, but patterns such as slope and seasonal indexes) will continue into the future. When working on forecasts for airport traffic, we would love to go back 10 years, but tourist and permanent resident business travel is different today than 10 years ago, so we must balance the need for a lot of data against the assumptions of forecasting. The second thought is to always measure the accuracy of your models. We ended with a model whose root mean square error was a 75% improvement over the five-period moving average.

MULTIPLE CHOICES
1. What is most likely to be the major difference between forecasting sales of a private business versus forecasting the demand of a public good supplied by a governmental agency?
A) Amount of data available.
B) Underlying economic relationships.
C) Lack of market-determined price data for public goods.
D) Lack of historical data.
E) Lack of quantitative ability by government forecasters.

2. Which subjective forecasting method depends upon the anonymous opinion of a panel of individuals to generate sales forecasts?
A) Sales Force Composites.
B) Customer Surveys.
C) Jury of Executive Opinion.
D) Delphi Method.
E) None of the above.

3. Which subjective sales forecasting method may have the most information about the spending plans of customers for a specific firm?
A) Sales Force Composites.
B) Index of Consumer Sentiment.
C) Jury of Executive Opinion.
D) Delphi Method.
E) None of the above.

4. Which subjective sales forecasting technique may have problems with individuals who have a dominant personality?
A) Sales Force Composites.
B) Customer Surveys.
C) Jury of Executive Opinion.
D) Delphi Method.
E) None of the above.

5. Which of the following is not considered a subjective forecasting method?
A) Sales force composites.
B) Naive methods.
C) Delphi methods.
D) Juries of executive opinion.
E) Consumer surveys.

6. Which of the following is not an argument for the use of subjective forecasting models?
A) They are easy for management to understand.
B) They are quite useful for long-range forecasts.
C) They provide valuable information that may not be present in quantitative models.
D) They are useful when data for using quantitative models is extremely limited.
E) None of the above.

7. Forecasts based solely on the most recent observation(s) of the variable of interest
A) are called naive forecasts.
B) are the simplest of all quantitative forecasting methods.
C) lead to the loss of one data point in the forecast series relative to the original series.
D) are consistent with the random walk hypothesis in finance, which states that the optimal forecast of today's stock rate of return is yesterday's actual rate of return.
E) All of the above.

8. Suppose you are attempting to forecast a variable that is independent over time, such as stock rates of return. A potential candidate forecasting model is
A) The Jury of Executive Opinion.
B) Last period's actual rate of return.
C) The Delphi Method.
D) Last period's actual rate of return plus some proportion of the most recently observed rate of change in the series.
E) None of the above.

9. Measures of forecast accuracy based upon a quadratic error cost function, notably root mean square error (RMSE), tend to treat
A) levels of large and small forecast errors equally.
B) large and small forecast errors equally on the margin.
C) large and small forecast errors unequally on the margin.
D) every forecast error with the same penalty.
E) None of the above.

10. Which of the following measures is a poor indicator of forecast accuracy, but useful in determining the direction of bias in a forecasting model?
A) Mean Absolute Percentage Error.
B) Mean Percentage Error.
C) Mean Squared Error.
D) Root Mean Squared Error.
E) None of the above.

11. Which measure of forecast accuracy is analogous to standard deviation?
A) Mean Absolute Error.
B) Mean Absolute Percentage Error.
C) Mean Squared Error.
D) Root Mean Squared Error.

12. Which of the following measures of forecast performance are used to compare models for a given data series?
A) Mean Error.
B) Mean Absolute Error.
C) Mean Squared Error.
D) Root Mean Squared Error.
E) All of the above.

13. Which of the following is not an appropriate use of forecast errors to assess the accuracy of a particular forecasting model?
A) Examine a time series plot of the errors and look for a random pattern.
B) Examine the average absolute value of the errors.
C) Examine the average squared value of the errors.
D) Examine the average level of the errors.
E) None of the above.

14. When using quarterly data to forecast domestic car sales, how can the simple naive forecasting model be amended to model seasonal behavior of new car sales, i.e., patterns of sales that arise at the same time every year?
A) Forecast next period's sales based on this period's sales.
B) Forecast next period's sales based on last period's sales.
C) Forecast next period's sales based on the average sales over the current and last three quarters.
D) Forecast next period's sales based on sales four quarters ago.
E) None of the above.

15. Of the following model selection criteria, which is often the most important in determining the appropriate forecast method?
A) Technical background of the forecast user.
B) Patterns the data have exhibited in the past.
C) How much money is in the forecast budget?
D) What is the forecast horizon?
E) When is the forecast needed?

16. Which of the following is incorrect?
A) The forecaster should be able to defend why a particular model or procedure has been chosen.
B) Forecast errors should be discussed in an objective manner to maximize management's confidence in the forecast process.
C) Forecast errors should not be discussed since most people know that forecasting is an inexact science.
D) You should tailor your presentation to the sophistication of the audience to maximize credibility in the forecast process.
E) None of the above.

17. Which time-series component is said to fluctuate around the long-term trend and is fairly irregular in appearance?
A) Trend.
B) Cyclical.
C) Seasonal.
D) Irregular.
E) None of the above.

18. Forecasting January sales based on the previous month's level of sales is likely to lead to error if the data are _____.
A) Stationary.
B) Non-cyclical.
C) Seasonal.
D) Irregular.
E) None of the above.

19. The difference between seasonal and cyclical components is:
A) Duration.
B) Source.
C) Predictability.
D) Frequency.
E) All of the above.

20. For which data frequency is seasonality not a problem?
A) Daily.
B) Weekly.
C) Monthly.
D) Quarterly.
E) Annual.

21. One can realistically not expect to find a model that fits any data set perfectly, due to the ____ component of a time series.
A) Trend.
B) Seasonal.
C) Cyclical.
D) Irregular.
E) None of the above.

22. When a time series contains no trend, it is said to be
A) nonstationary.
B) seasonal.
C) nonseasonal.
D) stationary.
E) filtered.

23. An unbiased model
A) is one that does not consistently over-estimate or under-estimate the true value of a parameter.
B) is one that consistently produces estimates with the smallest RMSE.
C) is one which contains no independent variable; it depends solely on time-series pattern recognition.
D) is one made up by a team of forecasters.
