


IMPROVING FORECAST ACCURACY FOR CLIENT COURIER – FINAL REPORT

CLIENT: CLIENT COURIER LTD.

CAPSTONE CONSULTING GROUP

PREPARED FOR: IGNACIO CASTILLO

APRIL 15, 2003

AUTHORS

This report was prepared by:

Adam Crowe

acrowe@ualberta.ca

Andrea Denney

adenney@ualberta.ca

David Fath

dfath@ualberta.ca

Elaine Siu

eysiu@ualberta.ca

Linda Tarshahani

lindat@ualberta.ca

Table of Contents

Executive Summary
Introduction
Background
Problem Definition
Objectives
Scope and Deliverables
Assumptions
Methodology
Phase One: Data Analysis
Current Methodology
Data Patterns and Trends
Phase Two: Research & Modeling
Research
Modeling
Our Modifications to the Triple Exponential Smoothing Model
Results
Phase Three: Build Forecast Tool
Tool Requirements
How it works
Managerial Discussion
Cost analysis
Forecasting Alternatives
Future considerations
Recommendations
Integration with other tools
Conclusion
Exhibits
Exhibit 1 – Forecast activity areas
Exhibit 2 – Example Impacts of Current Forecasting Methodology
Exhibit 3 – MAPE comparison for tested models
Exhibit 4 – TES model in-depth – Initialization
Exhibit 5 – TES model in-depth – Learning Phase
Exhibit 6 – TES model in-depth – Forecasting
Exhibit 7 – Solution and Modifications
Exhibit 8 – Monthly forecasting errors (1 month out)
Exhibit 9 – Daily forecasting errors (4 weeks/20 working days out)
Exhibit 10 – Capstone vs. Forecast Pro Error rates
Exhibit 11 – Tool Screenshot - The Welcome Splash Page
Exhibit 12 – Tool Screenshot - The Forecast Menu
Exhibit 13 – Tool Screenshot - Reporting Structure
Exhibit 14 – Revisiting our Objectives for Client
Appendices
Appendix 1: Detailed description of tested forecasting models
References

Executive Summary

Client Courier Ltd. has hired Capstone Consulting Group to support its efforts in improving forecast accuracy and efficiency in its Western Canadian distribution centres. Many of the Client’s employees have lost confidence in the current corporate forecasts and are relying on their own personal judgment to make important decisions. As a result, Client feels that inaccurate forecasts are contributing to poor decisions on both strategic and tactical levels.

Client makes many forecasts that represent different regions, activities and employee groups, and produces monthly, weekly and daily forecasts for each of them. Capstone has agreed to provide a comprehensive tool that will enable managers and front-line staff alike to produce forecasts that can easily be used to make important decisions.

Capstone considered several new forecasting methods that could increase forecasting accuracy, including: ARIMA, Time Series Decomposition, the Theta model, and Triple Exponential Smoothing (TES). It was decided that TES was the best method to use for Client’s monthly and daily forecasts as it produced the best forecasts for 2002 when data from this year was held back.

To further improve Client’s forecasts, Capstone has customized the TES model to incorporate several unique features of the company’s historical data. The accuracy of this new model brought the average forecasting error down 56% from Client’s previous forecast error. This difference was measured using mean absolute percent error, and is an average of all forecasts.

Once a suitable methodology was agreed upon, Capstone began creating a user-friendly VBA tool that could be used across the organization. The tool is capable of incorporating new data and modifying forecasts to reduce errors. Capstone believes this tool will be highly useful for Client’s Western Canadian region and hopes that its success will contribute to Client’s competitive advantage in the courier industry.

Introduction

Background

Client Courier Ltd. (Client) is Canada’s leader in overnight package delivery, and is a major provider of integrated distribution solutions in North America. Client is interested in formalizing and automating the volume forecasting process for its distribution centres in Western Canada. According to the Company, improving forecast accuracy and efficiency is important because it will allow a more organized approach to staff scheduling and resource allocation. As a result of this new approach, Client feels that it can enhance its competitive advantage in the Canadian courier industry by using its resources more effectively. Client Courier Ltd. has hired Capstone Consulting Group (Capstone) to analyze its historical volume, research mathematical methods for forecasting its demand, and recommend a forecast method that improves forecast accuracy. Capstone will also build a flexible and dynamic software application that will allow users to forecast volume demand using a customized forecasting approach.

Problem Definition

The major problem that Client faces is that inaccurate forecasts are leading to poor strategic and tactical decisions within the company. Front line employees and regional managers have lost confidence in the corporate forecasts and are relying on their own personal judgment to make important decisions. Client recognizes that more sophisticated forecasts are needed to create buy-in among its employees and allow them to make informed decisions.

Objectives

Capstone Consulting outlined several objectives that would address Client’s needs:

• Formalize a forecasting method by providing a mathematical basis from which forecasts can be produced. We planned to exploit the relationships and patterns that exist in historical data and use them to predict future demand.

• Improve forecasting accuracy and efficiency for forecasts relating to all regional distribution centres, business types and carriers as listed in Exhibit 1.

• Provide a user interface that will allow a wide variety of people within the company to access forecast information to make decisions. This would enable:

1. Senior managers to use long-range forecasts to make strategic decisions.

2. Operational managers to use month-to-month forecasts for operational planning.

3. Tactical decision making based on changes to day-to-day forecasts.

Scope and Deliverables

Capstone Consulting and Client Courier agreed the scope of this project would be as follows:

• Investigate:

o Underlying patterns and trends in daily, weekly and monthly data

o Forecast methods suitable for Client’s business

• Produce daily, weekly and monthly forecasts using the most suitable method for:

o Seven regions within Western Canada

o Four business activities (Exhibit 1)

o Two types of carriers (Exhibit 1)

• Develop:

o A user-friendly, stand-alone forecasting tool

o A descriptive user manual for the tool

• Evaluate:

o The performance of the forecasts for all regions, all types

o The strengths and limitations of the model and tool

Our scope was limited to generating forecasts and did not include:

• Integration of the delivered forecasting tool with other decision support tools

• Providing recommendations based on our findings about operational activities, such as staff scheduling

• Implementation of our tool into Client’s corporate structure, which includes training of front-line staff

Assumptions

Capstone Consulting made the following assumptions in developing the forecast models and tool:

• Each business type in each region has its own percentage allocation of work between PCL and Agents but these percentages remain relatively stable; therefore, forecasting for total volume and then breaking the forecast down by carrier type is acceptable.

• The model must use historical data to generate forecasts.

• Forecasts produced by Forecast Pro provide an adequate benchmark by which forecast results can be compared.

• Mean Absolute Percent Error is the most suitable forecast error to measure since it normalizes the errors across demand volumes of different sizes and is the measurement currently used by Client.

• All the data provided by Client Courier Ltd is accurate, reliable and complete.

Methodology

The methodology undertaken by Capstone Consulting can be broken into three distinct phases. The initial phase was to analyze historical data to find recurring patterns and trends. The second phase was to research forecast models, analyze their respective properties and error rates and then build a forecast model that produced the smallest forecast errors. The third phase was to develop a forecasting tool based on the forecast model.

Phase One: Data Analysis

Current Methodology

Client currently uses a last-point method of forecasting, which is then adjusted based on managerial judgment. However, Client has noticed the following symptoms from using this approach (see Exhibit 2 for a visual explanation):

• Consistently lower forecasts – Management’s adjustments to the forecasts are typically conservative

• High degree of variation in forecast errors – Forecasts rely on a single historical data point; therefore, Client is assuming that each data point will represent future demand

• Unreasonable forecasts – Special causes of variation that affected last year’s demand such as severe snowstorms, terrorist attacks, and other unpredictable events, are not adjusted for in predicting future demand.

Data Patterns and Trends

Capstone Consulting utilized forecasting tools, data set graphing, and percent errors to test characteristics in Client’s historic data. The following patterns were found:

1. Seasonality - Historical volume demand for Client Courier contained daily, weekly and monthly seasonality for most regions and activities. Since many of Client’s clients are retailers, monthly seasonality is closely related to the annual consumer purchasing cycles that exist within the industry. Weekly and daily seasonality can be attributed to a variety of factors that are specific to each region and type of activity.

2. Insignificant trend - Capstone also observed that there has been no significant increase or decrease in annual demand volume for Client over the last four years for most regions. Some regions did exhibit annual trends, but these fluctuations were small and inconsistent.

3. Weekly groupings – We also discovered that Client groups months by number of weeks (4 or 5). This means that the last two days of a calendar month can be counted as part of the first week of the following month. When compared to the methodology of grouping dates by calendar month (e.g., Feb. 1-Feb. 28), we found that Client’s weekly method produced lower errors and showed stronger seasonality, so this method was chosen for future modeling.

4. Working days in a month – We found that the number of working days in a particular month was highly correlated with that month’s volume. For example, if January 2001 had 19 working days and January 2002 had 20, the two monthly volumes would be expected to differ by roughly one working day’s volume.

Phase Two: Research & Modeling

Research

Capstone Consulting’s research centered around four forecasting models. Three of these, ARIMA, Time Series decomposition and Triple Exponential Smoothing (TES) were chosen due to their proven capabilities and widespread acceptance as forecasting tools. The fourth, the Theta model, is relatively new and had an intriguing premise. Detailed descriptions of these four models are listed in Appendix 1.

Modeling

The modeling component combined a qualitative and a quantitative analysis of the four models chosen for testing. The qualitative analysis assessed each model’s suitability for the data series provided by Client Courier, based on the model’s characteristics. The quantitative analysis was conducted by holding back a period of 12 months from our original data set of 4 years. Forecasts were then prepared for those 12 months based on the remaining 3 years of data. Next, the forecasts were compared to the actual monthly figures and a Mean Absolute Percent Error (MAPE) was generated. This performance measure was chosen because its percentage form standardizes errors across series with different demand volumes, which is important since each month had a different demand volume. In addition, Client uses MAPE on a corporate-wide basis for benchmarking purposes.
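
To make the error measure concrete, the short sketch below computes MAPE for a 12-month holdout in Python. The demand series and the naive seasonal stand-in forecast are purely illustrative; only the holdout comparison itself mirrors the procedure described above.

    # Holdout MAPE sketch; the volumes and the "model" are illustrative stand-ins.
    import numpy as np

    def mape(actual, forecast):
        """Mean Absolute Percent Error, in percent, averaged over all forecasts."""
        actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
        return 100.0 * np.mean(np.abs(actual - forecast) / actual)

    rng = np.random.default_rng(0)
    months = np.arange(48)
    # Synthetic 4-year monthly series with annual seasonality and noise.
    volume = 10000 + 2000 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 400, 48)

    train, holdout = volume[:36], volume[36:]    # hold back the final 12 months
    seasonal_naive = train[-12:]                 # stand-in forecast: same month, prior year
    print(f"Holdout MAPE = {mape(holdout, seasonal_naive):.1f}%")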

Modeling analysis and comparative results

The characteristics of Client’s data were such that most of the forecasting models tested could be eliminated. Two characteristics in particular eliminated the practicality of the models:

1. Limited data - Certain regions had very limited amounts of data because of their recent change to a distribution centre.

2. Outliers - All regions contained outliers, which were explained by weather problems, losses or gains of clients, and other factors that are not accounted for in the data.

These characteristics made the ARIMA Box-Jenkins model an obvious choice for omission. ARIMA does not work well with outliers, as they violate the stationarity assumption. As well, ARIMA requires lengthy time-series data, which was not available for some regions. The ARIMA model would not be effective without extensive data cleaning, which would have enlarged the scope of the project immensely. Time series decomposition proved effective in identifying the components present in the series but was also quite susceptible to outliers and ultimately did not forecast well. The Theta model produced relatively accurate results but requires another forecasting method to forecast the Theta data series; because this data series exhibits many of the same basic features as the original series, the ARIMA and decomposition models were again not appropriate, while the Triple Exponential Smoothing model worked quite well for this purpose. Our last test, and model of choice, was the Triple Exponential Smoothing (TES) model applied on its own. TES does not have the strict assumptions of the ARIMA model and is not as susceptible to outliers, so large-scale data cleaning is not necessary. Comparing the mean absolute percent error results from the four models validated our choice of TES (see Exhibit 3). The basic components (Initialization, Learning and Forecasting) of the Triple Exponential Smoothing model can be found in Exhibits 4, 5 and 6.

Our Modifications to the Triple Exponential Smoothing Model

Initial trend smoothing parameter

After some initial analysis, we noticed that the initial trend component in the TES model was not always indicative of future trend and was skewing our forecasts. The initial trend in the basic Triple Exponential Smoothing model is calculated by subtracting the first term of the first period from the first term of the second period. To make our forecasts more accurate, we multiplied the calculated trend by a value between zero and one. This trend smoothing parameter is calculated using non-linear programming to minimize the fitted errors of the data series. A computed value of zero means that the calculated trend is highly skewed and adds no value to future forecasts; a value of one indicates that the calculated trend is a reliable guide for future forecasts. We also found that, for monthly totals, a month’s volume was highly correlated with its number of working days relative to that month’s average number of working days (see Exhibit 7).
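
Exhibit 7 contains the exact formulas used in the tool; the sketch below only illustrates the general idea under stated assumptions (a 12-month season and a trend taken from the change between the first two cycles), with the damping value shown fixed rather than optimized.

    # Damped initial-trend sketch (assumed functional form; see Exhibit 7 for the
    # tool's actual formulas). `history` is a hypothetical monthly series, oldest first.
    import numpy as np

    def initial_trend(history, season_length=12, damping=0.5):
        """Change between the first observation of the second cycle and the first
        observation of the first cycle, multiplied by a damping value in [0, 1].
        In the tool this value is chosen by non-linear programming."""
        raw_trend = history[season_length] - history[0]
        return damping * raw_trend   # 0 ignores the raw trend entirely, 1 keeps it as-is

    history = np.linspace(9000, 11000, 24) + np.tile([0, 300, -200, 100], 6)
    print(initial_trend(history, damping=0.3))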

Number of working days index

As discussed previously, the number of working days in a particular month fluctuates slightly from year to year depending on where the weekends fall and on holidays such as Easter. To take this into account for our forecasts, we created an index of the number of working days in a particular month divided by that month’s average number of working days (see Exhibit 7). This index was multiplied by a smoothing parameter and then by the forecast. This smoothing parameter was adjusted using non-linear programming as with the other parameters. These modifications greatly improved our forecast accuracy relative to the traditional Triple Exponential Smoothing Model.
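
The sketch below shows the index itself and one plausible way a smoothing weight could fold it into a monthly forecast. The blending form is an assumption on our part, since the exact arithmetic lives in Exhibit 7 and the tool’s worksheets; the numbers are illustrative.

    # Working-days index sketch. The blending form (the weight pulls the index toward 1)
    # is an assumed reading of the adjustment described above.
    def working_days_index(days_this_month, average_days_this_calendar_month):
        return days_this_month / average_days_this_calendar_month

    def adjust_forecast(base_forecast, index, weight):
        """weight = 0 ignores the index entirely; weight = 1 applies it in full."""
        return base_forecast * (1.0 + weight * (index - 1.0))

    # Example: a month with 22 working days against a 4-year average of 21.25 for that month.
    idx = working_days_index(22, 21.25)
    print(adjust_forecast(10000, idx, weight=0.8))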

Holiday Adjustments

Holidays pose a number of problems for our daily forecasts. First, because our daily forecasts use Triple Exponential Smoothing with five seasons (one per working day), it is impossible for our model to automatically pick up annual holiday patterns. Second, because the actual demand on holidays is often zero, the seasonality index for that day of the week is drastically affected, which can severely impact the forecasts for subsequent weeks. For example, if the volume on the first Monday of August (a civic holiday) is zero, our model will assign the subsequent Monday a very low seasonality index. However, actual demand for that Monday will most likely be far higher than such an index would predict. Our model does not know that the first Monday was a holiday and therefore represented an irregular demand volume.

To combat these problems, we make two important adjustments. First, we adjust the series of data that represents historical demand: the volume on holidays is changed to represent the average volume that would be expected on that day of the week if a holiday had not taken place. We do this by taking an average of the same day of the week over the preceding two weeks. This prevents our model from adjusting the seasonality parameter to account for the irregular demand that is experienced on holidays. The second adjustment reduces the forecasted demand for a particular day to zero if that day is a holiday. We do this by attaching a binary variable to each future day in the forecast, assigning a value of 0 to days that are holidays, and multiplying the binary variable by the Triple Exponential Smoothing forecast, which gives us our final forecast for that day.
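
The sketch below captures both adjustments in Python: holiday actuals are replaced by the average of the same weekday over the two preceding weeks, and forecasts that fall on holidays are multiplied by a 0/1 indicator. The dates, volumes and holiday set are illustrative; the tool performs the equivalent steps inside its worksheets.

    # Holiday-adjustment sketch (illustrative data).
    import datetime as dt

    def clean_history(history, holidays):
        """history: {date: volume}. Replace each holiday's volume with the average of the
        same weekday over the two preceding weeks, so seasonal indexes are not distorted."""
        cleaned = dict(history)
        for day in history:
            if day in holidays:
                prior = [history.get(day - dt.timedelta(weeks=w)) for w in (1, 2)]
                prior = [v for v in prior if v is not None]
                if prior:
                    cleaned[day] = sum(prior) / len(prior)
        return cleaned

    def apply_holiday_flag(forecasts, holidays):
        """forecasts: {date: forecast}. Multiply each forecast by a binary indicator."""
        return {day: f * (0 if day in holidays else 1) for day, f in forecasts.items()}

    holidays = {dt.date(2002, 7, 1)}   # e.g., Canada Day, a Monday in 2002
    history = {dt.date(2002, 6, 17): 900, dt.date(2002, 6, 24): 940, dt.date(2002, 7, 1): 0}
    print(clean_history(history, holidays)[dt.date(2002, 7, 1)])   # -> 920.0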

Weekends

Our model does not forecast demand for weekends. This is because the historical demand volume on weekends is very low and highly sporadic. Client tries to handle all volume during the regular workweek and its activity during weekends is often determined by management on a short-term basis. Because of this, Capstone feels that it is better to leave weekend forecasts to the discretion of managers than to time series analysis. Client’s management has supported this approach.

Results

Results for our methodology were measured using the mean absolute percent error for each forecast, and were averaged across forecasts to obtain summary metrics for reporting purposes. We compared our forecast errors to those produced by other forecasts, including those made by Client and those generated using specialized software.

Client: Compared to Client’s last point method, our forecasts reduce overall errors by the following amounts:

Monthly – 56% (see Exhibit 8)

Daily – 2.2% (see Exhibit 9)

* Weekly forecast errors for Client have not yet been given to Capstone for comparison.

For a detailed breakdown of Capstone’s forecast errors compared to Client’s errors, please see Exhibits 8 and 9 (note that these forecast horizons have been specified by the client as being the most common and widely-used internally).

Forecast Pro: Forecast Pro is a leading forecasting software package used by thousands of businesses around the world, and it has won several forecasting awards. Forecast Pro uses several conventional forecasting methods, including ARIMA, Time Series Decomposition and Triple Exponential Smoothing. It fits each method to the historical data and chooses the method that reduces errors the most. Compared to Forecast Pro, our errors are as follows:

Monthly – 19%

Daily – 27%

For a detailed breakdown of Capstone’s forecast errors compared to Forecast Pro’s errors, please see Exhibit 10.

Phase Three: Build Forecast Tool

Tool Requirements

1. Based in Microsoft Excel

2. User friendly interface

3. Capability to forecast all regions and all types within a reasonable time period

4. Clean reporting structure for graphs and tables

5. Option to update data

6. Easy to install

7. User manual

How it works

The forecasting tool is an Excel file that can easily be transferred and saved on different computers. Everything that is needed to run the tool is included in Microsoft Excel 97, 98, 2000 or XP. Users need to be aware that an Excel add-in known as Solver, which comes standard with Microsoft Office, is also needed for the program to work correctly. Therefore, if users cannot find Solver under Tools → Add-Ins in Excel, they will need the Microsoft Office installation CD to add the feature before continuing.

Starting up

Upon running the program a welcome splash page is presented to the user (see Exhibit 11). From the splash page the user has three options: 1. Update the data in the tool, 2. Forecast, or 3. End the program.

Updating data

If the user chooses to update the data, they must also choose whether it is daily, weekly or monthly data that they would like to update. Once a choice is made, the user is brought to a spreadsheet containing the date, region, business type and carrier type information. The user can copy and paste data or manually enter data into this spreadsheet, and the tool will use this information the next time the user chooses to forecast. Once the data has been entered, the user selects the “Done” button on the spreadsheet to return to the splash page.

Forecasting

The forecast component of the tool is chosen when the user selects the “Start” button from the splash page. A user form will appear allowing the user to select the type of forecast he/she intends to run. This user form contains all the combinations of forecasts the user can choose from: daily, weekly or monthly forecasts, the business types, carrier types, regions, and the starting and ending dates for the forecast period (see Exhibit 12). Once a selection has been made, the model uses data from the most recent three years prior to the starting date of the forecast period. This data is used for the initialization and learning phases of the TES forecast model. From what the user has entered, the tool determines how many combinations of forecasts it needs to perform and updates the formulas in the TES model as required. It then runs Solver for each combination of forecasts to minimize MAPE from the learning phase and produce forecasts for the selected periods. Each time Solver computes values for the LS, TS and SS weights (the level, trend and seasonality smoothing parameters), the forecast and data are copied onto a separate results sheet.
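
In the delivered tool this optimization step is done by Excel’s Solver. As a rough stand-in, the sketch below shows the same idea in Python: choose smoothing weights in [0, 1] that minimize the fitted MAPE over the learning phase. To keep the example self-contained it fits a single weight with simple exponential smoothing rather than the full TES model, and the series is made up.

    # Solver-style weight selection sketch: minimize fitted MAPE over the learning phase.
    # Simple exponential smoothing stands in for the full TES model to keep this short.
    import numpy as np
    from scipy.optimize import minimize

    def fitted_mape(params, series):
        alpha = params[0]
        level = series[0]
        errors = []
        for actual in series[1:]:
            errors.append(abs(actual - level) / actual)   # one-step-ahead percent error
            level = alpha * actual + (1 - alpha) * level  # update after seeing the actual
        return 100.0 * np.mean(errors)

    series = np.array([980, 1010, 1005, 1030, 990, 1020, 1045, 1015], dtype=float)
    best = minimize(fitted_mape, x0=[0.5], args=(series,), bounds=[(0.0, 1.0)])
    print(best.x, best.fun)   # chosen weight and the corresponding fitted MAPE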

Viewing results

From the results sheet, graphs and tables are produced and used to populate another user form that serves as the reporting output for managerial use (see Exhibit 13). The user can use a list box on the right-hand side of the form to select which graph and data he/she would like to view, or save them to a separate file named with the system date and the text “Forecast”.

When the user is finished forecasting, he/she is brought back to the splash page, where selecting the “End” button saves and exits the program.

Managerial Discussion

Cost analysis

As with any feasible project, there needs to be a benefit to the client. In the case of this project, the intended benefit comes from having more sophisticated mathematical methods as the foundation for the forecasting model, which in turn increases forecasting accuracy and restores confidence among front-line staff. In developing the forecasting model, we found that significant improvements in accuracy resulted. Assuming a $1.00 cost per package over- or under-forecasted, Capstone estimates annual cost savings of approximately $100,000 for the upcoming year.
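
The savings figure depends on Client’s confidential volumes, so the worked example below uses a hypothetical annual volume and hypothetical before/after error rates; only the $1.00 cost per mis-forecasted package comes from the analysis above.

    # Illustrative savings arithmetic (the volume and error rates are hypothetical).
    cost_per_package = 1.00            # stated assumption: $1 per package over- or under-forecasted
    annual_volume = 1_500_000          # hypothetical packages per year
    old_mape, new_mape = 0.15, 0.066   # hypothetical errors before and after (a 56% reduction)

    annual_savings = (old_mape - new_mape) * annual_volume * cost_per_package
    print(f"${annual_savings:,.0f} per year")   # about $126,000 under these assumptions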

Forecasting Alternatives

Client could have considered two of the most prominent commercial forecasting tools as alternatives for this project:

1. Forecast X - $1075 per license

2. Forecast Pro - $1800 per license

Based on 9 users (1 senior manager, 1 regional manager, 7 distribution centre managers), Client would save $5,675 or $12,200 respectively by using our proposed tool. As shown in Exhibit 10, our model also produced more accurate results than Forecast Pro.

Future considerations

This model will last as long as the user requires it to. TES will always take into account all available data and fit the forecasts so that error is minimized. As changes occur, the TES model will adjust its future forecasts accordingly.

Recommendations

Our recommendations for the most effective results are the following:

• Update data regularly - Since TES is based on historical data, the more data the model has to base its forecasts on, the more accurately it will forecast.

• Monitor major external changes - When a significant impact on Client’s demand is realized, such as a key account change, holiday or weather disturbance, it will be up to managerial judgment to account for the change during the time period immediately after the impact.

Integration with other tools

Currently, our deliverable is a stand-alone tool. After testing its accuracy and usefulness, Client could begin integrating it with other corporate systems to reduce data entry errors and entry time. The tool uses simple VBA code that can be extended or changed depending on user preference. The provided user manual will assist programmers in understanding the operation of the tool and the purpose of each command.

Conclusion

Capstone Consulting worked diligently with the client to ensure satisfaction and deliverables that met the specified requirements. Capstone met all of its objectives, as demonstrated in Exhibit 14, and considers the project a success, a view reflected in the client’s feedback.

Exhibits

Exhibit 1 – Forecast activity areas

*Note: the above forecast activity exists for each of 7 different regions in Western Canada

Exhibit 2 – Example Impacts of Current Forecasting Methodology

Exhibit 3 – MAPE comparison for tested models

Exhibit 4 – TES model in-depth – Initialization

Exhibit 5 – TES model in-depth – Learning Phase

[Level, trend and seasonality update components]

Exhibit 6 – TES model in-depth - Forecasting

Exhibit 7 – Solution and Modifications


Exhibit 8 – Monthly forecasting errors (1 month out)

Exhibit 9 – Daily forecasting errors (4 weeks/20 working days out)

Exhibit 10 – Capstone vs. Forecast Pro Error rates

Exhibit 11 – Tool Screenshot - The Welcome Splash Page

Exhibit 12 – Tool Screenshot - The Forecast Menu


Exhibit 13 – Tool Screenshot - Reporting Structure


Exhibit 14 - Revisiting our Objectives for Client

| Capstone's Objectives | Capstone Deliverables | Benefits to Client |
| Formalize a forecasting method by providing a mathematical basis from which forecasts can be produced. | Developed a customized triple exponential smoothing model which can forecast one year in advance | Provides consistency in forecasts; relieves staff of dependence on the forecasting manager |
| Improve forecasting accuracy and efficiency for forecasts relating to all regional distribution centres, business types and carriers. | Created a dynamic tool, cutting forecasting process time down to 6 minutes (all 288 forecasts) | Decreased forecasting errors by 56%; annual cost savings of $100,000 |
| Provide a user interface that will allow a wide variety of people within the company to access forecast information to make decisions. | Developed a tool with a user-friendly interface and reporting structure | Strategic planning (long-range forecasts); operational planning (month to month); tactical planning (day to day) |

Appendices

Appendix 1: Detailed description of tested forecasting models

ARIMA

The ARIMA Box-Jenkins model consists of an AR (autoregressive) component and an MA (moving average) component. The AR component is simply a linear regression of the current value against one or more prior values, and can be calculated using linear regression techniques such as least squares. The MA component is more complicated, as it is a linear regression on the white noise of one or more prior values; this white noise is assumed to come from a normal distribution, and fitting it requires non-linear procedures. The ARIMA model requires that the data series be stationary, meaning that the level and variation are constant; therefore, any series with a trend component or changing variation must first be made stationary before ARIMA can be applied. This can be accomplished by differencing the series one or more times. Seasonality can also be handled by applying the same ARIMA components to the seasonal component of the series. It is generally not recommended that ARIMA be used for data series that are dominated by trend or seasonality. It is also recommended that ARIMA not be applied to data series with fewer than 50 terms, although other sources recommend at least 100 terms.
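
As a small illustration of the stationarity requirement, the sketch below shows how first differencing removes a linear trend from a synthetic series; a real ARIMA fit would then model the autoregressive and moving-average structure of the differenced series.

    # Differencing sketch: a trending series becomes roughly level after one difference.
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.arange(100)
    series = 500 + 5 * t + rng.normal(0, 10, 100)   # upward trend plus noise

    diff1 = np.diff(series)                          # first difference: y_t - y_(t-1)
    print(series[:50].mean(), series[50:].mean())    # the level drifts upward over time
    print(diff1[:50].mean(), diff1[50:].mean())      # the differenced series stays near 5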

Time Series Decomposition

Time series decomposition is the process of breaking a time series into its component parts: seasonality, trend and cycles. The first step is removing the seasonality by using a moving average of length equal to the number of seasons; the centered moving average is then taken. Seasonal indexes can then be calculated by dividing the actual values by the de-seasonalized values. The trend is then calculated by fitting a line to the de-seasonalized data. Cycles are defined as wave-like movements about the long-term trend; they occur most often in natural phenomena such as sunspots, and there are virtually no multi-year cycles within business data (cycles within a year are considered seasonality and do exist in business data). These components are forecasted separately and then recombined to produce a forecast for the original time series.
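
The sketch below walks through the classical ratio-to-moving-average steps on a synthetic monthly series: a centered 12-month moving average, seasonal indexes from the actual-to-smoothed ratios, and a straight-line trend fitted to the de-seasonalized data. The data and constants are illustrative.

    # Classical multiplicative decomposition sketch (synthetic monthly data).
    import numpy as np

    rng = np.random.default_rng(2)
    n, m = 48, 12
    t = np.arange(n)
    seasonal = 1 + 0.2 * np.sin(2 * np.pi * t / m)
    series = (1000 + 3 * t) * seasonal * rng.normal(1, 0.02, n)

    # Centered 12-month moving average removes the seasonality.
    ma12 = np.convolve(series, np.ones(m) / m, mode="valid")
    centered = (ma12[:-1] + ma12[1:]) / 2
    ratios = series[m // 2 : m // 2 + len(centered)] / centered   # actual / de-seasonalized

    # Average the ratios by calendar month and normalize to get seasonal indexes.
    month_of = (np.arange(len(ratios)) + m // 2) % m
    indexes = np.array([ratios[month_of == k].mean() for k in range(m)])
    indexes *= m / indexes.sum()

    # Fit a straight-line trend to the de-seasonalized series.
    deseasonalized = series / indexes[t % m]
    slope, intercept = np.polyfit(t, deseasonalized, 1)
    print(np.round(indexes, 2), round(slope, 2))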

Theta Model

The Theta model consists of modifying the local curvature of the original time series by the coefficient Theta. A Theta 0 and Theta 2 line are both calculated. Theta 0 is a straight line moving through the original data and Theta 2 is a more extreme version of the original time series. The two lines are calculated in such a way that the average of Theta 0 and Theta 2 at every point is the original time series. Theta 0 and Theta 2 are then forecasted into the future and then averaged to get a forecast for the original series. The Theta model is not actually a forecasting method but rather an adjustment to the series that then must be forecasted using other methods. The resulting forecast has proved very effective and even won the M3 forecasting competition in 2000.
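
The sketch below mirrors the construction described above: the Theta-0 line is a least-squares straight line through the data, the Theta-2 line is chosen so the two average back to the original series, each line is extrapolated (here the Theta-2 line with simple exponential smoothing, a common choice), and the two extrapolations are averaged. The series and the smoothing weight are illustrative.

    # Theta decomposition sketch: theta0 and theta2 average back to the original series.
    import numpy as np

    def theta_forecast(series, horizon, alpha=0.4):
        n = np.arange(len(series))
        slope, intercept = np.polyfit(n, series, 1)
        theta0 = intercept + slope * n        # straight line through the data
        theta2 = 2 * series - theta0          # exaggerated ("doubled curvature") series
        # Note: (theta0 + theta2) / 2 reproduces the original series exactly.

        future = np.arange(len(series), len(series) + horizon)
        theta0_fc = intercept + slope * future          # extend the straight line
        level = theta2[0]
        for y in theta2[1:]:                            # simple exponential smoothing
            level = alpha * y + (1 - alpha) * level
        theta2_fc = np.full(horizon, level)             # flat SES forecast
        return (theta0_fc + theta2_fc) / 2

    series = np.array([100, 104, 103, 108, 112, 110, 115, 118, 117, 121], dtype=float)
    print(np.round(theta_forecast(series, horizon=3), 1))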

Triple Exponential Smoothing

Like decomposition, Triple Exponential Smoothing separates the data series into level, trend and seasonality components. After the initial values for these components are calculated, they are updated as more data is added, using smoothing parameters. The smoothing parameters take values between zero and one and determine how much weight is placed on the initial calculations and how much on the newly observed data. The smoothing parameters are calculated using non-linear programming to minimize the errors of the fitted series. This weighting of past and current data makes the forecast quite robust and not very susceptible to outliers. The final values of level, trend and seasonality are then used to forecast into the future.
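
For reference, the sketch below writes out the multiplicative Holt-Winters recursions in Python, without Capstone’s trend-damping, working-day or holiday modifications. The initialization is the simplest common variant and the smoothing weights are fixed rather than optimized, so it illustrates the update equations only; the demand series is made up.

    # Multiplicative Holt-Winters (triple exponential smoothing) sketch.
    # Fixed smoothing weights; in practice they are chosen to minimize fitted error.
    import numpy as np

    def holt_winters(series, m, horizon, alpha=0.3, beta=0.1, gamma=0.2):
        """m is the season length (12 for monthly data, 5 for working-day data)."""
        series = np.asarray(series, dtype=float)

        # Initialization: the first cycle sets the level and seasonal indexes,
        # and the change between the first two cycles sets the trend.
        level = series[:m].mean()
        trend = (series[m:2 * m].mean() - series[:m].mean()) / m
        season = list(series[:m] / level)

        # Learning phase: smooth level, trend and seasonality as each point arrives.
        for t in range(m, len(series)):
            y, s = series[t], season[t - m]
            new_level = alpha * (y / s) + (1 - alpha) * (level + trend)
            trend = beta * (new_level - level) + (1 - beta) * trend
            season.append(gamma * (y / new_level) + (1 - gamma) * s)
            level = new_level

        # Forecasting: extend level plus trend and re-apply the latest seasonal indexes.
        T = len(series)
        return [(level + h * trend) * season[T - m + (h - 1) % m]
                for h in range(1, horizon + 1)]

    demand = 1000 + 50 * np.arange(36) + np.tile([120, -80, 40, -60, 150, -90, 30, -40, 60, -20, 10, -120], 3)
    print(np.round(holt_winters(demand, m=12, horizon=6), 1))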

References

Websites:

Introduction to Time Series Analysis, National Institute of Standards and Technology.

Scientific Resources: Statistics, Econometrics, Forecasting.

Texts:

Wilson, J. Holton, and Barry Keating. Business Forecasting, 4th ed. McGraw-Hill Higher Education, 2002: “Time Series Decomposition,” p. 249; “ARIMA,” p. 287; “Winters’,” p. 112.

Albright, S. Christian. VBA for Modelers. Wadsworth Group, 2001.

Course package, “Forecasting for Planners and Managers,” MGTSC 405.

Special Thanks:

Ignacio Castillo, University of Alberta

Susan Budge, University of Alberta

Abdullah Dasci, University of Alberta
