Glossary of Forecasting Terms for BA 544 Supply Chain Management

I have put this glossary together to highlight important concepts for our class. Only the concepts
marked with ** are testable; those should be studied for comprehension. Also, remember that if you
have questions about a mathematical formula or term, you can refer to our spreadsheets and the
PowerPoint lectures.

* Backcasting. Forecasting backward in time. Backcasting is also used for extrapolation: the
forecasting method is applied to the series starting from the end and working toward the beginning of
the data. This can provide a set of starting values for exponential smoothing that can then be used
when applying that forecasting method to the standard, original sequence, starting from the beginning.
http://forecast.umkc.edu/ftppub/ba544/scmforecastingproblems.doc

** demand filter or error filter--A threshold set to monitor individual observations in forecasting
systems. It is usually tripped when an observation for a period differs from the forecast by more than
some number of mean absolute deviations or standard deviations, for example:

If |e(t)|/MAD(t) > 6, then investigate forecasting problems, or
If |e(t)|/SEE(t) > 4, then investigate forecasting problems

Note that this device is used in conjunction with a tracking signal. The tracking signal detects
cumulative errors, while the demand or error filter (both terms are used) detects large, one-time
errors. Large one-time errors can occur for a number of reasons, including but not limited to
outliers, catastrophic events, out-of-stocks, etc.
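
As a sketch of how such a filter might be implemented (the function name and the example numbers are illustrative, not from the course materials):

```python
def demand_filter_tripped(error, mad, limit=6.0):
    """Trip the filter when |e(t)|/MAD(t) exceeds the chosen limit."""
    if mad <= 0:
        return False  # no baseline deviation to compare against
    return abs(error) / mad > limit

# A one-time spike of 70 against a MAD of 10 gives 70/10 = 7 > 6: investigate.
print(demand_filter_tripped(70, 10))   # True
# An ordinary error of -12 gives 12/10 = 1.2: no action needed.
print(demand_filter_tripped(-12, 10))  # False
```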

** dependent demand--Demand is considered dependent when it is directly related to or derived from the
schedule for other items or end products. Such demands are therefore calculated, and need not and should
not be forecast. A given item may have both dependent and independent demand at any given time. See:
independent demand.

** dependent variable--A variable which is a function of some other variables is called a dependent
variable. In regression and forecasting the variable being predicted is the dependent variable.

** deseasonalized data--Removing the seasonal fluctuations in a series yields deseasonalized data. By
removing seasonality we can more easily identify trend, cyclical, promotional, and outlier influences in the
data. Much of the economic data in the media is deseasonalized to add more continuity and comparability
to reported statistics. Deseasonalization is an essential step in forecasting. We have learned how to
deseasonalize using seasonal differences (e.g., A(t) - A(t-12)) as well as the use of seasonal indexes in
Winters' method. With multiplicative seasonal indexes, the deseasonalized value = Actual/(Seasonal
Index). For example, Deseasonalized value = 100/.80 = 125. That is, if the actual is 100 and the seasonal
index equals .80, then the deseasonalized value is 125.
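
The multiplicative-index arithmetic above can be sketched in a couple of lines (function names are illustrative):

```python
def deseasonalize(actual, seasonal_index):
    # Deseasonalized value = Actual / (Seasonal Index)
    return actual / seasonal_index

def reseasonalize(deseasonalized, seasonal_index):
    # Reapplying the index recovers the original actual
    return deseasonalized * seasonal_index

print(deseasonalize(100, 0.80))    # 125.0, as in the example above
print(reseasonalize(125.0, 0.80))  # 100.0
```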

** deviation--The difference between a number and the mean of a set of numbers, or between the actual
datum and the forecast. Forecast errors are often called deviations – note that errors and deviations are
always the difference between an actual and a mean – the forecast is always a mean.

Error = Deviation = Actual - Forecast

** diagnostic checking--A step in forecast model building where the errors of a model are examined for
normality, zero mean, constant standard deviation and no other patterns. Diagnostic checking involves
studying the mean error, SEE, MAPE, TS and plots of errors.

** differencing--When a time series is nonstationary, that is, has no constant mean, the series can be made
to have a constant mean by taking first differences of the series, (Y(t) - Y(t-1)). If first differences do not
achieve a constant mean, then first differences of first differences, called second differences, or seasonal
differences can be tried. Seasonal differences are defined as Y(t) - Y(t-s), where s is the length of the seasonal
cycle.
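
A small sketch of both kinds of differencing (the helper names are assumptions):

```python
def first_difference(y):
    # (Y(t) - Y(t-1)) for t = 1, ..., n-1
    return [y[t] - y[t - 1] for t in range(1, len(y))]

def seasonal_difference(y, s):
    # Y(t) - Y(t-s), where s is the length of the seasonal cycle
    return [y[t] - y[t - s] for t in range(s, len(y))]

# A series trending up by 5 per period has no constant mean, but its
# first differences are constant:
print(first_difference([100, 105, 110, 115, 120]))  # [5, 5, 5, 5]
# Quarterly data differenced against the same quarter a year earlier (s = 4):
print(seasonal_difference([80, 120, 100, 90, 84, 126, 103, 92], 4))  # [4, 6, 3, 2]
```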

** Error. A forecast error is calculated by subtracting the forecast value from the actual value to give an
error value for each forecast period. In forecasting, this term is commonly used as a synonym for residual.

Error = Deviation = Actual - Forecast

** exponential smoothing--A type of weighted moving average forecasting technique in which past
observations are geometrically discounted according to their age. The heaviest weight is assigned to the
most recent data. The smoothing is termed "exponential" because data points are weighted in accordance
with an exponential function of their age. The technique makes use of a smoothing constant to apply to the
difference between the actual and the most recent forecast. The approach can be used for data that exhibit
no trend or seasonal patterns or for data with either (or both) trend and seasonality.

** exponential smoothing, single or simple--This is the most basic form of exponential smoothing. It
uses the coefficient alpha to smooth past values of the data and forecast errors. Historically, it was most
commonly used in inventory control systems where many items were forecast and low cost was a primary
concern.

F(t) = alpha*A(t-1) + (1-alpha)*F(t-1)
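
A minimal sketch of the recursion above (the initial forecast is supplied by the caller; names are illustrative):

```python
def simple_exponential_smoothing(actuals, alpha, initial_forecast):
    # F(t) = alpha*A(t-1) + (1-alpha)*F(t-1)
    forecasts = [initial_forecast]
    for a in actuals[:-1]:
        forecasts.append(alpha * a + (1 - alpha) * forecasts[-1])
    return forecasts

# alpha = 0.2 discounts older data geometrically; each forecast moves
# 20% of the way toward the most recent actual.
print(simple_exponential_smoothing([100, 110, 90, 105], 0.2, 100))
```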

** Holt's exponential smoothing method. An extension of single exponential smoothing which allows for
trends in the data. It uses two smoothing parameters, one of which is used to add a trend adjustment to the
single smoothed average value. The smoothing constants are called alpha and beta.

F(t) = S(t) + b(t)

F(t+m) = S(t) + m*b(t)

** Holt-Winters' exponential smoothing method. Winters extended Holt's exponential smoothing
method by including an extra equation that is used to adjust the forecast to reflect seasonality. This form of
exponential smoothing can thus account for data series that include random, trend, and seasonal
elements. It uses three smoothing parameters (alpha, beta, and gamma) controlling the level, trend, and
seasonality.

F(t) = (S(t) + b(t))*I(t-12)

F(t+m) = (S(t) + m*b(t))*I(t-12+m)
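
The level, trend, and seasonal update equations are not written out in this glossary; the sketch below uses one common multiplicative formulation (the update equations and the initialization scheme are assumptions, not reproduced from the lecture):

```python
def holt_winters(actuals, alpha, beta, gamma, season_len,
                 level, trend, indexes):
    """Multiplicative Holt-Winters sketch; initial level, trend, and
    seasonal indexes must be supplied (e.g., from a warm-up period)."""
    I = list(indexes)  # one seasonal index per position in the cycle
    forecasts = []
    for t, a in enumerate(actuals):
        i = t % season_len
        forecasts.append((level + trend) * I[i])  # F(t) = (S(t) + b(t))*I
        prev_level = level
        level = alpha * (a / I[i]) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        I[i] = gamma * (a / level) + (1 - gamma) * I[i]
    return forecasts

# With all smoothing constants at 0, the forecasts simply project the
# initial level and trend: 105, 110, 115, 120.
print(holt_winters([0, 0, 0, 0], 0.0, 0.0, 0.0, 4, 100, 5, [1, 1, 1, 1]))
```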

** independent demand--Demand for an item is considered independent when such demand is unrelated
to the demand for other items. Demand for finished goods and service parts requirements are some
examples of independent demand. Independent demands must and should be forecasted. See: dependent
demand.

** Mean Absolute Percentage Error (MAPE). The mean absolute percentage error is the mean or
average of all of the percentage errors for a given series taken without regard to sign. (That is,
their absolute values are summed and the average is computed.) The MAPE is useful in making statements
such as: given that the MAPE equals 10%, then roughly 50% of the errors (both positive and negative) are
less than or equal to 10% in magnitude, and the complement, roughly 50% of the errors, are greater than
or equal to 10% in magnitude.

** Mean Percentage Error (MPE). The mean percentage error is the average of all of the percentage
errors for a given data set. The signs are retained. Because of this, it is sometimes used as a measure of bias
in the application of a forecasting method. Note that in contrast to the MAPE, the MPE can equal zero as
negative percent errors are offset by positive percent errors. MPE does not measure the dispersion or
scatter of data as do the MAPE, MAD, and SEE.

** Mean squared error (MSE). The sum of the squared errors for each of the observations divided by the
number of observations. This is a variance measure and consequently, its square root is a standard
deviation, alternatively called a Standard Error of Estimate (SEE), Residual Standard Error (RSE), or
Standard Error (SE). (Again, these three terms are synonyms). That is,

Square Root of MSE = SEE (or its synonyms RSE or SE). As we will see, the SEE is used in prediction
intervals when forecasting future actual values.

** mean absolute deviation (MAD)--The average of the absolute values of the deviations of some
observed value from some expected value. MAD is calculated by taking the absolute value of each
deviation (e.g., actual sales data minus forecast data) and then taking the arithmetic mean of those
absolute deviations.

For normally distributed data, MAD = .80*SEE, or equivalently SEE = 1.25*MAD.

Consequently, the MAD can be used in prediction intervals.

These data can be averaged in the usual arithmetic way or with exponential smoothing.

The MAD is useful in assessing the cost of errors, such as for inventory control. Also, the MAD is used in
prediction intervals as described below (see prediction intervals). Some prefer to use the MAD instead of
the SEE because the MAD does not increase as much as the SEE when very large errors occur.
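
The error measures defined in the last few entries can be computed together; a sketch (the helper name and the example data are illustrative):

```python
def error_statistics(actuals, forecasts):
    errors = [a - f for a, f in zip(actuals, forecasts)]  # Error = Actual - Forecast
    n = len(errors)
    return {
        "ME":   sum(errors) / n,                          # mean error (bias)
        "MAD":  sum(abs(e) for e in errors) / n,          # mean absolute deviation
        "MSE":  sum(e * e for e in errors) / n,           # mean squared error
        "SEE":  (sum(e * e for e in errors) / n) ** 0.5,  # SEE = sqrt(MSE)
        "MAPE": 100 * sum(abs(e) / a for e, a in zip(errors, actuals)) / n,
        "MPE":  100 * sum(e / a for e, a in zip(errors, actuals)) / n,
    }

# Offsetting +/-10 errors: the MPE nets out to zero (no bias) while the
# MAD, SEE, and MAPE remain positive (they measure scatter, not bias).
stats = error_statistics([100, 200, 100, 200], [110, 190, 110, 190])
print(stats["ME"], stats["MAD"], stats["SEE"])  # 0.0 10.0 10.0
```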

** moving average--An arithmetic average of the n most recent observations. As each new observation is
added, the oldest one is dropped. The value of n (the number of periods to use for the average) reflects
responsiveness versus stability in the same way that the choice of smoothing constant does in exponential
smoothing.
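
A sketch of the n-period moving average used as a forecast (smaller n is more responsive, larger n more stable; the function name is illustrative):

```python
def moving_average_forecast(actuals, n):
    # Forecast for the next period = arithmetic mean of the n most recent
    # observations; as each new observation arrives, the oldest drops out.
    return sum(actuals[-n:]) / n

history = [90, 100, 110, 120]
print(moving_average_forecast(history, 3))          # (100+110+120)/3 = 110.0
print(moving_average_forecast(history + [130], 3))  # window slides: 120.0
```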

** Naive model. A model that does not attempt to explain why an event occurs, but simply assumes that
things will continue as they have. In time series, the latest value is used as the forecast.

Naïve model: F(t) = A(t-1) which states that A(t) = A(t-1) + e(t)

Seasonally naïve model, monthly: F(t) = A(t-12) which states that A(t) = A(t-12) + e(t)

Seasonally naïve model, quarterly: F(t) = A(t-4) which states that A(t) = A(t-4) + e(t)

When the data are highly autocorrelated, these models can fit the past very well and forecast one period
ahead very accurately; however, they do not forecast multiple periods into the future well. Autocorrelated
means that adjacent values are very nearly equal to each other: they have a high correlation to each other
and form very smooth time series.
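
The naive models above can be sketched directly from their equations (function names are illustrative):

```python
def naive_forecast(actuals):
    # F(t) = A(t-1): the latest value is the forecast.
    return actuals[-1]

def seasonally_naive_forecast(actuals, season_len):
    # F(t) = A(t-L): the value one full season back is the forecast
    # (L = 12 for monthly data, L = 4 for quarterly data).
    return actuals[-season_len]

quarterly = [80, 120, 100, 90, 85, 125]
print(naive_forecast(quarterly))                # 125
print(seasonally_naive_forecast(quarterly, 4))  # 100
```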

** normal distribution-- When graphed, the normal distribution takes the form of a bell-shaped curve.
Here are some of the characteristics of the ND:

It defines events that are the result of a relatively large number of minor independent events.
Mean and standard deviation define the ND.
It is symmetrical, mean = median = mode.
Theoretically, the ND varies from –infinity to +infinity.

Here are common confidence intervals:

Mean +/- Z*Standard Deviation is called a confidence interval where Z can be any number such as:
Mean +/- 1*Standard Deviation contains about 68% of the observations of ND
Mean +/- 1.96*Standard Deviations contains about 95% of the observations of ND
Mean +/- 3*Standard Deviation contains about 99.73% of the observations of ND

A Z of 3 means 3 standard deviations away from the mean, as in the last confidence interval. Thus, while
99.73% of observations are in the interval, 100% - 99.73% = .27% are outside the interval (.27%/2, or
.135%, above and .135% below the interval). Thus,

Only about .13% of the ND lies above the Mean + 3 Standard Deviations
Only about .13% of the ND lies below the Mean - 3 Standard Deviations

Consequently, a Z of 3 is often used to detect unusual events, as in the use of a demand filter to detect
outliers.

How we use confidence intervals in forecasting: In forecasting confidence intervals are called prediction
intervals because of the much greater uncertainty associated with these future intervals. Consider an
example. In this example, we will look at prediction intervals first assuming that the firm has no
forecasting model and simply uses the mean and standard deviation of the past demand to generate
prediction intervals as follows:

Original Series--demand for a computer. Mean = 1000 and the Standard Deviation of the Original Series =
100; assume the demand is normally distributed:

68% Confidence Interval A(t) = Mean +/- Std Dev
A(t) = 1000 +/- 100 which equals 900 to 1100

95% Confidence Interval A(t) = Mean +/- 1.96*Std Dev
A(t) = 1000 +/- 200 which equals 800 to 1200 (we often round 1.96 to +/-2.00)

Thus, if the firm had no forecasting model, then the above intervals could be used to forecast and generate
the prediction intervals. However, let's assume that the firm forecasts demand for its products using Holt's
two-parameter exponential smoothing. The resulting forecasting errors are:

Mean Error = approximately 0
MSE = 100
RSE = SEE = square root of 100 = 10.

Error Prediction Intervals
0 +/- 10       68%
0 +/- 20       95%

Prediction Interval for the Actual used in forecasting. Assume it is the end of period t-1 and we are
forecasting period t.

A(t) = F(t) + e(t) +/- Z*SEE

This states that the actual is expected to be in the range of the forecast plus error plus/minus Z*SEE.

Because we are forecasting in period t-1, we do not know e(t) so we are forced to use its expected value
which is zero. However, we do have F(t) (assume it is 1000) and SEE (from above it is 10) from our
forecasting and fitting process. Thus, the prediction interval for a 95% confidence becomes:

A(t) = F(t) + e(t) +/- Z*SEE

A(t) = 1000 + 0 +/- 2*10 = 1000 +/- 20, which equals 980 to 1020

Note how much narrower this interval is than the previous, no-forecasting-model prediction interval. It is
precisely because this interval is narrower that we choose to forecast, and to choose the model with the
lowest SEE (or, equivalently, the lowest MSE or SSE).
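
The two intervals worked out above, plus the N-period widening discussed below, can be checked in a few lines (the Z of 2 is the rounded 1.96 used in the text):

```python
def prediction_interval(forecast, see, z=2.0):
    # A(t) is expected within F(t) +/- Z*SEE (the expected error is zero).
    return forecast - z * see, forecast + z * see

# No forecasting model: mean 1000, standard deviation 100.
print(prediction_interval(1000, 100))  # (800.0, 1200.0)

# Holt's model with SEE = 10: a far narrower interval.
print(prediction_interval(1000, 10))   # (980.0, 1020.0)

# For one class of models the N-period-ahead SEE grows like SEE*N^.5:
print(prediction_interval(1000, 10 * 4 ** 0.5))  # 4 periods ahead: (960.0, 1040.0)
```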

We call these intervals prediction intervals because no one knows what the future will bring. The basic
underlying assumption of most forecasting models is that:

1) We have accurately modeled the past with our model.
2) The past will repeat.

When moving forward into the future, either assumption 1) or 2) might be wrong – thus, we use the term
prediction interval instead of confidence interval. In general, these prediction intervals are accurate for
one-period-ahead forecasts; however, the interval widens considerably as we forecast more than one
period ahead. The approximate increase in the SEE varies greatly; for one class of models, the N-period-ahead
SEE is approximately:

N-period SEE = SEE*Square Root of N = SEE*N^.5

** optimal parameters or smoothing constants--The optimal, final parameters or smoothing constants are
those values that minimize the sum of squared errors (SSE), the MSE, or the SEE. (Minimizing one of these
minimizes the others.) One reason we minimize these statistics is that this minimizes the width of
the prediction intervals. As we will learn, minimizing the prediction interval will improve planning,
minimize costs, and minimize investments in inventory while improving in-stock positions.

** Prediction interval. The bounds within which the future observed values are expected to fall, given a
specified level of confidence. For example, one could specify the prediction intervals for 95% confidence.
As it turns out, estimated prediction intervals are typically underestimates of actual variations in the future.

** confidence limits--A confidence interval is a probability statement about some value or range of values.
Confidence limits can be placed on future forecast values. However, this must be done very cautiously
because the past may not be repeated in the future.

** Random walk. For common folk, the random walk model is to simply use the latest value in a time
series as the forecast for all horizons. Alternatively, it is a model stating that the difference between each
observation and the previous observation is random. See naive model. For statisticians, the term is more
precisely defined as follows: A random walk is a time-series model in which the value of an observation in
the current time period is equal to the value of the observation in the previous time period plus an error
drawn from a fixed probability distribution.

sales and operations planning (formerly called production planning)--The function of setting the
overall level of manufacturing output (production plan) and other activities to best satisfy the current
planned levels of sales (sales plan and/or forecasts), while meeting general business objectives of
profitability, productivity, competitive customer lead times, etc., as expressed in the overall business plan.
One of its primary purposes is to establish production rates that will achieve management's objective of
maintaining, raising, or lowering inventories or backlogs, while usually attempting to keep the work force
relatively stable. It must extend through a planning horizon sufficient to plan the labor, equipment,
facilities, material, and finances required to accomplish the production plan. As this plan affects many
company functions, it is normally prepared with information from marketing, manufacturing, engineering,
finance, materials, etc.

sales plan--The overall level of sales expected to be achieved. Usually stated as a monthly rate of sales for
a product family (group of products, items, options, features, etc.). It needs to be expressed in units
identical to the production plan (as well as dollars) for planning purposes. It represents sales and marketing
managements' commitment to take all reasonable steps necessary to make the sales forecast (a prediction)
accurately represent actual customer orders received.

** seasonal index--A seasonal index is a number that indicates the seasonality for a given time period. For
example, a seasonal index for observed values in July would indicate the way in which that July value is
affected by the seasonal pattern in the data. Seasonal indices are used to obtain deseasonalized data.

seasonal inventory--Inventory built up in anticipation of a peak seasonal demand in order to smooth
production. See: anticipation inventories.

** Seasonal adjustment. The process of removing from time series data systematic variations over the
course of the year. Also called deseasonalizing the data.

** Seasonal difference. A seasonal difference refers to a difference that is taken between seasonal values
that are separated by one year (e.g., four quarters, 12 months). Thus, if monthly data are used with an
annual seasonal pattern, a seasonal difference would simply compute the difference for values separated by
12 months rather than using the first difference, which is for values adjacent to one another in a series. See
differencing. See our simple forecasting competition spreadsheet.

** Seasonal exponential smoothing. F(t) = alpha*A(t-12) + (1-alpha)*F(t-12) for monthly data.
In general, F(t) = alpha*A(t-L) + (1-alpha)*F(t-L),
where L = 4 for quarterly data, L = 7 for daily data, etc.
Also see Holt-Winters' exponential smoothing.
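
A sketch of this recursion for quarterly data (L = 4); seeding the first L forecasts with the first year's actuals is an assumption for the example, not a rule from the text:

```python
def seasonal_exponential_smoothing(actuals, alpha, L, initial_forecasts):
    # F(t) = alpha*A(t-L) + (1-alpha)*F(t-L)
    forecasts = list(initial_forecasts)  # the first L forecasts, seeded
    for t in range(L, len(actuals)):
        forecasts.append(alpha * actuals[t - L] + (1 - alpha) * forecasts[t - L])
    return forecasts

quarterly = [80, 120, 100, 90, 88, 124, 104, 86]
f = seasonal_exponential_smoothing(quarterly, 0.5, 4, quarterly[:4])
print(f[4:])  # each quarter smoothed against the same quarter last year
```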

** smoothing--Averaging data by a mathematical process or by curve fitting, such as the method of
moving averages or exponential smoothing.

** smoothing constant--In exponential smoothing, the weighting factor which is multiplied against the
most recent error.

** Standard deviation. A summary statistic for a sample (or parameter for a population). It is usually
denoted by S for a sample (sigma for a population), and is the square root of the variance. The standard
deviation is a measure of the spread in the data. For data that are approximately normal, about 95% of the
observations should be within approximately two standard deviations of the mean.

** Standard error. A measure of the precision of a coefficient. It tells how reliably the relationship has
been measured. It represents the standard deviation for an estimate.

** tracking signal--Since quantitative methods of forecasting assume the continuation of some historical
pattern into the future, it is often useful to develop some measure that can be used to determine when the
basic pattern has changed. A tracking signal is the most common such measure. One frequently used
tracking signal involves computing the cumulative error over time divided by some measure of error like
the MAD or SEE and setting limits so that when the cumulative error ratio goes outside those limits, the
forecaster can be notified and a new model can be considered.

Problems with tracking signals – almost all tracking signals have limitations in that they will now and then
yield false trip points when the accuracy of the forecasting model improves greatly. Because of these
problems, tracking signals must be combined with demand filters.
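
A sketch of the cumulative-error tracking signal described above (the example errors and any trip limit are illustrative):

```python
def tracking_signal(errors):
    # Cumulative error divided by the MAD; a large |TS| signals that the
    # forecasting model is persistently biased and should be reviewed.
    mad = sum(abs(e) for e in errors) / len(errors)
    return sum(errors) / mad if mad else 0.0

unbiased = [10, -8, 9, -11, -2, 2]  # errors that cancel out over time
biased = [10, 12, 9, 11, 8, 10]     # forecasts consistently too low
print(tracking_signal(unbiased))    # 0.0
print(tracking_signal(biased))      # 60/10 = 6.0: outside typical limits
```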

** trend analysis--Trend analysis is a special form of simple regression in which time is the independent
variable. It consists of fitting a linear relationship to a past series of values, with time as the independent
variable.
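
A sketch of trend analysis as least-squares regression with time t = 0, 1, 2, ... as the independent variable (the helper name is illustrative):

```python
def fit_trend(series):
    # Fit y = intercept + slope*t by ordinary least squares.
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    sxy = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
    sxx = sum((t - t_mean) ** 2 for t in range(n))
    slope = sxy / sxx
    return y_mean - slope * t_mean, slope

# A perfectly linear series recovers its intercept and slope exactly.
print(fit_trend([100, 105, 110, 115, 120]))  # (100.0, 5.0)
```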
