LO 10.1 An analysis of typical financial asset returns produces a distribution that has fatter tails than
implied by a normal distribution. Which explanation is LEAST LIKELY?
LO 10.2 The three prior daily returns for a stock are +1%, +2%, and +3% (day n-1 to day n-3,
respectively). Apply Jorion's moving average (MA) over the three-day window [i.e., MA(3)] to
estimate current volatility.
LO 10.3 Which are advantages of the GARCH(1,1) approach over the EWMA approach? I. More weight
on recent information, II. Mean reversion, III. Persistence
LO 10.4 Here is the GARCH(1,1) specification: variance = omega + (alpha)(lagged, squared
return)+(beta)(lagged variance). In the first series, alpha = 0.1 and beta = 0.7. In the second
series, alpha = 0.2 and beta = 0.9. Which series is mean-reverting?
LO 10.5 If the average lambda under the RiskMetrics approach is 0.94 under daily intervals, what
weight is effectively assigned to the squared return on day n-2 (not yesterday, but the day
before yesterday)?
LO 10.6 When estimating correlation, what is the main challenge in extending the GARCH model
used for volatility to a multivariate GARCH model for correlations?
LO 10.7 Say we want to follow Jorion's advice: "whenever possible, VAR should use implied
parameters." If we want to estimate the implied standard deviation (ISD) of an option, which
of the following makes our task MOST DIFFICULT?
LO 11.1 According to Linda Allen, actual asset returns tend to differ from a normal distribution in
each of the following ways EXCEPT FOR:
LO 11.2 Each of the following are plausible explanations for the existence of fat-tails EXCEPT FOR:
LO 11.3 Which is a critical practical implication in the estimation of volatility when the true volatility is
time-varying?
LO 11.4 Drawbacks to the GARCH(1,1) approach to volatility estimation include each of the following
EXCEPT FOR:
LO 11.5 Which volatility estimate approach assigns greater weight to more recent observations? I.
Moving average, II. EWMA, III. GARCH(1,1)
LO 11.6 The hybrid approach to nonparametric volatility estimation combines which two methods?
LO 11.7 There are three approaches to aggregating returns in order to estimate portfolio VAR
(historical simulation, VarCovar, and hybrid). Which does Allen say is "gaining popularity" and
what key idea makes this approach viable?
LO 11.8 Each of the following is a disadvantage (or challenge) to adopting an IMPLIED VOLATILITY
approach, EXCEPT FOR:
LO 11.9 How does a forecast model compare to true volatility when the forecast applies the square-
root rule to forecast VaR and there is mean reversion, respectively, (i) in returns and
(ii) in return volatility?
LO 11.10 When estimating correlations, one problem is nonsynchronous data that arises from markets
closing in different time zones. Each of the following are possible solutions to the problem of
nonsynchronous data EXCEPT FOR:
LO 1.1 If two outcomes are mutually exclusive (outcome A, outcome B), which must be true?
LO 1.2 If A and B are two events, what is equal to P (A or B)?
LO 1.3 When rolling two six-sided dice, the outcome of the first die is a three. What is the
conditional probability that a total of both dice will equal five?
Correlation not necessarily mean reverting
Lack of market price
Sample mean bias
Noise could infect out-of-sample forecasts
MA and EWMA
Hybrid by way of central limit theorem
Forecast overstates true volatility in both cases
Sample both market open and close quotes
P(A or B) = 1
P(A) + P(B) - P(A and B)
1 in 3
Sample too small; larger will converge to normal
Almost impossible to parameterize persistence
Volatility smile or smirk
Estimate lags (trails) actual volatility
Unstable if persistence greater than one (>1)
I and III
EWMA and GARCH(1,1)
Options on same asset trade differently
(i) Forecast understates and (ii) overstates
Model nonnormal, dependent covariances
P(A and B) = .5
P(A) + P(B) + P(A and B)
1 in 4
Mean is time-varying
II and III
Accuracy requires many lagged factors (long time series)
Bimodal distributions not common
II and III
MA and GARCH(1,1)
Forecast understates true volatility in both cases
True CoVar is observed CoVar plus lagged sample
P(A and B) = 0
P(A) + P(B) - P(A or B)
1 in 5
D
volatility is time-varying B
2.16% 2.40% C
I, II, and III B
Number of parameters increases exponentially D
Options on same asset trade differently A
Non-normal distribution A
Infrequent regimes difficult to parameterize B
Inferior to variations with more parameters D
I, II and III C
Historical Simulation and EWMA D
Stochastic volatility C
(i) Forecast overstates and (ii) it depends D
Inflate covariance to account for time overlap B
P(A) + P(B) - P(A or B) A
1 in 6 D
Although the sample may be too small, the best answers given by the readings are A, C, and D.
Jorion suggests that fat tails may be explained by a non-normal distribution or a changing
distribution (which could be either the mean or the volatility). Allen suggests either a time-varying
mean or time-varying volatility could be the culprit, but Allen says time-varying volatility is the
more likely explanation.
Square each return (under MA the order does not matter) to produce this series: 0.0001, 0.0004,
and 0.0009. These squared returns approximate variances. Take the average, which equals
about 0.000467. That's the variance estimate; the volatility is its square root (about 2.16%).
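The MA(3) calculation above can be sketched in a few lines (a minimal sketch; returns are
assumed to be in decimal form):

```python
# Jorion's MA(3) volatility estimate: average the squared returns over
# the window, then take the square root of that variance estimate.
returns = [0.01, 0.02, 0.03]  # the three prior daily returns

squared = [r ** 2 for r in returns]      # 0.0001, 0.0004, 0.0009
variance = sum(squared) / len(squared)   # ~0.000467
volatility = variance ** 0.5             # ~0.0216, i.e., about 2.16%

print(f"variance = {variance:.6f}, volatility = {volatility:.4%}")
```

Because the MA is unweighted, shuffling the order of the three returns leaves the estimate
unchanged.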
GARCH(1,1) incorporates reversion to the mean but EWMA does not. Both models, unlike the
moving average, assign greater weight to more recent observations. Both have persistence.
Alpha and beta are here the weights assigned, respectively, to the lagged squared return and the
lagged variance. The sum of alpha + beta is persistence. If alpha + beta = 1, then the series reduces
to EWMA (or integrated GARCH). If alpha + beta > 1, GARCH is unstable. The second series is
unstable. But the first series has persistence of 0.8 and is mean-reverting.
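The persistence test above can be sketched as follows (the omega value is a hypothetical
illustrative input, not from the question):

```python
# Mean-reversion check for a GARCH(1,1) series: persistence = alpha + beta.
# If persistence < 1 the series is mean-reverting, with long-run variance
# omega / (1 - alpha - beta); if persistence >= 1 there is no finite
# long-run variance (persistence > 1 is unstable).
def persistence(alpha, beta):
    return alpha + beta

def long_run_variance(omega, alpha, beta):
    p = persistence(alpha, beta)
    if p >= 1:
        raise ValueError("no finite long-run variance: persistence >= 1")
    return omega / (1 - p)

series_1 = persistence(0.1, 0.7)  # 0.8  -> mean-reverting
series_2 = persistence(0.2, 0.9)  # 1.1  -> unstable
print(series_1, series_2)
print(long_run_variance(0.00002, 0.1, 0.7))  # 0.0001 with the assumed omega
```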
The most recent weight is (1 - lambda) = 6%. Moving back through the series, each weight is a
constant proportion (here, 94%) of the weight one day more recent. So the
n-2 weight = (6%)(94%) = 5.64%.
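The declining weight series can be sketched as:

```python
# RiskMetrics (EWMA) weights: the weight on the squared return from k
# days ago is (1 - lambda) * lambda**(k - 1), so each weight is lambda
# times (here, 94% of) the weight one day more recent.
LAMBDA = 0.94

def ewma_weight(k):
    """Weight on the squared return from k days ago (k=1 is yesterday)."""
    return (1 - LAMBDA) * LAMBDA ** (k - 1)

print(ewma_weight(1))  # ~0.06   (yesterday, day n-1)
print(ewma_weight(2))  # ~0.0564 (day n-2, as computed above)
```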
This is another example of the curse of dimensionality. As discussed in Jorion's Appendix 9.A, where
the number of series is (N), the number of parameters = N(N+1)/2 + 2[N(N+1)/2]^2. For two series,
that's 21 params and for three, it's already 78.
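The parameter count from the cited formula can be checked directly:

```python
# Curse of dimensionality in multivariate GARCH: parameter count as a
# function of the number of series N, per the formula cited above:
# N(N+1)/2 + 2 * [N(N+1)/2]**2.
def mgarch_param_count(n):
    pairs = n * (n + 1) // 2         # distinct variance/covariance terms
    return pairs + 2 * pairs ** 2

print(mgarch_param_count(2))  # 21
print(mgarch_param_count(3))  # 78
```

The quadratic term is what makes the count explode as series are added.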
We need a market price to solve for the implied volatility. The volatility smile/smirk and the fact that
options on the same underlying trade differently are related, and they are drawbacks, so they are
not bad answers. Model risk (i.e., this approach is model-dependent) is also a shortcoming.
Actual returns tend to be fat-tailed (leptokurtosis), skewed (asymmetrical), and unstable
(parameters vary over time). Period returns do not tend to be lognormal; rather, cumulative price
levels do tend to be lognormal.
Aside from the very likely possibility that returns are not normal, the important rationales include
the possibility that either the mean or the volatility is not constant; i.e., the conditional mean or
volatility could be time-varying. (Note: Allen says time-varying volatility is the more likely, while
Jorion says "both explanations carry some truth.") Finally, please note that REGIME-SWITCHING
could also explain fat tails.
The shift to a new regime can be abrupt. On the shift, the volatility may move into a high-volatility
regime, but the model will have to wait for the "trailing observations." The model may suffer a lag
or delay at the critical moment: when volatility enters a high-volatility regime.
Linda Allen says noise can harm GARCH's out-of-sample forecasts even more than EWMA's. Hull
reminds us that persistence greater than one renders GARCH(1,1) unstable. Jorion criticizes the
nonlinearity of GARCH. However, Jorion says more complex versions (more parameters) have
generally not improved on GARCH(1,1). (D) is not true.
The moving average is effectively UNWEIGHTED; it is indifferent to the order of the observations.
The "exponential" in EWMA refers to weights that decline in constant proportion. GARCH(1,1) is a
general case of EWMA.
The hybrid approach combines historical simulation and exponentially declining weights (i.e.,
EWMA-style weighting).
The CLT says that specific assets (portfolio components) can be non-normal, yet their average or
summation converges toward normality. This fact allows the aggregated portfolio (i.e., simulated
returns) to be characterized by normal parameters; i.e., stocks or assets may individually be non-
normal, but the portfolio can be assumed to tend toward normal.
Implied volatility is the only truly forward-looking (market consensus) approach. In addition to the
cited weaknesses, implied volatility requires a market price as an input.
If returns are mean-reverting, the forecast will overstate. If return volatility is mean reverting, it
depends on the relationship between current volatility and long-run volatility: if current > long run
volatility, then it overstates. If current < long run volatility, then it understates.
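The (ii) logic can be illustrated with a GARCH-style forecast that decays toward the long-run
variance (a sketch; all parameter values here are hypothetical):

```python
# Why the square-root rule misleads when volatility mean-reverts: the
# k-day-ahead variance forecast decays toward the long-run variance V_L,
# so scaling today's variance by the horizon (the square-root rule)
# overstates when current > long-run and understates when current < long-run.
def horizon_variance(current_var, long_run_var, persistence, horizon):
    """Sum of k-step-ahead variance forecasts over the horizon."""
    return sum(
        long_run_var + persistence ** k * (current_var - long_run_var)
        for k in range(1, horizon + 1)
    )

V_L, p, h = 0.0001, 0.8, 10

high = horizon_variance(0.0004, V_L, p, h)   # current var above long-run
sqrt_rule_high = 0.0004 * h                  # square-root-rule scaling
print(high < sqrt_rule_high)                 # sqrt rule overstates

low = horizon_variance(0.000025, V_L, p, h)  # current var below long-run
sqrt_rule_low = 0.000025 * h
print(low > sqrt_rule_low)                   # sqrt rule understates
```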
Sampling market opens and closes, says Allen, is costly and incomplete. The other two alternatives
are preferred. One, assume the true covariance is a combination of the observed contemporaneous
covariance and a lagged covariance (e.g., today's US change and tomorrow's change in Japan).
Two, inflate the covariance to account for partial time overlap.
If the outcomes are mutually exclusive, their intersection is the null set; i.e., one or the other but
not both simultaneously. Also, P(A|B) = 0: the probability of A given that B occurred is zero.
If you add them both together, their overlap is double-counted. Therefore, the overlap (the
intersection) must be subtracted from P(A) + P(B).
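The inclusion-exclusion identity can be verified with explicit sets (illustrative events over a single
die roll, not from the question):

```python
# Inclusion-exclusion sanity check: counting the union directly matches
# P(A) + P(B) - P(A and B).
omega = set(range(1, 7))                # sample space: one six-sided die
A = {x for x in omega if x % 2 == 0}    # even: {2, 4, 6}
B = {x for x in omega if x <= 3}        # {1, 2, 3}

def prob(event):
    return len(event) / len(omega)

lhs = prob(A | B)                       # P(A or B) counted directly
rhs = prob(A) + prob(B) - prob(A & B)   # inclusion-exclusion
print(lhs, rhs)                         # both 5/6
```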
It is one in six. It's a long way to go to match the intuition, but P(B|A) = P(A and B) divided by
P(A). In this case, P(A and B) = 1/36 divided by P(A) = 1/6, which equals 1/6.
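Brute-force enumeration of all 36 equally likely rolls confirms the same answer:

```python
# Conditional probability by enumeration: among rolls where the first
# die shows 3, count how often the total equals 5.
from itertools import product

rolls = list(product(range(1, 7), repeat=2))            # all 36 outcomes
first_is_three = [r for r in rolls if r[0] == 3]        # condition on A
total_is_five = [r for r in first_is_three if sum(r) == 5]

p = len(total_is_five) / len(first_is_three)
print(p)  # 1/6, i.e., 0.1666...
```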
Moving average MA(3) worked example:
Day              1         2         3
Return           1%        2%        3%
Squared return   0.0001    0.0004    0.0009
Average (variance) = 0.000467; volatility = 2.160%