CFA Institute Journal Review, June 2017, Volume 47, Issue 6
Value-at-Risk under Lévy GARCH Models: Evidence from Global Stock Markets (Digest Summary)
Skander Slim, Yosra Koubaa, and Ahmed BenSaïda
Journal of International Financial Markets, Institutions & Money
Summarized by Xinyao Huang
Abstract
Since the subprime mortgage crisis, the econometric models used in the financial industry have
been criticized for failing to capture risk accurately during market downturns. The authors
test the predictive power of univariate GARCH-type models, originally developed to estimate
market volatility, under various error distribution assumptions and report several findings.
How Is This Research Useful to Practitioners?
The authors examine the predictive power of 21 value-at-risk (VaR) models and find that
(1) the performance of generalized autoregressive conditional heteroskedasticity (GARCH)
models depends on the market; (2) the skewed-t and heavy-tailed Lévy
distributions significantly improve all models’ forecasting accuracy during the
crisis period; (3) long memory and conditional asymmetry characterize developed markets and
frontier markets, respectively; (4) including the crisis period in the estimation sample
significantly improves VaR predictability during the post-crisis period; and (5) the
ubiquitous normal distribution approach proves unsatisfactory in high-volatility states.
Regardless of the volatility model used, non-normal Lévy distributions clearly outperform
during the crisis period and exhibit predictive ability close to that of highly competitive
models. This observation supports the criticism of using the normal distribution to describe
return innovations while exonerating the volatility models themselves from that criticism.
Practitioners may be able to avoid more complicated approaches because GARCH-type models that
allow for skewed and leptokurtic (heavy-tailed) distributions achieve significantly improved
statistical accuracy, thus reducing operating costs and complexity.
No single model outperforms the others across markets. The non-normal Lévy-based
FIGARCH (fractionally integrated generalized autoregressive conditional heteroskedasticity)
model, which accounts for long memory, appears to be superior in developed markets. The
empirical evidence favors the standard GARCH model in emerging markets (EM). Apparent
volatility asymmetry is observed in frontier markets (FM), where both the GARCH and the
Glosten, Jagannathan, and Runkle (GJR) models, in combination with Student’s
t-distribution, perform well. The empirical evidence encourages risk
managers to consider market-dependent volatility specifications and supports the Basel II
Accord’s allowance of internal VaR models.
Model performance rankings differ between long and short trading positions. For
long-position risk, both skewed-t and Lévy distributions perform
well, the normal distribution underestimates risk across markets, and Student's
t overestimates it. For short positions, the empirical evidence favors the
normal distribution, with the performance of Lévy distributions strongly dependent on
the conditional volatility model. Model consistency between long and short risk is scarcely
achieved, especially in the case of developed markets. Hedge funds may find distinguishing
between long and short positions helpful.
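As a point of reference (a standard textbook definition under a location-scale setup, not notation taken from the paper), the two positions correspond to opposite tails of the conditional return distribution, which is why skewness and tail asymmetry can rank models differently:

\[
\mathrm{VaR}^{\mathrm{long}}_{t}(\alpha) = \mu_t + \sigma_t\,F^{-1}(\alpha),
\qquad
\mathrm{VaR}^{\mathrm{short}}_{t}(\alpha) = \mu_t + \sigma_t\,F^{-1}(1-\alpha),
\]

where \mu_t and \sigma_t are the conditional mean and volatility, F is the cumulative distribution function of the standardized innovation, and \alpha = 1\% in the study. A long position is violated by large negative returns (left tail), a short position by large positive returns (right tail).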
The reported regulatory capital allocation implied by the models quantifies the trade-off
between statistical accuracy and the cost of risk management, which may be of interest to
banks and regulators.
How Did the Authors Conduct This Research?
The authors look at three stock indexes—MSCI EM, MSCI FM, and MSCI World, which
consists of equities in developed markets. Data samples of daily stock returns cover 20
years (January 1995–March 2015) for the developed market and EM indexes and 13 years
(May 2002–March 2015) for the FM index. All return series distributions show skewness
and leptokurtosis, with FM exhibiting positive skewness in the post-crisis period and the
rest negatively skewed.
The crisis period (August 2007–May 2012) and the post-crisis period (May
2012–March 2015) are distinguished to reflect different volatility states.
The three examined volatility models are the GARCH model (short memory), the FIGARCH model
(long memory), and the GJR model (volatility asymmetry). The authors use two additional
models as benchmarks: the mixed normal GARCH model and the asymmetric power ARCH
model with non-central t innovations. They investigate seven distribution
assumptions: normal, Student’s t, skewed-t, normal
inverse Gaussian, Meixner, variance gamma, and CGMY (the last four are non-normal
Lévy distributions).
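For reference, the conditional variance equations below are the standard textbook forms of the three examined models (our notation, not reproduced from the paper), with returns written as r_t = \mu_t + \varepsilon_t and \varepsilon_t = \sigma_t z_t, where z_t follows one of the seven candidate distributions:

\[
\begin{aligned}
\text{GARCH(1,1):}\quad & \sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2,\\
\text{GJR(1,1):}\quad & \sigma_t^2 = \omega + \bigl(\alpha + \gamma\,\mathbf{1}\{\varepsilon_{t-1}<0\}\bigr)\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2,\\
\text{FIGARCH(1,}d\text{,1):}\quad & \sigma_t^2 = \omega + \beta\,\sigma_{t-1}^2 + \bigl[1 - \beta L - (1-\phi L)(1-L)^d\bigr]\varepsilon_t^2,
\end{aligned}
\]

where L is the lag operator, \gamma > 0 lets negative shocks raise volatility more than positive shocks of the same size, and the fractional differencing parameter 0 < d < 1 generates the slowly decaying (long-memory) response of volatility to past shocks.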
The authors combine each distribution assumption with the three examined models. They
compare the one-day-ahead 1% VaR with the actual return. They conduct comprehensive
backtesting regarding the frequency, independence, duration, and magnitude of violations.
They compare the performance rankings of all 21 combinations to see whether the effect of
each distribution assumption is model dependent. The authors also investigate the empirical
accuracy of each combination in estimating the VaR of global stock market indexes for both
long and short positions.
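To illustrate one piece of such a backtest (a minimal sketch of the standard Kupiec unconditional coverage test of violation frequency, not the authors' full battery of frequency, independence, duration, and magnitude tests; the function and variable names below are our own):

import numpy as np
from scipy.stats import chi2

def kupiec_uc_test(returns, var_forecasts, alpha=0.01):
    """Kupiec likelihood-ratio test of unconditional coverage for long-position VaR.

    returns       : realized daily returns
    var_forecasts : one-day-ahead VaR forecasts, expressed as (negative) return thresholds
    alpha         : nominal VaR level, e.g. 0.01 for 1% VaR
    """
    returns = np.asarray(returns, dtype=float)
    var_forecasts = np.asarray(var_forecasts, dtype=float)

    violations = returns < var_forecasts      # a "hit" when the realized loss exceeds the VaR
    n = len(returns)
    x = int(violations.sum())                 # number of violations
    pi_hat = x / n                            # observed violation frequency

    # Log-likelihood under the nominal level versus under the observed frequency
    loglik_null = (n - x) * np.log(1 - alpha) + x * np.log(alpha)
    loglik_alt = ((n - x) * np.log(1 - pi_hat) + x * np.log(pi_hat)) if 0 < x < n else 0.0
    lr_uc = -2.0 * (loglik_null - loglik_alt)

    p_value = 1.0 - chi2.cdf(lr_uc, df=1)     # small p-value: violation rate inconsistent with alpha
    return x, pi_hat, lr_uc, p_value

A well-specified 1% VaR model should produce violations on roughly 1% of days; the independence, duration, and magnitude tests used by the authors then check that those violations are neither clustered in time nor excessively large.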
Finally, the authors link their results to economic implications by calculating daily
capital requirements under the Basel II Accord.
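For context, the Basel II market-risk charge is commonly computed as the larger of the previous day's VaR and a multiple of the trailing 60-day average VaR (the general formula, not necessarily the authors' exact implementation):

\[
\mathrm{MRC}_t = \max\!\Bigl(\mathrm{VaR}_{t-1},\;(3 + k)\,\tfrac{1}{60}\sum_{i=1}^{60}\mathrm{VaR}_{t-i}\Bigr),
\]

where VaR is measured at the 99% confidence level and the add-on k \in [0, 1] is set from the number of violations recorded over the preceding 250 trading days (the "traffic light" backtesting scheme). A model that forecasts VaR more accurately can therefore lower required capital both through smaller VaR figures and through a smaller penalty add-on.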
Abstractor’s Viewpoint
VaR is a forward-looking measure that serves as an industry standard to convey information
on downside market risk. Although its importance and meaning are well understood, no method
of measurement is universally agreed on. This detailed study allows direct comparisons among
models and popular distribution assumptions with respect to their applicability as risk
management tools, and it serves as a convenient reference in this regard.
The authors gather a comprehensive set of backtesting exercises from the literature, which
forms a useful toolbox of risk model validation techniques. The individual hypotheses are
stated explicitly, which could help practitioners perform similar exercises for their own
needs.
The trade-off between regulatory costs and model accuracy incentivized by the Basel II
Accord reflects the important role that regulators play in the market. Using more-realistic
distributions to improve model performance is not the complete answer for risk assessment.
Static regulatory requirements and blind faith in model output will never be enough. Keeping
an open mind about the real-world environment and using up-to-date scenario analyses by
industry experts are more sensible approaches than pure statistical modeling. It is important
to keep communication transparent and to encourage creative ways of detecting risk.
Overall, this useful research provides detailed empirical evidence. The study is limited to
stock indexes only, and thus, the results may not be applicable to other asset classes. The
authors focus on one-day-ahead 1% VaR generated by univariate GARCH models; multivariate
GARCH models and long-term risk prediction are beyond the scope of this study.
Slim, S., Y. Koubaa, and A. BenSaïda. 2017. "Value-at-Risk under Lévy GARCH Models: Evidence from Global Stock Markets." Journal of International Financial Markets, Institutions & Money, vol. 46 (January).