Model Choice and Value-at-Risk Performance

Chris Brooks; Gita Persand

5,511 words

1 September 2002

Financial Analysts Journal

Volume 58, Issue 5, p. 87; ISSN: 0015-198X

English

Copyright (c) 2002 ProQuest Information and Learning. All rights reserved. Copyright Association for Investment Management and Research Sep/Oct 2002

Broad agreement exists in both the investment banking and regulatory communities that the use of internal risk management models can provide an efficient means for calculating capital risk requirements. The determination of the model parameters necessary for estimating and evaluating the capital adequacies laid down by the Basle Committee on Banking Supervision, however, has received little academic scrutiny.

We extended recent research in this area by evaluating the statistical framework proposed by the Basle Committee and by comparing several alternative ways to estimate capital adequacy. The study we report also investigated a number of issues concerning statistical modeling in the context of determining market-based capital risk requirements. We highlight in this article several potentially serious pitfalls in commonly applied methodologies.

Using data for 1 January 1980 through 25 March 1999, we calculated value at risk (VAR) for six assets: three for the United Kingdom and three for the United States. The U.K. series consisted of the FTSE All Share Total Return Index, the FTA British Government Bond Index (for bonds of more than 15 years), and the Reuters Commodities Price Index; the U.S. series consisted of the S&P 500 Index, the 90-day T-bill, and a U.S. government bond index (for 10-year bonds). We also constructed two equally weighted portfolios containing these three assets for the United Kingdom and the United States.

We used both parametric (equally weighted, exponentially weighted, and generalized autoregressive conditional heteroscedasticity) models and nonparametric models to measure VAR, and we applied a method based on the generalized Pareto distribution, which allowed for the fat-tailed nature of the return distributions. Following the Basle Committee rules, we determined the adequacy of the VAR models by using backtests (i.e., out-of-sample tests), which counted the number of days during the past trading year that the capital charge was insufficient to cover daily trading losses.

We found that, although the VAR estimates from the various models appear quite similar, the models produce substantially different results for the numbers of days on which the realized losses exceeded minimum capital risk requirements. We also found that the effect on the performance of the models of using longer runs of data (rather than the single trading year required by the Basle Committee) depends on the model and asset series under consideration. We discovered that a method based on quantile estimation performed considerably better in many instances than simple parametric approaches based on the normal distribution or a more complex parametric approach based on the generalized Pareto distribution. We show that the use of critical values from a normal distribution in conjunction with a parametric approach when the actual data are fat tailed can lead to a substantially less accurate VAR estimate (specifically, a systematic understatement of VAR) than the use of a simple nonparametric approach.

Finally, the closer quantiles are to the mean of the distribution, the more accurately they can be estimated. Therefore, if a regulator has the desirable objective of ensuring that virtually all probable losses are covered, using a smaller nominal coverage probability (say, 95 percent instead of 99 percent), combined with a larger multiplier, is preferable. Our results thus have important implications for risk managers and market regulators.

Keywords: Risk Measurement and Management: firm/enterprise risk

Broad agreement exists among both the investment banking and regulatory communities that the use of internal risk management models is an efficient means for calculating capital risk requirements. The determination of the model parameters laid down by the Basle Committee on Banking Supervision as necessary for estimating and evaluating capital adequacy, however, has received little academic scrutiny. We investigate a number of issues of statistical modeling in the context of determining market-based capital risk requirements. We highlight several potentially serious pitfalls in commonly applied methodologies and conclude that simple methods for calculating value at risk often provide performance superior to that of complex procedures. Our results thus have important implications for risk managers and market regulators.

A broad consensus exists in the investment banking and regulatory communities that regulation of the amount of capital banks and securities firms hold to cover the risk inherent in their trading positions is essential. The importance of sound risk measurement and management practices has been underlined by various high-profile derivatives disasters (see Jorion 1995). Far less agreement currently exists among firms and various national and international regulatory bodies, however, about how the amount of capital required to cover trading losses should be calculated. Under a revised version of the European Capital Adequacy Directive, known as CAD II, the use of internal risk management models has recently been permitted.1 Given this recent change and the fact that value-at-risk (VAR) models have been widely adopted, a thorough evaluation of the efficacy of the regulatory framework is essential.2

We describe and extend recent research in this area by evaluating the statistical framework that has been proposed by the Basle Committee on Banking Supervision and compare it with several alternative approaches. The work we describe here contributes to the debate about estimating and evaluating risk measurement models in four ways.

First, under the Central Limit Theorem, all else being equal, each data point represents a piece of information; therefore, more accurate parameter estimates and, consequently, more precise forecasts can be obtained by using a long run of data rather than a short one. Set against this theorem is the generally accepted principle that financial markets are dynamic, rapidly changing entities, so old vintages of data may provide no useful information for determining future market risks and returns.3 Thus, two important questions are (1) whether one year of daily data is sufficient to estimate the required parameters and (2) whether the regulators have determined the trade-off appropriately.4

The second issue concerns the coverage rate that should be required by the minimum capital risk requirements (MCRRs). To ensure adequate coverage, the Basle Committee chose to focus on the first percentile of the return distributions; so, banks are required to hold sufficient capital to absorb all but 1 percent of expected losses, rather than the 5 percent level used in the J.P. Morgan approach (1996). But given a limited amount of data, the farther out in the tails the cutoff is set, the fewer observations are available to estimate the required parameters and the larger the standard error is around that estimate. Kupiec (1995), for example, showed that for a normal distribution, the standard error of the first percentile will be about 50 percent larger than that of the fifth percentile. For the fat-tailed distributions of market returns (whose existence is an almost universally accepted fact), the problem will be compounded, with standard errors for the first percentile being at least double those of the fifth. The result will be confidence intervals around the first percentile of approximately four times the width of those around the fifth. Thus, we sought to determine whether a more "accurate" VAR estimate can be derived by using the fifth rather than the first percentile of the return distribution together with an appropriately enlarged scaling factor.
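
To see the intuition in the parametric, normal, volatility-based case, note that the VAR is a critical value times an estimated volatility, so the standard error of the VAR estimate scales roughly with that critical value. The short Python sketch below is purely illustrative; it is not Kupiec's derivation, and the use of scipy here is an assumption of the example.

```python
from scipy.stats import norm

z_01 = norm.ppf(0.99)  # critical value for the 1st percentile (99% coverage), ~2.33
z_05 = norm.ppf(0.95)  # critical value for the 5th percentile (95% coverage), ~1.64

# Under this crude approximation, the standard error of the 1st-percentile VAR
# exceeds that of the 5th-percentile VAR by roughly this ratio (about 1.4).
print(z_01 / z_05)
```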

The third issue we evaluate is whether a simple nonparametric approach is preferable to one based on the normal or extreme value distribution.

Finally, given our analysis, we draw implications for regulatory panels.

Data and Methodology

We calculated the VARs for six assets-three for the United Kingdom and three for the United States. The U.K. series consisted of the FTSE (Financial Times Stock Exchange) All Share Total Return Index, the FTA British Government Bond Index (bonds of more than 15 years), and the Reuters Commodities Price Index; the U.S. series consisted of the S&P 500 Index, a U.S. government bond index (bonds of 10 years), and the 90-day T-bill. We also constructed two equally weighted portfolios containing these assets for the United Kingdom and the United States.6,7

The data were collected from Primark Datastream and span the period 1 January 1980 through 25 March 1999, providing a total of 4,865 observations, or trading days, in the sample. We used the daily returns of the original indexes. Summary statistics for the data are given in Table 1. As one might anticipate, the series are all strongly nonnormal. All are leptokurtic; moreover, the FTSE, commodities series, and S&P 500 series are also significantly skewed to the left and the U.S. bond and T-bill series are significantly skewed to the right.

Risk Measurement. A number of methods exist for determining an institution's VAR. The most popular methods can be usefully classified as either parametric or nonparametric. In the parametric category is the "volatilities and correlations approach" popularized by J.P. Morgan; in this method, a volatility parameter is estimated and, conditional on an assumption of normality, the volatility estimate is multiplied by the appropriate critical value from the normal distribution and by the value of the asset or portfolio to obtain an estimate of the VAR. To estimate VARs, we used four parametric models-an equally weighted moving average, an exponentially weighted moving average, a GARCH (1,1) [that is, a generalized autoregressive conditionally heteroscedastic model of order (1,1)], and an extreme-value model based on the generalized Pareto distribution.8
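
As a rough illustration of the volatility-based calculation, the following Python sketch computes a one-day parametric VAR from either an equally weighted or an exponentially weighted volatility estimate. The function name, the decay factor of 0.94, and the simulated returns are assumptions of the example, not details taken from the article.

```python
import numpy as np
from scipy.stats import norm

def parametric_var(returns, position_value, coverage=0.99, lam=None):
    """One-day VAR as a normal critical value times a volatility estimate.

    If lam is None, an equally weighted volatility is used; otherwise an
    exponentially weighted estimate with decay factor lam (e.g., 0.94).
    """
    r = np.asarray(returns, dtype=float)
    if lam is None:
        sigma = r.std(ddof=1)                       # equally weighted moving average
    else:
        w = lam ** np.arange(len(r))[::-1]          # heavier weight on recent returns
        w /= w.sum()
        mu = np.sum(w * r)
        sigma = np.sqrt(np.sum(w * (r - mu) ** 2))  # exponentially weighted volatility
    z = norm.ppf(coverage)                          # 2.33 for 99%, 1.64 for 95%
    return z * sigma * position_value

# Example with simulated daily returns (illustration only).
rng = np.random.default_rng(0)
rets = rng.normal(0.0, 0.01, 250)                   # one "trading year" of data
print(parametric_var(rets, 1_000_000, 0.99))        # equally weighted
print(parametric_var(rets, 1_000_000, 0.99, lam=0.94))  # exponentially weighted
```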

In the nonparametric category, the simplest method for VAR calculation-a method somewhat misleadingly termed "historical simulation"-is to collect a sample of the historical returns on the assets that make up the book of the firm and calculate the value of the portfolio returns under the assumption that the current portfolio was held for the entire historical period. These portfolio returns are then ranked, and the fifth or first percentile of the empirical distribution is taken; this number, multiplied by the value of the portfolio, is used as the VAR at the required confidence level. The nonparametric approach we used involved the empirical density of the actual sample of returns.9
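
A minimal sketch of the empirical-density calculation is given below; the percentile interpolation rule and the simulated fat-tailed returns are assumptions of the illustration.

```python
import numpy as np

def historical_var(portfolio_returns, position_value, coverage=0.99):
    """Nonparametric (historical-simulation) one-day VAR: the relevant
    percentile of the empirical return distribution scaled by the position."""
    r = np.asarray(portfolio_returns, dtype=float)
    cutoff = np.percentile(r, 100 * (1 - coverage))  # e.g., the 1st percentile
    return -cutoff * position_value                  # report losses as positive

# Example: 99 percent VAR on a 1,000,000 position from one year of daily returns.
rng = np.random.default_rng(1)
rets = rng.standard_t(df=4, size=250) * 0.007        # fat-tailed illustration
print(historical_var(rets, 1_000_000, 0.99))
```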

Following the Basle Committee requirements, we calculated one-day VARs from the volatility estimates of these models based on one year of daily data, updated quarterly, and estimated at the 99 percent confidence level. In addition, to gauge whether these parameter choices are preferable, we also estimated the VAR by using the same models with three years of data rather than one and with a 95 percent confidence level.

We estimated all combinations of models, sample lengths, and confidence levels on a rolling-window basis, with the window length set to either one or three years and the sample rolled forward every 60 days, as recommended by the Basle Committee. The result was a total of 68 windows, each of which generated a set of daily VAR estimates.
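
A rolling-window backtest of this kind might be organized along the following lines. This is a simplified skeleton, not the authors' exact procedure; in particular, the way each window's VAR is applied out of sample here is an assumption of the sketch.

```python
import numpy as np

def rolling_backtest(returns, var_fn, window_len=250, step=60, eval_len=250):
    """Estimate a VAR from each window of past returns, count how often realized
    losses over the following eval_len days exceed it, and roll forward."""
    r = np.asarray(returns, dtype=float)
    exceedances = []
    start = 0
    while start + window_len + eval_len <= len(r):
        estimation = r[start:start + window_len]
        evaluation = r[start + window_len:start + window_len + eval_len]
        var = var_fn(estimation)                     # VAR expressed as a positive loss
        exceedances.append(int(np.sum(-evaluation > var)))
        start += step
    return exceedances                               # one exceedance count per window

# With 99 percent nominal coverage, about 2.5 exceedances per 250-day period are expected.
rng = np.random.default_rng(2)
rets = rng.normal(0.0, 0.01, 4865)
counts = rolling_backtest(rets, lambda x: -np.percentile(x, 1))
print(np.mean(counts))
```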

Results

We address first the results of the VAR estimation and evaluation; then, we turn to our findings on whether the empirical density method or volatility estimation method is preferable; finally, we analyze our backtest findings.

VAR Estimation and Evaluation. The values for the daily MCRRs estimated by using the empirical distribution, the three normal-based parametric models, and the generalized Pareto model for the one- and three-year sample periods for the three U.K. and three U.S. assets and for equally weighted portfolios of U.K. and U.S. assets are in Table 2. The statistics in this table are the averages and standard deviations of the VARs over the 68 estimation windows, each of which was constructed from samples of 250 observations (for the one-year period) or 750 observations (for the three-year period) rolled forward 60 observations at a time.

A number of features are worth noting. First, in the case of the U.K. assets shown in Panel A, of the parametric models, EWMA and GARCH produced slightly lower VARs than the equally weighted moving average for both data period lengths for all of the individual assets (except commodities) and the U.K. portfolio. Panel B indicates that EWMA also produced the lowest VARs for the U.S. assets for the one-year period most of the time.

Second, as expected, of the parametric models, the equally weighted (EQMA) method almost always produced the most stable VAR estimates, as shown by the small standard deviations. For example, for 99 percent coverage of the equally weighted U.K. portfolio with one year of data, the standard deviation of the mean VAR estimated by using the equally weighted averages is 0.16 percent, whereas under the EWMA method, it is 0.25 percent, and under the GARCH method, it is 0.20 percent.

Third, the use of three trading years of data instead of one left the average MCRRs virtually unchanged but lowered variability. The absolute changes in the VAR are small-typically on the order of 0.03 of a percentage point.

Fourth, the ED (empirical distribution or historical simulation) approach generally produced larger (and sometimes more stable) VAR estimates than the parametric approaches based on the normal distribution.

In addition, because the methodologies based on an assumption of Gaussian return distributions are likely to systematically underestimate VAR if actual return distributions are fat tailed, the generalized Pareto distribution, which explicitly allows for fat tails, produced in almost all cases, as one would expect, considerably higher VAR estimates than any of the other approaches. For example, Table 2 shows that the average daily VAR estimate based on one year of data for the U.K. portfolio is less than 1 percent of the initial value of the position for all models except the generalized Pareto, which produced an average 20 percent higher than the average given by the other models. The differences between the Pareto VAR estimates and those of the other models are greater when only one year of data was used than when the estimation period was three years. Such a result may arise from the difficulty of estimating the parameters of the distribution from small data series.
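
For readers unfamiliar with extreme-value methods, the sketch below shows a standard peaks-over-threshold construction of a generalized Pareto VAR. The threshold choice (the 90th percentile of losses), the maximum-likelihood fit via scipy, and the simulated returns are assumptions of the illustration; the article does not report its exact estimation details.

```python
import numpy as np
from scipy.stats import genpareto

def gpd_var(returns, position_value, coverage=0.99, threshold_quantile=0.90):
    """VAR from a generalized Pareto fit to losses beyond a high threshold
    (the usual peaks-over-threshold formula, assuming a nonzero shape)."""
    losses = -np.asarray(returns, dtype=float)        # positive numbers are losses
    u = np.quantile(losses, threshold_quantile)       # high threshold
    excesses = losses[losses > u] - u
    xi, _, beta = genpareto.fit(excesses, floc=0)     # tail shape and scale
    p = 1 - coverage                                  # tail probability, e.g., 0.01
    n, n_u = len(losses), len(excesses)
    var = u + (beta / xi) * ((n / n_u * p) ** (-xi) - 1.0)
    return var * position_value

rng = np.random.default_rng(3)
rets = rng.standard_t(df=4, size=750) * 0.007          # three "years" of fat-tailed returns
print(gpd_var(rets, 1_000_000, 0.99))
```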

The values of the MCRRs alone say little, however, aside from which model would be cheapest for the firm to use and which model would generate the largest capital charge. To test the adequacy of the VARs, we carried out the backtests to determine whether the capital risk requirements given by the models are, given the nominal coverage probability of 99 percent, sufficient (or excessive) in terms of actual coverage rates.11 The results of the backtests are presented in Table 3.

We consider a good model to be one that puts the firm in the yellow or red zones (see Appendix A for a description of zones) for as few of the 68 windows as possible but that has a proportion of exceedances close to the number expected under the nominal probability (2.5 per 250-day interval). Given this definition of "good," the results have a number of interesting and relevant features. First, the clear winners among the five calculation methods and for both countries are the simplest ones, those in which the first percentile of the distribution of the actual data, rather than the volatilities, was used, and also the generalized Pareto model. For example, for the U.K. commodities series, at no time would a firm have been outside the green or yellow zones if it had assumed 99 percent coverage, used either the previous trading year or the previous three years of data to estimate the quantile, and used the empirical density, the equally weighted volatility measures, or the Pareto distribution. If it had used the exponential weighting scheme, however, it would have been in the red zone 17 and 14 times out of 68 for, respectively, one and three years of data. The exponentially weighted model (EWMA) always generated the largest number of exceedances, which implies that this model does not do a good job of setting the right VAR at the right time. The GARCH model produced results somewhere between those of the ED and EWMA models in terms of both the size of the VARs and the number of exceedances. A similar, although less extreme, pattern can be observed in Panel B for the U.S. assets. Of the parametric models based on the normal distribution, the simple unweighted moving average (EQMA) model generated the lowest number of exceedances. This outcome is to be expected because when estimated volatility is low, VAR is probably understated; hence, VAR will be exceeded more often. The loss function is one-sided because it is measured in terms of exceedances; therefore, the model with the more stable VAR will, everything else being equal, generate fewer exceedances.

In terms of the issue of optimal sample length to use in the calculation of the MCRRs, Table 3 indicates that if no weighting is given to the observations (i.e., if the ED or EQMA model is used), shorter runs of data are always preferable. For example, the securities firm or bank in our sample that used the empirical density with 99 percent coverage of the FTSE returns would have been in the red zone only once if it had estimated the VAR by using one year of data; if it had used three years and the same model, then (all else being equal), it would have been in the red zone three times. Conversely, for the EWMA and GARCH models, which are conditional models that give higher weightings to more recent observations, the three-year run of data yielded fewer exceedances in most cases than did the one-year data.

The generalized Pareto model led to the fewest excursions into the yellow or red zones, as was expected in light of the substantially higher VAR estimates shown for this method in Table 2. When estimated from only one year of data, the generalized Pareto model appears to overestimate the VAR, leading to actual exceedances that are considerably lower than those based on any other approach. For example, in Table 3, the average number of exceedances per 250-day rolling out-of-sample period for the U.S. bond returns is 1.4, compared with an expected number of 2.5. Given that a regulatory scaling factor has not been applied, this low average number of exceedances appears to represent an excessive VAR. The problem probably arose from the small number of observations that actually entered into parameter estimation for the generalized Pareto model for a one-year sample, which led to the coefficient estimates being considerably affected by a small number of outliers in the data. The VARs are lower (and arguably, with the greater number of exceedances, more appropriate) for the same model estimated from three years of daily data.

Empirical Density versus Volatility Estimation. A potentially worrying feature of the results in Table 3 is the sheer number of times a firm or bank would be in the danger zone if it had used any of the parametric models based on the normal distribution-and particularly the method based on exponential weighting.12 Table 3 suggests that the use of actual returns, rather than volatility estimation with the normality assumption, leads to more accurate and more reliable VAR estimates. This result seems at odds with the analysis by Jorion, who argued in his important 1996 paper that parametric models should be preferable. The reason is that by using information about the whole distribution of returns directly in calculating the volatility estimate, parametric methods should be more efficient. Their VARs, therefore, should vary less from one sample to the next than VARs estimated from the first percentile of the actual return distribution. Jorion demonstrated that, indeed, for a sample of data generated normally or using a Student's t-distribution, the standard errors around the VAR estimates are much smaller for the parametric VARs. Our results suggest, however, that the use of an inappropriate parametric distribution can lead to a considerably worse outcome than use of the empirical density method.
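
The following small simulation conveys the spirit of Jorion's efficiency argument; it is an illustration under assumed normal data, not his analysis. When returns really are normally distributed, the volatility-based VAR varies less across repeated samples than the empirical first percentile does.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
z = norm.ppf(0.99)
parametric, empirical = [], []
for _ in range(2000):
    sample = rng.normal(0.0, 0.01, 250)               # one year of normal daily returns
    parametric.append(z * sample.std(ddof=1))         # volatility-based (parametric) VAR
    empirical.append(-np.percentile(sample, 1))       # empirical-quantile VAR

# The parametric estimates are noticeably less variable across samples.
print(np.std(parametric), np.std(empirical))
```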

Tail Probability Estimation. Can a firm provide better protection for itself than is provided by the current regulations and approaches? A repeat of all of the preceding analysis but with a 95 percent nominal coverage probability used instead of 99 percent showed that a 95 percent probability is completely insufficient to protect a firm against most unforeseeable losses. In the out-of-sample tests, the estimated VAR would have been exceeded far too often. Thus, on the face of it, the regulatory focus on 99 percent appears to be sensible. However, as noted, in a limited amount of data, the further out in the tails one wishes to set the cutoff to estimate the VAR, the more difficult it is to get an accurate estimate of the quantile. This dilemma is sometimes known as "the Star Trek problem"-going where no one has gone before. Of course, if one assumes a normal distribution for the returns, the 1 percent and 5 percent tails will be estimated with equal (in)accuracy. Hull and White (1998) tackled the fat-tails problem by offering a model for VAR that allows the analyst to define a class of probability distributions that can be transformed into a multivariate normal distribution.

Because we found in this study that the empirical density measure was accurate-generating relatively small numbers of violations with capital requirements of acceptable size-we considered an alternative approach for dealing with fat tails. In this alternative, a 5 percent VAR measure was used but the multiplier was increased. We increased the multiplier from 3 by a further scaling factor of 1.4, so the overall multiplier was 4.2. We chose the factor of 1.4 because it is approximately the factor that takes the fifth percentile of a normal distribution to the first percentile.
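
A sketch of this modified rule follows; the function and variable names are illustrative, and the simulated fat-tailed returns are an assumption of the example.

```python
import numpy as np
from scipy.stats import norm

# For a normal distribution, the first percentile is roughly 1.4 times the fifth.
scale = norm.ppf(0.99) / norm.ppf(0.95)               # ~1.41

def scaled_var(returns, position_value):
    """Estimate the better-populated 5 percent empirical VAR and scale it up."""
    var_5pct = -np.percentile(np.asarray(returns, dtype=float), 5)
    return scale * var_5pct * position_value          # the "5 percent x 1.4" charge

rng = np.random.default_rng(5)
rets = rng.standard_t(df=4, size=250) * 0.007
print(scaled_var(rets, 1_000_000))
```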

The results from repeating the analyses but using a 5 percent VAR multiplied by 1.4 are given in Table 4.14 Table 4 shows the revised MCRRs for the modified ED method, which in the U.K. case (Panel A) increased only slightly relative to the average 1 percent VAR for the quantile estimation method given in Table 2. In the case of the FTSE series, the average VAR was unchanged, but in all cases, the standard deviation of the VARs across the 68 windows shrank considerably, indicating that the estimated VARs are more stable over time with the increased multiplier. For all of the U.S. assets and the U.S. portfolio (Panel B), the VAR actually fell when the 5 percent VAR with increased scaling factor was used. Table 4 also gives the number of times VAR was exceeded for the new "5 percent x 1.4 = 1 percent" scaled VAR in out-of-sample tests. A comparison of the exceedances in Table 4 for the ED model with the exceedances of Table 3 shows that for almost all data series, when the new VAR was used, the number of times that the firm was in the precarious red zone or the yellow zone fell from its previous number. For example, the standard 1 percent VAR estimated by using the ED method with three years of daily data on the FTSE series put the firm in the red zone three times, whereas the scaled VAR led the firm into the red zone only once.

Keep in mind that this improvement in accuracy comes essentially costlessly to the firm. The average capital requirement for the two methods is the same; only the variability changes. This point is illustrated in Figure 1, which shows the number of times VAR was exceeded for the 68 test windows for the 1 percent VAR and scaled VAR rules. The number of exceptions per window is on average smaller and far less variable for the modified rule than for the current (1 percent VAR) regulatory requirement.

Finally, comparing the modified empirical density method with the generalized Pareto distribution, we argue that the modified ED method is likely to be preferred by a firm. Although the Pareto model clearly generates the smallest percentage of exceedances, this effect stems largely from its considerably higher capital requirements. When multiplied by the regulatory scaling factor of 3, such increased VARs would be very costly to a firm.

Conclusions

We sought to examine a number of issues related to the practical implementation and evaluation of internal risk management models. Our main findings are as follows.

First, the models have substantial differences in the number of days on which the realized losses exceed the MCRRs; a method based on quantile estimation will perform considerably better in many instances than simple or complex parametric approaches.

Second, the use of long runs of data relative to the single trading year required by the Basle Committee has an indeterminate effect on the performance of the models; the effect depends on the model and the asset series under consideration.

Third, the use of critical values from a normal distribution in conjunction with a parametric approach when the actual data are fat tailed can lead to a substantially less accurate VAR estimate than use of a nonparametric approach.

Finally, the closer the quantiles are to the mean of the distribution, the more accurately they can be estimated. Therefore, if a regulator has the desirable objective of ensuring that virtually all probable losses are covered, the use of a smaller nominal coverage probability (say, 95 percent instead of 99 percent) combined with a larger multiplier is preferable.

Chris Brooks is professor of finance at the ISMA Centre, University of Reading, United Kingdom. Gita Persand is a lecturer (assistant professor) in finance at the University of Bristol, United Kingdom.

Footnotes:

1. See Basle Committee on Banking Supervision (1995, 1996) for a detailed description of the framework; a brief discussion of the regulatory environment under the Basle rules is given in Appendix A.

2. A useful summary of the issues involved in VAR estimation and an extensive literature survey can be found in Duffie and Pan (1997). See also Dimson and Marsh (1995,1997) or Brooks and Persand (2000).

3. In fact, the use of old vintages of data may actually worsen market parameter estimates and forecasting accuracy because the weight given to recent observations will be lowered by any increase in historical data.

4. We did not investigate the impact of the choice of a 10-day holding period because Jorion (1996) showed that, given the use of the "square root of time" rule for scaling up volatility estimates, an alteration of holding period will change the results only trivially.

5. The rules of the Basle Committee for calculating and applying VAR are described in Appendix A. The actual coverage rate is even greater than 99 percent (in fact, greater than 99.99 percent) because of the scaling factor of at least 3.

6. In our analysis, we assumed that we were long all the assets-both individually and in the portfolios. A similar analysis could be undertaken for short or netted positions, but we would not expect our conclusions to be markedly altered.

7. This portfolio is deliberately highly simple relative to a genuine bank's book, as well as entirely linear in nature. The use of a simple portfolio enabled us to unravel the various estimation issues and broad aspects of the methodologies. Additionally, the series that we considered are all fundamental or "benchmark" series to which other series are mapped under the J.P. Morgan approach.

8. Given the "delta normal" model that implicitly underlies many of these risk management models, the estimation issues essentially boil down to forecasting volatility. The models used here are widely used; Brooks (1998) provides a thorough discussion of alternative models for prediction of volatility.

9. An alternative method for calculating VAR is the Monte Carlo approach. It involves constructing, given an assumed data-generating process and assumed correlations between the assets, repeated sets of artificial data. We did not use this approach in our study because it is best suited to complex (nonlinear) asset payoffs, whereas all the instruments we used are linear. Moreover, if the assumed data-generating process is not fully descriptive of the actual data, the estimated VAR may be inaccurate.

10. The Basle Committee called such tests "forward tests."

11. In this study, we did not multiply by the regulatory scaling factor of 3 because doing so would mask many of the important differences between model adequacies. Of course, multiplying all of the estimated capital requirements by a factor of 3 would make all but the worst VAR models safe.

12. Boudoukh, Richardson, and Whitelaw (1995) showed that such a large number of exceedances should be expected when normality is assumed and proposed an alternative measure of risk termed "worst-case scenario" analysis.

13. This analysis is not shown because of space constraints.

14. Recall that in this study, we did not multiply the original VARs by the regulatory scaling factor of 3. Thus, our multiplier was increased from 1 (i.e., no effective multiplier) to 1.4.

References:

Basle Committee on Banking Supervision. 1988. International Convergence of Capital Measurement and Capital Standards (July).

Basle Committee on Banking Supervision. 1995. An Internal Model-Based Approach to Market Risk Capital Requirements (April).

Basle Committee on Banking Supervision. 1996. Supervisory Framework for the Use of "Backtesting" in Conjunction with the Internal Models Approach to Market Risk Capital Requirements (January).

Boudoukh, J., M. Richardson, and R. Whitelaw. 1995. "Expect the Worst." Risk, vol. 8, no. 9 (September):100-101.

Brooks, C. 1998. "Predicting Stock Index Volatility: Does Volume Help?" Journal of Forecasting, vol. 17, no. 1 (January):59-80.

Brooks, C., and G. Persand. 2000. "Value at Risk and Market Crashes." Journal of Risk, vol. 2, no. 4 (Summer):5-26.

Brooks, C., A.D. Clare, and G. Persand. 2000. "A Word of Caution on Calculating Market-Based Capital Risk Requirements." Journal of Banking and Finance, vol. 24, no. 10 (October):1557-74.

Davison, A.C., and R.L. Smith. 1990. "Models for Exceedances over High Thresholds." Journal of the Royal Statistical Society B, vol. 52, no. 3:393-442.

Dimson, E., and P. Marsh. 1995. "Capital Requirements for Securities Firms." Journal of Finance, vol. 50, no. 3 (July):821-851.

Dimson, E., and P. Marsh. 1997. "Stress Tests of Capital Requirements." Journal of Banking and Finance, vol. 21, nos. 11-12 (December):1515-46.

Duffie, D., and J. Pan. 1997. "An Overview of Value at Risk." Journal of Derivatives, vol. 4, no. 3 (Spring):7-49.

Efron, B. 1982. The Jackknife, the Bootstrap, and Other Resampling Plans. Philadelphia, PA: Society for Industrial and Applied Mathematics.

Hsieh, D.A. 1993. "Implications of Nonlinear Dynamics for Financial Risk Management." Journal of Financial and Quantitative Analysis, vol. 28, no. 1 (March):41-64.

Hull, J., and A. White. 1998. "Value at Risk When Daily Changes in Market Variables Are Not Normally Distributed." Journal of Derivatives, vol. 5, no. 3 (Spring):9-19.

Jorion, P. 1995. Big Bets Gone Bad: Derivatives and Bankruptcy in Orange County. San Diego, CA: Academic Press.

Jorion, P. 1996. "Risk²: Measuring the Risk in Value at Risk." Financial Analysts Journal, vol. 52, no. 6 (November/December):47-56.

Jorion, P. 2001. Value at Risk: The New Benchmark for Controlling Market Risk. 2nd ed. New York: McGraw-Hill.

J.P. Morgan. 1996. RiskMetrics-Technical Document. 4th ed. New York: J.P. Morgan and Reuters.

Kupiec, P. 1995. "Techniques for Verifying the Accuracy of Risk Measurement Models." Journal of Derivatives, vol. 3, no. 2 (Winter):73-84.

Appendix:

According to the Basle Committee, VAR should be calculated as the higher of (1) the firm's previous day's VAR measured according to the parameters given here or (2) an average of the daily VAR measures on each of the preceding 60 business days, both subject to a multiplication factor. VAR is to be computed on a daily basis for a minimum "holding period" of 10 days. Moreover, VAR has to be estimated at the 99 percent probability level, with daily data for a minimum length of one year (250 trading days) and with the estimates updated at least every quarter.

The multiplication factor, which has a minimum value of 3, depends on the regulator's view of the quality of the bank's risk management system and, more precisely, on the backtesting results of the models. Unsatisfactory results might incur an increase in the multiplication factor up to a maximum of 4.

The regulator performs an assessment of the soundness of the bank's procedure in the following way: Underpredictions of losses by VAR models (that is, days on which the bank's calculated VAR is insufficient to cover the actual realized losses in its trading book) are termed "exceptions." Between 0 and 4 exceptions over the previous 250 days places the bank in the green zone; between 5 and 9 places it in the yellow zone; and 10 or more exceptions place it in the red zone.
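
A direct encoding of these thresholds, as a small illustrative Python function, is as follows.

```python
def basle_zone(exceptions_in_250_days):
    """Traffic-light zone from the count of exceptions over the previous 250
    days: 0-4 green, 5-9 yellow, 10 or more red."""
    if exceptions_in_250_days <= 4:
        return "green"
    if exceptions_in_250_days <= 9:
        return "yellow"
    return "red"

print(basle_zone(3), basle_zone(7), basle_zone(12))   # green yellow red
```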
