Extreme Value Theory with High Frequency Financial Data




Abhinay Sawant 

Economics 201FS

Spring 2009

Duke University is a community dedicated to scholarship, leadership, and service and to the principles of honesty, fairness, respect, and accountability. Citizens of this community commit to reflect upon and uphold these principles in all academic and non-academic endeavors, and to protect and promote a culture of integrity.

To uphold the Duke Community Standard:

• I will not lie, cheat, or steal in my academic endeavors;

• I will conduct myself honorably in all my endeavors; and

• I will act if the Standard is compromised.

 

1. Introduction and Motivation

In recent years, and in the wake of recent events, financial risk management has become an issue of paramount importance for both financial and non-financial firms. One way in which firms manage their exposure to market and credit risk is through the Value at Risk (VAR) metric. For a given portfolio of assets, the N-day X-percent VAR specifies the loss amount V that the portfolio is not expected to exceed over the next N days with X-percent certainty. Value at Risk is used by many firms to understand their level of risk exposure, and by financial institutions to impose risk-related limits on individual traders. It is also used by firms and regulatory agencies to determine the amount of capital to be set aside to absorb unexpected losses. In fact, under the 1996 amendment to the Basel Accords, banks are required to hold capital in excess of three to four times the VAR of their portfolio. Hence, proper estimation of Value at Risk is necessary: the VAR must be high enough to accurately capture the level of risk exposure, but not so high that excess capital, which could be better invested elsewhere, is needlessly allocated.

Value at Risk is generally determined by modeling the distribution of the portfolio's asset returns and then determining the (100 − X)% quantile of the distribution for long positions (the left tail) and the X% quantile for short positions (the right tail). For simplicity, many risk management practitioners have assumed that portfolio returns are normally distributed. However, empirical evidence shows that asset returns have fatter tails than the normal distribution and that the left and right tails differ, unlike the symmetric normal distribution. One alternative approach that addresses both issues is Extreme Value Theory (EVT), described in further detail in Section 2, which estimates VAR based only on the data in the tails rather than by fitting the entire distribution. Empirical studies have generally shown EVT-based VAR estimates to outperform estimates based on other distributions, such as the normal, empirical quantiles, and fat-tailed Student's t distributions, especially at higher quantiles.

This paper is primarily concerned with the application of Extreme Value Theory to VAR estimation using high-frequency financial data. Andersen and Bollerslev have shown that high-frequency data carry many potential benefits, such as improved volatility measurement, since intraday data provide more information than closing prices alone. This paper aims to realize those benefits for VAR estimation with Extreme Value Theory. One immediate benefit concerns the fact that Extreme Value Theory relies on the availability of independent and identically distributed returns data; due to time-varying volatility, this assumption usually does not hold for financial data. However, if realized volatility can be estimated from high-frequency data, then the returns can be standardized by their estimated volatility, making the data closer to identically distributed for extreme value estimation.

Other potential benefits of high-frequency data include better estimation of VAR and extreme value parameters, the use of more recent data, and the possibility of intraday VAR. By using high-frequency data rather than closing prices alone, more extreme returns are likely to be observed, allowing better parameter estimation. Furthermore, high-frequency data allow larger data sets to be drawn from recent years, which would likely improve predictive VAR over a short-term horizon. Finally, a concept posited in the EVT literature is the potential of an intraday VAR: while VAR is normally calculated from closing prices and capital allocation is adjusted daily, recalculating the VAR throughout the day may allow more efficient allocation of capital. For intraday VAR to be implemented, data at a higher frequency than closing prices must be used.

2. Background

Extreme Value Theory

The basis of Extreme Value Theory rests on the Fisher-Tippett theorem. Given a sample of independent draws $X_1, X_2, \dots, X_n$ with common cumulative distribution $F(x)$, the cumulative distribution of the maximum order statistic $M_n = \max(X_1, \dots, X_n)$ can be expressed as $F^n(x)$. However, as $n$ increases to infinity this cumulative distribution degenerates: $F^n(x) \to 0$ for all $x$ with $F(x) < 1$ and $F^n(x) \to 1$ for all $x$ with $F(x) = 1$. By the Fisher-Tippett theorem, the normalized maximum $(M_n - \beta_n)/\alpha_n$ converges in distribution, as $n \to \infty$, to the Generalized Extreme Value (GEV) distribution, whose cumulative distribution is as follows:

$H_\xi(x) = \exp\!\left[-(1 + \xi x)^{-1/\xi}\right]$, $\quad 1 + \xi x > 0$, $\quad$ if ξ ≠ 0 (1)

$H_0(x) = \exp\!\left(-e^{-x}\right)$ $\quad$ if ξ = 0

Therefore, by this theorem, the distribution of the maximum can be characterized by three parameters: the shape parameter ξ, the scale parameter $\alpha_n$, and the location parameter $\beta_n$, the latter two depending on the block size $n$.

Block Maxima Estimation

There are several methods for estimating the parameters of the GEV distribution (1), but this paper focuses on the block maxima method. In the block maxima method, the original data set $X_1, \dots, X_n$ is divided into $g$ subgroups ("blocks") of block size $m$: $\{X_1, \dots, X_m\}$, $\{X_{m+1}, \dots, X_{2m}\}$, …, $\{X_{(g-1)m+1}, \dots, X_{gm}\}$. For sufficiently large $m$, the maximum of each block should follow the GEV distribution with the same parameters (as though each block were an independent time series). Therefore, if the block maxima are taken as $M_1 = \max(X_1, \dots, X_m)$, $M_2 = \max(X_{m+1}, \dots, X_{2m})$, …, $M_g = \max(X_{(g-1)m+1}, \dots, X_{gm})$, then $\{M_1, \dots, M_g\}$ should be a collection of draws from a common GEV distribution (1). Using maximum likelihood estimation, the parameters in (1) can be estimated from $\{M_1, \dots, M_g\}$.
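As an illustrative sketch (not the code used for this paper), the block maxima fit can be carried out with scipy's GEV implementation; note that scipy parameterizes the shape as c = −ξ relative to the convention in (1):

```python
import numpy as np
from scipy import stats

def fit_gev_block_maxima(returns, block_size):
    """Fit a GEV distribution to the block maxima of a return series.

    Returns (xi, alpha, beta): shape, scale, and location estimates.
    """
    n_blocks = len(returns) // block_size
    data = np.asarray(returns[: n_blocks * block_size], dtype=float)
    # one maximum per block of `block_size` consecutive observations
    maxima = data.reshape(n_blocks, block_size).max(axis=1)
    c, loc, scale = stats.genextreme.fit(maxima)
    return -c, scale, loc  # convert scipy's shape c to xi = -c
```

For fat-tailed data such as Student's t with 4 degrees of freedom (for which the theoretical shape is ξ = 0.25), the fitted ξ should come out positive.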

The first major issue with the block maxima method is the choice of block size m: it must be large enough for the GEV approximation to hold, but if it is too large, fewer block maxima are obtained, degrading the maximum likelihood estimation. Due to the large size of the data sets used in this paper, especially for high-frequency returns, an appropriate m could generally be found to balance this tradeoff. The second major issue is that the block maxima method is susceptible to volatility clustering, since a block captures only one data point from a succession of days with extreme returns. However, this effect is largely negated in this paper because the data are standardized by realized volatility before the block maxima method is applied.

Other methods of estimation include the parametric peaks-over-threshold approach and the nonparametric Hill estimator, which estimate the parameters and VAR from returns above some threshold value u. However, since this threshold is difficult to determine quantitatively and varies from stock to stock, the block maxima method was used.

Value at Risk Estimation

The VAR can be estimated from the block maxima method by using the following relationship for block size m:

$P(M_m \le x) = F(x)^m \approx H_\xi\!\left(\dfrac{x - \beta_m}{\alpha_m}\right)$ (2)

Therefore, to determine the X% Value at Risk, one would find the value of VAR where:

$H_\xi\!\left(\dfrac{\mathrm{VAR} - \beta_m}{\alpha_m}\right) = \left(\dfrac{X}{100}\right)^m$ (3)

Once again, $M_m$ follows the GEV distribution estimated by the block maxima method. In order to forecast over longer horizons, the Value at Risk over $T$ time periods, $\mathrm{VAR}_T$, can be estimated from the Value at Risk over one time period, $\mathrm{VAR}_1$, and the shape parameter ξ:

$\mathrm{VAR}_T = T^{\xi}\,\mathrm{VAR}_1$ (4)

While VAR is typically cited in terms of a nominal monetary amount, reflecting the amount of principal to be potentially lost, this paper leaves the value at risk in terms of a percentage value for simplicity (e.g. the daily VAR of the portfolio is expected to be -5% with 99% confidence).
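Assuming ξ ≠ 0, equations (2)–(4) can be inverted in closed form. The sketch below is illustrative, taking GEV parameters already estimated at block size m:

```python
import numpy as np

def gev_var(level, m, xi, alpha, beta):
    """Solve equation (3) for the VAR at the given level (e.g. 0.99):
    H_xi((VAR - beta)/alpha) = level**m, assuming xi != 0.

    Inverting H: H(z) = p  =>  z = ((-log p)**(-xi) - 1) / xi,
    with p = level**m, so -log p = -m * log(level).
    """
    return beta + (alpha / xi) * ((-m * np.log(level)) ** (-xi) - 1.0)

def scale_var_horizon(var_1, T, xi):
    """Scale a one-period VAR to T periods via equation (4): VAR_T = T**xi * VAR_1."""
    return (T ** xi) * var_1
```

As a sanity check, the implied VAR should increase with the confidence level, and with ξ = 0.25 a 16-period horizon scales the one-period VAR by a factor of 16^0.25 = 2.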

Realized Variance

The realized variance can be computed over some time period t (e.g. one day) as the sum of the squared high-frequency returns within that time period:

$RV_t = \sum_{j=1}^{M} r_{t,j}^2$ $\quad$ where $\quad r_{t,j} = \log P_{t,j} - \log P_{t,j-1}$ (5)

Andersen and Bollerslev have shown that, as the sampling frequency increases, the realized variance measure converges to the integrated variance plus a discrete jump component:

$RV_t \to \displaystyle\int_{t-1}^{t} \sigma^2(s)\, ds + \sum_{t-1 < s \le t} \kappa^2(s)$ (6)

Therefore, the realized volatility metric computed from high-frequency data can intuitively be used as an estimate of the volatility over a given time period t.
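For a single day, equation (5) reduces to summing the squared intraday log returns; a minimal sketch:

```python
import numpy as np

def realized_variance(prices):
    """Daily realized variance from one day's intraday price series (equation (5)).

    prices: intraday prices sampled at a fixed frequency.
    Returns the sum of squared intraday log returns.
    """
    log_p = np.log(np.asarray(prices, dtype=float))
    intraday_returns = np.diff(log_p)
    return float(np.sum(intraday_returns ** 2))
```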

HAR-RV Model

Due to the presence of volatility clustering, predictive models of volatility and realized variance have been developed. One such forecasting model is the Heterogeneous Autoregressive Realized Variance (HAR-RV) model of Corsi (2003), which regresses the one-period-ahead realized variance on the average historical realized variance over the prior day, week, and month. The standard HAR-RV model for daily realized variance is expressed as:

$RV_{t+1} = \beta_0 + \beta_D RV_t + \beta_W RV_{t-5,t} + \beta_M RV_{t-22,t} + \epsilon_{t+1}$ $\quad$ where $\quad RV_{t-h,t} = \dfrac{1}{h}\sum_{i=0}^{h-1} RV_{t-i}$ (7)
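A minimal OLS estimate of (7) is sketched below; the 1-, 5-, and 22-day windows follow the HAR-RV convention, and this is an illustration rather than the exact implementation used in the paper:

```python
import numpy as np

def fit_har_rv(rv):
    """Estimate HAR-RV coefficients by OLS (equation (7)).

    Regresses RV_{t+1} on the daily RV_t, the 5-day average, and the
    22-day average of past realized variance.
    Returns the coefficient vector [beta_0, beta_D, beta_W, beta_M].
    """
    rv = np.asarray(rv, dtype=float)
    t = np.arange(21, len(rv) - 1)  # need 22 days of history and a 1-day-ahead target
    daily = rv[t]
    weekly = np.array([rv[i - 4 : i + 1].mean() for i in t])
    monthly = np.array([rv[i - 21 : i + 1].mean() for i in t])
    X = np.column_stack([np.ones_like(daily), daily, weekly, monthly])
    y = rv[t + 1]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```

A one-day-ahead forecast is then the dot product of the latest regressor row with the fitted coefficients.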

3. Methods

Data

This paper primarily analyzed share price data for Citigroup (C) from April 4, 1997 to January 7, 2009 (2921 days) sampled minute-by-minute from 9:35 am to 3:59 pm daily. To check the validity of the results, share prices were also analyzed for Goldman Sachs (GS) and Wal-Mart (WMT) although their results were not presented in this paper.

Standardization of Data

All Value at Risk estimates in this paper were computed using daily log returns or some divisible portion thereof, such as half-day or quarter-day log returns. For Extreme Value Theory estimation to hold, the log returns in the time series must be independent and identically distributed (iid). To this end, the log returns were standardized by the realized volatility over the period of the return, as estimated from high-frequency returns within that period:

$z_t = \dfrac{r_t}{\sqrt{RV_t}}$ $\quad$ where $\quad r_t = \log P_t - \log P_{t-1}$ (8)

However, the presence of market microstructure noise due to market frictions such as the bid-ask spread has been shown to mask the fundamental price of the stock at very high frequencies, so a lower frequency must be used for proper estimation. Based on the signature volatility plot shown in Figure 1 of Section 6, realized variance was calculated at a 10-minute frequency to balance the bias from market microstructure noise against the loss of information. In addition, sub-sampling was used to provide a better estimate of realized variance. For example, for daily realized variance, ten realized variance measures were averaged, each with a different initial time step (9:35 am, 9:36 am, …, 9:44 am), and all intraday returns shorter than 10 minutes were scaled appropriately. This standardization should make the log returns approximately identically distributed, since they are normalized by their volatility. The correlogram in Figure 2 of Section 6 also suggests that the standardized data are very weakly correlated, supporting the iid assumption for the standardized data.
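One way to implement the sub-sampling described above is sketched below. The proportional scaling of grids that span less of the day is an assumption about the exact scaling used, which the text does not fully specify:

```python
import numpy as np

def subsampled_rv(minute_prices, step=10):
    """Average of `step` realized-variance estimates, one per starting offset,
    each computed on every `step`-th price (a 10-minute grid from 1-minute
    prices when step=10). Each estimate is scaled up in proportion to the
    fraction of the day its grid covers (an assumed form of the scaling).
    """
    log_p = np.log(np.asarray(minute_prices, dtype=float))
    n_intervals = len(log_p) - 1
    estimates = []
    for offset in range(step):
        grid = log_p[offset::step]
        if len(grid) < 2:
            continue
        covered = (len(grid) - 1) * step  # 1-minute intervals spanned by this grid
        rv = np.sum(np.diff(grid) ** 2) * n_intervals / covered
        estimates.append(rv)
    return float(np.mean(estimates))
```

For a price path with a constant 1-minute log return, every offset yields the same estimate, so the average equals the full-day realized variance.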

Value at Risk Simulations

This paper examined simulations of Value at Risk models implementing Extreme Value Theory on standardized data. The first 1000 days were used as the in-sample period, leaving the remaining 1921 days as the out-of-sample period. To determine the VAR for the 1001st day, the standardized daily returns from the first 1000 days were used to estimate a "standardized VAR" through the block maxima method. This "standardized VAR" was then multiplied by the next day's realized volatility to obtain a VAR in real terms. The one-day-ahead realized volatility was either taken as known, in some simulations, or forecasted using the HAR-RV model (7) for predictive simulations. The simulations used an expanding window, in that to determine the VAR for the 1002nd day, all data through the 1001st day were used.
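The simulation can be sketched as an expanding-window loop (illustrative only; the block size and VAR level are placeholders, the known-volatility case is shown, and a predictive variant would substitute HAR-RV forecasts for the known volatilities):

```python
import numpy as np
from scipy import stats

def backtest_var(std_returns, vol_forecast, level=0.99, in_sample=1000, block=20):
    """Expanding-window VAR backtest on standardized returns.

    std_returns: returns standardized by realized volatility
    vol_forecast: one-step-ahead volatility for each period (known or forecasted)
    Returns the VAR estimate (in return units) for each out-of-sample period.
    """
    var_estimates = []
    for t in range(in_sample, len(std_returns)):
        history = np.asarray(std_returns[:t], dtype=float)
        g = len(history) // block
        maxima = history[: g * block].reshape(g, block).max(axis=1)
        c, loc, scale = stats.genextreme.fit(maxima)
        # Equation (3): F(x)^m = H(x), so the level-quantile of one-period
        # returns is the (level**block)-quantile of the block-maxima fit.
        std_var = stats.genextreme.ppf(level ** block, c, loc=loc, scale=scale)
        var_estimates.append(std_var * vol_forecast[t])
    return np.array(var_estimates)
```

For the left tail, the same loop would be applied to the negated standardized returns.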

Methods for Evaluating VAR Model

During each iteration of the VAR simulation, the one-day-ahead VAR was calculated and then compared with the actual daily return. If the actual daily return was larger in magnitude than the estimated VAR, a break was recorded. For an X% VAR test, breaks would be expected to occur (100 − X)% of the time. Therefore, the number of breaks can be treated as a binomial random variable with probability $p = (100 - X)/100$ over the $n$ out-of-sample trials. Using a two-sided test with the null hypothesis that the number of breaks equals its expected value $np$, a p-value was determined. With the same general idea, Kupiec proposed a more powerful two-sided test: for a valid VAR model, the statistic

$LR = -2\ln\!\left[(1-p)^{\,n-m}\, p^{\,m}\right] + 2\ln\!\left[\left(1 - \tfrac{m}{n}\right)^{n-m}\left(\tfrac{m}{n}\right)^{m}\right]$ (9)

should have a chi-square distribution with one degree of freedom, where $m$ is the number of breaks, $n$ is the number of trials, and $p = (100 - X)/100$. For either the binomial or the Kupiec statistic, a low p-value indicates that the number of breaks differed substantially from its expected value and thus that the VAR model is inappropriate. Another criterion used to test the validity of a VAR model is bunching: a valid VAR model should have breaks spread relatively uniformly across the out-of-sample region. A test proposed by Christoffersen (1998) indicates that the test statistic

$LR = -2\ln\!\left[(1-\pi)^{u_{00}+u_{10}}\, \pi^{u_{01}+u_{11}}\right] + 2\ln\!\left[(1-\pi_0)^{u_{00}}\, \pi_0^{u_{01}}\, (1-\pi_1)^{u_{10}}\, \pi_1^{u_{11}}\right]$ (10)

should have a chi-square distribution with one degree of freedom if there is no bunching. The variable $u_{ij}$ is defined as the number of observations in which the process moves from state $i$ to state $j$, where state 0 indicates a day without a break and state 1 indicates a day with a break. The other parameters are determined as follows:

$\pi_0 = \dfrac{u_{01}}{u_{00}+u_{01}}$, $\qquad \pi_1 = \dfrac{u_{11}}{u_{10}+u_{11}}$, $\qquad \pi = \dfrac{u_{01}+u_{11}}{u_{00}+u_{01}+u_{10}+u_{11}}$ (11)

Finally, the last metric used to evaluate the VAR models was the average estimated VAR across the entire simulation. An ideal VAR model would avoid low p-values for the binomial, Kupiec, and Christoffersen statistics while minimizing the average estimated VAR (since one goal is to minimize the amount of capital allocated, which is directly related to the VAR estimate).
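Both test statistics can be computed directly; the sketch below treats 0·log 0 as 0 (the standard convention) and converts each statistic to a p-value with the chi-square distribution:

```python
import numpy as np
from scipy import stats

def _ll(k, n, q):
    """Binomial log-likelihood of k successes in n trials, treating 0*log(0) as 0."""
    out = 0.0
    if n - k > 0:
        out += (n - k) * np.log(1.0 - q)
    if k > 0:
        out += k * np.log(q)
    return out

def kupiec_pvalue(n, m, p):
    """p-value of the Kupiec statistic (9): n trials, m breaks, target break probability p."""
    lr = -2.0 * (_ll(m, n, p) - _ll(m, n, m / n))
    return 1.0 - stats.chi2.cdf(lr, df=1)

def christoffersen_pvalue(breaks):
    """p-value of the Christoffersen (1998) statistic (10)-(11).

    breaks: sequence of 0/1 indicators of a VAR break in each period.
    """
    b = np.asarray(breaks, dtype=int)
    u = np.zeros((2, 2))
    for i, j in zip(b[:-1], b[1:]):  # count state transitions u_ij
        u[i, j] += 1
    pi0 = u[0, 1] / max(u[0, 0] + u[0, 1], 1.0)
    pi1 = u[1, 1] / max(u[1, 0] + u[1, 1], 1.0)
    pi = (u[0, 1] + u[1, 1]) / u.sum()
    ll_null = _ll(u[0, 1] + u[1, 1], u.sum(), pi)
    ll_alt = _ll(u[0, 1], u[0, 0] + u[0, 1], pi0) + _ll(u[1, 1], u[1, 0] + u[1, 1], pi1)
    lr = -2.0 * (ll_null - ll_alt)
    return 1.0 - stats.chi2.cdf(lr, df=1)
```

Applied to the 99.5% left-tail row of Table 1 (n = 1921, m = 12, p = 0.005), kupiec_pvalue reproduces the reported value of approximately 0.456, while a heavily bunched break sequence yields a Christoffersen p-value near zero.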

4. Results and Discussion

      

Table 1 in Section 6 displays the statistical results of a VAR simulation for Citigroup's stock in which standardized one-day returns are used to determine the one-day-ahead VAR and in which the forward realized volatility is known. The results show that, except for the 97.5% VAR test on the left tail, the procedure described in the methods section yields a relatively sound VAR model. However, since the forward realized volatility is known in the simulation, this reveals nothing about the predictive power of the VAR model; it suggests only that EVT provides a relatively good estimate of VAR when the realized volatility is known. Therefore, in order to apply the VAR model developed above, predicting future volatility is an integral component.

Table 2 repeats the simulation from Table 1, but this time the forward volatility is forecasted using the HAR-RV model. At first glance, the proposed model with forecasted volatility appears valid for the right tail. For the left tail, however, the model largely underestimates the VAR, since more breaks occur than expected, especially at the higher quantiles of 99.5% and 99.9%. Since the procedure appeared valid with known volatility, as shown in Table 1, these results suggest that the problem with the predictive model lies in improper volatility estimation by the HAR-RV model. Considering that the out-of-sample region included data from the highly volatile period of fall 2008, it is likely that considerable errors were made in one-day-ahead volatility forecasting. It is unclear, however, why improper volatility estimation was more of an issue for the left tail than for the right tail. One possible conclusion is that left-tail VAR estimation is more sensitive to volatility estimation than right-tail estimation. Comparing the left and right tails in Table 1 with known forward volatility, the left-tail simulation generally records more breaks and tends to underestimate the VAR more than the right-tail simulation. Therefore, there may be some difference in dynamics between the left and right tails causing improper EVT estimation for the left tail. One further observation is that the average VAR is higher for the simulation with forecasted forward volatility than for the simulation with known forward volatility, once again highlighting the errors due to forward volatility estimation.

One problem with the simulations conducted so far is that the magnitudes of the numbers involved are relatively small. For example, in Tables 1 and 2, the VAR simulations at levels of 99.0% and above record fewer than 25 breaks. The tests can therefore be highly sensitive to the number of recorded breaks: if just two more breaks were recorded in the 99.9% left-tail VAR simulation in Table 1, the model could be deemed invalid. This is especially troublesome since a break is a binary variable: a break is recorded if a daily return is only slightly beyond the VAR threshold and omitted if it is only slightly within it. One alternative is therefore to run VAR simulations on intraday returns rather than daily returns, that is, to compute and standardize half-day returns and treat each half-day as a trial under the original VAR simulation. Half-day returns generate twice as many trials, and quarter-day returns four times as many. With more trials there are more expected breaks, making the VAR tests more likely to determine the true validity of the proposed model. Furthermore, this also tests the concept of an intraday VAR, in which capital allocation can be readjusted during the day rather than only overnight.

Table 3 shows the results of VAR simulations in which each day is divided into 1, 2, 4, and 8 parts and in which the forward volatility is known. The results show that when the volatility is known, the test procedure outlined in this paper remains valid even for returns at higher frequencies than daily returns, which also substantiates the concept of an intraday VAR. For the right-tail 99.5% VAR in Table 3, daily returns yield an average VAR of 3.81%, while half-day returns yield an average VAR of 2.53%. One can interpret this as follows: using an intraday return system, a risk manager can allocate capital separately for the morning and the afternoon at an average VAR of 2.53%, whereas setting the capital allocation once daily implies an average VAR of 3.81%. Since capital allocation is proportional to the level of VAR, intraday VAR can be viewed as a method for the risk manager to allocate capital more efficiently.

Another result from the intraday simulations concerns the robustness of the VAR tests. Since there are more data points, the magnitudes of the numbers involved are larger, and so these tests provide a better way of measuring the validity of a model. This can especially be seen in Table 4, which repeats the simulation in Table 3 but using a normal distribution, which is known to be a poor model of asset returns. For standardized daily returns, the VAR simulation with a normal distribution cannot be definitively rejected. However, for half-day and higher-frequency intraday returns, it becomes clear that the normal distribution is not a good fit for the data. This contrasts with the results in Table 3, and so these two sets of tables confirm once again that EVT provides a much better method for estimating VAR than the normal distribution.

With the larger data sets afforded by intraday returns, more effective parameter estimation can also be conducted. The main parameter of importance for the GEV distribution (1) is the shape parameter ξ, since it is independent of the block size and has the largest role in determining the overall shape of the GEV distribution. It is also the parameter required for forecasting VAR over longer periods of time (4). As the block size grows, the estimated shape parameter ξ is expected to converge to a particular value. For daily returns, this convergence is not readily apparent because large block sizes leave fewer data points, making the estimation of the shape parameter less precise. However, with the large data set generated by intraday returns, the shape parameter does appear to converge for large block sizes. Figure 3 in Section 6 shows the estimated shape parameter as a function of block size for 16 returns per day. The plot suggests convergence toward a value at higher block sizes, leading to a more precise estimate of the shape parameter ξ. However, it is not immediately clear whether the shape parameter for 16 returns per day provides any information about the shape parameter for longer periods such as daily returns.

5. Conclusions 

In the majority of the literature applying EVT to financial data for VAR modeling, only closing prices and their log returns are used for EVT estimation. However, since these log returns are not independent and identically distributed, this provides an improper estimation of the VAR. This paper illustrates a procedure for estimating VAR by first standardizing the log returns by the realized volatility, as estimated with high-frequency data, thereby making the returns data closer to independent and identically distributed prior to EVT estimation. The results show empirically that if the forward volatility is known, this procedure can provide a valid VAR measure. When the HAR-RV model is used to predict forward volatility, the predictive model does not perform as well, especially for the left tail, suggesting that proper volatility prediction is an integral component of this procedure.

This paper also explored the use of high-frequency data for analyzing VAR models on intraday returns as well as daily returns. The results suggest that the proposed procedure also provides a valid model for VAR estimation on an intraday scale. Such an intraday VAR may allow risk managers to allocate capital more efficiently, as less capital is needed on average for intraday periods than for daily periods. Furthermore, evaluating VAR models on an intraday basis provides a better method of model evaluation, since the larger number of trials yields a larger number of expected breaks, which is important for evaluating VAR models with high degrees of confidence; the intraday tests also rule out incorrect VAR models such as the normal distribution. Finally, intraday VAR may also allow better parameter estimation, as the large data sets permit convergence of the shape parameter ξ at larger block sizes.

There are several areas of research that could be pursued from these findings. First, in order to apply this model, better methods of predicting realized variance would have to be developed, especially at the intraday level. A second area of potential is to analyze the differences between estimation in the left and right tails: although the two tail distributions are known to differ, the VAR simulation results also differ between the tails, especially with forecasted volatility. Finally, further research could analyze how intraday VAR estimates might be used to forecast daily VAR estimates. Since higher-frequency intraday returns allow better estimation of the shape parameter, this may lead to better methods of forecasting VAR over longer time periods.

6. Tables and Figures

Table 1: 1-Day EVT VAR Simulation for Citigroup Stock with Known Forward RV

Left Tail

|VAR Level (%) |Breaks |Break % |Binomial p-value |Kupiec p-value |Christoffersen p-value |Average VAR (%) |

|97.5 |64 |3.33 |0.0208 |0.0262 |0.3731 |-2.97% |

|99.0 |20 |1.04 |0.7415 |0.8572 |0.5164 |-3.42% |

|99.5 |12 |0.62 |0.3441 |0.4559 |0.6977 |-3.69% |

|99.9 |2 |0.10 |0.6039 |0.9548 |0.9485 |-4.17% |

Right Tail

|VAR Level (%) |Breaks |Break % |Binomial p-value |Kupiec p-value |Christoffersen p-value |Average VAR (%) |

|97.5 |51 |2.65 |0.5996 |0.6668 |0.0952 |2.87% |

|99.0 |19 |0.99 |0.9172 |0.9615 |0.5377 |3.45% |

|99.5 |11 |0.57 |0.5178 |0.6592 |0.7218 |3.81% |

|99.9 |1 |0.05 |0.8554 |0.4638 |0.9742 |4.50% |

The table above displays the statistics from a Value at Risk test at various significance levels for an out-of-sample size of 1921.  

 

Table 2: 1-Day EVT VAR Simulation for Citigroup Stock with Forecasted Forward RV 

Left Tail

|VAR Level (%) |Breaks |Break % |Binomial p-value |Kupiec p-value |Christoffersen p-value |Average VAR (%) |

|97.5 |49 |2.55 |0.8122 |0.8871 |0.0002 |-3.34% |

|99.0 |23 |1.20 |0.3237 |0.3993 |0.0292 |-3.84% |

|99.5 |19 |0.99 |0.0043 |0.0074 |0.0128 |-4.14% |

|99.9 |12 |0.62 |0.0000 |0.0000 |0.0622 |-4.68% |

Right Tail

|VAR Level (%) |Breaks |Break % |Binomial p-value |Kupiec p-value |Christoffersen p-value |Average VAR (%) |

|97.5 |37 |1.93 |0.1154 |0.0934 |0.1997 |3.22% |

|99.0 |17 |0.88 |0.7190 |0.6052 |0.5816 |3.88% |

|99.5 |13 |0.68 |0.2158 |0.2975 |0.6738 |4.29% |

|99.9 |2 |0.10 |0.6039 |0.9548 |0.9485 |5.06% |

Table 3: Intraday EVT VAR Simulation for Citigroup Stock with Known Forward RV 

99.5% Left-Tail VAR

|Returns Per Day |OOS Trials |Breaks |Break % |Binomial p-value |Kupiec p-value |Christoffersen p-value |Average VAR (%) |

|1 |1921 |12 |0.62 |0.3441 |0.4559 |0.6976 |-3.69% |

|2 |3842 |22 |0.57 |0.4416 |0.5329 |0.6146 |-2.58% |

|4 |7684 |46 |0.60 |0.1968 |0.2345 |0.9383 |-1.68% |

|8 |15368 |79 |0.51 |0.7480 |0.8058 |0.3662 |-1.06% |

99.5% Right-Tail VAR

|Returns Per Day |OOS Trials |Breaks |Break % |Binomial p-value |Kupiec p-value |Christoffersen p-value |Average VAR (%) |

|1 |1921 |11 |0.57 |0.5178 |0.6592 |0.7218 |3.81% |

|2 |3842 |23 |0.60 |0.3248 |0.4005 |0.5986 |2.53% |

|4 |7684 |43 |0.56 |0.4061 |0.4674 |0.2440 |1.65% |

|8 |15368 |78 |0.51 |0.8352 |0.8947 |0.3723 |1.04% |

99.9% Left-Tail VAR

|Returns Per Day |OOS Trials |Breaks |Break % |Binomial p-value |Kupiec p-value |Christoffersen p-value |Average VAR (%) |

|1 |1921 |2 |0.10 |0.6039 |0.9548 |0.9485 |-4.17% |

|2 |3842 |3 |0.08 |0.9297 |0.6548 |0.9454 |-3.05% |

|4 |7684 |9 |0.12 |0.4899 |0.6438 |0.8845 |-1.89% |

|8 |15368 |15 |0.10 |0.9391 |0.9249 |0.8641 |-1.15% |

99.9% Right-Tail VAR

|Returns Per Day |OOS Trials |Breaks |Break % |Binomial p-value |Kupiec p-value |Christoffersen p-value |Average VAR (%) |

|1 |1921 |1 |0.05 |0.8554 |0.4638 |0.9742 |4.50% |

|2 |3842 |4 |0.10 |0.6806 |0.9362 |0.9272 |2.97% |

|4 |7684 |6 |0.08 |0.7067 |0.5272 |0.9229 |1.85% |

|8 |15368 |16 |0.10 |0.7431 |0.8727 |0.8551 |3.34% |

Table 4: Intraday Normal VAR Simulation for Citigroup Stock with Known Forward RV 

99.5% Right-Tail VAR

|Returns Per Day |OOS Trials |Breaks |Break % |Binomial p-value |Kupiec p-value |Christoffersen p-value |Average VAR (%) |

|1 |1921 |8 |0.42 |0.7570 |0.5929 |0.7958 |3.92% |

|2 |3842 |11 |0.29 |0.0622 |0.0411 |0.8015 |2.74% |

|4 |7684 |6 |0.08 |0.0000 |0.0000 |0.9229 |1.85% |

|8 |15368 |0 |0.00 |0.0000 |- |- |1.27% |

Figure 1: Signature Volatility Plot for Citigroup (C) Stock


This figure shows the non-annualized average daily realized volatility as a function of the sampling frequency. The realized volatility was calculated using sub-sampling to provide better estimation. Although the estimates do not appear to fully converge until frequencies above 20 minutes, a 10-minute frequency was used to minimize the loss of information.

Figure 2: Autocorrelation of Standardized 1-Day Returns for Citigroup (C) Stock


Figure 2 provides a correlogram for the daily log returns of Citigroup after standardization by the daily realized volatility. Since the absolute values of the autocorrelations are below 0.05 for the lags considered and no systematic pattern exists across the lags, the plot suggests that the standardized data are very weakly correlated.

Figure 3: Shape Parameter Estimation for 16 Returns Per Day for Citigroup (C) Stock


Figure 3 suggests slight convergence in block size for the shape parameter ξ.

7. References

Andersen, T.G., Bollerslev T., Diebold F. X., and P. Labys (2003). Modeling and Forecasting Realized Volatility. Econometrica 71, 579-625.

Corsi, F. (2003). A Simple Long Memory Model of Realized Volatility. Unpublished manuscript, University of Southern Switzerland.

Cotter, J. & Longin, F. (2004). Margin Setting with High-Frequency Data. Unpublished. Retrieved April 29, 2009 from

Gencay, R., Selçuk F. (2004). Extreme Value Theory and Value-at-Risk Relative Performance in Emerging Markets. International Journal of Forecasting, 20, 287-303.

Gencay, R., Selçuk F., & Ulugülyaĝci A. (2001). EVIM: Software Package for Extreme Value Analysis in MATLAB. Studies in Nonlinear Dynamics and Econometrics, 5, 3, 214-240.

Hull, J.C. (2007). Risk Management and Financial Institutions. Upper Saddle River, New Jersey: Pearson Education.

McNeil, A. (1999). Extreme Value Theory for Risk Managers. Unpublished manuscript, Department of Mathematics, ETH Zentrum.

Tsay, R.S. (2005). Extreme Values, Quantile Estimation and Value at Risk. (2nd Ed.), Analysis of Financial Time Series (pp. 287-316). Hoboken, New Jersey: John Wiley & Sons, Inc.
