


Overcoming Chaotic Behavior of Climate Models

S. Fred Singer

University of Virginia / Science & Environmental Policy Project, Arlington, VA 22202; singer@

Abstract: A fundamental issue in climate science is Attribution – determining the relative importance of human and natural causes. The task generally involves comparing temperature trends from observations and from greenhouse (GH) models. However, a problem arises from the chaotic uncertainties inherent in the (non-linear) model calculations. Modelers try to overcome this problem by forming an “ensemble-mean” of a number of “simulations” (runs) of the same model. Here we conduct a synthetic experiment and use two distinct procedures to demonstrate that no fewer than about 20 runs (each of 20-yr length, of an IPCC general-circulation model) are needed to place useful constraints upon chaos-induced model uncertainties.

Introduction:

As Lorenz (1963, 1975) demonstrated (see also Zichichi, 2007; Giorgi, 2005), climate models, using non-linear partial differential equations, generate results highly sensitive to the initial conditions. To reproduce an identical result in successive “simulations” (runs), the parameters describing the model’s initial state must be given to a precision that is unattainable in practice. The Intergovernmental Panel on Climate Change (IPCC-TAR; Meehl et al., 2001) acknowledges that, mathematically speaking, the climate is a “complex, non-linear, chaotic object” and that, therefore, “the long-term prediction of future climate states is not possible.” Accordingly, any comparison of modeled with observed temperature trends cannot be done satisfactorily without understanding the chaotic behavior of a climate model.
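
The sensitivity to initial conditions can be made concrete with a few lines of code. The sketch below (an illustration added here, not part of the original analysis) integrates Lorenz’s (1963) three-variable system, with his classical parameter values, from two initial states differing by one part in a billion; the step size, integration length, and simple Euler scheme are choices of convenience.

```python
# Illustrative sketch: Lorenz (1963) system, two nearly identical starts.
# Parameters are Lorenz's classical sigma=10, rho=28, beta=8/3; the Euler
# step and run length are arbitrary demonstration choices.
import numpy as np

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    x, y, z = state
    deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * deriv

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])        # perturb one coordinate by 1e-9
for step in range(1, 25001):              # 25 model time units
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 5000 == 0:
        print(f"t={step * 0.001:5.1f}  separation={np.linalg.norm(a - b):.3e}")
# The separation grows by many orders of magnitude: two runs that are
# indistinguishable at t=0 soon follow entirely different trajectories.
```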

One consequence is that successive runs of the same climate model can yield very different values for warming trends. These trends may vary by an order of magnitude or more, and even their sign may vary. For example, the Japanese MRI model carried out five runs for the IPCC [Santer et al., 2008]: the individual trends range from 0.042 to 0.371 K/decade [Fig. 1], and this error interval (‘spread’) would have been even greater if more runs had been performed.


Fig. 1: Illustrating the chaotic nature of model trends, using the results of 5 runs (sometimes referred to as “realizations” or “simulations”) from 1979 to 1999 of a particular GCM (from Japan’s MRI), as presented in Fig. 1 of Santer et al. (2008). The OLS trends of the five runs range from +0.042 to +0.371 K per decade. The range of trends would likely be even larger if more runs had been carried out. None of the five trends, nor the ‘ensemble-mean’ shown, represents the ‘true’ model trend. As discussed in the text, one needs to show that the cumulative ensemble-mean approaches an asymptotic value as the number of runs increases.

Modelers, therefore, carry out several runs (“simulations”) and then publish the ‘model-ensemble-mean,’ E, the arithmetic average of the individual trend values generated by the several runs. Only rarely do we learn the results of the individual simulations that are components of E. Yet how do we know that, say, five runs are sufficient to produce a reliable E to compare with an observed trend?

The present paper addresses a single but crucial aspect of the impact of chaoticity on the performance of general-circulation models: the strong dependence of the error-bars in temperature-trend projections on the number of simulations that are run on a particular model. We suggest that it may be possible to overcome the “chaoticity barrier” by performing a sufficient number of runs.

Method:

The objective of this enquiry was to establish how many simulation runs of a GCM, at minimum, are necessary to provide reasonable constraints on the value of E. For this investigation, it would have been desirable to use climate models which had each been run at least 20 times. However, financial and time constraints on modelers mean there are no ready examples of such multiple runs. Therefore, we developed a synthetic approach to the problem.

A single, unforced (‘constant forcing’) control run of 1000 years’ duration was obtained from the Program for Climate Model Diagnosis and Intercomparison (PCMDI) at Lawrence Livermore National Laboratory.


Fig. 2: Temperature values of an unforced 1000-year climate model control run. Source: PCMDI.

First, the temperatures were plotted against time in years [Fig. 2] to check for inevitable drift from what should be a straight, zero-slope, horizontal line. Next, the temperature series was divided into 25 segments, each of 40 years’ duration (and also into 50 segments, each of 20 years). For each segment, trend values T1 … T25 were determined. This procedure is analogous to, and (considering the chaoticity of the climate object) equivalent to, 25 separate runs of a GCM over a single 40-year time-interval. Another advantage of using an unforced model is that the true trend is known in advance – namely, zero (except for drift effects).
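
For concreteness, the segmentation step can be sketched as follows. The random series below is only a stand-in for the actual PCMDI control run (whose file format and retrieval are not reproduced here), so the computed trend values are illustrative only.

```python
# Sketch of the segmentation step.  `temps` stands in for the 1000-yr
# PCMDI control run; a real analysis would load the archived series.
import numpy as np

rng = np.random.default_rng(0)
temps = rng.normal(0.0, 0.1, size=1000)    # placeholder annual means (K)

def segment_trends(series, seg_len):
    """OLS trend (K/yr) of each non-overlapping segment of length seg_len."""
    years = np.arange(seg_len)
    return np.array([np.polyfit(years, series[k*seg_len:(k+1)*seg_len], 1)[0]
                     for k in range(len(series) // seg_len)])

T40 = segment_trends(temps, 40)   # 25 trends from 40-yr segments
T20 = segment_trends(temps, 20)   # 50 trends from 20-yr segments
print(len(T40), len(T20))         # -> 25 50
```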

First procedure: the cumulative ensemble mean Ecum. In the first of two procedures, a cumulative ensemble mean Ecum was derived by adding the trend value of each additional run to the previous values and re-averaging, Eq. (1). The cumulative mean was then plotted as a function of n, the number of trend values used [Fig. 3]:

$$E_{\mathrm{cum}} = \frac{1}{n} \sum_{i=1}^{n} T_i \qquad (1)$$

It was then possible to observe the point at which this cumulative ensemble mean, Ecum, approaches an asymptotic value that may be termed the ‘true’ trend. Results [Fig. 3] indicate that about 10 runs of the model seem to be sufficient for 40-yr runs (and about 20 runs for runs of 20-yr length).
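
In code, Eq. (1) reduces to a cumulative average. The sketch below (operating on the hypothetical segment trends from the previous sketch) also checks convergence against a tolerance whose value is an illustrative choice, not a threshold taken from the analysis.

```python
# Eq. (1): cumulative ensemble mean E_cum(n) for n = 1 .. number of runs.
import numpy as np

def cumulative_ensemble_mean(trends):
    """E_cum(n) = (1/n) * sum_{i=1..n} T_i, returned for every n."""
    trends = np.asarray(trends, dtype=float)
    return np.cumsum(trends) / np.arange(1, len(trends) + 1)

def runs_to_converge(trends, tol=0.005):
    """Smallest n after which E_cum stays within `tol` of its final value.
    The tolerance is an illustrative choice, not taken from the paper."""
    e = cumulative_ensemble_mean(trends)
    for n in range(1, len(e) + 1):
        if np.all(np.abs(e[n - 1:] - e[-1]) <= tol):
            return n
    return len(e)
```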

Fig. 3: Procedure #1: Cumulative ensemble-means of trend values as a function of n, the number of runs (of length 20 or 40 years). The cumulative ensemble mean, Ecum, is seen to reach an asymptotic value close to zero as the number of runs exceeds about 20 (for a run-length of 20 years) – and about 10 (for a run-length of 40 years). In the absence of model drift, the asymptotic value would presumably be zero.

Control experiments for the first procedure

• To investigate the influence of drift, which is seen to exist in the 1000-yr model run [Fig. 2], we have also carried out the same procedure for two additional time periods (Years 200-1000 and 400-1000), for which the drift appears to be more uniform – or at least does not change its sign. For 40-yr runs, the asymptote of the cumulative ensemble mean Ecum is reached again after at least 10 runs.

• In addition to the 25 time-segments, each of 40 years, starting in years 1, 41, 81, … , a further 24 trend values may be determined by starting the 40-year time-segments in years 11, 51, 91, … , with still further trends determinable by starting in years 21, 61, 101, … , and at 31, 71, 111, …, for a total of 97 trend values. All of these were found on examination to behave in the same way. Lagged auto-correlation does not seem to be significant.

• We also checked against a possible influence of contiguity by selecting non-contiguous segments to form cumulative ensemble-means.

• Finally, by starting in each year on the interval [1, 961], it is possible to obtain 961 (partly overlapping) segments of 40-year length. When all 961 trend values are plotted, they are found to form a Gaussian distribution.
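
The overlapping-segment check in the last bullet can be sketched as follows; note that overlapping windows are serially correlated, so the normality test (here SciPy’s D’Agostino-Pearson test, one of several reasonable choices) should be read as indicative only. Again, a random series stands in for the control run.

```python
# All 961 overlapping 40-yr windows of a 1000-yr series, plus a simple
# normality diagnostic.  `series` stands in for the control run.
import numpy as np
from scipy import stats

def overlapping_trends(series, seg_len=40):
    years = np.arange(seg_len)
    return np.array([np.polyfit(years, series[s:s + seg_len], 1)[0]
                     for s in range(len(series) - seg_len + 1)])

rng = np.random.default_rng(1)
series = rng.normal(0.0, 0.1, size=1000)
trends = overlapping_trends(series)        # 961 values
stat, p = stats.normaltest(trends)         # D'Agostino-Pearson test
print(len(trends), f"normality p-value: {p:.3f}")
```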

Second procedure: constraining the error-interval

In the second of two procedures, the interval on which the values of E fall (the ‘spread’) was investigated for three (assumed) synthetic models as a function of the number of runs. The result is shown in Fig. 4, with the spread plotted against n, the number of simulation runs. For a run length of 40 years, the trend interval is seen to approach zero for n > 10 (Fig. 4).


Fig. 4: Procedure #2: Error interval (“spread” = Tmax - Tmin) of ensemble-means of trend values of 3 (synthetic) models as a function of n, the number of runs. (All trend values shown on the y-axis were multiplied by 1000.) For a run length of 40 years, the spread is seen to collapse towards zero as the number of runs exceeds 10. Similar results are obtained for the cases of 4 models and 5 models. The dashed lines result from a different method of selecting trend values (see text).

Details for the second procedure

• The time-series was truncated by removing years 1-200, to minimize possible effects of drift.

• The remaining 800 years were divided into 40 segments of length 20 years, whereupon 40 trend values T1 … T40 were determined.

• We assume we have 3 models, each with the same number of runs n. We therefore assign n trend values, arbitrarily selected from the 40 values of T, to each of the three models.

• For n=1, the error interval among the 3 trend values is simply (Tmax – Tmin).

• For n=2, a series of ensemble-mean trend values was constructed thus: T′1 = ½(T1+T2); T′2 = ½(T3+T4); T′3 = ½(T5+T6). The trend interval among the 3 values was then determined as (T′max – T′min).

• The procedure was repeated for n = 4, 5, 8, 10, and 13. Results are plotted in Fig. 4. (A code sketch of this grouping scheme follows this list.)

• To test sensitivity, the procedure was next repeated for an assumed 4 models (permitting n-values up to 10), and then for 5 models. Similar results were obtained.

• As a further check, we repeated all procedures by starting the selection of T-values with T40 (instead of T1) and proceeding backwards. Those max and min trends are indicated by dashed lines in Fig. 4.
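
The grouping scheme of Procedure #2 can be sketched as follows; as before, a random stand-in replaces the actual control-run trends, so the printed spreads are illustrative only.

```python
# Sketch of Procedure #2: spread (Tmax - Tmin) of the ensemble means of
# m assumed "models", each assigned n consecutive trend values.
import numpy as np

def spread(trend_values, n_models=3, n_runs=1):
    """Spread of ensemble means, consuming trends in order (T1, T2, ...)."""
    groups = np.asarray(trend_values[:n_models * n_runs], dtype=float)
    means = groups.reshape(n_models, n_runs).mean(axis=1)
    return means.max() - means.min()

rng = np.random.default_rng(2)
temps = rng.normal(0.0, 0.1, size=800)     # stand-in for years 201-1000
years = np.arange(20)
T = np.array([np.polyfit(years, temps[k*20:(k+1)*20], 1)[0]
              for k in range(40)])         # 40 twenty-year trends

for n in (1, 2, 4, 5, 8, 10, 13):          # the n-values used in the paper
    print(f"n={n:2d}  spread={spread(T, 3, n):.4f}")
```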

Discussion:

We suggest here that it may be possible to overcome the “chaoticity barrier” of climate models identified by IPCC in the 20CEN intercomparison.

We have demonstrated that the ensemble-mean (E) trend obtained from a multi-run model is more reliable than the trend obtained from a model that is run only once. In general, ten or more 40-yr runs may be necessary to form a reliable E.

Sensitivity Analysis of Segment Length: Our initial choice of a segment length of 40 years was arbitrary. We now investigate the effects of segment length on the convergence of the ensemble mean value. We find convergence after 5 runs for model runs of length 80 years and after 20 runs (see Fig. 3) for a segment length of 20 years (which is typical of the models in the IPCC compilation; see Fig. 1). Empirically, it appears that convergence is achieved in 400 run-years – i.e., (20 x 20), (10 x 40), and (5 x 80). We have not discovered a theoretical explanation for this useful result.

Few modelers have the resources to carry out ten or more runs of a particular general-circulation model. They frequently report a temperature trend based on only a single run. For example, the IPCC’s compilation of 22 ‘20CEN’ models includes five models with just one run, five with two runs, and only seven with four or more runs. The run lengths varied between 20 and 24 years. Modelers should be encouraged to report not only the details of the forcings and parameterizations used in their particular models but also the results of each run and its length (in years).

Most investigators, when considering a group of models, compound the problem by simply using the average of the ensemble-mean trends of the group for comparison with an observed trend. This procedure, however, is defective in that it gives equal weight to single-run models and multi-run models, and it leads to greatly enhanced uncertainty in modeled trends. Yet we demonstrate here that the ensemble-mean trend obtained from a multi-run model is more reliable than the trend obtained from a model that is run only once.
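
The paper does not prescribe a particular remedy; the sketch below merely contrasts the equal-weight average of ensemble means with a pooled, run-weighted average, to show that the two can differ when run counts are unequal. All numbers are invented for illustration.

```python
# Equal-weight vs. run-weighted (pooled) multi-model averages.
# The run counts and trend values below are invented for illustration.
import numpy as np

model_runs = {
    "model_A": [0.31],                     # single-run model
    "model_B": [0.04, 0.37],               # two runs
    "model_C": [0.12, 0.18, 0.15, 0.20],   # four runs
}

ens_means = {m: float(np.mean(r)) for m, r in model_runs.items()}
equal_weight = np.mean(list(ens_means.values()))   # one vote per model
pooled = np.mean(np.concatenate([np.asarray(r, dtype=float)
                                 for r in model_runs.values()]))  # one vote per run
print(f"equal-weight mean: {equal_weight:.3f}")
print(f"pooled mean:       {pooled:.3f}")
```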

The “spread” in trend values discussed in Procedure #2 is akin to the “range” of extreme values of a Gaussian distribution. But “range” is an improper statistical metric: it increases as the number of independent data points increases, while the standard error of their mean decreases. For example, it can be shown that the wide model uncertainty displayed as a grey region in Fig. 6 of Santer et al. [2008] and labeled as a “2-sigma Standard Deviation” is actually an artifact, caused by the presence of five single-run models in the IPCC compilation of models. A compilation comprising only multi-run models would, therefore, help to constrain chaotic uncertainty, and would provide a more reliable means of comparing the consistency of modeled and observed trends.
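
This statistical point can be checked numerically: for Gaussian samples, the range grows with sample size while the standard error of the mean shrinks. The sample sizes below are arbitrary demonstration choices.

```python
# Range (max - min) grows with sample size; the standard error of the
# mean shrinks.  Sample sizes are arbitrary demonstration choices.
import numpy as np

gen = np.random.default_rng(3)
for n in (5, 20, 100, 1000):
    x = gen.normal(0.0, 1.0, size=n)
    print(f"n={n:5d}  range={x.max() - x.min():5.2f}  "
          f"std-error of mean={x.std(ddof=1) / np.sqrt(n):6.3f}")
```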

Acknowledgements

We are grateful to Roger Cohen, Curtis Covey, Robert Levine, Craig Loehle, Christopher Monckton, and Ronald Stouffer for useful discussion and to Garrett Harmon and Will McBride for technical assistance in preparing the figures.

References

Giorgi, F. 2005. Climate change prediction. Climatic Change 73: 239-265. DOI: 10.1007/s10584-005-6857-4.

Lorenz, E.N. 1963. Deterministic nonperiodic flow. Journal of the Atmospheric Sciences 20: 130-141.

Lorenz, E. N. 1975. The physical bases of climate and climate modeling, in Climate predictability, #16 in GARP Publication Series, pp. 132-136, World Meteorological Organization.

Meehl, G., et al. 2001. In: Climate Change 2001: The Scientific Basis. Third Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge.

Santer, B.D., Thorne, P.W., Haimberger, L., Taylor, K.E., Wigley, T.M.L., Lanzante, J.R., Solomon, S., Free, M., Gleckler, P.J., Jones, P.D., Karl, T.R., Klein, S.A., Mears, C., Nychka, D., Schmidt, G.A., Sherwood, S.C., Wentz, F.J. 2008. Consistency of modeled and observed temperature trends in the tropical troposphere. Int. J. Climatol. doi:10.1002/joc.1756.

Zichichi, A. 2007. Meteorology and climate: problems and expectations. Pontifical Council for Justice and Peace, The Vatican, 26-27 April 2007.
