


SIXTH FRAMEWORK PROGRAMME

Project no: 502687

NEEDS

New Energy Externalities Developments for Sustainability

INTEGRATED PROJECT

Priority 6.1: Sustainable Energy Systems and, more specifically,

Sub-priority 6.1.3.2.5: Socio-economic tools and concepts for energy strategy.

Technical Paper n° 1.3 - RS 1b

“Report on Sub-grid atmospheric dispersion models”

Due date of technical paper: February 2007

Actual submission date: October 2007, last changes February 2009

Start date of project: 1 September 2004

Duration: 48 months

Organisation: Laboratory of Heat Transfer and Environmental Engineering, Aristotle University of Thessaloniki

Authors: Ioannis Douros, George Tsegas, Christos Naneris

|Project co-funded by the European Commission within the Sixth Framework Programme (2004-2008) |

|Dissemination Level |

|PU |Public |x |

|PP |Restricted to other programme participants (including the Commission Services) | |

|RE |Restricted to a group specified by the consortium (including the Commission Services) | |

|CO |Confidential, only for members of the consortium (including the Commission Services) | |

Introduction

This technical paper deals with the improvements achieved in the application of sub-grid atmospheric dispersion models, with the final aim of introducing local modelling into external cost calculations. For assessing external costs in non-marginal applications, the resolution of regional scale models is too coarse, and a finer spatial resolution of the exposure or impact is necessary in most cases. Local modelling has to be taken into account both for the analysis of pollution dispersion and transformation characteristics and for the analysis of its impact at the local scale. Usually this is done using Gaussian models, which is normally a good choice for single sources such as power plants, provided their emissions consist mainly of primary pollutants; it is, however, less appropriate for secondary pollutants (e.g. ozone and secondary aerosols).

The work in this WP focused on finding ways to improve the whole process of atmospheric modelling in order to make its application more accurate, easier and more widely applicable. Especially the latter posed a considerable challenge, as the lack of suitable input data for local scale models often prevents their application at arbitrary locations. Towards this goal, two distinct tools were developed: the first assesses the terrain complexity of the area under investigation, while the second produces input meteorological data for local scale models, based on data from larger scale models. Parts of the methodology developed in this WP are to be used later in RS3a to perform a series of local scale runs, with the final aim of parameterising concentration patterns depending on the complexity of the area, the emission rate and the prevailing meteorological conditions.

A computational tool to estimate terrain complexity

Methodologies for quantifying the complexity of a given local topography are traditionally used as a tool for assessing the effect of terrain-forcing in a given area, allowing for an accurate estimate of the minimum spatial resolution required for local meteorological simulations. For the purposes of the currently implemented scheme of local modelling, a complexity estimation step was implemented as a means of selecting the appropriate approach for generating the required meteorological fields in a given area. In the case of a simple topography (e.g. inland areas with flat topography), a simple interpolation scheme using wind data at a few discrete locations would suffice to provide an accurate flow field, to be used as input in a Gaussian or Lagrangian dispersion model. On the other hand, an area containing complex terrain (e.g. mountainous topography and/or complex coastlines) would generally require a more detailed estimation of topography forcings and local flows, necessitating the use of more sophisticated wind models such as full 3D Eulerian prognostic models.

2.1 Complexity estimation method

Two different methodologies have been proposed for estimating the complexity of local terrains: Spectral Analysis of terrain data and Fractal Dimension Estimation. The former is usually based on a 2- or 1-dimensional Fourier transform of the 2-dimensional terrain data or sections thereof, respectively, followed by an estimation of the power spectrum at different spatial scales (Rayner, 1972; Salvador et al, 1999). In the latter method, an estimate of the fractal dimension (between 1 and 2) is calculated by fitting to selected height isopleths and/or to the coastline curve. A large fractal dimension implies complex topography, while values close to 1 indicate an almost flat domain (Jaggi et al, 1993). Extensive testing has indicated that Fractal Dimension methods tend to overemphasise the contribution of terrain roughness at very small scales (1 to 10 m) and, furthermore, that the contributions of different spatial scales are intrinsically difficult to resolve in cross-scale (fractal) estimations of the complexity. In contrast, spectral methods are much more flexible in allowing analysis and manipulation of contributions from any arbitrarily specified scale; therefore, a variation of the spectral method described by Salvador et al (1999) was used as the basis for the complexity estimation tool.
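For illustration, a minimal sketch of the box-counting variant of the fractal-dimension approach is given below, assuming the isopleth or coastline has already been rasterised onto the terrain grid as a boolean mask; the function name, box sizes and contour tolerance are illustrative choices rather than part of the NEEDS tool:

    import numpy as np

    def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
        """Estimate the box-counting (fractal) dimension of a binary curve mask.

        For each box size s the curve is covered with s x s boxes and the
        number of occupied boxes N(s) is counted; the dimension is the slope
        of log N(s) against log(1/s), lying between 1 (smooth) and 2 (complex).
        """
        counts = []
        for s in box_sizes:
            # Trim the grid so it divides evenly into s x s boxes.
            nx = (mask.shape[0] // s) * s
            ny = (mask.shape[1] // s) * s
            trimmed = mask[:nx, :ny]
            # A box is "occupied" if any of its cells lies on the curve.
            boxes = trimmed.reshape(nx // s, s, ny // s, s).any(axis=(1, 3))
            counts.append(boxes.sum())
        # Linear fit of log N(s) against log(1/s): the slope is the dimension.
        slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)),
                              np.log(np.array(counts)), 1)
        return slope

    # Example: a 200 m height isopleth extracted from a terrain grid z (metres):
    # isopleth = np.abs(z - 200.0) < 5.0   # cells within 5 m of the contour
    # d = box_counting_dimension(isopleth)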

As a first step, the rectangular grid containing the input data is extended by a buffer of zero values so as to obtain x- and y- grid extents that are powers of 2, as required by the 2-D Fast Fourier Transform (FFT) computational routines. The unnormalised 2D power spectrum F(x’,y’) is estimated simply by taking the modulus of the FFT on the positive x’>0 semiplane, where x’ is the wave number (in units of 1/km) corresponding to the x-direction.
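A minimal sketch of this step, assuming the terrain heights are held in a two-dimensional numpy array with a 1 km cell size, is given below; the function and variable names are illustrative and not those of the actual tool:

    import numpy as np

    def power_spectrum_2d(terrain, cell_km=1.0):
        """Zero-pad a terrain grid to power-of-2 extents and return the
        unnormalised 2-D power spectrum together with the wave-number axes."""
        nx, ny = terrain.shape
        # Extend each dimension with zeros up to the next power of 2,
        # as required by the FFT routines.
        px = 1 << int(np.ceil(np.log2(nx)))
        py = 1 << int(np.ceil(np.log2(ny)))
        padded = np.zeros((px, py))
        padded[:nx, :ny] = terrain

        # Unnormalised power spectrum: modulus of the 2-D FFT.
        spectrum = np.abs(np.fft.fft2(padded))

        # Wave numbers (1/km) along each direction; only the x' > 0
        # semiplane is kept for the subsequent radial summation.
        kx = np.fft.fftfreq(px, d=cell_km)
        ky = np.fft.fftfreq(py, d=cell_km)
        keep = kx > 0.0
        return spectrum[keep, :], kx[keep], ky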

In the second step, a one-dimensional spectrum P(r') is obtained from the 2D power spectrum using the following relation:

P(r') = \sum_{x_i'^2 + y_j'^2 = r'^2} F(x_i', y_j')

where the summation is performed over all grid points lying at wave-number distance r' from the origin, i.e. those satisfying r'^2 = x_i'^2 + y_j'^2, and the indices i and j denote grid coordinates along the x and y directions, respectively.
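The radial summation can be sketched as follows, building on the spectrum and wave-number axes of the previous sketch; binning points by their rounded distance r' in steps of the wave-number resolution is an assumption of this illustration, as the exact discretisation is not specified here:

    import numpy as np

    def radial_spectrum(spectrum, kx, ky):
        """Collapse the 2-D power spectrum F(x', y') into a 1-D spectrum P(r')
        by summing all spectral grid points at (approximately) the same
        wave-number distance r' = sqrt(x'^2 + y'^2) from the origin."""
        # Distance of every spectral grid point from the origin (1/km).
        r = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)

        # Bin width: the coarser of the two wave-number resolutions.
        dr = max(abs(kx[1] - kx[0]), abs(ky[1] - ky[0]))
        bins = np.round(r / dr).astype(int)

        # Sum the spectral power falling into each radial bin.
        p = np.bincount(bins.ravel(), weights=spectrum.ravel())
        r_axis = dr * np.arange(p.size)
        return r_axis, p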

In the final step of the calculation, three power indices are calculated by integrating P(r’) over three different spatial scales:

I_k = \int_{h_k}^{l_k} P(r')\, dr' , \qquad k = 1, 2, 3

where h_k and l_k are the wave numbers (in units of 1/km) bounding the k-th scale range of Table 1.

The predefined scale ranges used by the complexity estimation tool are listed in Table 1, along with other operational parameters of the algorithm. The compound power index I, used as the determining criterion of the complexity of the domain, is obtained as a weighted sum of the partial spectral indices and the total water surface area, expressed as a fraction w of the total domain area:

I = \sum_{k=1}^{3} a_k I_k + a_w w

where a_k and a_w are the weighting coefficients applied to the partial spectral indices and to the water fraction, respectively.

A fixed value of 1200 m2 km2 was determined experimentally as an appropriate threshold value of I for complex domains. This value was obtained by calibrating the algorithm to a collection of 20 European urban and rural domains.
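The remaining steps can be sketched as follows; the scale ranges and the threshold follow Table 1, whereas the weighting coefficients (a and a_w below) are placeholders, since their calibrated values are not reproduced in this paper:

    import numpy as np

    # Scale ranges of Table 1, expressed as spatial scales in km.
    SCALE_RANGES_KM = [(1.0, 10.0), (10.0, 50.0), (50.0, 100.0)]
    I_COMPLEX = 1200.0  # threshold value of the compound index (m2 km2)

    def compound_index(r_axis, p, water_fraction, a=(1.0, 1.0, 1.0), a_w=0.0):
        """Integrate P(r') over the three scale ranges and combine the partial
        indices with the water-surface fraction w into the compound index I.
        The weights `a` and `a_w` are illustrative placeholders."""
        partial = []
        for scale_lo_km, scale_hi_km in SCALE_RANGES_KM:
            # Wavelengths between scale_lo and scale_hi km correspond to
            # wave numbers between 1/scale_hi and 1/scale_lo (1/km).
            in_range = (r_axis >= 1.0 / scale_hi_km) & (r_axis <= 1.0 / scale_lo_km)
            partial.append(np.trapz(p[in_range], r_axis[in_range]))

        index = sum(ak * ik for ak, ik in zip(a, partial)) + a_w * water_fraction
        return index, index > I_COMPLEX  # True -> treat the domain as complex

    # Example usage (names hypothetical), chaining the previous sketches:
    # spectrum, kx, ky = power_spectrum_2d(terrain)
    # r_axis, p = radial_spectrum(spectrum, kx, ky)
    # index, is_complex = compound_index(r_axis, p, water_fraction=0.15)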

|Operational Parameter |Symbol |Value |Comments |

|Default area extents |ΔX, ΔY |150 km, 150 km |User-configurable |

|Default cell size |δx, δy |1 km, 1 km |Hard-coded |

|1st scale range |1/l1 to 1/h1 |1 km to 10 km |Hard-coded |

|2nd scale range |1/l2 to 1/h2 |10 km to 50 km |Hard-coded |

|3rd scale range |1/l3 to 1/h3 |50 km to 100 km |Hard-coded |

|Power index threshold value |Icomplex |1200 m2 km2 |Hard-coded |

|Execution time on a 2006 workstation | |~10 sec |Including initialization time |

|Execution time on a 2006 … |
