


Supplemental Material

Figures S1-S5 are expanded versions of Figs. 4 and 7 that include all PROMOD zones, Classes, and seasons. Figures S6-S7 show the probability distributions of temperature and load biases for one sample of randomly selected weather stations for Classes B-E over the full 2007-2010 period. Figure S8 shows the distributions of maximum temperature biases on different spatial scales and complements Fig. 11 of the main text. Table S1 shows how the number of individual building simulations would expand as more representative weather stations are used. An exploratory analysis of our random sampling methodology is given in the section "Sensitivity to Sampling Methodology" below.

Fig. S1. The relationship between the 2010 zone mean Class A representative weather station temperatures (x-axes) and simulated building energy demand (y-axes) using the calibrated BEND model in every PROMOD zone. In each panel the black dots indicate hourly values and the cyan lines are the second-order polynomial regression between the zone mean representative weather station temperature and building energy demand. The 2.5th and 97.5th percentiles of zone mean temperature and building load from the polynomial model are shown for the representative weather station (magenta squares) and NLDAS (red diamonds) datasets.

Fig. S2. The December-January-February (DJF) seasonal mean temperature bias for each county in the WECC calculated using the Class A-F representative weather station datasets. The Class A stations are indicated by black dots in panel (a). The biases are based on all data from 2007-2010.

Fig. S3. The March-April-May (MAM) seasonal mean temperature bias for each county in the WECC calculated using the Class A-F representative weather station datasets. The Class A stations are indicated by black dots in panel (a). The biases are based on all data from 2007-2010.

Fig. S4. The June-July-August (JJA) seasonal mean temperature bias for each county in the WECC calculated using the Class A-F representative weather station datasets. The Class A stations are indicated by black dots in panel (a). The biases are based on all data from 2007-2010.

Fig. S5. The September-October-November (SON) seasonal mean temperature bias for each county in the WECC calculated using the Class A-F representative weather station datasets. The Class A stations are indicated by black dots in panel (a). The biases are based on all data from 2007-2010.

Fig. S6. Probability distributions of the population-weighted mean temperature bias across the WECC for each of the six Classes of representative weather stations (colored lines) during (a) DJF, (b) MAM, (c) JJA, and (d) SON. The distributions include all data from 2007-2010 and are based on one random sample of the Class B-E weather stations. Biases are computed at the county level and then averaged across the WECC using county populations as weighting. The number in the legend indicates the root-mean-squared temperature bias for each Class. Note that the y-axis ranges differ between panels.

Fig. S7. Probability distributions of the total building energy demand bias for the entire WECC for each of the six Classes of representative weather stations (colored lines) during (a) DJF, (b) MAM, (c) JJA, and (d) SON. Loads are based on our regression approach; the distributions include all data from 2007-2010 and are based on one random sample of the Class B-E weather stations. The number in the legend indicates the root-mean-squared load bias for each Class. Note that the x-axis in (c) has a different range and the y-axis ranges differ between panels.

Fig. S8. Distributions of the maximum June-July-August 2010 temperature bias for each Class of data calculated over different spatial scales: IECC climate zones (red), states (magenta), PROMOD zones (blue), and balancing authorities (cyan). Distributions are based on 100 random samples of weather stations for Classes B through E. In the box plots the horizontal lines indicate the median value, the boxes indicate the area between the 25th and 75th percentiles, and the whiskers extend to the 5th and 95th percentiles. Solid lines trace the medians of each distribution and are intended to capture changes in bias across Classes.

CBECS/RECS Climate Region             Class A  Class B  Class C  Class D  Class E  Class F
1                                           3        3        6       12       20       48
2                                           6        6        8       11       16       45
3                                           4        4        6        9       15       30
4                                           6        6        8       11       19       29
5                                           3        3        5        7        8       10
------------------------------------------------------------------------------------------
Total Unique Combinations                  22       22       33       50       78      162
Ratio to Class A Unique Combinations        1        1     1.5x     2.3x     3.5x     7.4x

Table S1. The number of unique weather stations within each CBECS and RECS climate region for each of our six Classes of representative weather stations using the mappings from 2008. The mappings differ slightly between years, so we use the 2008 mappings as an example. If, for example, we utilized 10,000 unique buildings in our aggregate model, the total number of EnergyPlus simulations for each Class of data would be obtained by multiplying the Total Unique Combinations for that Class by 10,000.

Sensitivity to Sampling Methodology

The analyses in the paper were based on weather stations for the Class B-E samples that build on each other (i.e., the stations randomly selected for Class B are included in Class C, etc.). As discussed in the Data and Methods section, this "additive sampling" was chosen so that differences in error characteristics from Class to Class are primarily due to changes in the number of sites used rather than changes in the exact sites selected. This section quantifies the degree to which this choice could impact the important trends in our analysis. To do this, we generated an additional 100 random samples of the Class B-E weather stations for JJA 2010 without the requirement that the Classes build on each other; in other words, the sites chosen for each of Classes B-E are purely random.
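The distinction between the two sampling schemes can be sketched in a few lines. This is an illustrative sketch, not the study's actual code: the station pool is invented, while the per-Class sample sizes (22, 33, 50, 78) follow the Class B-E totals in Table S1.

```python
import random

# Hypothetical pool of candidate weather stations (placeholder IDs).
stations = [f"ST{i:03d}" for i in range(200)]

# Per-Class sample sizes, taken from the Total Unique Combinations
# row of Table S1 for Classes B-E.
class_sizes = {"B": 22, "C": 33, "D": 50, "E": 78}

def additive_sample(rng):
    """Additive sampling: each Class keeps the previous Class's
    stations and only adds new ones on top."""
    pool = stations[:]
    rng.shuffle(pool)
    samples, chosen = {}, []
    for cls, n in class_sizes.items():
        while len(chosen) < n:          # grow the running selection
            chosen.append(pool.pop())
        samples[cls] = chosen[:]
    return samples

def purely_random_sample(rng):
    """Purely random sampling: each Class is drawn independently,
    with no nesting requirement."""
    return {cls: rng.sample(stations, n) for cls, n in class_sizes.items()}

rng = random.Random(42)
add = additive_sample(rng)
pure = purely_random_sample(rng)

# Additive samples are nested by construction: B ⊆ C ⊆ D ⊆ E.
assert set(add["B"]) <= set(add["C"]) <= set(add["D"]) <= set(add["E"])
```

Because the additive Classes are nested, Class-to-Class bias differences reflect only the added stations, which is why the additive trends in Fig. S10 swing less than the purely random ones.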
Trends in the JJA 2010 temperature biases using the additive and random samples are shown in Fig. S9. The major trends discussed in our analysis are present in both the additive and random sampling datasets for JJA 2010. When examining trends in individual samples, the random sampling method is more likely to produce a see-saw, or non-monotonic, bias reduction between Classes (Fig. S10). For example, in some samples the bias may drop between Classes B and C, increase between C and D, and then drop again between D and E. The trends are more consistently downward across Classes for the additive samples. Intuitively this makes sense: retaining the same sites in the additive samples reduces the likelihood that biases swing up and down from Class to Class. As discussed previously, with additive sampling the RMS temperature bias drops by 1.0°C between Classes B and D but only an additional 0.5°C between Classes D and F. With purely random sampling, these bias reductions are 0.8°C and 0.8°C, respectively. Aside from those minor differences, the distributions of RMS temperature biases for the additive and purely random samples are statistically indistinguishable from each other for Classes B-E, as determined by a two-sided t-test with p = 0.05. The RMS load bias distributions follow similar patterns (Figs. S11-S12). Based on this we concluded that our principal results are not significantly impacted by our choice of additive rather than purely random sampling.

Fig. S9. The trend in root-mean-squared temperature biases during June-July-August 2010 for 100 random samples of weather stations for Classes B through E. Data shown in blue are created using random samples where each of Classes C through E builds on the previous Class ("additive sampling"). Data shown in red are created using purely random sampling in each Class ("random sampling"). In the box plots the horizontal lines indicate the median value, the boxes indicate the area between the 25th and 75th percentiles, and the whiskers extend to the 5th and 95th percentiles.

Fig. S10. The trend in root-mean-squared temperature biases during June-July-August 2010 for 100 random samples of weather stations for Classes B through E. Data shown in the top panel are created using random samples where each of Classes C through E builds on the previous Class ("additive sampling"). Data shown in the bottom panel are created using purely random sampling in each Class ("random sampling"). In the box plots the horizontal lines indicate the median value, the boxes indicate the area between the 25th and 75th percentiles, and the whiskers extend to the 5th and 95th percentiles. The thin black lines show the trends for individual samples.

Fig. S11. The trend in root-mean-squared load biases during June-July-August 2010 for 100 random samples of weather stations for Classes B through E. Data shown in blue are created using random samples where each of Classes C through E builds on the previous Class ("additive sampling"). Data shown in red are created using purely random sampling in each Class ("random sampling"). In the box plots the horizontal lines indicate the median value, the boxes indicate the area between the 25th and 75th percentiles, and the whiskers extend to the 5th and 95th percentiles.

Fig. S12. The trend in root-mean-squared load biases during June-July-August 2010 for 100 random samples of weather stations for Classes B through E. Data shown in the top panel are created using random samples where each of Classes C through E builds on the previous Class ("additive sampling"). Data shown in the bottom panel are created using purely random sampling in each Class ("random sampling"). In the box plots the horizontal lines indicate the median value, the boxes indicate the area between the 25th and 75th percentiles, and the whiskers extend to the 5th and 95th percentiles. The thin black lines show the trends for individual samples.
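The county-to-WECC aggregation behind the population-weighted biases of Fig. S6 (county bias averaged with population weights each hour, then a root-mean-square over hours) can be sketched as follows. The county names, hourly biases, and populations here are invented for illustration; the study uses real county data across the WECC.

```python
# Hypothetical hourly temperature biases (°C) per county, one value
# per hour, and hypothetical county populations.
biases = {
    "county_a": [0.5, -0.2, 1.1],
    "county_b": [0.1, 0.4, -0.3],
    "county_c": [-0.6, 0.2, 0.9],
}
population = {"county_a": 120_000, "county_b": 45_000, "county_c": 300_000}

total_pop = sum(population.values())
n_hours = len(next(iter(biases.values())))

# Population-weighted WECC-mean bias for each hour.
weighted = [
    sum(biases[c][h] * population[c] for c in biases) / total_pop
    for h in range(n_hours)
]

# Root-mean-squared bias across all hours -- the kind of summary
# number reported in the Fig. S6 legends.
rms = (sum(b * b for b in weighted) / n_hours) ** 0.5
```

Weighting by population before averaging means that a large bias in a sparsely populated county has little effect on the WECC-wide number, consistent with the load-oriented focus of the analysis.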