


Shawn Stiver
Geography 375: Introduction to GIS Programming
American River College
Professor Nathan Jennings

Processing of NOAA Precipitation Data and Thematic Map Generation

Abstract

The state of California is currently in a period of drought unprecedented in its history. Annual rainfall is a fraction of what the state has received in previous years, with 2014 the worst in recorded history, receiving on average only 50% of the precipitation normally received over the last few decades. NOAA (the National Oceanic and Atmospheric Administration) operates monitoring stations in hundreds of locations throughout the state, with historical records going back decades. However, the downloaded data is difficult to manage, due to both its sheer volume and the format it is presented in. This process was developed to automate the categorization, calculation, and preservation of the data by county and year, and to generate thematic map representations of that data.

Introduction

In this project, I used a Python script to bring in NOAA precipitation data for the northernmost 35 counties in California. The data spanned the years 1980 to 2014. Unfortunately, the annual precipitation data for each county was presented in monthly totals, not in a single annual total. This presented a challenge in interpreting the data, since the number of entries totaled in the thousands. The data was first processed in Microsoft Excel. Using a Python script, the data was spatially projected, and precipitation totals by county were then calculated and summarized by year. This data was then joined to a shapefile showing the northern counties of the state and added to a map template. The data was then classified by precipitation totals and symbolized to generate a thematic map showing totals by county and year.
The maps are then exported in PDF format for distribution.

Method

Precipitation Data

In an attempt to limit the scope of this project, the study area was limited to the northernmost counties of California (Figure 1) and restricted to the time period between 1980 and 2014, the last year wildfire data was available. Data used in this study was obtained from two sources. Precipitation data was downloaded from the NOAA Climate Data Online website in CSV format. Precipitation data was limited to one station per county due to website download size restrictions and the project timeframe. Station selection was dictated primarily by the completeness of the data; many stations had considerable gaps over the 34-year timeframe, or had ceased operation at some point.

Figure 1. Northern California Study Area

The downloaded data presented some challenges in its raw format. Annual summaries of precipitation were presented as totals for each month. For 34 years of precipitation data over 35 counties, the raw data comprised 13,833 lines. Rain and snow values lacked decimal points, and date values were presented in aggregated form for month and year; for example, January 1980 was presented as 198001. Excel was used as the primary step for processing the data into usable form. Decimal points were restored for precipitation values by multiplying by the appropriate factor (564 x .01 = 5.64"). Snow values were converted to inches in the same fashion, then converted to equivalent rain values using an average of ten inches of snow to one inch of rain (National Weather Service). Rain and snow values were then added together to determine total precipitation for that station (Figure 2). Total precipitation values of zero were then removed, as they would play no part in calculating values and could be problematic when averaging. The data was then saved in DBF format.
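The pre-processing arithmetic described above can be sketched in plain Python. The field layout and sample values below are hypothetical; the actual work was done in Excel on the NOAA CSV export.

```python
# Plain-Python sketch of the Excel pre-processing steps: restore decimal
# points, convert snow to its rain equivalent, sum, and drop zero totals.
# Sample (raw rain, raw snow) pairs below are hypothetical.

def total_precip(prcp_raw, snow_raw):
    """Convert raw NOAA values (no decimal point) to inches and combine."""
    rain = prcp_raw * 0.01          # e.g. 564 -> 5.64 inches of rain
    snow = snow_raw * 0.01          # restore the decimal point
    snow_as_rain = snow / 10.0      # ~10 in of snow = 1 in of rain (NWS average)
    return rain + snow_as_rain

rows = [(564, 120), (0, 0), (230, 0)]
totals = [round(total_precip(r, s), 2) for r, s in rows]
totals = [t for t in totals if t > 0]   # zero totals removed before averaging
print(totals)   # [5.76, 2.3]
```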
Precipitation Pre-Processing

Python Processing

The Python processing of the data was broken down into two steps: processing the data contained in the NOAA data table, and generating a thematic map once that data had been summarized.

Data Processing

The first step was to import the raw data and spatially project the collection points it represented. This was done with arcpy.MakeXYEventLayer_management, projecting the XY coordinates into the NAD83/Albers projection to match the shapefiles used in the template map. The result was then exported to a shapefile for future use as needed.

A while loop was then created in the script to cycle through the desired year range. This could be used to cycle through all the data (1980–2014), or modified to select only a couple of years at a time. A query was used to find the desired year values in the date field, updating the year with each cycle of the loop. Since the date in the original data was buried in the string of numbers described earlier, a LIKE statement was used to select the desired values:

query = """ "DATE" LIKE '%""" + str(year) + """%'"""

This was used to select only the rows matching the chosen year. A new shapefile was then created containing only the selected year's data for future use.

Once the year's data had been selected, a table containing the data needed for the thematic map was created. The total precipitation for each county in that year was summarized with arcpy.Statistics_analysis, and a new field was created and populated with a "clean" year value using arcpy.AddField_management and arcpy.CalculateField_management for future reference.

The next step was to join the data table with the county shapefile that would be used for generating the map. Indexes were created for both the shapefile and the table, and a layer and table view were created to facilitate the join.
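The year-selection clause described above can be illustrated in plain Python. The sample DATE strings are hypothetical; in the actual script, the clause is passed to an arcpy selection tool, which evaluates it against the DATE field.

```python
# Sketch of the LIKE-based year selection. select_year() is a plain-Python
# stand-in that shows which DATE strings the '%year%' pattern matches;
# arcpy performs the real selection using the WHERE clause.

def build_year_query(year):
    # Reproduces the clause from the script, e.g. "DATE" LIKE '%1980%'
    return """ "DATE" LIKE '%""" + str(year) + """%'"""

def select_year(dates, year):
    return [d for d in dates if str(year) in d]

dates = ["197912", "198001", "198012", "198101"]
print(build_year_query(1980))    # "DATE" LIKE '%1980%'
print(select_year(dates, 1980))  # ['198001', '198012']
```

Because the DATE values are six-digit YYYYMM strings, a pattern anchored at the start of the field (LIKE '1980%') would be slightly stricter than the '%1980%' form, though both behave the same for this data.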
These were then joined using arcpy.AddJoin_management, and a uniquely named new shapefile containing all the desired data for that year was generated with arcpy.CopyFeatures_management for future use.

Map Creation

A generic map MXD file was created (Figure 3), containing the shapefiles, titling, and comments-section features that would be used for the thematic maps generated.

Figure 3. Map Template

To create a map, the template is assigned to a variable, and the previously created final shapefile is brought in as a layer in the data frame. To symbolize the layer, a previously created layer file is used to import the proper classification via arcpy.mapping.UpdateLayer. As a final step, the legend style is updated, using arcpy.mapping.ListStyleItems to point to the location of the legend styles and legend.updateItem to apply the change. Layout elements within the map are listed with ListLayoutElements, and a for loop is used to cycle through them and update the subtitle of the map to reflect the proper year.

The created map is then exported to PDF format using mxd.saveACopy (Figure 4), and the year is incremented by one. The while loop then cycles back to the top, and the sequence is repeated until the highest value specified at the top of the script is reached. At the end of the program, the mxd variable is deleted to prevent data locks, and a message is generated indicating the program is finished.

Figure 4. Output Map

Summary

This project went through a couple of iterations. The first version was planned using search cursors to parse the raw data and create a table with the required data. However, I was not able to get that sequence to run. Deciding that a working project was a better outcome than a more elegant non-working one, I switched to using tool objects to accomplish the required tasks. During development of the map generation process, the most frustrating part turned out to be the classification of the new layer.
I tried two different methods, ApplySymbology and UpdateLayer, through many iterations of variables and of the order in which they were applied in the process. Ultimately, it was my professor who pointed me to a website with examples, which showed me I was missing a for loop to identify the layer and apply the classification. Finding the command to update the legend style was no small feat either.

Overall, the script functions as planned. Any number of maps can be generated quickly and easily, and shapefiles containing the appropriate data are generated by year for future reference as needed.

References:

NOAA, last accessed 2016.02.20
ArcGIS Resources

