MEREDITH BRASELMAN: Ladies and gentlemen, good afternoon, and welcome to the U.S. Department of Energy's Office of Electricity Wildfire Mitigation Webinar series. We are so pleased to have all of you here with us again today. I'm Meredith Braselman with ICF, and our team is going to be guiding you through the webinar today and throughout the webinars the rest of the month. As always, we have a few housekeeping items before we get started. Please note that this WebEx call is being recorded, will be posted on the Department of Energy's website, and may be used internally. If you do not wish to have your voice recorded, please do not speak during the call. If you do not wish to have your image recorded, please turn off your camera and participate by phone. If you speak during the call or use a video connection, you are presumed to consent to recording and use of your voice or image. If you have any technical issues or questions today, you may type them in the chat box and select to send to our host. Your lines have been muted and will remain muted throughout the webinar. We are taking questions today, though. You may submit your questions throughout the presentations, but we're going to hold them until the end and have all of our panelists answer together. To submit your questions, you can put them in the chat and, again, select to send to the host from the drop-down menu. If you will please also reference either the speaker's name or the topic when you submit, that will help us make sure it gets to the right panelist at the end. And finally, if you need to view live captioning, please refer to the link that will appear in the chat panel. So, to get us started today, I am pleased to welcome Michael Pesin, deputy assistant secretary for Advanced Grid Research and Development in the Office of Electricity. Michael, I will turn it over to you. MICHAEL PESIN: Thank you very much for the introduction, and welcome to the second webinar in our Wildfire Mitigation Series. I want to talk about the role of my division in the Office of Electricity. The role of the Advanced Grid Research and Development Division is to focus on ways to address the changes facing the nation's electric power grid. Our main effort is to ensure that the nation's energy delivery system is secure and reliable, and that it enables the administration's decarbonization goals. Grid transmission and distribution innovations, smarter ways to control, convert, and deliver electricity, are the keys to unlocking a cleaner, more affordable, and more resilient energy future. This resilient grid will help us maintain the reliability, equity, and connectivity that are vital for societal advancement. Whether the grid is powering manufacturing, essential health services, or all our computers and communications, it goes largely unnoticed until it fails. Recently, investments in the grid have focused on improving its reliability, efficiency, and resilience to meet growing dependence on electricity across all sectors. To serve our expectations of continuous access to electricity, a collection of generators, towers, wires, transformers, switches, and poles was erected and stitched together.
In addition to the physical infrastructure, a centralized control system was developed in which large remote generators are coordinated and dispatched to ensure the reliable delivery of electricity to end users through a network of high-voltage transmission lines and medium-voltage distribution systems. However, the electric power system is undergoing a dramatic structural transformation in both generation and loads, and this vast, complex machine will require significant re-engineering to meet new demands that are outside of what the grid was designed to do over 100 years ago. One of the big challenges is: how do we accommodate all this change over the existing transmission and distribution system? In addition, the administration has very ambitious goals of a 100% clean power sector by 2035 and net-zero carbon emissions by 2050. There is a lot of work that needs to be done, and is continuing to be done, on the generation side—so solar, wind—and on continued energy efficiency improvement and electrification of loads. However, on the delivery side, the [INAUDIBLE] of the grid is largely the same as it was 50 or 60 years ago. So, how do we accomplish this while maintaining grid reliability? Clearly, industry-wide coordination is essential: not just technology, but planning, operations, and markets. I always say that technology cannot succeed on its own. For new technologies to succeed, you need to have markets that support those technologies. And for markets to succeed, you need to have the right policy in place. These are the three legs of the same stool—policy, markets, and technology—working together to realize a shared vision of the modernized grid. One of the big success stories in recent years that our office participated in was the deployment of synchrophasors. We spearheaded the deployment of thousands of these sensors throughout the country in response to the 2003 Northeast Blackout. It provided significant improvement in high-level situational awareness. However, this is no longer enough. We need much higher fidelity and visibility into real-time grid conditions. We are now providing new investments that support development of dynamic line ratings to help system operators accurately and reliably determine the current-carrying capacity limits of transmission lines, which can improve reliability and resilience by providing grid operators with enhanced situational awareness of individual assets. Last week, the webinar series explored the sensing and detection capabilities provided by Oak Ridge, as well as fire testing from Sandia National Labs. Today's webinar will build on that education and dive into situational awareness capabilities. We are pleased to have experts from Argonne, Oak Ridge, PNNL, and Sandia National Labs. While we are highlighting the labs' capabilities, we understand that many utilities and other government agencies are working on their own solutions to mitigate wildfires, and we are very interested in hearing about what you are working on. So, please reach out to us; we would love to hear from you so we can all work together. You'll find contact information on the final slide of this webinar. Our hope is that you'll find capabilities that will work for you, and that we will be able to connect you with our labs to help mitigate wildfires together. So, thank you for your time, and enjoy the webinar. MEREDITH BRASELMAN: Thank you so much, Michael. We appreciate you being here with us today.
So, now it's time to hear from our colleagues at the national labs. Today you'll hear from Feng Qiu and Brett Hansard from Argonne National Laboratory; Aaron Myers, Jitu Kumar, and Gautum Thakur from Oak Ridge National Laboratory; David Judi and Dan Corbiani from Pacific Northwest National Laboratory; and Ken Armijo from Sandia National Laboratories. As a reminder, you are welcome to submit your questions in the chat box. Send to the host, and we're going to hold questions until the end. So, please reference the speaker or topic when you submit your questions. Let's take a deeper dive into the situational awareness capabilities provided by Argonne National Laboratory. We are pleased to welcome Feng Qiu from Argonne to discuss its multi-source, multi-time-scale wildfire data warehouse and visualization platform. Feng, you have the ball. FENG QIU: Hello, thank you. Can you hear me? MEREDITH BRASELMAN: We sure can. FENG QIU: OK, yes, sure. I also turned on my video, but I'm not sure if it's working. But anyway, thanks for the invitation. I'm very happy to have this opportunity to share some of our work on wildfire research. Wildfire research is a topic that relies very much on industry practice, so your inputs, suggestions, and comments are highly appreciated. And if you find any part of this talk interesting, please feel free to reach out to me. All right, I will first give an overview of this line of research, and then we're going to talk about the first part. There are two other presentations next week. The whole purpose of this research is to better understand wildfire risk. For example, what are the contributing factors for wildfire, and can we come up with a formula such that when you give me a condition, I can give you a number indicating the risk? And once we understand the risk better, how can we incorporate this understanding into operation and planning decisions? For example, can we do better public safety power shutoff operations so that outage hours can be reduced? Or in the planning stage, how can you incorporate this risk so that you might design your transmission or distribution grids a little bit differently? This talk is about the first part. We need data, right? We need a lot of data. So, let me give you a brief introduction to the features here. This is a data management system with a set of programs that automatically grab data from different sources and then do the processing, computing, and storing. We also have a visualization platform that converts all the information into pictures and sends them to the [INAUDIBLE] for visual presentation. As for potential users: we know that for a lot of the utilities on the west coast, wildfire prevention is a big part of their business. They have very sophisticated applications, and they have a lot of data. Certainly, we are not trying to duplicate their work or compete with them. But we know there are also a lot of other utilities who don't have this data [INAUDIBLE] wildfire application, and we hope that some of the tools we developed could be useful there. Also, if you are a community stakeholder, or if you are an inspector or developer, you may want to take a look at the fire potential in the area where you want to start your project. And for the research community, we hope that we can provide this one-stop shopping so that you can get all the data you need.
So, what data do we have here? Here is the fire triangle, which illustrates the major contributing factors to a wildfire. We have the weather: wind speed, temperature, humidity, precipitation, and other things. And we have fuel, which provides the materials. Normally, the weather and the fuel are lumped together into a so-called fire danger index. The third thing in the fire triangle is the power delivery, your devices that might start an ignition. So, these are the three types of input data we have. In addition, we also have fire incident reports. If you want to do more in-depth analysis, it's better to look at the fire incident reports and see if you can discover any patterns there. All right, so these are the major categories. Let's first look at the fire danger indices. The fire danger indices provide an estimation for fires that occur naturally. Our focus is on fires caused by power delivery, but the conditions and contributing factors for naturally occurring fires are also important in our study. A lot of agencies have developed different types of fire danger indices (for example, the National Weather Service and the USFS), and some utility companies also have their own fire indices. They normally take two kinds of inputs, weather variables and environmental variables, and lump them together according to some formula. Some of the fire indices are for the long term, yearly or seasonal, and some are for day-ahead forecasting. One common limitation of these fire indices is that they are based on observations from weather stations, which do not have very good coverage. The environmental science people at Argonne have improved these indices so that they can provide better coverage and better resolution, and we host indices from a number of agencies as well as from our own calculations. Also, if you are a planner for infrastructure, then you probably care about what happens in the next 50 years. At Argonne, our environmental science people have calculated future climate projections, taken all the variables from those projections, and calculated the fire indices. You can see the example here: this is the KBDI, the Keetch-Byram Drought Index, one of the drought indices, calculated for the late twenty-first century. You will notice that the southeast and midwest U.S., which traditionally are not considered fire risk regions, will in the future have significantly more days with hotter and drier weather. That might mean more fire potential in those regions, so this is very helpful for long-term planning. All right, the next type of data is vegetation type. I'm not sure how much time I have? All right, OK so-- MEREDITH BRASELMAN: We're good. FENG QIU: OK, thank you, thank you. The vegetation provides the fuel for the fire. There are different ways to categorize vegetation type. Some focus more on developed versus undeveloped areas, and some focus on the plant type. If you are doing in-depth analysis, this could be your input feature. So, we have collected a number of these vegetation type data sets. We also have weather and climate variables, and a number of variables have already been identified as highly relevant to wildfires.
For example: wind speed, relative humidity, temperature, precipitation, moisture from air or from soil, and geopotential height, along with a lot of other variables. And we provide not only the current weather variables but also the variables from the future climate projections. All right, the fire incident reports: so far, we only have fire incident reports from the California region, basically from PG&E, SCE, and SDG&E. We hope we can find more fire incident reports, so if you have this kind of information and would like to share it, please feel free to reach out to us. These reports include a number of fields, for example date, time, location, and fire size, and also the cause, such as equipment failure, contact from animals, or contact from vegetation. This is very useful if you want to categorize fires or find associations between fires and environmental variables. The last thing is the power infrastructure. We only have the transmission grid here, which you can download from the EIA website. Most wildfires happen in the distribution system, but we don't have that data. If you have the data, you can plug it into this tool and carry out the analysis. All right, here is the flow chart for the data flow and visualization. On the left are the databases located outside: USGS, NOAA, EIA, and other sources. We have programs that regularly download this data to our local server. Some of it is processed on the computer, for example the fire danger index, and some of it is simply stored. After that, we have a program that takes all this information and generates many, many small pictures, which are finally used for visualization. All right, here are the visualization results. The picture is not moving here; there is supposed to be an animation. If you see these branches, these are the transmission grids, for which we calculate the power flow. And then we have the fire potential index, which is one of the fire danger indices. If you overlay these two layers, you probably can discover something. I mean, I'm not an operator, but if you are, you probably can discover something. And of course, you can have your own definition of fire risk. For example, if you think wind is the critical factor here, maybe you can overlay the wind variable layer together with the power topology. And maybe you could add another layer of vegetation. We believe this is the first step in analyzing the risk for humans; visualization is the best way to just get started. We have more in-depth analysis using statistical methods and forecasting models, and that will be the next two presentations next week. This is only part of the work on wildfire at Argonne. The work is coordinated by our grid program leads, and we have people from the energy subdivision and also the environmental science division. We also have an industry advisory board with Southern California Edison. We really appreciate their support; we meet with them regularly, and they provide valuable information. We thank them very much. Most of this work is supported by Argonne LDRD funding. I think there is an acknowledgments page, but it's not here. Yeah, OK, so that's all I want to talk about. If you have any suggestions, maybe you find that something does not make sense, or you feel something might be useful, or you have some data you would like us to take a look at with you, please feel free to reach out to me. All right, thank you very much.
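To make the fire danger index discussion concrete, here is a minimal sketch of the daily Keetch-Byram Drought Index (KBDI) update mentioned above, using the published Keetch and Byram (1968) coefficients. This illustrates the general shape of such an index, a formula that lumps weather inputs into a single number; it is not Argonne's implementation, and the gridded and climate-projected products discussed in the talk would evaluate an update like this at every grid cell with interpolated or modeled weather rather than station observations.

```python
import math

def kbdi_next(q, t_max_f, net_rain_in, annual_rain_in):
    """Advance the Keetch-Byram Drought Index (KBDI) by one day.

    q              -- yesterday's index (0 = saturated soil, 800 = extreme drought)
    t_max_f        -- daily maximum temperature, degrees Fahrenheit
    net_rain_in    -- today's rainfall in inches, after the standard 0.20-inch
                      interception threshold per wet spell has been subtracted
    annual_rain_in -- mean annual rainfall at the location, inches
    """
    # Rainfall first: each inch of net rain removes 100 index points.
    q = max(0.0, q - 100.0 * net_rain_in)
    # Daily drought increment from Keetch & Byram (1968); no drying below 50 F.
    if t_max_f > 50.0:
        dq = ((800.0 - q)
              * (0.968 * math.exp(0.0486 * t_max_f) - 8.30)
              / (1.0 + 10.88 * math.exp(-0.0441 * annual_rain_in))
              * 1.0e-3)
        q = min(800.0, q + max(0.0, dq))
    return q

# A week of hot, dry days in a dry climate pushes the index upward.
q = 350.0
for _ in range(7):
    q = kbdi_next(q, t_max_f=95.0, net_rain_in=0.0, annual_rain_in=15.0)
print(round(q))  # roughly 390 under these invented inputs
```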
MEREDITH BRASELMAN: Thank you so much, Feng. We appreciate all of that information. Now, we're going to turn it over to Brett Hansard, also of Argonne, to talk about risk and crisis communication for wildfire response. So, Brett, you have the ball, and the floor is yours. BRETT HANSARD: All right, wonderful, hello, everybody. I'm Brett Hansard with Argonne National Laboratory. I appreciate the opportunity very much to speak with you today. I know time is short, so I'm going to go ahead and jump right into this presentation. I'm the manager of a training and exercise program at Argonne that focuses on emergency public information: basically, how we communicate with the public and media before, during, and after a crisis, and how, as responders, we communicate with each other, both internally and with our external partners. What we do isn't specifically targeted to wildfires, but the capabilities we offer are vital to a successful wildfire response. From our perspective, in any kind of disaster or emergency, information is as important as food, water, and shelter. And that's definitely true for wildfires, perhaps even more so. I'll talk about the points on this slide a little bit as we go forward, and about some of the training and exercise considerations that are unique to wildfire planning and response from an emergency public information perspective. This slide simply gives you a quick overview of some of the different organizations and agencies we're currently working with. Wildfires have been a particular focus in recent years for many of our sponsors, like DOE, NNSA, and the Chemical Stockpile Program, which all have sensitive facilities in high-risk fire areas. We also have an ongoing project with FEMA on how to communicate effective alerts and warnings, which is also a major interest for states and locals with wildfire hazards. Our training program offers a wide variety of emergency public information training topics, both virtually and in person. Studies on communication consistently show that public affairs is much, much more than just media relations, which is still what a lot of people consider to be the primary job of the public information officer. We really strive to take a more holistic approach and look at all the elements that go into successful sunny-day and emergency communications. At the bottom of this slide, you'll see a link where you can download a copy of our course catalog and see the listings and descriptions in a little more detail. This slide shows some of the different topical areas that our training encompasses. For the first area, public affairs, we offer courses for both leadership and full-time communicators, including a course to help senior managers better understand and appreciate the important role of the PIO. In the risk and crisis communication area, we get into the science and methodology behind effective communication. And media relations is about engaging directly with the news media.
For the social media, digital communication, and public information technology areas, we look at the countless new ways in which information is disseminated, consumed, and monitored, which have provided incredible new opportunities for both sender and receiver but have also posed major challenges for the emergency management community. And finally, exercises and drills; I'll talk about those a little more in a minute as a way to create a challenging and realistic environment to practice decision making and improve your response planning. When we think about critical communication capabilities for a wildfire response, there are some key areas to consider, with some of the most relevant courses listed here. The idea of a joint information system, a joint information center, is how the various response agencies (local, state, and federal) will come together and work with each other. By its nature, a wildfire response usually involves multiple agencies at all levels of government, and their ability to coordinate emergency public information across jurisdictional lines is absolutely critical. We like to say as PIOs that our job is to get the right information to the right people at the right time so they can make the right decisions. Reaching vulnerable populations and dealing effectively with misinformation and disinformation can be the difference between at-risk populations getting or not getting the information they need. That vulnerable populations course was actually developed by someone who worked the Tubbs Fire in California in 2017 as a 911 dispatcher, and she saw firsthand the problems experienced in that event by vulnerable populations. In addition, there is spokesperson training, which is really about enabling communicators to go on camera with messages that answer those three critical questions: what happened, what are you doing about it, and what does it mean to me? There are some other courses here, Social Media for Situational Awareness, Social Media Monitoring and Reporting, GIS Tools, and Livestream Technology, that leverage the ability to directly engage with the media and public in real time during an emergency. That Go Live course also came out of the Tubbs Fire; it was developed by a PIO who worked in the Santa Rosa City EOC producing live-streamed videos. One of the other areas we're actively involved in is providing technical assistance to states and locals for alerts and warnings through a program with FEMA. The increasing frequency, intensity, and duration of weather events, including wildfires, has highlighted that the protective actions people take or don't take have enormous consequences for their health and safety. We know that alert and warning messages that the public receives, understands, and responds to will save lives, but there are many challenges that emergency managers face in issuing those alerts and warnings. You can see some of the case studies we've done that are included as part of this technical assistance. We saw in the Camp Fire, for example, that emergency phone calls alerting Paradise residents to evacuate failed to reach more than a third of those who had signed up for the local warning system. In the Gatlinburg Fire, an EAS message to announce the city's evacuation didn't go out because power outages prevented that message from being verified. And in the Oroville Dam spillway evacuation, the order to evacuate almost 200,000 people caused hours of bumper-to-bumper traffic.
So, the audience for these sessions is the folks who are directly involved in issuing alerts and warnings: emergency managers, PIOs, and social media communicators. Right now, these sessions are being done virtually with a heavy emphasis on peer-to-peer engagement and information sharing. And like all the training that we do, the course materials continue to evolve as new resources and case studies are built into the program and shared with participants. Now I want to shift gears just a little bit. The primary purpose of my presentation is to talk about training, but I also want to mention our mock media exercise support, because it's a unique thing that we do, and it plays an important role in testing and validating emergency public information capabilities and helping to identify training needs. Our goal in this regard is pretty straightforward: to create a challenging and realistic exercise environment so agencies can see what's working and where they need to improve. We use a virtual training platform called the Exercise Training Network, or ETN. That's a secure, password-protected site where players in an exercise or drill can see what the media would be reporting in an emergency, and then respond as they would in a real event. On this slide is a view of that ETN dashboard; this is from a virtual exercise we did in Kentucky last year. In a typical exercise, we would usually produce about two dozen mock media news stories, a mix of radio, print, and video. We develop all of these stories in real time based on direct interactions with players through phone calls, interviews, media briefings, news conferences, anything we can do to gather information, like the real media would. In addition to the news stories, we also simulate social media to show what the media and public would be saying on social media platforms, and to give players a chance to practice using social media platforms as well. So, I was going to show a short clip here, but I have a feeling it might not have made it in. MEREDITH BRASELMAN: Brett? BRETT HANSARD: Yeah? MEREDITH BRASELMAN: Yeah, Brett, what we're going to do is we're going to take control here. We'll show the videos; just let us know when you want to go to the next slide. And also, for folks who have dialed in on your phones, you are not going to be able to hear any audio for the next couple of slides. But when you view this later on, you'll be able to see the videos. So, Brett, I'll hand it back to you. BRETT HANSARD: OK, thanks. This is just an example of one of those mock media news reports that we would produce in an exercise. This one was done as part of an annual exercise we did in Colorado a couple of years ago. You can go ahead and play—all right, there we go. I'm not hearing that audio; I'm not sure if anybody else— MEREDITH BRASELMAN: We are not either. Again, once folks see this a little later on, they will be able to hear and see it. BRETT HANSARD: OK, well, you guys get a flavor of it. These are simulated news reports that are part of an exercise, and they're really designed to bring that realistic flavor to the exercise scenario and give players something tangible and real to react to and respond to. The next slide is one more video, and I'm not sure if we're going to have the same problem there. But it's an example of a virtual studio that we developed last year to practice live remote interviews. And yeah, again, we're not hearing the audio.
So, I was really excited about this, because it parallels a lot of the trends that we're seeing in the real world, particularly during the pandemic, where these kinds of interviews really became the norm. So again, creating that experience, creating that environment for players to practice something that's going to be very realistic for them in an actual event. So, I'm going to skip on; we can just go on to the next slide. The next slide is my last one. Yeah, and so this is the final slide, and it summarizes the main takeaways. I won't read these verbatim, but my hope is that you'll see from all of this that effective communication and critical messaging are not an accident. They're the result of sustained and focused efforts that apply the science of risk and crisis communication to identify training gaps and establish best practices. We've seen from countless disasters and emergencies, including several recent wildfires, that collectively we have to do a better job communicating with the public and coordinating our messages. And it's really, I think, at the core of who we are and what we do as emergency management and public safety professionals. So, I think the last slide is just questions, and I do look forward to answering any questions when we get to the Q&A session or at any point in the future. So, thanks, everybody. MEREDITH BRASELMAN: Thanks so much. Really great seeing what you guys are doing with exercises. We appreciate you sharing that. Now, we're going to hear from the Oak Ridge team, and I'm going to turn this over to Aaron Myers to talk about Eagle One. Aaron? AARON MYERS: Thank you. I'm actually going to be talking about EAGLE-I, the Environment for Analysis of Geo-Located Energy Information. EAGLE-I is the U.S. Department of Energy's operational situational awareness tool for the energy sector. It is sponsored by the Office of Cybersecurity, Energy Security, and Emergency Response. EAGLE-I collects utility customer outage data from over 430 utilities across the U.S. We cover about 144 million customers, which is about 92% of the United States and its territories. That data is collected every 15 minutes, at 0, 15, 30, and 45 past each hour, and then aggregated into common county-level and state-level summaries of current electrical outages. We also break it down by utility and provide up to 30 days' worth of historic data, though we have historic data going back all the way to 2014 that we can actively engage with. This application supports multiple agencies across the federal government (the Department of Energy, Homeland Security, FEMA, and USDA), along with their state and emergency response partners, and primarily supports the ESF #12 energy response function within the National Response Framework. Really, our goal is to provide a modern capability for situational awareness for emergency response and recovery. Within EAGLE-I we have a lot of capabilities, currently centered around electricity and those utility customer outages. We also include a lot of current situation information, such as wildfire data, which is pulled from the National Interagency Fire Center. We also have 23 critical infrastructure layers pulled from HIFLD, the Homeland Infrastructure Foundation-Level Data sets, which are distributed by DHS and updated annually; those include transmission lines, power plants, natural gas pipelines, natural gas terminals, and a lot of other information. And for situational awareness we have information about earthquakes, hurricanes, and tropical cyclones, along with river observations and current watches and warnings.
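As an illustration of the quarter-hourly roll-up just described, here is a minimal Python sketch of snapping a timestamp to the 15-minute collection marks and aggregating per-utility outage reports to county totals. The record fields and values are hypothetical simplifications for illustration, not EAGLE-I's actual schema or code.

```python
from collections import defaultdict
from datetime import datetime, timezone

def quarter_hour_floor(ts):
    """Snap a timestamp to the :00 / :15 / :30 / :45 collection marks."""
    return ts.replace(minute=(ts.minute // 15) * 15, second=0, microsecond=0)

def aggregate_outages(reports):
    """Roll individual utility outage reports up to county-level totals.

    `reports` is an iterable of dicts with hypothetical fields:
    {"utility": str, "state": str, "county_fips": str, "customers_out": int}
    """
    by_county = defaultdict(int)
    for r in reports:
        by_county[(r["state"], r["county_fips"])] += r["customers_out"]
    return dict(by_county)

# Invented example records from two utilities reporting into the same counties.
reports = [
    {"utility": "Utility A", "state": "CA", "county_fips": "06007", "customers_out": 1200},
    {"utility": "Utility B", "state": "CA", "county_fips": "06007", "customers_out": 300},
    {"utility": "Utility B", "state": "CA", "county_fips": "06063", "customers_out": 45},
]
snapshot = quarter_hour_floor(datetime.now(timezone.utc))
print(snapshot, aggregate_outages(reports))
```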
Some recent additions to EAGLE-I that we've made to enhance situational awareness include pulling in some social media data from Planet Sense, a capability you'll hear about in just a couple of presentations, where we're basically curating the Twitter and Facebook information published by each of the utilities we collect data from to help provide additional context. When there's an outage and there's not necessarily a big storm or a big event happening, a lot of times the utility companies will put relevant information out on their social media that can help the watch offices and the response communities understand what's actually happening on the ground, get information about restoration times, and a lot of other information that helps to enhance situational awareness. And a big piece of that is those current situation layers, such as wildfires and earthquakes. So, up to date, EAGLE-I has been primarily a situational awareness tool, but we're working to add and integrate some new analytic capabilities, and I want to talk about those very briefly. The first one is energy infrastructure damage assessment and identification, using both satellite-based imagery, such as the VIIRS nighttime lights, and UAS imagery from drones to identify changes in light output. Looking over the nighttime lights, if you see a change in the light output, you can assume that there's been some power loss in that area. And from drone and UAS imagery, we can actually identify critical infrastructure, maybe small-scale infrastructure such as electrical utility poles, that may have been damaged and could be causing those different outages. We're looking to bring that online as a real-time capability to, again, enhance the situational awareness and the insights that the response community can have into what's actually happening on the ground. In relation to that, and relevant specifically to doing that sort of work during a wildfire, we have several basic image processing capabilities here at the lab that aren't necessarily tied to EAGLE-I but could be leveraged in our work. We have a dehazing capability where we can pull smoke, fog, and a lot of other interference out of an image to get a much clearer picture of what's happening on the ground, and the same with shadow mitigation and light enhancement. So, we can do a lot of post-processing on the data very rapidly: we can get the raw images, post-process them, and then run them through our other algorithms to identify impacts very, very quickly and provide the best answers to the decision makers. Another new capability that we're looking at, which I think is relevant to situational awareness as it relates to disasters and wildfires, is called Urban-Net, where we're looking at cascading impacts, understanding how the electrical infrastructure and different infrastructures are connected to each other. We can use this capability to do what-if scenarios for disaster preparation and exercises, but also, when an event is happening, we can provide information not only about what might be impacted within that area, within that wildfire perimeter, but also about the downstream impacts. If certain substations and certain power plants get impacted and taken offline, what are some of the downstream infrastructure assets outside of that impacted area that we need to be paying attention to and that might otherwise be missed? So, we can do a lot of modeling and a lot of preparedness work to understand the cascading impacts of these different events.
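The cascading-impact idea can be pictured as reachability over a dependency graph. Here is a minimal sketch with made-up asset names; the real Urban-Net model is far richer (capacities, redundant supply paths, restoration logic), so treat this only as an illustration of the downstream-impact question.

```python
import networkx as nx

# Hypothetical dependency graph: an edge u -> v means "v depends on u".
g = nx.DiGraph()
g.add_edge("substation_12", "water_pump_3")
g.add_edge("substation_12", "hospital_feeder")
g.add_edge("hospital_feeder", "county_hospital")
g.add_edge("water_pump_3", "treatment_plant")
g.add_edge("substation_9", "treatment_plant")  # a redundant supply path

def downstream_impacts(graph, failed_assets):
    """All assets reachable from the failed set, i.e. candidates for cascading loss.

    A real model would also check whether surviving supply paths (like
    substation_9 above) keep an asset energized; this sketch only walks
    reachability in the dependency graph.
    """
    impacted = set()
    for asset in failed_assets:
        impacted |= nx.descendants(graph, asset)
    return impacted - set(failed_assets)

# What-if scenario: a wildfire perimeter takes substation_12 offline.
print(downstream_impacts(g, {"substation_12"}))
```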
And finally, another new capability we're working on is power outage event monitoring. We're looking at taking real-time situational awareness information, such as wildfire information, tornadoes, earthquakes, and severe thunderstorms, and looking to correlate those events with actual utility customer outages. So, not just being able to say that yes, there are outages in this area, but that this percentage or this amount of these outages was caused by this other event. There are always some outages, even on blue-sky days, so can we actually attribute percentages of those outages to another event, as compared to, say, line maintenance or something else happening in the electrical infrastructure that's causing those issues? We're trying to come up with models and capabilities that allow us to provide a lot more information and a lot more confidence to say that these outages were caused by this external event, especially for no-notice events such as wildfires, tornadoes, and earthquakes, in comparison to hurricanes, where we tend to get several days' worth of notice and the ability to prepare. And that's all I have. I'd be interested to take your questions at the end. Thank you. MEREDITH BRASELMAN: Aaron, thank you so much for sharing EAGLE-I with us. We appreciate it. Now, I'll hand it over to Jitu Kumar from Oak Ridge to discuss ForWarn: satellite-based change recognition and tracking. So, Jitu, I think you've got it. There you go, it's loading up. JITU KUMAR: You can see my screen fine, and hear me? MEREDITH BRASELMAN: We sure can, and we can hear you, too. JITU KUMAR: All right, perfect, thanks. Thanks for the opportunity to talk to you about some of the work that we've been doing at Oak Ridge on the ForWarn project, which is a satellite-based change recognition and tracking system. Just for context, a brief background on wildfires: they are among the largest and most devastating disturbance events that we have in the country. And I wanted to note that wildfire management is done by a lot of different agencies: on federal lands, the USDA Forest Service, the Department of the Interior, and so on, and on non-federal lands, a lot of state agencies. A number of agencies and entities actually monitor and record historical as well as active wildfire information, but it's important to note that they all do it differently, depending on the purposes they have. So, a synoptic, wall-to-wall examination of wildfires still tends to be missing. The animation that you see in the background is from the USGS Monitoring Trends in Burn Severity program, which is the most complete record of all wildfires. But they only record wildfires greater than 500 acres in the Eastern United States and greater than 1,000 acres in the Western United States. That's just an example of how things are done differently.
And depending on where you are, it actually may matter. So, ForWarn provides near-real-time tracking of vegetation changes across landscapes across the entire United States, all of the lower 48. It's a program that's led and supported by the USDA Forest Service, and it's a system that's used operationally within the agency. ORNL is a contributing partner, and we provide a lot of support for development of remote sensing and data analytics algorithms and technologies, high-performance computing, and tools and software. ForWarn is based on MODIS NDVI, the MODIS Normalized Difference Vegetation Index. It uses data from two NASA instruments, the MODIS instruments on the Aqua and Terra satellites. And it monitors all lands, all the time, for all possible changes, both disturbance and recovery, whether natural or anthropogenic: everything from wildfires, droughts, and hurricanes to insect infestation, anything you can think of. It's all happening at a spatial resolution of 250 meters, with a temporal update frequency of every eight days and an eight-day latency. So, if you go to ForWarn at that link today, you'll see it's updated all the way to eight days ago, and it's constantly being updated; every eight days there are updates happening. And it's designed for enhanced sensitivity, not just to look at very abrupt changes like wildfires, but also at minor disturbances from frost events, insect damage, and so on. It provides a lot of different baselines to be able to detect different ecosystem disturbances of interest. ForWarn has a primary focus on forest lands, and this is a snapshot of one of the current departure maps that I grabbed a couple of days ago, but all the calculations it's doing are for every possible pixel on the land that you can think of. In recent work we have expanded; we're trying to cover all of North America. So, this is just a snapshot of the current disturbance map, the current departure map, from ForWarn. It provides a fast-turnaround assessment of disturbance of all vegetation and fuel, and for wildfire purposes, the fuel regrowth patterns. The Forest Change Assessment Viewer is designed to see fine distinctions and severe negative departures all across the CONUS, but for wildfire purposes you can start to tweak it to focus particularly on wildfire disturbance as opposed to all possible disturbances. And it's one of the only systems that monitors departure from expected normal conditions. In ForWarn, you can have an expected normal based on a year-long baseline, a five-year baseline, or a 10-year baseline. So, there are a number of different ways ForWarn provides information applicable to different types of disturbance events. ForWarn has been operational since 2010, but the data record goes back all the way to 2000. And in the 10 years that we have been working with ForWarn in an operational capacity, especially for the purposes of wildfire disturbance, we have noticed that shallow-rooted vegetation, grasses and shrubs, responds to dry conditions very rapidly, and reverts and recovers quickly as well after droughts. So, when we think of wildfires: if you are worried about wildfires that are going to impact your forest ecosystems, you really have to watch out for all the grass and shrubs around you, because those light fuels are susceptible to ignition and tend to show the signs of disturbance fairly early. And with the near-real-time capability, you can start to monitor those changes really well.
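To give a sense of the arithmetic behind a departure product like the one described, here is a minimal sketch of NDVI and a percent-departure calculation on a toy raster. The actual ForWarn baselines (multi-year statistics over 8-day MODIS composites) are more sophisticated; the numbers and array shapes below are invented for illustration.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from near-infrared and red reflectance."""
    return (nir - red) / np.maximum(nir + red, 1e-6)

def percent_departure(current_ndvi, baseline_ndvi):
    """Departure of current greenness from the expected baseline for this period.

    Strongly negative values indicate disturbance (fire, drought, insects);
    values near zero mean vegetation is tracking its expected normal.
    """
    return 100.0 * (current_ndvi - baseline_ndvi) / np.maximum(baseline_ndvi, 1e-6)

# Toy 2x2 scene: the lower-right pixel has burned (NIR collapses, red rises).
nir = np.array([[0.50, 0.48], [0.52, 0.12]])
red = np.array([[0.10, 0.11], [0.09, 0.20]])
baseline = np.array([[0.66, 0.63], [0.70, 0.68]])  # expected NDVI for this window
print(np.round(percent_departure(ndvi(nir, red), baseline), 1))
```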
This is an example that I identified. In the middle of the screen on this animation is the Camp Fire in Butte County, California, which was one of the fires ignited by a faulty electrical transmission line. You can start to see how ForWarn tracked that wildfire event over time. My animation goes a full year after the fire, and you can see, in the block on the bottom right corner, that the system has not recovered from that fairly abrupt disturbance since the wildfire. But it illustrates the capability of what we can do with eight-day tracking. All of the ForWarn data products are free on a viewer that's at forwarn.. No login or password is required, and all near-real-time as well as historical data products since 2000 are readily accessible through WMS and WCS services. ForWarn tracks every possible disturbance, but I wanted to take a minute to talk about what is most relevant for electrical infrastructure. We really want to focus on wildfire disturbances, and we would like to know what's happening before a wildfire, the potential risk: before a fire happens, where might it happen; and then during a wildfire, how is it actually moving and where may it move, for incident planning and response. To zoom into areas of interest: right now ForWarn works at a 250-meter resolution, but there are other data sources that can be brought in to improve both the spatial resolution and the temporal frequency of these updates to a more frequent cadence that may be more relevant for electrical infrastructure. And then the idea of risk. A wildfire requires three things: you have to have the fuels available, you have to have the conditions right for wildfire, and then you just need an ignition source. Numbers one and two are things we can assess ahead of time, before a wildfire has happened. So, there is a way to start looking at all these data sets available to us and identifying these potential risks, and that can inform mitigation and planning. And the last point is wildfire spread. A wildfire can start at an infrastructure asset and spread out, or it can happen the other way, where a wildfire that started elsewhere may actually [INAUDIBLE] a powerline infrastructure. By doing a good assessment of where the fuels are and how those things are connected, we can start to try these what-if scenarios of conditions under which a particular piece of infrastructure may be vulnerable, and start to use those in our monitoring and mitigation. So, that's all I have, and I'll be happy to take questions later on in the discussion session. MEREDITH BRASELMAN: Thank you so much, we appreciate it. As a reminder, please keep submitting your questions in the chat box. We are going to be holding them till the end, but we are going to be going through a number of questions. So, include the name of the speaker or the topic when you submit your question. Our final speaker from Oak Ridge is Dr. Gautum Thakur, who is going to discuss GridSense. So, I'm going to hand it over. GAUTUM THAKUR: Thank you, Meredith. Good afternoon, everyone.
I'm Gautum Thakur, a staff scientist here at Oak Ridge National Laboratory. I work in the National Security Sciences directorate. The motivation for today's talk is a little more unconventional for situational awareness: situations where non-authoritative data like tweets, Foursquare check-ins, or Facebook posts could be used to improve our understanding of natural disasters and evacuations, and at least to provide near-real-time guidance about on-the-ground situations. There are certain challenges in how to capture and use this data, and there's a lot of science behind how we approach this problem of collecting, processing, informing, and creating a reasonable understanding of any situation. As Aaron said earlier, we currently collect information from over 100 utilities that talk about outages and restoration, and we process that information to create a second line of evidence that supports any major outage information coming out of the utilities. And in situations like natural disasters, when utilities tend to rely more on sharing outage information through social media, platforms like GridSense come in pretty handy for creating information that you can reliably send to first responders or even use for policy guidance. So, the narrative here is how you can incorporate citizen science into more serious modeling and prediction of impacts to energy infrastructure. I borrow one of the motivations from work we did in the past during the Smokies wildfire here in Gatlinburg, which is pretty close to home for us. In 2016, on the left side, you see a wildfire tweet from the Great Smoky Mountains National Park Service handle about a 0.25-acre wildfire. They sent a picture of it and its location. They also said, as their estimate, that the area was closed, but they hoped it would reopen. Fast forward to late November 2016: a couple of wildfires had happened, a large part of the national park was officially closed, and severe damage was done to infrastructure and human life; a lot of animals and the natural beauty of the national park were also compromised. It was surprising to see that this was the first information that came out from the National Park Service, tweeted around that timeframe on November 14. And the wildfire that really got out of control started on November 23, 2016. As we look into it, this was a real historic tragedy for the state of Tennessee: 14 people were killed, over 190 people were injured, and 90% of the entire Gatlinburg area, as well as the Smoky Mountains National Park, was evacuated. Over $2 billion worth of losses occurred, and over 2,500 pieces of real infrastructure, including the electric grid and power transmission lines, but also hospitals, schools, hotels, and cabins, were destroyed. The official duration was November 14 to December 22, but the actual fires had pretty much ended between November 29 and the first week of December; after that it was mostly rescue and post-disaster recovery. And in a matter of a few days to a week, over 28 square miles of the area burned.
So, this is a timeline that I created based on a generated report and also on some of our data collection at the time. As I said, November 14 was the day the first Chimney Tops fire was detected by the National Park Service, followed by a much bigger fire detected on the 23rd. From the 23rd to the 28th, the fire continued to grow, and there were some heroic efforts in play to detect and extinguish these fires. But at the same time, a lot of information was being shared by people through citizen-science platforms like Facebook and Twitter. They were sharing their locations, they were sharing how intense the fires were, they were sharing photos, and they were sharing live coordinates of exactly where they were. So, if you see that the 23rd to the 28th was the actual timeframe, then we had enough time and enough capacity. Of course, the terrain was complex, but what I wanted to show here is that there was a real opportunity to identify, make, and activate plans during that time. By the early evening of the 28th, the city of Gatlinburg started to experience intermittent power outages, and that quickly spiraled from Gatlinburg completely losing power to the Tennessee Valley Authority losing its transmission lines, all within a matter of hours. Of course, that was restored soon after. But the fact of the matter is, there was interdependence among a lot of different networks, electrical, cellular, everything, and all of it was lost within that timeframe. These kinds of things could be avoided: besides having information from sensors and satellites, we can also try to understand and capture the information that people are sharing through these unconventional data sources, like Twitter. These are some of the tweets we collected at the time, out of several thousand tweets shared by people. As you can see here, people were actually sharing their location, the time of day, and the situation on the ground. So, these are the two counties that were affected: over 11,000 customers were without electricity, waiting for some kind of restoration, and evacuation was carried out to save more lives. But looking beyond this one small problem, it was important to also look at it as a scientific and research challenge. When we think about collecting this data and really sharing this information in near real time for policymaking and guidance, there is a need to create a more sustainable, data-intensive computing system. Aaron and I and a lot of other people here at the lab spent an enormous amount of time over a couple of years fine-tuning how we collect these data sets, how we process them in near real time, and how we detect outliers. Because most of this data is volunteered data, not authoritative data, how can we trust that when a person is talking about a wildfire, there is indeed a wildfire? This platform basically captures information from different sources like sensors, but also captures social media data. We have 25 to 30 virtual machines whose job is just to watch unconventional data sources for data that might indicate some kind of natural disaster: wildfires, outages, restorations, flooding, and things like that. We have then built a stack of different machine learning models that process the data in near real time. At the same time, we also added a human component to pinpoint information: before it gets underway in the system, a person says yes or no, this information is real, or this information may be doubtful. So, we create some level of confidence in this data as well. And finally, that data can be shared with any third party.
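As a toy illustration of the first-pass filtering such a pipeline needs before the machine learning models and human reviewers just described, here is a minimal keyword-scoring sketch. The keywords, weights, and threshold are invented, and GridSense's actual models are trained classifiers, not regex rules; this only shows the general triage idea.

```python
import re

# Invented keyword weights for a first-pass relevance filter.
KEYWORDS = {
    r"\bwild ?fire\b": 0.5,
    r"\boutage\b": 0.4,
    r"\bpower (is )?out\b": 0.4,
    r"\bevacuat\w+": 0.3,
    r"\brestor\w+": 0.3,
}

def relevance_score(text):
    """Crude relevance score in [0, 1] for a social media post."""
    score = sum(w for pattern, w in KEYWORDS.items() if re.search(pattern, text.lower()))
    return min(1.0, score)

posts = [
    "Wildfire on the ridge, power out across downtown, evacuating now",
    "Great pizza tonight!",
    "Utility says restoration expected by 9 pm after the outage",
]
for post in posts:
    score = relevance_score(post)
    status = "send to reviewer" if score >= 0.5 else "skip"  # invented threshold
    print(f"{score:.1f}  {status:16s}  {post}")
```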
The right part of this [INAUDIBLE] shows the services between GridSense users and EAGLE-I, where we can create a RESTful [INAUDIBLE] infrastructure that can connect with any third-party platform. So, there's a lot of science, and a real scientific need, behind capturing and processing this information to make it a stable and reliable system. But at the end of the day, it's really a combination of very different kinds of data sources: ground observers, for example, or GPS data, or social media data, or people sharing information from different sensors. And then we basically build a lot of systems around this crowdsourcing approach to improve our understanding; the ontology of each data set has to be matched with the ontology of what we mean when we talk about outages in T&D, for example. And operationalizing it is really how you make it a more trusted platform. This is an active, ongoing research effort at this time, and with the EAGLE-I team we are trying to scope our current work to outages, restoration, and some of the other related information that can complement the more authoritative data. So, this is just a rundown of how we approach this problem and what the [INAUDIBLE] work looks like. Beyond the Smokies, we have also explored using this approach during the Puerto Rico outages a couple of years back, and the DT outages that happened a couple of years back, where the utilities started to share this information through social media rather than updating their own websites. Having this second line of evidence would definitely make a more compelling case to consider this kind of work in mainstream outage analysis as well. So, I'll open the floor for any questions, anything I can answer at this time, and thank you. MEREDITH BRASELMAN: Gautum, thank you so much. We appreciate all of that. Please continue to submit your questions; we've got a growing list, and we are looking forward to getting to those at the end. So, now we're going to explore situational awareness capabilities from Pacific Northwest National Laboratory. David Judi is going to discuss the Water Extreme Lookup Library. So, David, I will be driving your slides for you, so just let me know when you're ready to move to the next one. And you might need to unmute, David. DAVID JUDI: Yes, I'm on my phone, so hopefully that works. Can you hear me? MEREDITH BRASELMAN: There we go. We can hear you. DAVID JUDI: Muted in two spots, sorry about that. So, this talk may be a little different, as it isn't directly power-grid related and to date hasn't been applied to wildfire. But we see some possibilities for how those connections could be made. Next slide. First, a little background to set some context for the Water Extreme Lookup Library. One of our objectives at PNNL is to develop and apply state-of-the-art infrastructure analytics that can support the needs of a broad infrastructure protection mission.
The analytics in general are used to characterize things like infrastructure fragility and resilience; dependencies on natural systems, or even coexistence within natural systems; interdependencies between infrastructure systems; and potentially the evaluation of the services infrastructure provides to support communities and the economy. In particular, we're interested in understanding both endogenous and exogenous disturbances to infrastructure systems. As you all may be aware, weather extremes are the most frequent significant causes of widespread infrastructure destruction; that's what we've heard today. With that in mind, we've developed a number of capabilities to characterize hazards, such that this information can be used to understand cascading effects within infrastructure systems. These are intended to be applicable to planning, response, and recovery, all the activities around an extreme event. Next slide. One of the areas we've spent a great deal of time thinking about is enhancing situational awareness during extreme events, specifically within the context of floods. And I know the topic today is wildfire, but hopefully you'll see some connections in this talk and how it may be extended to help within a wildfire response. I'll point out a couple of things here. The situational awareness capabilities are really driven by the infrastructure mission, including interactions with the emergency operations centers and the questions that they're asking, which in the case of floods have been: what is the spatial extent of flooding, when will the flood arrive, and how long will the flood remain? And then some of the more pressing questions: how many people are at risk, and what infrastructure assets are at risk? So how do we support these questions in the case of flood events? We have three primary approaches. In a real-time or near-real-time sense, we use predictive modeling and simulation, which I'll talk about in a couple of weeks relative to post-fire recovery. We use imagery-based damage analytics, and my colleague, Andre Coleman, will talk about that in the context of fire. And then the topic today: accessing and leveraging previously simulated events stored in an archive, the Water Extreme Lookup Library, to rapidly enhance situational awareness. We like to say, if you need data, go to the WELL. Next slide. So, what is the WELL? As I mentioned, the WELL is a readily accessible archive of flood simulations. There were a number of requirements that drove the development of the WELL, and I'll just briefly mention a couple of them. It had to be broadly accessible to a wide range of people, so it had to be cloud-enabled. It had to provide geospatial visualization, but also make the data and the meta information, or metadata, available and ingestible within other environments, through APIs, for example. And it needed to be efficiently searchable, through both spatial and text-based queries. We believe there are many benefits to having a large repository of hazard data. It can assist in planned studies. With a large set of hazard data there are opportunities for unique data analytics, including risk and resilience analysis. It can be used to rapidly enhance situational awareness for a pending event by finding existing hazard scenarios that may have similar characteristics. And it also may become an initiating point for new simulations that you may want to run.
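To illustrate what a combined spatial and text-based query against an archive like this could look like from a client's perspective, here is a minimal sketch. The endpoint URL, parameter names, and response fields are purely hypothetical stand-ins; the WELL's actual API is not documented in this webinar.

```python
import requests

# Hypothetical endpoint for a hazard-simulation archive search API.
SEARCH_URL = "https://well.example.gov/api/v1/simulations"

def find_simulations(bbox, keywords):
    """Search an archive of pre-computed hazard simulations.

    bbox     -- (min_lon, min_lat, max_lon, max_lat) of the area of interest
    keywords -- free-text terms, e.g. ["dam failure", "Puerto Rico"]
    """
    params = {"bbox": ",".join(map(str, bbox)), "q": " ".join(keywords)}
    resp = requests.get(SEARCH_URL, params=params, timeout=30)
    resp.raise_for_status()
    # Hypothetical response shape: a list of records with id, hazard type,
    # and links to the simulation's metadata and downloadable data.
    return resp.json()["results"]

# Example: look for existing flood scenarios near a fire-scarred watershed.
for sim in find_simulations((-121.8, 39.6, -121.4, 39.9), ["post-fire", "runoff"]):
    print(sim["id"], sim["hazard_type"])
```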
To have an archive that can do these things, you have to be able to fill the WELL with data, and the key to that is automation of simulations. Next slide. Thank you. I'm not going to go into the details of the WELL implementation; I'll just show you a graphic here, other than to tell you the WELL is implemented within the Azure cloud environment. Simulations that generate hazard data (in this case, floods) can occur either within the cloud environment or outside, but they're ingested and stored within the WELL. The WELL contains indexing and OGC-compliant services to search, visualize, and make the data accessible to users. Next slide. I mentioned the key to filling the WELL is automation. So, we've worked to automate a number of flood hazard types, which lets us develop and store thousands of scenarios. We've automated continental-scale dam failure simulations, utilizing the National Inventory of Dams and the dam characteristics available in that data set to develop flood extents for thousands of potential events. We've automated processes to develop riverine flood extents using national hydrography data sets and observational data available from sources like USGS. We've also been automating processes to develop urban-area extreme-event flooding, which combines urban area footprints with precipitation curves like you might get from NOAA data sets. We're also working toward more automation in coastal-focused areas, where we're automating hurricane and compound-flooding scenarios. And relevant to today's topic, we have high interest, and have written a number of concepts, in developing automated simulations to store post-fire extreme runoff scenarios, to provide additional insight into regions that may be more prone to longer-term indirect impacts from wildfire conditions. Next slide. Over the next few slides I'll show some screenshots of the WELL. The first screenshot is the access page for the WELL. You'll note that it says Open-WELL, and I'll talk about what that means toward the end, with some of the recent work we're doing. Another option I'll point out is that we've provided analysts who have access with a limited ability to develop their own simulations, so they can run new simulations that are then stored in the WELL. In this case it's limited, but it requires little expertise for an analyst to execute. Next slide. This is a screenshot of what you actually see when you go into the WELL environment. The heat map is an indicator of locations where simulations have been completed and exist, and the different colors indicate the types of simulations that are available representing hazard data. Users can search geographically by using standard mouse actions and select a simulation by geography or by type of interest. Users can also use the text box on the left to find simulations they might be looking for based on keywords. Next slide. When a user selects a simulation of interest to enhance their situational awareness, the data from the event is loaded into a map-based viewer, as seen here. The data is interactive, meaning the user can interrogate it to find parameters they may be interested in, for example, the flood depth at a given location, and other parameters such as the timing of the flood. Each simulation contains metadata about the parameters that went into the simulation, when the simulation was run, with what code, et cetera.
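As an illustration of what interrogating one of these scenarios might look like once it has been pulled down, here is a short sketch using the common rasterio library. The file names, metadata fields, and coordinate system are assumptions, not the WELL's actual export format.

```python
import json
import rasterio  # widely used geospatial raster I/O library

# Hypothetical export: a max-depth GeoTIFF plus a JSON metadata sidecar.
with open("scenario_1234.meta.json") as f:
    meta = json.load(f)
print("model:", meta["model"], "| run date:", meta["run_date"])

# Sample the flood depth at one point of interest (raster assumed EPSG:4326).
with rasterio.open("scenario_1234_max_depth.tif") as depth:
    lon, lat = -66.15, 18.35
    value = next(depth.sample([(lon, lat)]))[0]
    print(f"max flood depth at ({lon}, {lat}): {value:.2f} m")
```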
All the data is also downloadable and ingestible into other workflows, which is an important requirement as we work with partners such as state emergency operation centers and FEMA, so they can pull this information into their workflows to do the types of analytics they're responsible for. Next slide. So how has the WELL been used? In general, it's been used to provide rapid insight into potential flood threats. An example was during Hurricane Maria and the potential dam failures in Puerto Rico. There were media releases, even the National Weather Service tweeting, indicating thousands could die from a potential dam failure, with 70,000 people at risk. The WELL was able to immediately inform our partners about the potential consequences, which were significantly lower than what was being published, and that helped inform their response strategy for the next few hours. Another application, which is pretty exciting, is using the hazard information in the WELL to facilitate infrastructure protection training and tabletop exercises: training responders and other analysts on how they might respond to different types of events. Next slide. The previous slide mentioned Open-WELL, and just a little bit more about what that means, as this is a large part of our current activities with regard to the WELL. The WELL was initially developed to support the federal infrastructure protection mission, but its applicability goes beyond that mission. The WELL and the information in it apply to state and local emergency planners and responders, and we've had a lot of interactions and discussions with them over time. To that end, we've been expanding the vision of the WELL, which has included identifying cost-effective approaches for broader cloud-based scaling, including hazard types beyond just floods. It includes third-party simulated data being able to be ingested into the WELL, not just data that we generate, but data from the community, really to embrace community-based ensembles, or a multi-model approach, to the hazards we're looking at, so that we can start characterizing uncertainty from models and modeling decisions. And finally, we think the WELL architecture, as I've mentioned, is expandable beyond flood data to include other types of extreme-event data useful in infrastructure analysis, including wildfire. Next slide. There's been a fantastic team working on this over the last couple of years, and one of them, Dan Corbiani, you're going to hear from next. So, thank you. MEREDITH BRASELMAN: Thank you so much. We appreciate all that information on the WELL. So now, we're going to turn over to Dan Corbiani, our final speaker from PNNL. He's going to walk us through the Hotspot Analysis Tool. So, Dan, the floor is yours. DAN CORBIANI: Cool, well, thanks for that. And thanks, Dave, for the intro. I appreciate you finding that picture from back in the day; I don't even know where you found it. So, thank you so much for the opportunity to present today. I appreciate it. I think this is a great opportunity to see all the other programs the labs are doing. This presentation may also be a little different. We built this for a broader set of impacts, but I think it will still be applicable to wildfire. So as I said, I'm Dan Corbiani. I'm a data scientist at PNNL.
So, in terms of an agenda, I'll start with what the Hotspot Analysis Tool is and what we're doing here. Then I'll get right into some results; I find that maps are usually good for communicating what we've done. Then, if there's time, I'll touch on the challenges and the technical approach we took. And for those with GIS knowledge, I'll talk a little about the implemented algorithms. So, what is the Hotspot Analysis Tool? We also call it the proximity tool, depending on the context. To start, it's important to understand where this came from. Our goal is to find spatially concentrated industries that are at high risk of disruption from hazards. Those hazards can be a lot of things: hurricanes and flooding, like Dave was talking about, or wildfires, or man-made risks, anything along those lines. We had a couple of interesting requirements that made this a little different. The first is that we needed the tool to be deployable in multiple environments, whether connected to the internet, like other national labs, or more secure environments without internet access. We were specifically asked not to create a framework, a UI, a platform, or anything like that. This was a tool that should work behind the scenes and be integrated into other places. So that's my disclaimer: any maps in here are development maps used just to illustrate points. This gets integrated into other systems with much fancier visuals. And the last requirement was that we really wanted to simplify the deployment and validation of research code. We found that the feedback loop between a sponsor requesting an algorithm, testing that algorithm on data, hardening it with tests and documentation, and then deploying it can take a long time. We wanted to make that simpler and easier and get everybody working together on the same page. So, the elephant in the room that I always feel I need to mention: can't we just use a heat map for proximity? That does work in a lot of cases, but not all, because heat maps show concentration based on global information. If we look to the right, there's a heat map of total beds, and you might see some cells that are hotter around New York City and maybe Chicago. But if you're looking at something that has more spatial concentration to it, it's not that interesting to know that New York has more beds when you're in Chicago; that's not useful for you. So our algorithm really does take that local spatial concentration into consideration (see the sketch below), and you'll see more clusters forming in smaller areas like Columbus or Indianapolis or Detroit. Before I flip the slide and go into results: what I'm going to show are two visualization sets we created to help communicate our results and validate the algorithms. One is the dashboard we use at the national scale, and the other is more regional, using block-level data. Again, we're not a UI team, so please keep that in mind as you see the visuals. So, this is the dashboard we used to look at national-level concentrations and clustering around different industries.
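A minimal numpy sketch of the distinction Dan is drawing: a raw heat map ranks cells against the whole map, while a Getis-Ord-style local statistic compares each cell's neighborhood to the global distribution, so a modest but tightly packed cluster can outrank a single huge value. The grid, weights, and statistic here are toy illustrations, not the tool's actual algorithms.

```python
import numpy as np

values = np.random.default_rng(0).poisson(5.0, size=(50, 50)).astype(float)
values[5, 5] = 60.0            # one huge isolated value (the "New York" effect)
values[40:43, 10:13] += 15.0   # a modest but tightly packed local cluster

def local_gi(grid, radius=1):
    """Simplified Gi*-style score: standardized sum over each cell's window."""
    n, mean, std = grid.size, grid.mean(), grid.std()
    out = np.zeros_like(grid)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            window = grid[max(i - radius, 0):i + radius + 1,
                          max(j - radius, 0):j + radius + 1]
            w = window.size
            denom = std * np.sqrt(w * (n - w) / (n - 1))
            out[i, j] = (window.sum() - mean * w) / denom
    return out

z = local_gi(values)
print("hottest single cell:", np.unravel_index(values.argmax(), values.shape))
print("strongest local cluster:", np.unravel_index(z.argmax(), z.shape))
```

Running this, the raw maximum lands on the lone spike, while the local statistic lands inside the injected 3-by-3 cluster, which is the behavior Dan describes for Columbus-style clusters versus New York-style totals.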
For the dashboard data, we used the County Business Patterns data, which has high-resolution sector information but is at the county level. That was interesting because there are thousands of sectors represented in the County Business Patterns data, but again, it's not very granular. On the top left, we have a cluster ranking map, which is our ensemble algorithm's result: how probable we think an area is to be part of a cluster. In the bottom left, we weight that ranking. We found cases where pure spatial clustering didn't work. A good example might be mining, where you have a mine off in the middle of nowhere that happens to produce a ton of iron ore. We needed a way to weight based on an attribute, and we added this map to understand when we might need to do that. The one in the upper right is an employee concentration map, which is essentially a heat map. That's where analysts normally start: they take an attribute and make a heat map, and this is what they see. It was important for us to validate whether our algorithm was actually different. If the attribute heat map works, then great, but we wanted to show the two side by side to validate the algorithm. We also provided a couple of filters so we could understand where different values should be set for each of these sectors and make sure we were consistent. So, I will now remove the fog from the slide; this one is for textile mills. I like this one because it's very much not sensitive, but it's very interesting, in my opinion. When you look at employee concentration, you see a lot of employees in the LA area, but when you run it through the ensemble clustering, it ends up highlighting an area around South Carolina and North Carolina, and another in the Boston area. What we did was run through all of the sectors in the County Business Patterns, find the sectors that were highly spatially concentrated, and then start to validate: is this right, is the algorithm producing the right thing? In this case, there is some really interesting historical context on why Boston and that North Carolina region are what they are. The next one I like to show is block-level data. For this we used the Census Workplace Area Characteristics data, looking at accommodation and food service in Las Vegas. We ran this analysis across the entire U.S., across all the blocks. We used a couple thousand machines to run it, which is part of the reason we have done so much architecture work to make that simpler. We looked at accommodation and food service in Las Vegas because there's a lot of that in Vegas, and we really wanted to know whether the algorithm could pick up anything interesting in an area like that. What we can see on the bottom left is that the accommodation and food service sector is strong across Las Vegas. But on the top right, the result from the ensembling algorithm says: this is the Las Vegas Strip, without any knowledge of anything besides the number of employees in each of these blocks. That was pretty exciting to us, and it helped validate our entire process. So, just really quickly: why is this hard? Why are we talking about this? Why is this a national lab problem? First off, geospatial data sets are available in lots of formats and projections.
I think we've seen that through multiple presentations today. Feng was talking about it, and a couple of other people have mentioned having ETL pieces. The other part is that the data sets are fragmented across a lot of different services. You might have something on one service, something in EAGLE-I, something posted by a local municipality. That makes it really hard when you want to aggregate all of that data together, especially if you want the platform to be transitioned into different environments. There are some other library-specific challenges we've had in terms of algorithms and documentation, but the main problem is that very few algorithms are designed for scaling from the start. What we found is that there are a couple of primary ways algorithms get produced. One is: download the data locally onto a single machine, run some Python or Java, and come up with an output; then give the output to somebody, or make a map. That works great for some things. Another architecture we see a lot is what I call the big-server architecture, where you have one database to rule them all and a couple of servers around it, and you do all your analysis in that space with all the data available to you. That works really well for some cases too. But what we were trying to get at was a more modern approach, using low-cost storage and on-demand compute to solve these challenges. Doing that allows us to transition the library to other environments without necessarily having to move the data. And really quickly, what that entails: we have these building blocks. We have user-focused APIs, the ones that get integrated into our sponsors' workflows. We have data sets, and these data sets contain the whole ETL process to convert external data into something very usable; that's where they stop, they just manage the ETL piece. We have the algorithms, so a lot of pattern-of-life information, a lot of clustering algorithms, things of that nature. Then we have workflows that we use to manage an entire algorithm process: get some data, run a bunch of algorithms on it, do some ensembling, produce an output. We capture these workflows in single blocks that people can then use in their environments. And lastly, we have some visualization and output pieces, because ultimately we do need to visualize things; if you don't visualize on a map, a table doesn't get the point across. We also make it easy to put results into different databases and such. I'm not going to talk through the rest of these in any detail. I think I'm probably pretty close to out of time, but I will touch on this slide really quickly. One of the things that also makes spatial proximity analysis hard is: how do you define "near"? Especially when we're talking about administrative boundaries or hazard boundaries, which can have vastly different sizes and shapes. With administrative boundaries specifically, you can have something in Manhattan versus Alaska. How do you figure out what's near each other? Do you use a constant distance? Do you use neighboring polygons?
It becomes a very nuanced question, and sometimes a very data-specific one. Additionally, the difference in sizes of those administrative boundaries can cause a lot of visual artifacts in maps. If Manhattan is really important but only has one pixel, it's very likely that an analyst will miss that one pixel and see something in Alaska much sooner, because Alaska might have 500 pixels. So, along these lines, one of the things we've been focusing on is converting a lot of these administrative and odd regions into H3 hexagons, which were developed by Uber. They work really well for most of the analysis we've been doing, and they give us cells with near-constant size, which makes density a lot easier to calculate. They have a near-constant distance between each other and a pretty constant number of neighbors, which makes it very easy to start rolling things up. They're also uniquely indexed by a string, which makes it easy to join data sets, transition them to other places, and store them. So, I think I'm going to stop there. I do have a couple of other slides that will be part of the print piece. Thank you; I'm here because of a great team. Also, a quick plug: we host a big geospatial data meetup, with a bunch of very passionate people, on the first Thursday of the month. So, if you're interested in geospatial data, shoot me an email, and we can send you an invite. MEREDITH BRASELMAN: Thank you so much, Dan. We appreciate all that information. So, our final presentation of the day is from Dr. Ken Armijo from Sandia National Laboratories. He's going to discuss High/Low Power R&D Arc and Ground-Fault Detection/Mitigation. So, Ken, I'm going to turn it over to you. KEN ARMIJO: Excellent, thank you very much. I am going to share my screen, if that's OK with you all. OK, so thank you for that introduction. I'm going to be talking about the High/Low Power Research and Development Arc and Ground-Fault capabilities that we have at Sandia. By way of background, at one time I was a volunteer firefighter, and I remember electrical fires tended to be some of the most hazardous and challenging types of fires to address. In my present job as a systems engineer and scientist, I study a lot of arc and ground faults with my colleagues at Sandia, for both DC and AC systems. For this presentation, I'm going to talk about photovoltaic (PV) and direct-current (DC) systems, which have been linked to dozens of fires around the world. As we can see on this map on the right, a number of PV and DC installations have been going up, especially at large scale near forested areas. So, this research has direct relevance for power-transmission-scale wiring and forest fire potential, as well as for future distributed PV near forested areas. Fundamentally, arc faults occur when there is a voltage breakdown across a break in two current-carrying conductors, producing an arc discharge that can eject shrapnel as well as high-temperature molten metal, reaching temperatures as high as 20,000 degrees C, that can cause secondary fires.
And what we're finding is that many new PV installations are getting to 1,000-volt and 1,500-volt scales, based on this research you can see here on the left, which increases the propensity for potential arc fires: you have higher ohmic heating, higher corrosion potential in your wires, and a variety of other reliability challenges. Not to mention that it's not just the arc fault itself that is of consequence; you can also have electromagnetic interference (EMI) issues, as well as pressure waves, such as in the plot we show here on the top right. There have been a lot of notable fires, such as the Bakersfield PV fire back in 2009, and AC fires such as the San Onofre Nuclear Generating Station fire in 2001. Generally, with AC arc faults, passing the zero crossing is usually enough to snuff out a low-voltage arc fault. But we're seeing that at high voltage, it's not enough, and you can have very large discharges. So, the capabilities we have at Sandia National Labs, such as our patented arc generation system, have allowed us to fundamentally assess the physics of failure that facilitates these arc fault events. We're able to leverage DC power supplies, DC simulators, and PV simulators, as well as AC simulators, in addition to actual photovoltaic arrays that we can route into our Distributed Energy Technologies Laboratory (DETL), where we can assess the physics of failure with this arc fault generator. As you can see in these videos, visual video, IR, as well as some information from our high-speed photography, we're able to spectrally understand the different types of arc faults generated from different electrode materials, insulation materials, and environmental conditions such as humidity, static air temperature, and pressure. Another facility and capability we have at Sandia is our Sandia Lightning Simulator and our ACD facility, where we can leverage very high capacitive-discharge systems for assessing very high-voltage arc faults, up to 10,000 volts. This has allowed us to assess these arc faults in a very controlled manner, see their progression with respect to fundamental physics, and then leverage our models, which I'll talk about in a second, for validation. Another capability we use at Sandia across a variety of DOE, DOD, and other mission areas is photometrics, where we have high-speed optical and IR photography with image processing between 100,000 and a million frames per second. That way we can look at micro-arcs at the millisecond scale, and even the nanosecond scale to a certain extent, all the way up to the regular second and minute time scales. We've been able to take a lot of this capability to actual facilities to facilitate larger arc faults in the field in a very controlled manner. For AC, this is some of the application space where we've leveraged our plate calorimeters, as well as other sensors developed at Sandia National Laboratories. You can see on this test stand, if you can see my cursor on the left, where we're able to facilitate these large, massive, and impressive arc faults and assess the zone of influence and the incident energy, as well as other reliability challenges.
In the DC space, we've been able to facilitate these fires in a controlled manner for combiner boxes, inverters, and other power electronics equipment to understand the physics of failure: how it goes from the very simple arc-in-a-box you see in the UL 1699B codes and standards, or IEEE 1584, to actual real-world conditions, such as this 1,000-volt DC PV system. Using our high-speed photometric data and a lot of our in-house processing capabilities, we've been able to examine arc fault temperatures that can exceed 5,000 degrees Kelvin and understand how the physics of failure plays out in more complicated topologies and geometries within PV systems. Impressively enough, we've also been able to leverage our high-speed photometric information to do particle tracking of high-temperature particles, both their rotational and directional aspects, to understand how the zone of influence is not simply a circle but an evolution over time. And by over time, I mean two seconds, which is the requirement in UL 1699B and IEEE 1584. With that, we've been able to convolve many of our spectral capabilities with IR to assess the zone of influence, so we can precisely understand the ignition zone and the locations where personal protective equipment (PPE) is required for electrical safety workers, as well as understanding where fires can occur and what the distribution of fires can be, locally and on a larger global scale, at least for the initial fires. Another interesting aspect of our research, and some of the interesting recent results I'm showing here, has been restrike. It's not just arc blasts and arc flashover events: we've learned that for certain configurations, even after an initial arc fault occurs, you can get a restrike, like you're seeing here in the top right, where the arc can continue for a longer period of time, with more serious consequences for fires. With all this fundamental research, we've been able to understand the solution space for mitigating and putting out these arc fault events. We've developed wavelet models and other fast Fourier transform (FFT) models to understand the signatures that lead up to an arc fault event (see the sketch below), and, leveraging arc-fault circuit interrupters, to snuff out these arc faults more reliably while reducing nuisance tripping. Also, in a very exciting manner, we've started studying a new class of materials: similar to self-healing materials, we're developing self-extinguishing materials to support the active mitigation methods. Alongside arc-fault circuit interrupters, these materials can passively put out arc fault fires, such as you see here on the bottom right, where an arc fault occurs and the layer-by-layer nanocomposite materials developed through Sandia's Laboratory Directed Research and Development program snuff out the fire. Another area of self-extinguishment research is the development of self-extinguishing connectors. Both of these self-extinguishing approaches can be used beyond photovoltaics, for both DC and AC wiring, to self-extinguish arc faults. Here we are looking at application spaces in the larger electric grid, as well as automotive and aerospace applications, where you can have what seems like miles and miles of wiring.
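To illustrate the signature idea Ken mentions (wavelet and FFT models feeding arc-fault circuit interrupters), here is a toy Python sketch: a series arc superimposes broadband noise on the load current, and a rise in mid-band spectral energy trips the detector. The sample rate, frequency band, and threshold are hypothetical illustrations, not values from Sandia's detectors or from UL 1699B.

```python
import numpy as np

fs = 50_000                                   # sample rate in Hz (assumed)
t = np.arange(0, 0.2, 1 / fs)
load = 8.0 * np.sin(2 * np.pi * 60 * t)       # clean 60 Hz load current
arc = load + 0.6 * np.random.default_rng(1).standard_normal(t.size)

def band_energy(signal, lo=1_000, hi=20_000):
    """Mean FFT magnitude in a mid-frequency band, where arcing noise shows up."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    return spec[(freqs >= lo) & (freqs <= hi)].mean()

TRIP_LEVEL = 10.0  # hypothetical threshold, well above the clean-signal level
for name, sig in [("normal load", load), ("arcing load", arc)]:
    e = band_energy(sig)
    print(f"{name}: band energy {e:.1f} ->", "TRIP" if e > TRIP_LEVEL else "ok")
```

Real detectors also have to reject look-alike noise from inverters and other switching equipment, which is where the nuisance-tripping problem Ken mentions comes from.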
These self-extinguishing connectors have been able to extinguish arc faults within two seconds, as you can see here on the bottom right. And to really fundamentally understand the physics of failure and the causes of these arcs, we've been able to leverage Sandia's Sierra code suite, such as the Aria code, to understand the thermal distributions, understand the enthalpy a little better, and see how that evolves within the spark-gap region between two current-carrying conductors. In our ground fault research: ground faults occur when current flows on an exposed, non-current-carrying conducting metal part, somewhere current was never intended to flow. You effectively end up with blind spots that can facilitate fires. We've done a lot of research on fuse-based ground fault detectors and interrupters (GFDIs), where we found vulnerability and nuisance-trip issues with these types of devices. In doing so, we've done research on advanced GFDIs, leveraging in-house SPICE simulations and field measurements to determine leakage-current threshold limits for residual current detectors (RCDs), as well as isolation resistance checkers and current sense relays (CSRs). A lot of this research we've been able to publish, especially with respect to current leakage, which has been a problem in ground fault situations. This is my last slide here. I wanted to point out a few key Sandia arc and ground fault colleagues who have been working with me. And if you're interested, my email is at the top, so feel free to reach out if you have any questions. Thank you very much. MEREDITH BRASELMAN: Ken, thank you so much, and thank you to all of our panelists. This was great information, and so good to hear all the things you are doing. We have 15 minutes left in today's webinar, so we want to try to get to as many questions as we can, and we are going to shift over to that now. Panelists, when I ask you a question, please be as succinct as possible so we can get to as many as we can. So, Feng, this first question is for you. It's actually two questions. Number one, "Is the combined fire and incident report available?" And two, "What factors are driving the fire risk index?" FENG QIU: All right. Yeah, I think I already typed the answer in there, but briefly, there are many factors driving this fire index: for example, the weather, precipitation, daily maximum temperature, humidity, and also the fuel load. Sorry, what's the first question? Will that be available? MEREDITH BRASELMAN: Yeah. FENG QIU: Yeah, the fire incident report... MEREDITH BRASELMAN: Yeah, is the fire incident report available? FENG QIU: Yeah, it will be available. It's just a large Excel file, so if you want, we can share it with you. Yeah, thanks. MEREDITH BRASELMAN: Very good, thank you. Brett, the next question is for you. "[INAUDIBLE] state energy offices, ESF #12, managing and coordinating with utilities that have PSPS policies, when they are implemented, and coordinating downstream impacts that affect life-safety issues?" And I think you're still on mute, Brett. OK, I'm going to come back to you, Brett. Let's see. Aaron, "Does URBAN-NET also simulate cascading impacts of dynamic events, such as arc faults caused by wildfires in the power grid?"
AARON MYERS: It can; we just need to know the location and what infrastructure is impacted, and then we can run the simulation from there. It's currently not completely automated to run that quickly; it requires user input. But that's something we're working toward: being able to dynamically generate that sort of output based on different information. MEREDITH BRASELMAN: Very good, thank you. And, Brett, I think you are back on... BRETT HANSARD: Yeah. After a year of the pandemic, I think we all would have figured out the mute button, but it's still a work in progress. Yes, I did hear the question, and it's a great question. I think there is an absolute need for the various local, state, and federal agencies, private industry, and utilities to come together, have those kinds of conversations, and do that planning. I'm not positive how that's being done on a state-to-state basis, but I do think states are in a unique position to drive it. And this is all part of the idea I mentioned in my presentation of a joint information system: how we coordinate with each other in real time during an emergency to make sure those critical protective-action messages reach the public in a way that allows them to make the decisions they need to make. We're doing some workshops focusing on nuclear and radiological incidents, and others on dam emergencies, and I think those would be really good models for something similar for wildfire hazard response planning as well. So, thanks for the question. MEREDITH BRASELMAN: All right, thank you. Jitu, the next question is for you. "Does ForWarn triangulate local weather measurement data, such as wind and moisture, with the satellite data?" JITU KUMAR: So, the answer is yes and no. No, because it does not do that at the continental scale at which it operates in an automated fashion. But once we identify a region where we know there is a disturbance event happening, there are people who will pull those additional ancillary data sets manually, do an assessment, and then prepare a report to share with the local forest managers. So, we use that data, but we don't do it at continental scale in an automated fashion. MEREDITH BRASELMAN: OK, thank you. Gautum, the next question is for you. "Does Oak Ridge have matrix models that depict the financial loss or damage due to forest fire at different locations in the US due to transmission fault lines?" GAUTUM THAKUR: Well, we don't have anything at this time. [AUDIO OUT] get some financial estimates from [INAUDIBLE] data set. So, we've been talking with vendors to get MLS data that we can use for estimating an approximation of what the damage is likely to be on the infrastructure. MEREDITH BRASELMAN: Thank you. So, this next question is for Jitu and Feng. "Are ForWarn and Feng's platform related at all?" FENG QIU: No, we are not related at all, as of now. JITU KUMAR: Yeah. MEREDITH BRASELMAN: OK, let's see. David, "Does WELL consider water resources and rate of water flow and past actual flood data?" DAVID JUDI: Yeah, we did do a number of hindcasts: grabbing observational data and running specific events. We typically focus on the more extreme events, so think hurricane events. But as I mentioned, all of our riverine simulations populated in the WELL are based on historical observations.
I guess the nuance there is that we develop frequency distributions rather than specific events, so we're developing more of a probabilistic representation of a flood extent. But the short answer is we can develop any number of hindcasts. MEREDITH BRASELMAN: OK, and the next question is also for you. "Does WELL also model flooding in tributaries due to ice formations at the entry to main rivers?" DAVID JUDI: The short answer is ice formation, no. But there is a way to represent obstructions, like you might get from ice, in the capabilities that populate data in the WELL. So, it's something that can be done heuristically rather than by explicitly modeling ice. MEREDITH BRASELMAN: OK, thank you. Gautum, the next question is for you. "Relative to non-authoritative tweets or similar semi-spatial [AUDIO OUT] inputs, are there generalized best practices or even ML models which have proven successful in generating reliable time-critical information while filtering out false positives?" GAUTUM THAKUR: Yes, that's actually a really great question. It's important to estimate the false positives and false negatives when working with a machine learning model. And really, the core of doing this is building a good codebook, a codebook that captures the nuances of how people talk during natural disasters, and the framing narratives heard during certain kinds of natural disasters. People talking about floods is different from people talking about wildfires. So, a lot of our qualitative scientists spend hours and hours building this codebook, which informs our machine learning models. When we do data labeling for supervised machine learning models, [AUDIO OUT] the codebook acts like a dictionary, or I would say a hierarchical dictionary, that you embed into the model, which improves our detection. The second part is that as new events and activities happen, it's important to continuously update this codebook. At the same time, when some of the things [AUDIO OUT] haven't happened before, and the words are fairly new, it's important to take a human-in-the-loop approach to putting some kind of trust and confidence in the data. So, the architecture we built does include a team of geospatial scientists who, during critical times, evaluate the data and attach this confidence interval to it. The second point is that we rarely rely on only one data point; we always rely on [AUDIO OUT] numbers. If many people are talking about a certain event or activity, it's more likely to be true. And we use more than one source: if, for example, people on Facebook are talking about something, it's likely that Foursquare shows the same thing. And besides [INAUDIBLE], are we getting any photos? People take pictures and share that information. So, it's really a combination of data-fusion strategies to provide some kind of confidence interval that improves our machine learning. Machine learning in itself is not just about detection or classification; a large part of it is building the codebook, creating more than one line of evidence, and using human-in-the-loop processes to make a more compelling case for policymakers. MEREDITH BRASELMAN: Great, thank you. I'm going to try to get a few more questions in here.
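A toy sketch of the pipeline Gautum outlines: codebook terms constrain the features of a supervised classifier, and anything the model is not confident about is routed to a human analyst. The codebook entries, example posts, model choice, and confidence threshold are all illustrative assumptions, not the actual system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical codebook: phrases qualitative analysts mapped to outage talk.
codebook = ["outage", "power", "electricity", "lines down", "no power"]
posts = ["power out on elm street", "no electricity since noon",
         "great concert downtown", "lines down after the storm",
         "out of coffee again"]
labels = [1, 1, 0, 1, 0]  # 1 = outage-related, labeled against the codebook

vec = TfidfVectorizer(vocabulary=codebook, ngram_range=(1, 2))
clf = LogisticRegression().fit(vec.fit_transform(posts), labels)

def triage(post, confidence=0.8):
    """Machine-label confident cases; route uncertain ones to an analyst."""
    p = clf.predict_proba(vec.transform([post]))[0, 1]
    if p >= confidence or p <= 1 - confidence:
        return ("auto-label", round(p, 2))
    return ("human review", round(p, 2))

print(triage("no power in the whole neighborhood"))
print(triage("storm was wild last night"))
```

The human decisions can then flow back in as new labeled data, which is how the codebook and the model keep up with novel events, matching the continuous-update loop Gautum describes.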
Ken, "Does Sandia have models or understanding of fire and its physics from energized fallen transmission line conductor on the ground or [AUDIO OUT]. KEN ARMIJO: We don't. Presently, we have a model—or the model work that we're actually in development of just looks at the immediate vicinity of say a combiner box or an inverter. And just to understand what the zone of influence is just around that area. Our models also look at AC fires just around a switchgear equipment. But we haven't extended it beyond that. So, I'm hoping with future funding we can extend the model to also see the impact of secondary hazards analysis, such as with vegetation and other areas. MEREDITH BRASELMAN: All right, thank you. David, I'm going to ask you this next question. "It seems like there's crossover with other agencies regarding wildfires and flooding. Do you have processes in place to collaborate with NOAA, the National Weather Service, to test your output for potential transition to operations?" DAVID JUDI: Yeah, good question. Yes, there is a lot of crossover, and we do a fair bit of coordination. This is generally through I guess what would be called the IRIS, and I won't try and spell out what the IRIS stands for, but it's a federal interagency coordination council for flooding. And it consists of USGS, FEMA, Corps of Engineers, National Weather Service, in particular, the National Water Center. And so, we do coordinate with them a fair bit, and we share information. For example, when we run simulations, we share that information across the interagency. MEREDITH BRASELMAN: Very good. All right, thank you so much. We are coming to the end of time today. I want to thank all of the panelists for your time, your energy, all of the great ideas that you brought forth today. We really appreciate it. We have posted the presentation from last week's webinar on the Wildfire Webinar Series website, and a PDF of today's presentation will also be on that page by Monday. And a recording of the presentation should be available later this week. Ladies and gentlemen, if you have not already done so, please, do register for our final two webinars to hear more about the national labs’ capabilities. Our next webinar, which is next Thursday starting at 2:00 p.m. Eastern, is going to focus on modeling and analytical tools. And we're going to have speakers from Argonne, Sandia, and SLAC National Accelerator Laboratory. The link to register is in the chat. And if you have any additional questions, please, do contact Stewart Cedres at DOE. His email is on the screen here as well. And thank you so much for your participation. We look forward to having all of you join us next week. Have a great day. ................