Document Purpose - New York State Department of Public Service

NYS ITWG, Industry & JU, Comprehensive CESIR Analysis Evaluation Initiative
Version 1 - Last updated by Industry on 16 Mar 2021

Document Purpose

Below are excerpts from the following Google Spreadsheet: Master CESIR Analysis Ranking & Evaluation, Grouped File. We have moved the feedback and questions from the spreadsheet into this document to facilitate better collaboration, information transfer, and flexibility for each analysis. This is a dynamic document that will evolve as part of the exchange/collaboration process.

Formatting Key

- Yellow highlight represents an action item from Industry for the JU to complete.
- Blue text is written response text by the JU, as understood by Industry from previous ITWG calls or from written responses.

TIER 1 Importance, CESIR Analysis to Review

Analysis ID #1 - tbd-analysis-title1

Voltage - Overvoltage < 105% (ANSI C84.1)
"With the addition of the subject generator the maximum voltage as modeled on the Feeder is [X]% of nominal."

"Purpose"
JU: Utilities are required to design and operate their systems to comply with the ANSI C84.1 standard. The purpose of this analysis is to determine if the DER system causes the voltage to exceed the maximum limit under ANSI Range A.

What exact real-life scenario/problem are we trying to model with this analysis?
(Industry requests the JU replace this with a response.)

General Method
JU: The proposed system is modeled under the requested operating characteristics. Run a steady-state power flow with all DER at 100% of nameplate rating (or based on applicable operating characteristics) for both peak and minimum load conditions (daytime loads are used for PV). The impact is compared to utility voltages prior to the DER system being modeled, along with verifying that the system does not cause any overvoltage conditions.

Are there differences in the periods of daytime load used between utilities?
(Please describe in general and/or have each utility state their exact assumptions below.)

- NG:
- NYSEG:
- CHG&E:
- ORU:
- ConEd:
- PSEGLI: (Is there an equivalent and what is their method?)

Are there differences in the generation change in kW used between utilities? Are all modeling from 0 to 100% output, and for all of the DER on the circuit or just the one being proposed?
(Please describe in general and/or have each utility state their exact assumptions below.)

- NG:
- NYSEG:
- CHG&E:
- ORU:
- ConEd:
- PSEGLI: (Is there an equivalent and what is their method?)

Are system regulator movements currently taken into account? If all utilities are not doing the same thing, please explain the reasons.
(Please describe in general and/or have each utility state their exact assumptions below. Note that we are requesting updated responses here.)

- NG:
- NYSEG:
- CHG&E:
- ORU:
- ConEd:
- PSEGLI: (Is there an equivalent and what is their method?)

If any utilities are NOT taking regulator movements into account, please explain why. Note that Industry believes that regulators must be taken into account for all utilities.
(Industry requests the JU replace this with a response.)

Industry Open Questions
Please respond if possible; otherwise know that this is an upcoming open question to be addressed.
- What is the correct method, and how does this tie to the rationale behind the calculation?
- Are all performing this as a steady-state operation with regulators in service?

Analysis ID #2 - tbd-analysis-title2

Voltage - Undervoltage > 95% (ANSI C84.1)
"With the addition of the subject generator the minimum voltage as modeled on the Feeder is [X]% of nominal."

"Purpose"
JU: Utilities are required to design and operate their systems to comply with the ANSI C84.1 standard.
The purpose of this analysis is to determine if the DER system causes the voltage to fall below the minimum limit under ANSI Range A.

What exact real-life scenario/problem are you trying to model with this analysis?
(Industry requests the JU replace this with a response.)

General Method
JU: The proposed system is modeled under the requested operating characteristics. Run a steady-state power flow with all DER at 100% of nameplate rating (or based on applicable operating characteristics) for both peak and minimum load conditions (daytime loads are used for PV). The impact is compared to utility voltages prior to the DER system being modeled, along with verifying that the system does not cause any undervoltage conditions.

Are all performing this as a steady-state operation with regulators in service? If not, please explain why.
(Industry requests the JU replace this with a response.)

Former responses, for reference or update:
- NG:
- NYSEG:
- CHG&E:
- ORU:
- ConEd:
- PSEGLI: (Is there an equivalent and what is their method?)

Industry Open Questions
Please respond if possible; otherwise know that this is an upcoming open question to be addressed.
- Depending on the answer to the "real life" situation, what is the correct method, and how does this tie to the rationale behind the calculation?

Analysis ID #5 - tbd-analysis-title5

Voltage Fluctuation - <3% steady state from proposed generation on feeder (none)

"Purpose"
JU: Ensure the proposed generation does not cause a voltage fluctuation of 3% or greater.

What exact real-life scenario/problem are we trying to model with this analysis?
(Industry requests the JU replace this with a response.)

General Method
JU: Model the utility system with the proposed DER at 0% output and compare with the proposed DER at 100% output to verify that no electric node voltage changes by 3% or more. This analysis is performed for both peak and minimum loading conditions. Utilities model regulator operations and study additional line sections on an individual basis.
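As an illustrative aside, the kind of screen the General Method describes can be sketched with the common (R·P + X·Q)/V approximation for the steady-state voltage change at a point of interconnection. This is a simplified back-of-the-envelope check, not any utility's actual workflow (real CESIR studies run full power-flow models, e.g., in CYME); the feeder impedances and DER size below are hypothetical.

```python
def voltage_change_pct(p_kw, q_kvar, r_ohm, x_ohm, v_kv):
    """Approximate steady-state voltage change (%) at the point of
    interconnection from a full DER output swing (100% -> 0%), using the
    common screening approximation dV ~ (R*P + X*Q) / V."""
    v_volts = v_kv * 1000.0
    dv_volts = (r_ohm * p_kw * 1000.0 + x_ohm * q_kvar * 1000.0) / v_volts
    return 100.0 * dv_volts / v_volts

# Hypothetical 13.2 kV feeder: 2 ohm R and 4 ohm X to the POI,
# 2 MW PV at unity power factor swinging between 100% and 0% output.
delta_pct = voltage_change_pct(p_kw=2000.0, q_kvar=0.0, r_ohm=2.0, x_ohm=4.0, v_kv=13.2)
flagged = delta_pct >= 3.0  # JU criterion: flag a fluctuation of 3% or more
```

For this hypothetical feeder the swing works out to roughly 2.3%, so the 3% screen would pass; a weaker (higher-impedance) feeder or a larger plant would trip it.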
Over what timeframe is the 100% to 0% DER being removed? Are there any kVA-per-unit-time considerations? If it differs by utility, please describe the rationale for each.
(Industry requests the JU replace this with a response.)

Are system regulator movements currently taken into account? If all utilities are not doing the same thing, please explain the reasons.
(Please describe in general and/or have each utility state their exact assumptions below. Note that we are requesting updated responses here.)

- NG: Uses a long-term dynamics module in CYME to plot steady-state points over a period of time, between 0% and 100%. Applies it both with the regulator operating and not operating.
- NYSEG: Lets regulators run.
- CHG&E: Locked regulators. Has "seen" data changing 100%. Definitely irradiance data only. Says this is RVC, but it is clearly not.
- ORU: Locked regulators.
- ConEd: Locked regulators.
- PSEGLI: Uses a geographic correlation factor to assess voltage changes. Uses 80% for PV, 200% for a frequency-regulating BESS. Will check on BESS not operating in the ISO.

Industry Open Questions
Please respond if possible; otherwise know that this is an upcoming open question to be addressed.
- Depending on the answer to the "real life" situation, what is the correct method, and how does this tie to the rationale behind the calculation?

Analysis ID #6 - tbd-analysis-title6

Voltage Fluctuation - <5% steady state from aggregate DER on substation bus (none)

"Purpose"
JU: Ensure the proposed DER and existing queued-ahead DER do not cause voltage fluctuations of 5% or greater.

What exact real-life scenario/problem are we trying to model with this analysis?
(Industry requests the JU replace this with a response.)

General Method
JU: Model the utility system with all DER at 0% output and compare with all DER at 100% output to verify that no electric node on the distribution circuit or substation bus sees a voltage change of 5% or greater. This analysis is performed for both peak and minimum loading conditions.
Utilities model regulator operations on an individual basis.

Are system regulator movements currently taken into account? If all utilities are not doing the same thing, please explain the reasons.
(Please describe in general and/or have each utility state their exact assumptions below. Note that we are requesting updated responses here.)

- NG: Runs the station as a whole. Looks at both locked and unlocked regulators.
- NYSEG: Lets regulators run.
- CHG&E: Not sure.
- ORU: Locked regulators - all DER.
- ConEd: Locked regulators - all DER.
- PSEGLI: (Is there an equivalent and what is their method?)

What is the time frame and kW change assumed in the analysis? If not all utilities are doing the same, please explain the reasons.
(Industry requests the JU replace this with a response.)

Industry Open Questions
Please respond if possible; otherwise know that this is an upcoming open question to be addressed.
- Depending on the answer to the "real life" situation, what is the correct method, and how does this tie to the rationale behind the calculation?

Analysis ID #7 - tbd-analysis-title7

Voltage Fluctuation - Regulator tap movement exceeds 1 position; a generation change of 75% of nameplate rating does not result in a voltage change > 1/2 the bandwidth of any feeder voltage-regulating device.
"The greatest voltage fluctuation on the feeder occurs at [location] and on the substation bus occurs at [location]. The resulting fluctuation at the feeder location is [X]% due to the proposed generation and [X]% on the substation bus due to the aggregate generation."
[Add additional details for voltage regulators as needed.]

"Purpose"
JU: To verify whether the addition of the proposed system will cause excessive tap movements and/or voltage changes greater than half of the regulator bandwidth.

What exact real-life scenario/problem are we trying to model with this analysis?
(Industry requests the JU replace this with a response.)

General Method
JU: Model the utility system with the DER at 100% output, then model the proposed DER at 25% output (to simulate a 75% change in output) and verify that the proposed DER does not cause any voltage-regulating device to exceed 1/2 the bandwidth. This analysis is performed for both peak and minimum loading conditions.

Are system regulator movements currently taken into account? If all utilities are not doing the same thing, please explain the reasons.
(Please describe in general and/or have each utility state their exact assumptions below. Note that we are requesting updated responses here.)

- NG: Output from the dynamics model. Uses a step-wise function, similar to a time series.
- NYSEG: Takes anything downline of the regulator but does 3 V / full-step bandwidth (100% to 25%). Real-world example of 80,000 tap changes; request that NYSEG bring that information. 90-second time delay.
- CHG&E: How much excessive movement (uses 75% output). Looks at all DG on the circuit tripping at once. No standalone ESS yet - would use
- ORU: 75% change in output, within a 3,000 ft radius; looks for a change in tap movement and bands (1/2 bandwidth or 2 V). Everything else on.
- ConEd: Does 75% just for the proposed DG, but has done it for projects directly adjacent.
- PSEGLI: All at 100%, but at full step, not half step.

Misc Q&A
Q: When we mention the generator drop from 100% to 25%, are we doing just the proposed project or the whole substation?
A: (answer?)
Q: Is the CYME Load Profile Analysis being used?
A: NG: No, the long-term dynamics module.
Q: Do you include the time delay of the voltage regulator?
A: NG: Yes.
Q: What is the typical tap movement number per year rating?
A: NG: Has seen 500 taps per day from a study but has not had real data (Stoner Substation) - the feeder has 12 MW connected.

Industry Open Questions
Please respond if possible; otherwise know that this is an upcoming open question to be addressed.
- Depending on the answer to the "real life" situation, what is the correct method, and how does this tie to the rationale behind the calculation?
- What is the correct method for this calculation?
- What is the max number of regulator taps per year?

TIER 2 Importance, CESIR Analysis to Review
(will transfer over from spreadsheet in future versions)

TIER 3 Importance, CESIR Analysis to Review
(will transfer over from spreadsheet in future versions)
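As a purely illustrative aside, the pass/fail comparison described in the General Method for Analysis ID #7 (the voltage swing from a 75% output change checked against half of a regulating device's bandwidth) can be sketched as below. The function and the 120 V-base voltages are hypothetical; the actual check is made inside each utility's power-flow model.

```python
def exceeds_half_bandwidth(v_at_100pct, v_at_25pct, bandwidth_volts=2.0):
    """Return True if the modeled voltage swing between 100% and 25% DER
    output exceeds half the regulating device's bandwidth (volts on a
    120 V base). A 2 V bandwidth (so a 1 V half-band) is a common default."""
    return abs(v_at_100pct - v_at_25pct) > bandwidth_volts / 2.0

# Hypothetical modeled voltages at a regulated node (120 V base):
# a 1.3 V swing against a 1.0 V half-band would flag the project.
swing_flagged = exceeds_half_bandwidth(v_at_100pct=122.4, v_at_25pct=121.1)
```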

