Public-facing solar hosting capacity (HC) maps, which show the maximum amount of solar energy that can be installed at a location without adverse effects, have proven to be a key driver of solar soft-cost reductions through a variety of pathways (e.g., streamlining interconnection, siting, and customer acquisition processes). However, current methods for generating HC maps require detailed grid models and time-consuming simulations that limit both their accuracy and scalability—today, only a handful of the almost 2,000 utilities provide these maps. This project developed and validated data-driven algorithms for calculating solar HC from advanced metering infrastructure (AMI) data without the need for detailed grid models or simulations. The algorithms were validated on utility datasets and incorporated as an application into NRECA's Open Modeling Framework (OMF.coop) for use by the more than 260 co-ops and vendors throughout the US. The OMF is free and open-source for everyone.
Before residential photovoltaic (PV) systems are interconnected with the grid, various planning and impact studies are conducted on detailed models of the system to ensure safety and reliability are maintained. However, these model-based analyses can be time-consuming and error-prone, representing a potential bottleneck as the pace of PV installations accelerates. Data-driven tools and analyses provide an alternate pathway to supplement or replace their model-based counterparts. In this article, a data-driven algorithm is presented for assessing the thermal limitations of PV interconnections. Using input data from residential smart meters, and without any grid models or topology information, the algorithm can determine the nameplate capacity of the service transformer supplying those customers. The algorithm was tested on multiple datasets and predicted service transformer capacity with >98% accuracy, regardless of existing PV installations. This algorithm has various applications from model-free thermal impact analysis for hosting capacity studies to error detection and calibration of existing grid models.
Accurate distribution system models are becoming increasingly critical for grid modernization tasks, and inaccurate phase labels are one type of modeling error that can have broad impacts on analyses using the distribution system models. This work demonstrates a phase identification methodology that leverages advanced metering infrastructure (AMI) data and additional data streams from sensors (relays in this case) placed throughout the medium-voltage sector of distribution system feeders. Intuitive confidence metrics are employed to increase the credibility of the algorithm predictions and reduce the incidence of false-positive predictions. The method is first demonstrated on a synthetic dataset under known conditions for robustness testing with measurement noise, meter bias, and missing data. Then, four utility feeders are tested, and the algorithm’s predictions are proven to be accurate through field validation by the utility. Lastly, the ability of the method to increase the accuracy of simulated voltages using the corrected model compared to actual measured voltages is demonstrated through quasi-static time-series (QSTS) simulations. The proposed methodology is a good candidate for widespread implementation because it is accurate on both the synthetic and utility test cases and is robust to measurement noise and other issues.
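The core idea of correlation-based phase identification can be sketched in a few lines. This is an illustrative simplification, not the published algorithm: it assigns each customer to the reference phase whose voltage profile it correlates with most strongly, and uses the gap between the best and second-best correlation as a simple stand-in for the paper's confidence metrics. The threshold value is a hypothetical placeholder.

```python
import numpy as np

def identify_phases(cust_v, ref_v, margin_thresh=0.05):
    """Assign each customer to the reference phase whose voltage profile
    it correlates with most strongly (illustrative sketch, not the
    published algorithm).

    cust_v : (n_customers, n_samples) customer voltage magnitudes
    ref_v  : (3, n_samples) per-phase sensor voltage magnitudes
    Returns (labels, confident): labels[i] in {0, 1, 2}; confident[i]
    is True when the correlation margin exceeds the threshold.
    """
    # Pearson correlation of each customer against each phase reference
    cv = cust_v - cust_v.mean(axis=1, keepdims=True)
    rv = ref_v - ref_v.mean(axis=1, keepdims=True)
    corr = (cv @ rv.T) / (np.linalg.norm(cv, axis=1, keepdims=True)
                          * np.linalg.norm(rv, axis=1))
    order = np.argsort(corr, axis=1)
    labels = order[:, -1]
    # Confidence proxy: gap between best and second-best correlation
    sorted_corr = np.take_along_axis(corr, order, axis=1)
    margin = sorted_corr[:, -1] - sorted_corr[:, -2]
    return labels, margin > margin_thresh
```

In practice the reference series would come from the medium-voltage sensors described in the abstract, and the confidence metric would gate which predictions are acted upon.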
The widespread adoption of residential solar PV requires distribution system studies to ensure that adding solar PV at a customer location does not violate system constraints; the maximum PV size that can be added without violations is referred to as the locational hosting capacity (HC). These model-based analyses are prone to error because their accuracy depends on the quality of the system information. Model-free approaches that estimate a customer's solar PV hosting capacity are a good alternative because their accuracy does not depend on detailed system information. In this paper, an Adaptive Boosting (AdaBoost) algorithm is deployed that uses the statistical properties (mean, minimum, maximum, and standard deviation) of the customer's historical data (real power, reactive power, voltage) as inputs to estimate the voltage-constrained PV HC for the customer. A baseline comparison approach is also built that uses just the maximum voltage of the customer to predict PV HC. The results show that the ensemble-based AdaBoost algorithm outperformed the proposed baseline approach. The developed methods are also compared against and validated by existing state-of-the-art model-free PV HC estimation methods.
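A minimal sketch of the two ingredients named in the abstract: the statistical feature vector fed to the regressor, and the max-voltage baseline. The baseline's voltage limit and sensitivity defaults are illustrative placeholders, not values from the paper, and the feature ordering is an assumption.

```python
import numpy as np

def ami_features(p, q, v):
    """Statistical features (mean, min, max, std) of a customer's AMI
    real power, reactive power, and voltage series -- the inputs the
    abstract describes feeding to the AdaBoost regressor."""
    return np.array([f(x) for x in (p, q, v)
                     for f in (np.mean, np.min, np.max, np.std)])

def baseline_hc(v_max, v_limit=1.05, sens=0.004):
    """Baseline voltage-constrained HC estimate from the maximum
    observed voltage alone: remaining headroom to the voltage limit
    divided by an assumed sensitivity (p.u. rise per kW of added PV).
    Both defaults are hypothetical placeholders."""
    return max(v_limit - v_max, 0.0) / sens
```

With features in hand, an ensemble regressor (e.g., scikit-learn's `AdaBoostRegressor`) can be trained against HC labels obtained from model-based studies; the baseline above is what the abstract reports the ensemble outperforming.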
High penetrations of residential solar PV can cause voltage issues on low-voltage (LV) secondary networks. Distribution utility planners often use model-based power flow solvers to address these voltage issues and accommodate more PV installations without disrupting the customers already connected to the system. These model-based analyses are computationally expensive and often prone to error. In this paper, two novel deep learning-based model-free algorithms are proposed that can predict the change in voltage for PV installations without any network information. The algorithms use only the real power (P), reactive power (Q), and voltage (V) data from Advanced Metering Infrastructure (AMI) to calculate the change in voltage for an additional PV installation at any customer location in the LV secondary network. Both algorithms are tested on three datasets from two feeders and compared to conventional model-based methods and existing model-free methods. The proposed methods are also applied to estimate the locational PV hosting capacity for both feeders and show better accuracy than an existing model-free method. Results show that data filtering or pre-processing can improve model performance when the testing data point exists in the training dataset used for that model.
Residential solar photovoltaic (PV) systems are interconnected with the distribution grid at low-voltage secondary network locations. However, computational models of these networks are often over-simplified or non-existent, which makes it challenging to determine the operational impacts of new PV installations at those locations. In this work, a model-free locational hosting capacity analysis algorithm is proposed that requires only smart meter measurements at a given location to calculate the maximum PV size that can be accommodated without exceeding voltage constraints. The proposed algorithm was evaluated on two different smart meter datasets measuring over 2,700 total customer locations and was compared against results obtained from conventional model-based methods for the same smart meter datasets. Compared to the model-based results, the model-free algorithm had a mean absolute error (MAE) of less than 0.30 kW, was equally sensitive to measurement noise, and required much less computation time.
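The essence of a model-free, voltage-constrained HC calculation can be sketched as follows. This is an illustrative approach under simple assumptions, not the paper's exact algorithm: it estimates the local dV/dP sensitivity by regressing first differences of the smart meter voltage on first differences of net real power, then converts the remaining voltage headroom into a PV size.

```python
import numpy as np

def model_free_hc(p_kw, v_pu, v_limit=1.05):
    """Illustrative model-free locational HC estimate from one
    customer's smart meter data. Assumes a roughly linear local
    voltage-power relationship; v_limit is a typical ANSI-style
    upper bound, used here as a placeholder."""
    dp, dv = np.diff(p_kw), np.diff(v_pu)
    # Least-squares slope of dV on dP (p.u. voltage change per kW)
    sens = abs(np.dot(dp, dv) / np.dot(dp, dp))
    # Headroom at the worst observed voltage, converted to kW of PV
    headroom = max(v_limit - v_pu.max(), 0.0)
    return headroom / sens
```

A production method would need to handle measurement noise, meter quantization, and time-varying upstream conditions, which is where the comparisons against model-based results in the abstract come in.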
Due to their increased levels of reliability, meshed low-voltage (LV) grid and spot networks are common topologies for supplying power to dense urban areas and critical customers. Protection schemes for LV networks often use highly sensitive reverse current trip settings to detect faults in the medium-voltage system. As a result, interconnecting even low levels of distributed energy resources (DERs) can impact the reliability of the protection system and cause nuisance tripping. This work analyzes the possibility of modifying the reverse current relay trip settings to increase the DER hosting capacity of LV networks without impacting fault detection performance. The results suggest that adjusting relay settings can significantly increase DER hosting capacity on LV networks without adverse effects, and that existing guidance on connecting DERs to secondary networks, such as that contained in IEEE Std 1547-2018, could potentially be modified to allow higher DER deployment levels.
The wide variety of inverter control settings for solar photovoltaics (PV) causes the accurate knowledge of these settings to be difficult to obtain in practice. This paper addresses the problem of determining inverter reactive power control settings from net load advanced metering infrastructure (AMI) data. The estimation is first cast as fitting parameterized control curves. We argue for an intuitive and practical approach to preprocess the AMI data, which exposes the setting to be extracted. We then develop a more general approach with a data-driven reactive power disaggregation algorithm, reframing the problem as a maximum likelihood estimation for the native load reactive power. These methods form the first approach for reconstructing reactive power control settings of solar PV inverters from net load data. The constrained curve fitting algorithm is tested on 701 loads with behind-the-meter (BTM) PV systems with identical control settings. The settings are accurately reconstructed with mean absolute percentage errors between 0.425% and 2.870%. The disaggregation-based approach is then tested on 451 loads with variable BTM PV control settings. Different configurations of this algorithm reconstruct the PV inverter reactive power timeseries with root mean squared errors between 0.173 and 0.198 kVAR.
Reno, Matthew J.; Blakely, Logan; Trevizan, Rodrigo D.; Pena, Bethany; Lave, Matt; Azzolini, Joseph A.; Yusuf, Jubair; Jones, Christian B.; Furlani Bastos, Alvaro; Chalamala, Rohit; Korkali, Mert; Sun, Chih-Che; Donadee, Jonathan; Stewart, Emma M.; Donde, Vaibhav; Peppanen, Jouni; Hernandez, Miguel; Deboever, Jeremiah; Rocha, Celso; Rylander, Matthew; Siratarnsophon, Piyapath; Grijalva, Santiago; Talkington, Samuel; Mason, Karl; Vejdan, Sadegh; Khan, Ahmad U.; Mbeleg, Jordan S.; Ashok, Kavya; Divan, Deepak; Li, Feng; Therrien, Francis; Jacques, Patrick; Rao, Vittal; Francis, Cody; Zaragoza, Nicholas; Nordy, David; Glass, Jim; Holman, Derek; Mannon, Tim; Pinney, David
This report summarizes the work performed under a project funded by U.S. DOE Solar Energy Technologies Office (SETO), including some updates from the previous report SAND2022-0215, to use grid edge measurements to calibrate distribution system models for improved planning and grid integration of solar PV. Several physics-based data-driven algorithms are developed to identify inaccuracies in models and to bring increased visibility into distribution system planning. This includes phase identification, secondary system topology and parameter estimation, meter-to-transformer pairing, medium-voltage reconfiguration detection, determination of regulator and capacitor settings, PV system detection, PV parameter and setting estimation, PV dynamic models, and improved load modeling. Each of the algorithms is tested using simulation data and demonstrated on real feeders with our utility partners. The final algorithms demonstrate the potential for future planning and operations of the electric power grid to be more automated and data-driven, with more granularity, higher accuracy, and more comprehensive visibility into the system.
Conservation voltage reduction (CVR) is a common technique used by utilities to strategically reduce demand during peak periods. As penetration levels of distributed generation (DG) continue to rise and advanced inverter capabilities become more common, it is unclear how the effectiveness of CVR will be impacted and how CVR interacts with advanced inverter functions. In this work, we investigated the mutual impacts of CVR and DG from photovoltaic (PV) systems (with and without autonomous Volt-VAR enabled). The analysis was conducted on an actual utility dataset, including a feeder model, measurement data from smart meters and intelligent reclosers, and metadata for more than 30 CVR events triggered by the utility over the year. The installed capacity of the modeled PV systems represented 66% of peak load, but instantaneous penetrations reached up to 2.5x the load consumption over the year. While the objectives of CVR and autonomous Volt-VAR are opposed to one another, this study found that their interactions were mostly inconsequential since the CVR events occurred when total PV output was low.
Distributed generation (DG) sources like photovoltaic (PV) systems with advanced inverters are able to perform grid-support functions, like autonomous Volt-VAR that attempts to mitigate voltage issues by injecting or consuming reactive power. However, the Volt-VAR function operates with VAR priority, meaning real power may be curtailed to provide additional reactive power support. Since some locations on the grid may be more prone to higher voltages than others, PV systems installed at those locations may be forced to curtail more power, adversely impacting the value of that PV system. Adaptive Volt-VAR (AVV) could be implemented as an alternative, whereby the Volt-VAR reference voltage changes over time, but this functionality has not been well-explored in the literature. In this work, the potential benefits and grid impacts of AVV were investigated using yearlong quasi-static time-series (QSTS) simulations. After testing a variety of allowable AVV settings, we found that even with aggressive settings AVV resulted in <0.01% real power curtailment and significantly reduced the reactive power support required from the PV inverter compared to conventional Volt-VAR but did not provide much mitigation for extreme voltage conditions. The reactive power support provided by AVV was injected to oppose large deviations in voltage (in either direction), indicating that it could be useful for other applications like reducing voltage flicker or minimizing interactions with other voltage regulating devices.
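The adaptive idea above amounts to sliding a standard Volt-VAR characteristic along the voltage axis. The sketch below uses breakpoint offsets matching the IEEE 1547-2018 default (Category B) curve, re-centered on a movable reference voltage; a real AVV controller would update `v_ref` from a filtered voltage history, whereas here it is just a parameter.

```python
import numpy as np

def volt_var_q(v, v_ref=1.0, q_max=0.44):
    """Piecewise-linear Volt-VAR curve shaped like the IEEE 1547
    default (Category B) characteristic, re-centered on a movable
    reference voltage to mimic an adaptive Volt-VAR scheme.
    Positive Q means VAR injection (voltage support); values are in
    per-unit of inverter apparent power rating."""
    # Breakpoints relative to the (possibly adaptive) reference voltage
    vx = v_ref + np.array([-0.08, -0.02, 0.02, 0.08])
    qx = np.array([q_max, 0.0, 0.0, -q_max])
    # np.interp clamps outside the curve, matching the Q saturation
    return np.interp(v, vx, qx)
```

Because the deadband follows `v_ref`, an adaptive scheme responds mainly to deviations from recent conditions rather than from nominal voltage, which is consistent with the abstract's observation that AVV opposes large voltage deviations in either direction.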
The increasing availability of advanced metering infrastructure (AMI) data has led to significant improvements in load modeling accuracy. However, since many AMI devices were installed to facilitate billing practices, few utilities record or store reactive power demand measurements from their AMI. When reactive power measurements are unavailable, simplifying assumptions are often applied for load modeling purposes, such as applying constant power factors to the loads. The objective of this work is to quantify the impact that reactive power load modeling practices can have on distribution system analysis, with a particular focus on evaluating the behaviors of distributed photovoltaic (PV) systems with advanced inverter capabilities. Quasi-static time-series simulations were conducted after applying a variety of reactive power load modeling approaches, and the results were compared to a baseline scenario in which real and reactive power measurements were available at all customer locations on the circuit. Overall, it was observed that applying constant power factors to loads can lead to significant errors when evaluating customer voltage profiles, but that performing per-phase time-series reactive power allocation can be utilized to reduce these errors by about 6x, on average, resulting in more accurate evaluations of advanced inverter functions.
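The time-series reactive power allocation idea can be sketched compactly. This is an illustrative version of the concept, not the paper's implementation: a measured aggregate reactive power series (e.g., per phase at the feeder head) is split across loads in proportion to each load's measured real power at the same timestep.

```python
import numpy as np

def allocate_q(q_feeder_t, p_loads_t):
    """Allocate an aggregate reactive power time series across loads
    in proportion to per-load real power (illustrative sketch).

    q_feeder_t : (n_t,) aggregate Q per timestep
    p_loads_t  : (n_loads, n_t) per-load real power
    Returns (n_loads, n_t) allocated reactive power; columns sum to
    the aggregate series by construction."""
    shares = p_loads_t / p_loads_t.sum(axis=0, keepdims=True)
    return shares * q_feeder_t
```

Compared with assuming a constant power factor at every load, this preserves the diurnal shape of reactive demand, which is the source of the roughly 6x error reduction the abstract reports.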
The proper coordination of power system protective devices is essential for maintaining grid safety and reliability but requires precise knowledge of fault current contributions from generators like solar photovoltaic (PV) systems. PV inverter fault response is known to change with atmospheric conditions, grid conditions, and inverter control settings, but this time-varying behavior may not be fully captured by conventional static fault studies that are used to evaluate protection constraints in PV hosting capacity analyses. To address this knowledge gap, hosting capacity protection constraints were evaluated on a simplified test circuit using both a time-series fault analysis and a conventional static fault study approach. A PV fault contribution model was developed and utilized in the test circuit after being validated by hardware experiments under various irradiances, fault voltages, and advanced inverter control settings. While the results were comparable for certain protection constraints, the time-series fault study identified additional impacts that would not have been captured with the conventional static approach. Overall, while conducting full time-series fault studies may become prohibitively burdensome, these findings indicate that existing fault study practices may be improved by including additional test scenarios to better capture the time-varying impacts of PV on hosting capacity protection constraints.
In the near future, grid operators are expected to regularly use advanced distributed energy resource (DER) functions, defined in IEEE 1547-2018, to perform a range of grid-support operations. Many of these functions adjust the active and reactive power of the device through commanded or autonomous modes, which will produce new stresses on the grid-interfacing power electronics components, such as DC/AC inverters. In previous work, multiple DER devices were instrumented to evaluate additional component stress under multiple reactive power setpoints. We utilize quasi-static time-series simulations to determine the voltage-reactive power mode (volt-var) mission profiles of inverters in an active power system. Mission profiles and loss estimates are then combined to estimate the reduction in the useful life of inverters under different reactive power profiles. It was found that the average lifetime reduction was approximately 0.15% for an inverter between standard unity power factor operation and the IEEE 1547 default volt-var curve, based on thermal damage due to switching in the power transistors. For an inverter with an expected 20-year lifetime, the 1547 volt-var curve would reduce the expected life of the device by 12 days. This framework for determining an inverter's useful life from experimental and modeling data can be applied to any failure mechanism and advanced inverter operation.
Frequent changes in penetration levels of distributed energy resources (DERs) and grid control objectives have caused the maintenance of accurate and reliable grid models for behind-the-meter (BTM) photovoltaic (PV) system impact studies to become an increasingly challenging task. At the same time, high adoption rates of advanced metering infrastructure (AMI) devices have improved load modeling techniques and have enabled the application of machine learning algorithms to a wide variety of model calibration tasks. Therefore, we propose that these algorithms can be applied to improve the quality of the input data and grid models used for PV impact studies. In this paper, these potential improvements were assessed for their ability to improve the accuracy of locational BTM PV hosting capacity analysis (HCA). Specifically, the voltage- and thermal-constrained hosting capacities of every customer location on a distribution feeder (1,379 in total) were calculated every 15 minutes for an entire year before and after each calibration algorithm or load modeling technique was applied. Overall, the HCA results were found to be highly sensitive to the various modeling deficiencies under investigation, illustrating the opportunity for more data-centric/model-free approaches to PV impact studies.
Reno, Matthew J.; Blakely, Logan; Trevizan, Rodrigo D.; Pena, Bethany D.; Lave, Matt; Azzolini, Joseph A.; Yusuf, Jubair; Jones, Christian B.; Furlani Bastos, Alvaro; Chalamala, Rohit; Korkali, Mert; Sun, Chih-Che; Donadee, Jonathan; Stewart, Emma M.; Donde, Vaibhav; Peppanen, Jouni; Hernandez, Miguel; Deboever, Jeremiah; Rocha, Celso; Rylander, Matthew; Siratarnsophon, Piyapath; Grijalva, Santiago; Talkington, Samuel; Gomez-Peces, Cristian; Mason, Karl; Vejdan, Sadegh; Khan, Ahmad U.; Mbeleg, Jordan S.; Ashok, Kavya; Divan, Deepak; Li, Feng; Therrien, Francis; Jacques, Patrick; Rao, Vittal; Francis, Cody; Zaragoza, Nicholas; Nordy, David; Glass, Jim
This report summarizes the work performed under a project funded by U.S. DOE Solar Energy Technologies Office (SETO) to use grid edge measurements to calibrate distribution system models for improved planning and grid integration of solar PV. Several physics-based data-driven algorithms are developed to identify inaccuracies in models and to bring increased visibility into distribution system planning. This includes phase identification, secondary system topology and parameter estimation, meter-to-transformer pairing, medium-voltage reconfiguration detection, determination of regulator and capacitor settings, PV system detection, PV parameter and setting estimation, PV dynamic models, and improved load modeling. Each of the algorithms is tested using simulation data and demonstrated on real feeders with our utility partners. The final algorithms demonstrate the potential for future planning and operations of the electric power grid to be more automated and data-driven, with more granularity, higher accuracy, and more comprehensive visibility into the system.
Recent trends in PV economics and advanced inverter functionalities have contributed to the rapid growth in PV adoption: PV modules have become much cheaper, and advanced inverters can deliver a range of services in support of grid operations. However, these phenomena also create conditions for PV curtailment, where high penetrations of distributed PV often necessitate the use of advanced inverter functions with VAR priority to address abnormal grid conditions like over- and under-voltages. This paper presents a detailed energy loss analysis, using a combination of open-source PV modeling tools and high-resolution time-series simulations, to place the magnitude of clipped and curtailed PV energy in context with other operational sources of PV energy loss. The simulations were conducted on a realistic distribution circuit, modified to include utility load data and 341 modeled PV systems at 25% of the customer locations. The results revealed that the magnitude of clipping losses often overshadows that of curtailment but, on average, both were among the lowest contributors to total annual PV energy loss. However, combined clipping and curtailment losses are likely to become more prevalent as these trends continue.
Grid support functionalities from advanced PV inverters are increasingly being utilized to help regulate grid conditions and enable high PV penetration levels. To ensure a high degree of reliability, it is paramount that protective devices respond properly to a variety of fault conditions. However, while the fault response of PV inverters operating at unity power factor has been well documented, less work has been done to characterize the fault contributions and impacts of advanced inverters with grid support enabled under conditions like voltage sags and phase angle jumps. To address this knowledge gap, this paper presents experimental results of a three-phase photovoltaic inverter's response during and after a fault to investigate how PV systems behave under fault conditions when operating with and without a grid support functionality (autonomous Volt-Var) enabled. Simulations were then conducted to quantify the potential impact of the experimental findings on protection systems. It was observed that fault current magnitudes across several protective devices were impacted by non-unity power factor operating conditions, suggesting that protection settings may need to be studied and updated whenever grid support functions are enabled or modified.
The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modeling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which can take 10 to 120 hours of computation on conventional computers when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. The solutions we developed include accurate and computationally efficient QSTS methods that could be implemented in existing open-source and commercial software used by utilities, as well as methods to create high-resolution proxy datasets. This project demonstrated multiple pathways for speeding up QSTS computation using new and innovative methods for advanced time-series analysis, faster power flow solvers, parallel processing of power flow solutions, and circuit reduction. The target performance level for this project was achieved, with yearlong high-resolution time-series solutions run in less than 5 minutes with acceptable error.
By strategically curtailing active power and providing reactive power support, photovoltaic (PV) systems with advanced inverters can mitigate voltage and thermal violations in distribution networks. Quasi-static time-series (QSTS) simulations are increasingly being utilized to study the implementation of these inverter functions as alternatives to traditional circuit upgrades. However, QSTS analyses can yield significantly different results based on the availability and resolution of input data and other modeling considerations. In this paper, we quantified the uncertainty of QSTS-based curtailment evaluations for two different grid-support functions (autonomous Volt-Var and centralized PV curtailment for preventing reverse power conditions) through extensive sensitivity analyses and hardware testing. We found that Volt-Var curtailment evaluations were most sensitive to poor inverter convergence (-56.4%), PV time-series data (-18.4% to +16.5%), QSTS resolution (-15.7%), and inverter modeling uncertainty (+14.7%), while the centralized control case was most sensitive to load modeling (-26.5% to +21.4%) and PV time-series data (-6.0% to +12.4%). These findings provide valuable insights for improving the reliability and accuracy of QSTS analyses for evaluating curtailment and other PV impact studies.
Advanced solar PV inverter control settings may not be reported to utilities or may be changed without notice. This paper develops an estimation method for determining a fixed power factor control setting of a behind-the-meter (BTM) solar PV smart inverter. The estimation is achieved using linear regression methods with historical net load advanced metering infrastructure (AMI) data. Notably, the BTM PV power factor setting may be unknown or uncertain to a distribution engineer, and cannot be trivially estimated from the historical AMI data due to the influence of the native load on the measurements. To solve this, we use a simple percentile-based approach for filtering the measurements. A physics-based linear sensitivity model is then used to determine the fixed power factor control setting from the sensitivity in the complex power plane. This sensitivity parameter characterizes the control setting hidden in the aggregate data. We compare several loss functions, and verify the models developed by conducting experiments on 250 datasets based on real smart meter data. The data are augmented with synthetic quasi-static time-series (QSTS) simulations of BTM PV that simulate utility-observed aggregate measurements at the load. The simulations demonstrate the reactive power sensitivity of a BTM PV smart inverter can be recovered efficiently from the net load data after applying the filtering approach.
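The percentile-filter-plus-regression idea can be sketched as follows. This is an illustrative simplification of the described approach: keep the samples with the most negative net real power (where PV export dominates the native load), fit the slope of net Q against net P, and read a fixed power factor off that sensitivity. The percentile value and sign conventions are assumptions for this sketch.

```python
import numpy as np

def estimate_pv_pf(p_net, q_net, pct=10):
    """Estimate a fixed power factor setting of a BTM PV inverter from
    net load AMI data (illustrative sketch). Filters to high-export
    periods, then fits the sensitivity dQ/dP in the complex power
    plane; for a fixed-PF inverter that slope equals tan(theta)."""
    # Keep the most negative net-P samples, where PV export dominates
    keep = p_net <= np.percentile(p_net, pct)
    p, q = p_net[keep], q_net[keep]
    # Least-squares slope of net Q on net P over the filtered samples
    slope = np.polyfit(p, q, 1)[0]
    return np.cos(np.arctan(slope))
```

With real data, residual native load in the filtered samples biases the slope, which is what motivates the paper's more general disaggregation-based formulation.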
The rising penetration levels of photovoltaic (PV) systems within distribution networks have driven considerable interest in the implementation of advanced inverter functions, like autonomous Volt-Var, to provide grid support in response to adverse conditions. Quasi-static time-series (QSTS) analyses are increasingly being utilized to evaluate the potential grid benefits of advanced inverter functions and to quantify the magnitude of PV power curtailment they may induce. However, these analyses require additional modeling efforts to appropriately capture the time-varying behavior of circuit elements like loads and PV systems. The contribution of this paper is to study QSTS-based curtailment evaluations with different load allocation and PV modeling practices under a variety of assumptions and data limitations. A total of 24 combinations of PV and load modeling scenarios were tested on a realistic test circuit with 1,379 loads and 701 PV systems. The results revealed that the average annual curtailment varied from the baseline value of 0.47% by an absolute difference of +0.55% to -0.43% based on the modeling scenario.
Distributed photovoltaic (PV) systems equipped with advanced inverters can control real and reactive power output based on grid and atmospheric conditions. The Volt-Var control method allows inverters to regulate local grid voltages by producing or consuming reactive power. Based on their power ratings, the inverters may need to curtail real power to meet the reactive power requirements, which decreases their total energy production. To evaluate the expected curtailment associated with Volt-Var control, yearlong quasi-static time-series (QSTS) simulations were conducted on a realistic distribution feeder under a variety of PV system design considerations. Overall, this paper found that the amount of curtailed energy is low (< 0.55%) compared to the total PV energy production in a year but is affected by several PV system design considerations.
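The mechanism behind Volt-Var curtailment can be sketched as follows. This is a generic illustration, assuming example IEEE 1547-style curve breakpoints and a reactive-power-priority inverter; the specific breakpoints, Q limit, and function names are assumptions, not the settings studied in the paper:

```python
import math
import numpy as np

def volt_var_q(v, s_rated, v_pts=(0.92, 0.98, 1.02, 1.08), q_max_frac=0.44):
    """Piecewise-linear Volt-Var curve (example breakpoints, per unit).

    Full capacitive injection below v_pts[0], a deadband between
    v_pts[1] and v_pts[2], and full inductive absorption above v_pts[3].
    """
    q_max = q_max_frac * s_rated
    v1, v2, v3, v4 = v_pts
    return float(np.interp(v, [v1, v2, v3, v4], [q_max, 0.0, 0.0, -q_max]))

def apply_reactive_priority(p_avail, q_cmd, s_rated):
    """Curtail active power so the apparent power stays within the
    inverter rating: sqrt(P^2 + Q^2) <= S.  Returns (P_out, P_curtailed)."""
    p_limit = math.sqrt(max(s_rated**2 - q_cmd**2, 0.0))
    p_out = min(p_avail, p_limit)
    return p_out, p_avail - p_out
```

At nominal voltage the inverter sits in the deadband and curtails nothing; during a high-voltage excursion, absorbing reactive power shrinks the active power headroom, which is the source of the small annual curtailment the abstract reports.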
Quasi-static time-series (QSTS) analysis of distribution systems can provide critical information about the potential impacts of high penetrations of distributed and renewable resources, like solar photovoltaic systems. However, running high-resolution yearlong QSTS simulations of large distribution feeders can be prohibitively burdensome due to long computation times. Temporal parallelization of QSTS simulations is one possible solution to overcome this obstacle. QSTS simulations can be divided into multiple sections, e.g. into four equal parts of the year, and solved simultaneously with parallel computing. The challenge is that each time the simulation is divided, error is introduced. This paper presents various initialization methods for reducing the error associated with temporal parallelization of QSTS simulations and characterizes performance across multiple distribution circuits and several different computers with varying architectures.
Distribution system analysis requires yearlong quasi-static time-series (QSTS) simulations to accurately capture the variability introduced by high penetrations of distributed energy resources (DER) such as residential and commercial-scale photovoltaic (PV) installations. Numerous methods are available that significantly reduce the computational time needed for QSTS simulations while maintaining accuracy. However, analyzing the results remains a challenge; a typical QSTS simulation generates millions of data points that contain critical information about the circuit and its components. This paper provides examples of visualization methods to facilitate the analysis of QSTS results and to highlight various characteristics of circuits with high variability.
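One common way to condense millions of QSTS data points for visualization is a calendar heatmap: fold the yearlong series into a day-by-hour matrix so seasonal and diurnal patterns become visible at a glance. A minimal sketch of that aggregation (the function name and parameters are illustrative, not from the paper):

```python
import numpy as np

def hourly_heatmap(values, steps_per_hour=60):
    """Fold a long time series into a (days x 24) matrix of hourly means.

    Each row is one day, each column one hour of the day -- the typical
    input for a calendar-style heatmap of QSTS results such as feeder
    voltage or transformer loading.  Trailing partial hours/days dropped.
    """
    v = np.asarray(values, dtype=float)
    hours = len(v) // steps_per_hour
    hourly = v[:hours * steps_per_hour].reshape(hours, steps_per_hour).mean(axis=1)
    days = hours // 24
    return hourly[:days * 24].reshape(days, 24)
```

The resulting matrix plugs directly into any heatmap plotting routine; rows expose seasonal trends and columns expose the daily PV generation profile.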