As power system planners and operators accelerate the adoption of renewable energy sources in their generation mix, understanding the true capabilities of energy storage becomes increasingly important and time-sensitive. In addition to many system-wide benefits, one of the most distinctive aspects of energy storage is its ability to enhance the reliability and stability of the power system by virtue of its fast ramp response compared to traditional generation assets. While energy storage technologies are widely acknowledged to hold great promise across the value chain, some users’ experiences have been inconsistent. Challenges such as identifying the optimal system value, ensuring and validating system durability, and achieving proper integration and validation in grid applications have surfaced in some of the early demonstration projects. Where these issues have been encountered, it is important to understand whether they are fundamentally related to design, installation, communication, controls, or operating principles, or whether the integration and operation of these systems are at the root of the challenges. The important question becomes: how can we be sure we understand the root cause of the issues, optimize such systems to prevent them from recurring, and determine whether they persist in the current generation of energy storage projects being deployed?
In short, as storage system developers, owners, and operators continue to deploy systems into the field, it becomes critical to combine efforts to optimize these new assets by using state-of-the-art data acquisition frameworks and optimization techniques to fine-tune their real-world performance.
State-of-the-art data acquisition
The key to good data acquisition is collecting the right information, at the right time, and in the right format so the information can be easily stored, transferred, and interpreted. This may sound simple at first, but reliable power system data acquisition has unique challenges and it is important to understand the steps required to ensure robust, high-quality data is available to inform operational decisions.
Right information – Given the complexity of the systems under study, making sure that all of the relevant components, such as electronic devices/controllers, servers, electronic relays, smart metering devices, and data historians, have been identified is critical to capturing the information required for operational control and validation. Currently, many owners, equipment suppliers, and system integrators use their own preferred control and supervisory control and data acquisition (SCADA) platforms to collect and integrate various subsystems and devices. In some cases, this can complicate the data collection process, since not all controllers/devices are connected to the main SCADA network. Thus, in certain circumstances, the only way to exchange data is indirectly through another device, which can cause a bottleneck in the system if high-speed data is required or if the exchange is not implemented correctly. In addition, while some devices such as inverters are designed to provide standard, industry-accepted data outputs that can be analyzed and used to inform real-time operating decisions, not all energy storage systems are standardized to the point where they routinely capture and provide all of those critical data outputs. While groups like the Modular Energy Storage Architecture (MESA) Standards Alliance are making headway in this area, this lack of standardization leaves many operators without enough information to make real-time operational decisions that take into account the continuing health (i.e. availability, performance, and capacity) of the storage system, or to diagnose faults and optimize performance. Therefore, carefully reviewing the available data sets against established standards and best practices is critical to the success of any storage project.
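As a minimal illustration of the "right information" check, the sketch below validates a captured data set against a list of required data points. The tag names are hypothetical examples for illustration only, not drawn from MESA or any other standard.

```python
# Sketch: checking a captured data set against a required tag list.
# All tag names below are hypothetical, not from any standard.
REQUIRED_TAGS = {
    "inverter.ac_power_kw",
    "inverter.dc_voltage_v",
    "battery.state_of_charge_pct",
    "battery.max_cell_temp_c",
    "meter.frequency_hz",
}

def missing_tags(captured: dict) -> set:
    """Return the required tags absent from a captured sample."""
    return REQUIRED_TAGS - captured.keys()

# A sample as it might arrive from a SCADA poll (illustrative values).
sample = {
    "inverter.ac_power_kw": 250.0,
    "battery.state_of_charge_pct": 71.4,
    "meter.frequency_hz": 59.98,
}
print(sorted(missing_tags(sample)))
```

Running such a check at commissioning, and again whenever a device or firmware changes, helps catch gaps before they undermine later analysis.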
Right timing – One of the advantages that storage systems have over many traditional generating assets is that they have minimal inertia and can therefore manage transient events with much faster response times. Sub-cycle response times can be achieved with optimized control schemes and highly deterministic communication networks, which in turn supports power system stability and reliability (e.g. by preventing trips). In addition, in order to validate and fine-tune the response of the storage asset, collecting data at high speed becomes increasingly important. Without access to sub-cycle data, it is more difficult to tune and optimize the system so that it responds effectively to transient events. However, in most installations, SCADA systems and storage databases sample no faster than once per second. Therefore, special-purpose data collection equipment and data exchange protocols may be required to effectively collect both high-speed and low-speed data.
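One way to combine high-speed and slow-speed streams after collection is an "as-of" join, pairing each slow SCADA record with the most recent high-speed sample at or before its timestamp. The sketch below assumes simple in-memory lists of timestamped values; the data are illustrative.

```python
import bisect

# Sketch: an "as-of" join pairing each slow SCADA record (1 s) with the
# most recent high-speed sample (here 1 kHz) at or before its timestamp.
def asof_join(slow, fast):
    """slow, fast: lists of (timestamp_s, value), each sorted by timestamp."""
    fast_ts = [t for t, _ in fast]
    joined = []
    for t, v in slow:
        i = bisect.bisect_right(fast_ts, t) - 1  # last fast sample <= t
        fast_val = fast[i][1] if i >= 0 else None
        joined.append((t, v, fast_val))
    return joined

# Illustrative data: a slowly drifting frequency measurement at 1 kHz,
# and a 1 Hz SCADA status log.
fast = [(k / 1000.0, 60.0 - 0.001 * k) for k in range(3000)]
slow = [(0.0, "ok"), (1.0, "ok"), (2.0, "alarm")]
print(asof_join(slow, fast))
```

The same pattern scales to database-backed stores, where the join is usually pushed into the query layer rather than done in application code.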
Right format – Finally, establishing a methodology to download the data from several platforms, while ensuring proper security and integration of devices from various vendors and owners, can be challenging. Since the data available from the storage system and related network is distributed over various platforms and devices, it is highly unlikely that the information is integrated and stored in compatible formats. While this can be partly rectified with off-the-shelf software and proper implementation, a larger issue can arise during analysis and optimization when the clocks built into these various platforms and devices are not synchronized. Unfortunately, this means that once the data is collected, the data sets must be manually synchronized to some known event, such as a trip or a generator start-up, in the analysis software before they can be used effectively in the optimization process.
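That manual synchronization step can be sketched as shifting one device's timestamps so that a common event, such as a trip visible in both records, lands at the same instant on a reference clock. The timestamps and values below are illustrative.

```python
# Sketch: aligning two device clocks using a common event (e.g. a trip)
# visible in both records. All timestamps and values are illustrative.

def align_to_event(records, event_time_local, event_time_reference):
    """Shift (timestamp, value) records so the event seen on the local
    clock lands on the reference clock's event time."""
    offset = event_time_reference - event_time_local
    return [(t + offset, v) for t, v in records]

# Device B saw the trip at t=104.2 on its own clock; the reference
# historian logged the same trip at t=100.0.
device_b = [(103.0, 0.98), (104.2, 0.02), (105.0, 0.01)]
aligned = align_to_event(device_b, 104.2, 100.0)
print(aligned)  # the trip sample now sits at t=100.0 on the reference clock
```

A constant offset is the simplest case; drifting clocks require re-estimating the offset at multiple events or using a disciplined time source (e.g. GPS or PTP) at acquisition time.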
Clearly, while data acquisition has its challenges, with proper up-front design or post-installation optimization, these difficulties can be resolved or certainly minimized.
Leveraging data to optimize system performance
While the details of data collection and optimization are interesting, at the end of the day the energy storage system owner/operator wants to ensure that their assets are operated efficiently, safely and in a manner that enables all the services that provide value to the system. How can this be achieved? Here as well, using a robust data collection and management framework is critical, as it enables a similarly robust validation and optimization process. At a minimum, the important elements are to:
Model – Start with robust power system and control system models that adequately describe the system under study
Simulate – Using the models above, simulate energy storage system performance in response to various transient events, and output high speed data
Analyse – During both simulation and operation, analyze the response of the system to both normal operation and transient operation, and compare it against the intended operation
Tune – Based on the analysis above, the final step is to adjust both the engineering parameters and protocols to optimize the overall storage system performance within the larger grid context
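The four steps above can be sketched as a closed loop. The toy one-parameter plant model, error metric, and tolerance below are illustrative placeholders, not an actual storage system model.

```python
# Sketch of the model -> simulate -> analyse -> tune loop described above,
# using a toy one-parameter plant model. All numbers are illustrative.

def simulate(gain, events):
    """Toy plant model: the response to each event scales with the gain."""
    return [gain * e for e in events]

def analyse(response, target, events):
    """Signed error per unit of event magnitude (illustrative metric)."""
    return sum(t - r for t, r in zip(target, response)) / sum(abs(e) for e in events)

def tune(gain, error):
    """Adjust the control parameter in proportion to the observed error."""
    return gain + error

events = [1.0, -0.5, 2.0]           # simulated transient events
target = [1.5 * e for e in events]  # intended response
gain = 1.0                          # initial control parameter
for _ in range(50):
    error = analyse(simulate(gain, events), target, events)
    if abs(error) < 0.01:           # within tolerance: stop tuning
        break
    gain = tune(gain, error)
print(round(gain, 3))               # converges near the intended gain of 1.5
```

In a real deployment the "plant" is the hardware-in-the-loop or installed system, and the tuning step adjusts controller setpoints rather than a single gain, but the loop structure is the same.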
Model – While the strategy is straightforward, the implementation steps can become complex and will depend on whether the goal is to optimize the protocol (i.e. the request for services coming from the user) or the system (i.e. the accuracy of the response to the request). In either case, the system models required to perform the optimization process have two tiers: (1) the power system model, which includes the energy storage model, and (2) the control system model (see diagram below). It is important to note that the goal here is not only to maximize system performance, but also system lifetime. In this way, a robust understanding is established of both the electrical components’ operation and the storage technology’s behaviour under specific stressors or control methodologies. This is an important consideration that many current implementations overlook.
Simulate – Models can be built, optimized, and analyzed exclusively in software, or with a combination of software and hardware-in-the-loop simulators. For pure software system models, common packages include PSCAD, MATLAB/Simulink, and others. While building and evaluating models purely in the software domain is a reasonable approach and suitable in many instances, the most obvious disadvantage is that it does not include the hardware interfaces that influence the system response when sub-cycle information and simulation are critical. When using hardware in the loop, the power model is most often built on Real Time Digital Simulator (RTDS) hardware in conjunction with RSCAD software, and the actual control system interfaces with the RTDS system. Both approaches are valid and can be implemented correctly in the majority of systems, depending on the exact needs and configuration. What is critical is that the scenarios under investigation, the operating parameters of the system being studied, and the desired response of the system are well understood, and that a robust optimization plan is developed for the commissioning and operation phases.
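As a minimal example of a pure-software simulation, the sketch below models a storage asset's frequency-droop response to an under-frequency event. The droop, deadband, and rating values are illustrative assumptions, not taken from any particular product or grid code.

```python
# Sketch: pure-software simulation of a storage asset's droop response to
# frequency deviations. All parameters below are illustrative assumptions.

F_NOM = 60.0      # nominal frequency, Hz
DROOP = 0.05      # 5% droop: full power at a 5% frequency deviation
P_MAX = 1.0       # rated power, per unit
DEADBAND = 0.036  # Hz; no response inside the deadband

def droop_power(freq_hz):
    """Per-unit power command for a measured frequency.
    Positive = discharge (under-frequency), negative = charge."""
    dev = F_NOM - freq_hz
    if abs(dev) <= DEADBAND:
        return 0.0
    dev -= DEADBAND if dev > 0 else -DEADBAND   # remove the deadband
    p = dev / (DROOP * F_NOM)                   # linear droop slope
    return max(-P_MAX, min(P_MAX, p))           # clamp to the rating

for f in (60.00, 59.95, 59.70, 56.00):
    print(f, round(droop_power(f), 3))
```

Running a curve like this against recorded high-speed frequency data is one way to compare the intended control law with the asset's actual measured output.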
Analyse and tune – Once the appropriate models are built and actual data is captured at the appropriate points and sampling rates, the simulation and optimization process can begin to deliver the benefits of a solid data acquisition plan.
When the ultimate goal is to optimize the operator’s request to the storage system, event correlation analysis is carried out to verify the completeness of the available data, and to identify opportunities for process improvement. Understanding the discrepancy between planned and actual availability of system components leads to better planning of system control and plant alarms. In addition, this event and operation correlation analysis of the energy storage efficiency and available capacity with the actual system operation provides valuable information required to develop standards for energy storage integration in power systems. As mentioned above, this standardization will lead to better and more consistent data from the storage devices and allow more complete analysis, thus leading to optimized operations and more seamless integration of storage assets into utility and industrial applications.
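A simple form of this planned-versus-actual analysis is to rank components by their availability shortfall over a reporting period. The component names and hour counts below are hypothetical.

```python
# Sketch: ranking components by availability shortfall over a reporting
# period. Component names and hours are hypothetical examples.

planned_available_h = {"inverter_1": 720, "inverter_2": 720, "bms": 744}
actual_available_h  = {"inverter_1": 701, "inverter_2": 720, "bms": 698}

def availability_gap(planned, actual):
    """Hours of availability shortfall per component, largest first."""
    gaps = {c: planned[c] - actual.get(c, 0) for c in planned}
    return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)

for component, gap_h in availability_gap(planned_available_h, actual_available_h):
    print(component, gap_h)
```

Correlating the worst shortfalls with the event log for the same period is what points the analysis toward specific alarms or control behaviours worth fixing.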
When the ultimate goal is to optimize the system’s response to a performance request from the user, the response and behaviour is analyzed through several iterative processes, then the results are compared to actual equipment performance and a tuning scheme can be implemented by the power system owners. This is an ongoing process that ensures the owner/operator is able to accurately predict the behaviours of the system and adjust its reaction to events. Whenever there is a gap between the expected and the actual results, the models are optimized through the same process and the installed system is tuned accordingly.
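In practice, the trigger for this re-tuning can be as simple as flagging when the measured response drifts beyond a tolerance from the model's prediction. The tolerance and sample data below are illustrative.

```python
# Sketch: flag when the measured response deviates from the model's
# prediction beyond a tolerance, signalling that the model or the
# installed system needs re-tuning. Numbers are illustrative.

TOLERANCE = 0.05  # per-unit deviation considered acceptable

def needs_retuning(predicted, measured, tol=TOLERANCE):
    """True if any sample deviates from the prediction by more than tol."""
    return any(abs(p - m) > tol for p, m in zip(predicted, measured))

predicted = [0.00, 0.40, 0.80, 1.00]
measured  = [0.00, 0.42, 0.79, 0.91]  # response sags near full output
print(needs_retuning(predicted, measured))
```

When the flag trips, the gap itself (which samples deviate, and by how much) is the input to the next iteration of model refinement and controller tuning.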
As outlined above, data collection, simulation, analysis, and subsequent optimization processes for energy storage systems are critical not only during the design phase, but throughout operation as well. Ensuring that new or existing assets operate as designed, in a way that not only meets safety and contractual criteria but also optimizes their performance and durability, can be a continuous process. Storage system modeling, real-time monitoring, performance and reliability analysis, and system optimization are just some of the many activities that the NRC is currently working on with stakeholders to ensure that all of the right pieces are in place for them to gain maximum value from their systems.
For more information on how these projects and other initiatives related to data acquisition and system operation can benefit your organization, reach out and get involved.