Santiago Peñate

Updated: Feb 2, 2023

Introduction

Power systems planning is the process of proposing new infrastructure for the development of the power system. It is guided by technical, social and economic principles that, in the end, merge into an economic valuation of the social and technical aspects.


Common practice

The following methodologies are common practices in power systems planning. Each has its advantages and disadvantages.


Electric model based planning: This process consists of evaluating a number of operational points of the grid with electrical simulation software. Power flows, contingency analyses and dynamic studies are performed to determine which changes to the grid are the most beneficial, giving higher priority to the technical criteria.

  • Advantages: Highly accurate.

  • Disadvantages: Very labor intensive. The scenarios and investments are chosen by hand due to the laborious process, so better choices are probably omitted by accident.

Techno-economic model based planning: These modeling techniques, also known as expansion planning, are versions of a classical econometric problem in which the candidate grid investments are fed into a mixed-integer optimization program that chooses the combination of investments, and when to make them, that produces the most benefit for the system. There are plenty of flavors in this family of problems; some consider only transmission expansion, while others consider generation and transmission expansion. Some are deterministic, while others consider stochastic uncertainty in the inputs.

  • Advantages: Optimality of the solution. It considers many operational scenarios.

  • Disadvantages: Somewhat labor intensive. Approximations may make them inapplicable to some problems. Not (easily) scalable to a large number of nodes, since the whole problem is formulated inside a Mixed-Integer Programming (MIP) solver, which may have a very hard time finding a suitable solution for the complete problem at once.

Mixed approach: Another common practice is to simulate the market with optimization software inside a MIP solver and then verify the result with electrical simulation software. In this approach, a simplified version of the power system is introduced into an optimization problem that dispatches the generation, sometimes considering some light technical constraints, simulating a competitive market. Some of those market results are then simulated in more detail using electrical simulation software with the complete infrastructure model. This process is repeated for a number of investments.

  • Advantages: Closer to the real interaction of market and system operator.

  • Disadvantages: Very labor intensive. The market model and the electrical simulation model are often incompatible. The scenarios and investments are chosen by hand due to the laborious process, so better choices are probably omitted by accident.

Without aiming to be a comprehensive compilation of the practice, the methodologies listed above constitute most of what is being done in the industry at the moment. Something they all have in common is that they are labor intensive. This is reflected in a recent survey aimed at understanding the state of the tools used at leading TSO and ISO companies.


Vision

The Advanced Grid Insights vision is not radically different, but it is executed radically differently. Yes, we need optimization; yes, we need electrical calculation; but above all, we need automation. But automation done right.


A common conception of automation is to couple existing pieces of software so that they operate jointly. Unfortunately, experience has proven that trying to couple vendor product A with vendor product B is a dead end, because A and B were not designed to interoperate. Sometimes this sort of automation even goes against the interests of the vendors.

After several initiatives trying to couple vendor products, it was clear that it takes fewer resources to replicate the required functionality than to hack A and B into working in a consistent and maintainable manner. Furthermore, since those products are moving targets, keeping them dancing together means incurring fruitless adaptation costs every time the vendor changes the software's inner workings. A perfect example of this is vendor file formats being changed artificially from release to release to force an upgrade.


The mixed approach, combining several vendor products, is king, but it is labor intensive, and that curtails our ability to come up with the best investment portfolios. Regarding power systems planning, the perfect end result would be to give the decision makers a Pareto front instead of a handful of scenarios. But why?

Usually, due to the very labor-intensive process and resource constraints, it is close to impossible to produce the hundreds of scenarios needed to compose a Pareto front (a cost-benefit curve of equivalent utility). Such a curve would inform the decision makers about which solutions provide the best value at each cost level. This is far better than providing a handful of utility-cost points that are hard to extrapolate to the solution space (the complete picture of possibilities).


Image 1: This picture shows thousands of investment scenarios evaluated automatically. It also shows the Pareto front formed by those investments that produce the most benefit at each cost point.
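For the curious, here is a minimal sketch of how such a front can be extracted from a cloud of evaluated scenarios. The cost/benefit pairs below are made up for illustration; a real implementation would plug in its own evaluation results.

```python
# Minimal sketch: extract the Pareto front (most benefit at each cost level)
# from a cloud of evaluated investment scenarios. The data is hypothetical.

def pareto_front(scenarios):
    """Return the scenarios not dominated by any other scenario.
    A scenario dominates another if it costs less (or equal) and yields
    more (or equal) benefit, being strictly better in at least one."""
    front = []
    for cost, benefit in scenarios:
        dominated = any(
            (c <= cost and b >= benefit) and (c < cost or b > benefit)
            for c, b in scenarios
        )
        if not dominated:
            front.append((cost, benefit))
    return sorted(front)

if __name__ == "__main__":
    # (cost in M€, benefit in M€) for a handful of made-up scenarios
    evaluated = [(10, 5), (12, 9), (15, 9), (20, 14), (22, 13), (30, 16)]
    print(pareto_front(evaluated))  # -> [(10, 5), (12, 9), (20, 14), (30, 16)]
```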


So how do we get there?


A better power systems planning methodology


To achieve the above vision of power systems planning, you need:

  • A very fast electrical solver.

  • A very fast generation dispatch program.

  • A very fast black-box solver.

  • A traceable database.

Do you see a trend here? We need very fast software, and we need it to be interoperable, much like puzzle pieces. In this way, we can build innovative new processes that are not feasible with the business as usual software.


The algorithm works like this (a minimal sketch in code follows the list):

  • Prepare the investments. By hand for now, but it would be nice if a machine could come up with candidate investments…

  • Start the black-box solver. This solver will propose combinations of investments to test:

    • For each investment combination, run the black-box:

      • Run generation dispatch for a time period.

      • Run power flow for the same time period.

      • Evaluate the economic costs and benefits, and compute the objective function value for the solver.

  • Once the solver has finished proposing and checking investments, trying to find the best ones, we plot the results, generate reports, etc.

  • In the end, every investment evaluation is available, or at least easily re-simulated for further analysis. This screening process can be as simple or as complex as one has time for, since the black-box function can be anything you want, and it is all automatic.
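Here is the promised minimal sketch of that loop. The dispatch, power flow and benefit functions are trivial stubs standing in for a real dispatch engine, a real power flow engine and a proper black-box solver; in this sketch the "solver" is just a naive enumeration of small investment combinations.

```python
# Sketch of the investment screening loop. All engine functions are stubs.
from itertools import combinations
import random

def run_dispatch(combo, t):
    return {"hour": t}                           # stub: a real engine returns a dispatch plan

def run_power_flow(combo, dispatch):
    return {"losses_MW": random.uniform(1, 5)}   # stub: a real engine returns flows

def operational_benefit(flows):
    return 10.0 - flows["losses_MW"]             # stub: € benefit per time step

def evaluate(combo, time_steps=24):
    """Black-box evaluation of one combination of investments."""
    capex = sum(inv["cost"] for inv in combo)
    benefit = sum(
        operational_benefit(run_power_flow(combo, run_dispatch(combo, t)))
        for t in range(time_steps)
    )
    return capex, benefit, benefit - capex       # objective value for the solver

def screen(candidates, max_size=2):
    """Naive stand-in for the black-box solver: enumerate small combos."""
    results = []
    for k in range(1, max_size + 1):
        for combo in combinations(candidates, k):
            results.append((combo, evaluate(combo)))
    return results                               # keep every evaluation for plots/reports

if __name__ == "__main__":
    candidates = [{"name": "line A-B", "cost": 50},
                  {"name": "transformer C", "cost": 30}]
    for combo, (capex, benefit, objective) in screen(candidates):
        print([c["name"] for c in combo], capex, round(benefit, 1))
```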

Decision makers are usually thrilled to see the Pareto front instead of the usual scenarios.


Paying the price


As the saying goes, there is no free lunch. Coming up with fast and interoperable software is not an easy feat, especially if there is no comparable experience. It is, however, an investment that pays dividends later.

Image 2: Chart showing the effort of in-house development depending on the approach taken: in gray, using vendor-locked software; in green, using interoperable software.


The image above shows the experience that motivated the creation of Advanced Grid Insights. One can certainly buy vendor solutions, which are usually locked against extension. These solutions are easy to adopt in the beginning, since the early functionality is already present. However, when more advanced functionality is required, but such functionality is out of scope for the vendor, you are forced to hack the vendor tool to serve your innovation.


On the other hand, you could create your own software, which you can extend over time and adapt to your needs. If this is done correctly, the cost of extending the tool and making it interoperable with others is almost trivial. This works as a good investment.


There is a third option, not depicted, which is to start your own software and end up exactly like the vendor-locked solutions. No one wants that, and it is to be avoided by having the right in-house talent.


Conclusion

The common practice in power systems planning is very labor intensive. This makes it hard to produce the best results, and it takes a toll on the personnel in the form of burnout. The clear way out is automation.


Automation is an investment; it may be a good one or a bad one, depending on the approach you take. Experience advocates building the proper toolbox rather than relying on vendor-locked solutions.


Decision makers are usually thrilled to see the Pareto front instead of a handful of scenarios. To produce those results, it is necessary to invest in taking hold of the simulation process. That means producing software with a well-defined architecture that allows for interoperability and scalability.


“We want everything as automated as possible, we want the best results possible, and we want them to be traceable.” - Red Electrica's planning department.

Advanced Grid Insights has been founded around that premise.

Santiago Peñate

Updated: May 18, 2023

What do you do when the power flow of a model does not converge? Change parameters until it works somehow? Not anymore! This article explains the most common issues, and their solutions, when the power flow of an electrical grid model does not converge.


Introduction

Let's provide some base concepts to be sure we are speaking in the same terms.


What is the power flow study?

The power flow study, contrary to its name, aims at computing the nodal voltages that satisfy the nodal power balance condition. This means that the power injections computed using those voltages are close to the specified nodal power injections (the given generation minus the given load). After the voltages are obtained, the branch flows are computed once. This study provides the steady-state electrical magnitudes of a power system, so it is most useful in situations where we can be sure the power system is in steady state. An example of a situation where this is not true is a large disturbance such as a short circuit.


What does convergence mean?

To "converge" means that the nodal power balance condition has been satisfied up to a certain numerical threshold. A good numerical threshold is 0.000001, a threshold used in practice is 0.001 for speed, and sometimes because the model is not as good as it should be and the numerical methods get stuck at higher errors.


Issues

1. Nonsense data issue

The first issue I'd like to bring up is the presence of nonsense data. But how can you know what is nonsense? Some hints:

  1. Zero branch reactance. All branches should have some meaningful reactance value. In the case of DC branches, there should be some resistance value.

  2. Very low branch reactance values. Sometimes there is a temptation to model switches and jumpers with very low reactance values such as 0.0000001. This hurts the model's condition number, and thus its convergence properties. Those branches should be removed from the model with a topological process.

  3. Not in per-unit. Most power system models require the impedances to be provided in per-unit. If not, the values will not make sense.

  4. Out of range values. Some parameters only admit values in a certain range. For instance, does it make sense to have a -0.05 p.u. resistance? No, it does not. Ideally, the software should only allow values in range, because users are not always aware of what the sensible ranges are.

  5. Reversed ranges. When specifying a range it is clear that the lower value has to be actually lower than the upper value. Sometimes the range is reversed by mistake.

  6. NaN values. This is an obvious one, but not to be underestimated. Not-a-Number is not a valid number anywhere, hence, no input field should ever be NaN. The same applies to empty numerical values.

There may be other parameters containing harmful data that is not immediately obvious. In those cases, a histogram of the values is a very useful tool for finding outliers. GridCal features automatic detection of nonsense values and provides histograms of the most influential magnitudes in its model inspection and fixing tool.
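As a sketch of what such automatic detection can look like (this is not GridCal's actual API; the column names and thresholds below are assumptions for illustration):

```python
# Minimal branch-data sanity check in the spirit of a "model linter".
import numpy as np
import pandas as pd

def lint_branches(branches: pd.DataFrame,
                  x_min=1e-5, r_range=(0.0, 1.0)) -> pd.DataFrame:
    """Return a table of suspicious branch parameters."""
    issues = []
    for idx, row in branches.iterrows():
        if not np.isfinite(row["X"]) or not np.isfinite(row["R"]):
            issues.append((idx, "NaN / empty impedance"))
        elif row["X"] == 0.0:
            issues.append((idx, "zero reactance"))
        elif abs(row["X"]) < x_min:
            issues.append((idx, "near-zero reactance (switch? remove topologically)"))
        if row["R"] < r_range[0] or row["R"] > r_range[1]:
            issues.append((idx, "resistance out of sensible p.u. range"))
    return pd.DataFrame(issues, columns=["branch", "issue"])

branches = pd.DataFrame({"R": [0.01, -0.05, 0.02],
                         "X": [0.10, 1e-7, np.nan]})
print(lint_branches(branches))
```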


2. Active power imbalance issue

This one is very common and easily overlooked. As mentioned, the power flow study computes the steady-state values of a circuit. For a circuit to be in steady state, there must be an exact balance between the load and the generation. In the numerical power flow study we need to leave one free generator, located at the "slack" node. This generator can provide any amount of power, but that does not mean the grid can transport it anywhere. In transmission grids, the balance should typically be within a 3% error margin; less is even better. In distribution grids there might be no balance at all if there are no generators. In those cases the slack provides all the power, but unlike transmission grids, distribution grids are designed for that situation, so transporting the power from a single entry point is not an issue.


Solution: To solve the active power imbalance, the typical approach is to run an optimal power flow where the generation is dispatched according to its cost. This produces generation values that satisfy the demand with criteria good enough to solve the power flow problem satisfactorily. Alternatively, one could simply scale the generation to match the load.
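A minimal sketch of the scaling alternative, assuming plain NumPy arrays of generator and load active power (the margin parameter is an assumption to leave room for losses):

```python
import numpy as np

def scale_generation_to_load(P_gen, P_load, loss_margin=0.0):
    """Scale generator active powers (MW) so that total generation matches
    total load, optionally plus a small margin for losses. A crude but
    effective alternative to running an optimal power flow."""
    target = P_load.sum() * (1.0 + loss_margin)
    factor = target / P_gen.sum()
    return P_gen * factor

P_gen = np.array([400.0, 300.0, 200.0])   # MW, made-up values
P_load = np.array([250.0, 350.0, 150.0])  # MW
P_gen_balanced = scale_generation_to_load(P_gen, P_load, loss_margin=0.02)
print(P_gen_balanced.sum())               # 765.0 MW = 750 MW load + 2% for losses
```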


3. Voltage guess issue

This one is less obvious and is solely related to the numerical methods used to solve the power flow. The gold-standard algorithm for the power flow study is the Newton-Raphson algorithm. First reported for the power flow problem in 1967 by William F. Tinney and Clifford E. Hart, it has quadratic convergence properties. However, the Newton-Raphson algorithm, like most state-of-the-art algorithms, relies on an initial guess of the solution from which to iterate until convergence is reached.


We know that in the real world, only one solution exists for each loading state. But since we are dealing with a mathematical model, it has been proven that the solution space of the power flow equations contains more than one mathematically possible solution. So how do we know if we are getting a meaningful solution? In my experience, a meaningful solution is one in which the voltages are not in a collapse state (given that we have no nonsense values and balanced power injections); that is, voltages in the range (0.9, 1.1) p.u.


Back to the issue at hand. Sometimes we need to run a sequence of power flow studies, and experience shows that if we use the last state's solution as the initial point for the new state, the algorithm converges faster. This is a good practice if both loading states are similar; for example, the voltage solution at 10 AM might be a good initial point for the situation at 11 AM. However, if the loading states are very different, using one of the voltage solutions as an initial guess for the other might lead to a garbage solution, or to no solution at all.


Solution: Use the plain voltage profile. This is 1.0 p.u. for all PQ nodes, and the generator set-point voltage for the PV nodes and the slack node. In the rare event that this does not lead to a solution, a voltage initial guess from a close situation might help. However, experience shows that if this is the case, there are other problems to consider.
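A minimal sketch of building that plain initial guess, assuming NumPy arrays and bus index lists (names are illustrative, not any particular tool's API):

```python
import numpy as np

def flat_start(n_bus, pv_idx, slack_idx, v_set):
    """Build the plain ("flat") initial voltage guess: 1.0 p.u. at PQ buses,
    and the set-point voltage at PV and slack buses (zero angle everywhere)."""
    V0 = np.ones(n_bus, dtype=complex)   # 1.0 p.u., zero angle
    V0[pv_idx] = v_set[pv_idx]           # generator set points
    V0[slack_idx] = v_set[slack_idx]
    return V0

v_set = np.array([1.05, 1.0, 1.02, 1.0])             # made-up set points
print(flat_start(4, pv_idx=[2], slack_idx=[0], v_set=v_set))
# -> [1.05+0j  1.  +0j  1.02+0j  1.  +0j]
```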


4. Transformers virtual taps issue

If you connect a transformer end rated at 11 kV to a bus bar rated at 10 kV, it is because by design you expect to have 11 kV / 10 kV = 1.1 p.u. voltage at the bus bar. However, if you connect a 132 kV transformer end to the same 10 kV bus bar, the voltage would be 13.2 p.u., and that is probably not a good decision. In fact, it is an error. It is a very common error that slips under the radar, because sometimes, despite the wild virtual taps generated, the numerical algorithm still finds a solution.


Solution: Check that the transformer ends' voltage ratings and the ratings of the buses they connect to are not too different, say 10% at most. Most likely the transformer is connected backwards, so flip it.
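A minimal sketch of such a check, with made-up transformer data; the 10% threshold is the rule of thumb mentioned above:

```python
def check_virtual_taps(transformers, max_deviation=0.10):
    """Flag transformer windings whose voltage rating deviates from the
    connected bus rating by more than max_deviation (e.g. 10%)."""
    warnings = []
    for name, hv_kv, lv_kv, bus_hv_kv, bus_lv_kv in transformers:
        for side, winding, bus in (("HV", hv_kv, bus_hv_kv),
                                   ("LV", lv_kv, bus_lv_kv)):
            tap = winding / bus                       # implicit "virtual tap"
            if abs(tap - 1.0) > max_deviation:
                warnings.append(f"{name} {side}: virtual tap {tap:.2f} "
                                f"(is it connected backwards?)")
    return warnings

# made-up example: TR2 is connected backwards (132 kV end on a 10 kV bus)
transformers = [("TR1", 11.0, 132.0, 11.0, 132.0),
                ("TR2", 132.0, 11.0, 10.0, 132.0)]
print("\n".join(check_virtual_taps(transformers)))
```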


5. Reactive power transport issue

Sometimes the reactive power is not generated close enough to where it is needed and the grid cannot transport it efficiently enough. In these cases, the numerical algorithm cannot find a solution and it is quite difficult to tell why.

The obvious solution is to provide more reactive power close to where it is needed, for instance by locating capacitor banks or generators with reactive power capability.

But believe it or not, there is an easier solution: does your model characterize the load reactive power properly? Sometimes it does not, especially in models used for planning, where the reactive power is derived from the active power using a simple rule. Ensure that there are loads that generate reactive power as well as loads that consume it, because that is what happens in real life.
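A minimal sketch of a quick sanity check on the load reactive power characterization (the threshold and array names are assumptions for illustration):

```python
import numpy as np

def check_load_reactive_power(P_load, Q_load, min_pf=0.85):
    """Flag loads with an unusually poor power factor, and warn when every
    load consumes reactive power (none injects any), which is suspicious
    for a model derived with a single P-to-Q rule."""
    S = np.sqrt(P_load**2 + Q_load**2)
    pf = np.divide(np.abs(P_load), S, out=np.ones_like(S), where=S > 0)
    poor_pf = np.where(pf < min_pf)[0]
    all_consuming = bool(np.all(Q_load >= 0))
    return poor_pf, all_consuming

P = np.array([10.0, 25.0, 5.0])   # MW, made-up loads
Q = np.array([3.0, 20.0, 1.0])    # MVAr, all positive (all consuming)
poor, all_consuming = check_load_reactive_power(P, Q)
print(poor)            # load 1 has a power factor below 0.85
print(all_consuming)   # True: no load injects reactive power
```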


6. Incompatible controls

What if you have a transformer controlling the voltage of a bus, a generator controlling the voltage of a bus nearby, and it is almost impossible to transport the active and reactive power needed to satisfy that control situation? Then the algorithm will not find a solution, or its convergence properties deteriorate greatly. To avoid this we need to:

  • Avoid concurrent controls. That is, avoid having more than one device controlling the same magnitude.

  • Avoid nearby controls that control the same magnitude to different values.

  • In general, avoid having too many controls. The power flow algorithm behaves like an optimization procedure, and with too many controls it can stagnate or fail to converge.

7. Singular Jacobian matrix

Sometimes you may be notified by the power flow program that the Jacobian matrix is singular, hence the power flow is not solvable. What does this mean?

The Jacobian matrix is composed of the derivatives of the power flow equations with respect to the voltage magnitudes and angles. It is used to solve a linear system that yields the voltage magnitude and angle increments in the Newton-Raphson (and Newton-Raphson-like) algorithms. This is done at each iteration to determine the next step, so if the linear system is not solvable, there is no next step to be found.
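To make the role of the Jacobian concrete, here is a generic Newton-Raphson sketch on a toy 2x2 system. A real power flow code builds the Jacobian from the power mismatch equations at the PQ and PV buses, but the singular-matrix failure mode is exactly the same:

```python
import numpy as np

def newton_raphson(f, jacobian, x0, tol=1e-6, max_iter=20):
    """Generic Newton-Raphson iteration. In a power flow code, x holds the
    voltage angles and magnitudes, f the power mismatches, and `jacobian`
    their derivatives; a singular Jacobian means no next step exists."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.max(np.abs(fx)) < tol:
            return x, True
        J = jacobian(x)
        try:
            dx = np.linalg.solve(J, -fx)   # the linear system solved per iteration
        except np.linalg.LinAlgError:
            print("Singular Jacobian: the model is not solvable as given")
            return x, False
        x = x + dx
    return x, False

# toy 2x2 nonlinear system standing in for the mismatch equations
f = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] - x[1]**2 + 1.0])
J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, -2.0 * x[1]]])
print(newton_raphson(f, J, x0=[1.0, 1.0]))
```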

In practice, this means that the power system is in voltage collapse. But how? This happens when the load exceeds what the grid can transport through the declared impedances. So there are only two things to consider:

  • Is the load way too high? If so, consider grid reinforcements. But most likely, the issue is with the next item.

  • Are the impedances correct? This one relates to the rest of this article, where bad data, wrong transformer connections and the like may be playing a part in the Jacobian matrix's singularity.


Final thoughts

The more hands that touch a model (of any kind), the greater the risk of bugs. That sentence could probably be set in stone, but it begs the question: can we do any better? I believe we can, if we use expert rules like the ones described above in conjunction with computer software that oversees the creation of the models. This is not a new idea; in computer programming, such tools are called linters. So, can we create a linter for power systems? Yes we can, and we definitely should.








Updated: Jul 12, 2022


Summary

  • The integrated framework is 1300% more efficient than business as usual.

  • 2 million euros in productivity savings can be expected over a 5-year period.

  • The technical implementation is expected to take about a year.

  • Once the technical barriers are removed, incremental developments become possible.

  • Resistance to cultural change is the main drawback.


Introduction

We are immersed in the great transition from fossil fuels to cleaner energy sources. To enable this transition, we need to perform an astounding amount of calculations, but the methods and tools that are commonly used are sorely lacking. The tools’ interoperability issues have become the bottleneck in the decision making processes related to energy infrastructure. There is too much at stake to continue with the business as usual.


The modeling situation

Investment decisions in the electrical sector have always been hard to make because the effects of adding or removing equipment in an electrical system are not linear. For instance, if we see that installing a 3 MW transformer near a city produces a benefit, we cannot ensure that installing two of them will produce twice the benefit. It may be worse; it may be better. This is due to the different network effects that one or two transformers produce depending on their location and utilization once installed. Network effects are counterintuitive to the human brain, hence the need for specialized software.


In the past, energy sources were predictable (coal, nuclear, hydro, oil and gas), so grid investment studies were simpler: studying the peak demand and the valley demand was sufficient to assess the performance of an asset. This is no longer a valid approach. In the last 20 years, a significant part of the energy produced has evolved from being stable and polluting to being variable and less environmentally damaging. Suddenly, evaluating a couple of scenarios could no longer determine whether a given investment was sufficient, due to the variability introduced by solar radiation and wind. To this uncertainty, we must add climate change and the instability of the fossil fuel supply.


Like every great challenge, there is more than one aspect to it. If we break down the exact issues that bottleneck policy and investment decision making, on one side we have the excruciating difficulty of collaborating when creating models of the infrastructure, and on the other side the model evaluation and debugging times, which span from hours to weeks depending on the model's accuracy and size. Sadly, we must add that the mainstream software programs are deliberately not interoperable, making it very hard to build lean processes that alleviate the workload.


Naturally, these issues produce delays, unsatisfactory results and turnover in the engineering teams. This situation has been going on for at least 15 years, and there are few efforts at the moment moving in the right direction.


Reinventing the wheel

If we look at the described situation with fresh eyes, we cannot help but wonder: why is everything so old fashioned? Why is there no "Google Docs" for models? Why is there no "Git" for models? Why do I have to keep sending Excel files over email (well, maybe just dropping them into a shared folder)? Our conclusion is that there has been a lack of innovation because the dominant software is very hard to replace: doing so requires reinventing the wheel — reinventing power flow, reinventing optimal dispatch, reinventing the file formats, etc.


Solving the collaboration issue: The field is dominated by closed standards and formats, mainly those matching the mainstream software manufacturers' views. What about a simple and open electrical modeling format? While working at the planning department of REE, one of the main things we did was to design a simple yet complete JSON file format for exchanging data among all applications. Later this led to the creation of a collaboration and tracking system that allowed users to create models with full traceability. CIM/CGMES, despite being a kind of standard format, was out of the question due to its gratuitous complexity. Simplicity is the ultimate form of sophistication, or so we believe.
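To give a flavor of what "simple and open" means, here is a purely hypothetical sketch of such a JSON exchange model. The structure and field names are assumptions for illustration only; they do not reproduce the actual format designed at REE.

```python
# Hypothetical example of a minimal, open JSON grid-exchange model.
import json

model = {
    "version": 1,
    "buses": [
        {"id": "B1", "name": "Substation A", "vnom_kv": 400.0},
        {"id": "B2", "name": "Substation B", "vnom_kv": 400.0},
    ],
    "branches": [
        {"id": "L1", "bus_from": "B1", "bus_to": "B2",
         "r_pu": 0.001, "x_pu": 0.01, "rate_mva": 1700.0},
    ],
    "generators": [
        {"id": "G1", "bus": "B1", "p_mw": 500.0, "vset_pu": 1.02},
    ],
    "loads": [
        {"id": "D1", "bus": "B2", "p_mw": 480.0, "q_mvar": 90.0},
    ],
}

print(json.dumps(model, indent=2))   # human-readable and trivially diffable
```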


Solving the interoperability issue: The base situation is usually that all the modeling is done with a closed-source program that becomes the centerpiece. This constrains innovation and forces users to come up with perhaps overly creative solutions to hack around it. We discovered that the best strategy was to make such programs side pieces of an open modeling system that uses the open and simple format mentioned before, so no hacking is needed at all. This allows users to continue using their preferred software while allowing competition and innovation to improve the data processing pipeline. For instance, we can now run hundreds of simulations in the cloud with a custom-developed calculation engine, while still being able to run any one of them on the desktop with the dominant commercial software.


Solving the computation capacity issue: The commercial software in use today was designed in the 1970s, and it has been made abundantly clear that the vendors are not going to redesign it for new computational paradigms. Fine; we have just detailed how to get out of the closed ecosystem by having our simple and open modeling format. This allowed us to build a competing piece of software that runs in the cloud at a scale no manufacturer product matches today. More calculation capacity equals a greater ability to run more scenarios and better understand the energy transition. A simple and open file format, and computer programs developed around it, freed us to solve the collaboration, interoperability and computation capacity issues.


The cost of business as usual

The typical business-as-usual planning workflow involves the use of traditional power systems (grid modeling) software and of market modeling software to simulate the market effects of the electrical infrastructure. The process involves quite a lot of file preparation and model adaptation. The diagram looks like this:


Evaluating a single decision, at an internal employee rate of 40 €/h, costs:

Step                                                        | Time  | Cost
Design changes on the grid modeling software                | 8 h*  | 320 €
Prepare the input data coherently for the market model      | 168 h | 6.720 €
Adjust the market model results to the grid model software  | 40 h  | 1.600 €
Prepare a report                                            | 8 h   | 320 €
Total                                                       | 224 h | 8.960 €

* These times may vary.


We can do better

The business as usual workflow has plenty of steps in which a person has to intervene to adapt the data for the next step. That is a source of friction where mistakes can happen and people get frustrated.


If we observe the modeling workflow from a process point of view, we immediately find that we have many "sources of truth" that evolve over time with no traceability. We also observe that we are forced into this because the grid modeling software and the market modeling software are incompatible. What if everything were compatible? Then we arrive at a much leaner process where employees only need to intervene where they add value:


In this process, the data resides in a single place, where the modeling software loads and saves the different models coherently and traceably. The modeling software can then be provided with custom work routines, such as electrical-plus-market simulations that produce coherent results, which are processed into automatic reports. The comparable cost scheme improves radically:

Step                                         | Time | Cost
Design changes on the grid modeling software | 8 h* | 320 €
Prepare a report                             | 8 h* | 320 €
Total                                        | 16 h | 640 €


1300% cost improvement


Now let's say that we run 50 of these processes per year. With the business-as-usual approach, that is 448 k€ per year in labor alone, not counting the ad-hoc software maintenance or the process variations that may make that pipeline even slower. With the improved workflow, the cost of performing the same 50 runs is 32 k€ per year. That adds up to over 2 million € in productivity savings over five years.
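For transparency, here is the arithmetic behind those figures, using only the numbers from the tables above:

```python
# Worked-out cost comparison, using the figures from the tables in this post.
rate = 40                       # €/h internal employee rate
bau_hours, lean_hours = 224, 16 # hours per decision, before and after
runs_per_year, years = 50, 5

bau_cost = bau_hours * rate * runs_per_year      # 448,000 € per year
lean_cost = lean_hours * rate * runs_per_year    # 32,000 € per year
print(f"speed-up: {bau_hours / lean_hours:.0f}x "
      f"({(bau_hours / lean_hours - 1) * 100:.0f}% improvement)")
print(f"savings over {years} years: {(bau_cost - lean_cost) * years:,} €")
```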


Timeline, side effects and drawbacks

We have outlined how to make the decision-making process leaner and we have calculated the economic benefit. How do we implement the new data architecture? The experience in the planning department of REE is that everyone in the department was aligned with the need for change. After presenting the solution, everyone concluded that a central source of truth was imperative. However, changes take time. In our case there has been a three-year transition period that includes the research and development needed to produce the software designs and implementations, and there is still a fourth year to go. This long period is due to the fact that we have been changing the workflow while the old workflow was still in use. It would be reasonable to expect a similar change to take about a year in an organization with a clear implementation project, where the software is not an unknown variable as it has been for us.


We have streamlined our decision-making process, but far more importantly, we have removed most of the technical barriers to innovation. Now, if we need to add a field to the data model, we just do it. We don't need to beg the manufacturer anymore. The same goes for the development of new functionality: if we need a new specialized type of simulation, we can develop it in our framework incrementally. The benefit of being able to do something that was impossible before is infinite.


The change has been quite radical. This does not come free of resistance; not everyone shares the same vision of the solution, and many are still emotionally attached to the closed manufacturers' solutions, mostly because of the personal edge gained from being experts in a certain manufacturer's software. Naturally, if that software and those workflows lose importance, those people feel they are relegated as well. The reality is that they become more capable than before of providing their best insight, but it is difficult to recognize that at first. Certainly, resistance to cultural change is the most important drawback.


Summary

  • The integrated framework is 1300% more efficient than business as usual.

  • 2 million euros in productivity savings can be expected over a 5-year period.

  • The technical implementation is expected to take about a year.

  • Once the technical barriers are removed, incremental developments become possible.

  • Resistance to cultural change is the main drawback.

