The Algorithms at the Core of Power Market Modeling


In 2007, the U.S. government created the Advanced Research Projects Agency-Energy (ARPA-E), which funds research on emerging energy technologies. Last year the agency awarded about $3.1 million to the Pacific Northwest National Laboratory (PNNL) to develop, over the next few years, a computational tool called High-Performance Power-Grid Operation (HIPPO). The research team, led by an applied mathematician at PNNL, is partnering with GE's Grid Solutions, MISO, and Gurobi Optimization. The group will seek improved ways to solve the unit commitment problem, "one of the most challenging computational problems in the power industry." The work highlights a general trend of the past twenty years, in this and other industries, toward mathematical optimization as the answer to the most difficult scheduling and planning problems. What is astounding is the rate at which commercial mathematical solvers have responded to these needs, delivering enormous leaps in algorithmic efficiency over a relatively short period of time.

At the core of most of the mathematical optimization used in power modeling is linear programming (LP). A linear program is a problem in which some linear function is maximized or minimized subject to a set of linear constraints. The mathematician George Dantzig invented the simplex algorithm in 1947, well before computers could really take advantage of it. In 1953, for example, one implementation of the algorithm on a Card Programmable Calculator (CPC) took about eight hours to solve a 26-constraint, 71-variable instance of the classic Stigler Diet Problem. As computer technology advanced, though, the usefulness and power of the simplex algorithm in particular, and linear programming in general, became apparent. Advances in the algorithm, combined with exponential computer speed improvements, made linear programming a staple of problem solving by the early 2000s. In fact, algorithmic progress in linear programming alone (i.e., independent of computer speed improvements) yielded a 3,300x improvement factor from 1988 to 2004. Coupled with machine speed improvements of 1,600x over the same horizon, this produced a 5,280,000x average improvement in the time to solve linear programs!
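To make the formulation concrete, here is a minimal diet-style LP solved with SciPy's linprog routine (which uses the HiGHS solver). The foods, costs, and nutrient numbers are invented for illustration; the structure — minimize a linear cost subject to linear "at least" requirements — is the same as Stigler's problem, just far smaller than his 71-variable instance.

```python
# Toy diet-style linear program: minimize cost subject to nutrient floors.
# linprog minimizes c @ x subject to A_ub @ x <= b_ub, so "at least"
# constraints are written with their signs flipped.
from scipy.optimize import linprog

c = [2.0, 3.0]  # cost per unit of two hypothetical foods

A_ub = [[-4.0, -2.0],   # protein: 4*x1 + 2*x2 >= 8
        [-1.0, -3.0]]   # fiber:   1*x1 + 3*x2 >= 6
b_ub = [-8.0, -6.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)],  # can't buy negative food
              method="highs")
print(res.x, res.fun)  # optimal plan x = [1.2, 1.6], cost 7.2
```

A solve of this size is instantaneous today; the 1953 CPC run took eight hours on a problem only modestly larger.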

While progress on linear programs has somewhat plateaued in recent years, improvements in mixed-integer programming (MIP) have continued at impressive rates. In its simplest form, a mixed-integer program is a linear program in which some of the variables are restricted to integer values. That restriction makes the problem NP-hard, meaning that no polynomial-time algorithm for solving every MIP is known, and one is unlikely to exist. And yet the MIP sits at the center of an ever-increasing number of practical problems, like the unit commitment problem that the HIPPO tool mentioned above is meant to address, and it is only relatively recently that it became a practical problem-solving tool. According to one expert and active participant in the field, Robert Bixby,

“In 1998 there was a fundamental change in our ability to solve real-world MIPs. With these developments it was possible, arguably for the first time, to use an out-of-the box solver together with default settings to solve a significant fraction of non-trivial, real-world MIP instances.”

He provided this chart showing the improvements in one MIP solver, CPLEX, from 1991 to 2007:

Figure 1. CPLEX version-to-version speedup pairs, 1991–2007. (Source: Bixby.)

This chart shows that over approximately 16 years, the machine-independent speed improvement was roughly 29,000x! The progress on developing fast algorithms to solve (or at least find good solutions to) mixed-integer programs has been simply explosive.

The importance of this development is highlighted by extensive use of MIPs by regional reliability organizations in the United States. An independent review published by the National Academies Press states that:

In the day-ahead time frame, the CAISO, ERCOT, ISO-NE, MISO, PJM, and SPP markets employ a day-ahead reliability unit commitment process… The optimization for the day-ahead market uses a dc power flow and a mixed integer program for optimization.

In other words, the MIP is at the core of day-ahead market modeling for these major reliability organizations. A presentation given a few years back by PJM shows its increasing need to solve very difficult MIPs in a shorter time frame. The presentation highlights PJM's "major computational need" for "better, faster MIP algorithms and software," and the short slide deck states three times, in different contexts, the need for "even faster dynamic MIP algorithms." PJM must solve its day-ahead model for the security-constrained unit commitment (SCUC) problem in a four-hour window toward the end of each day, and the presentation explains that they "have a hard time solving deterministic SCUC models in the time allotted." So the need for ever-faster MIP solvers in the energy industry doesn't seem to be going away any time soon. And with the increasing complexity of problems such as renewable integration, sub-hourly modeling, and the handling of stochastics, the push for "better, faster MIP algorithms" will only continue.
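To see why unit commitment is naturally a MIP, here is a deliberately tiny single-hour, two-generator sketch using SciPy's milp interface (also backed by HiGHS). All generator data are invented, and a real SCUC model adds transmission constraints, reserves, ramp limits, and many time periods — which is exactly where the computational difficulty comes from. The key feature is the binary on/off variable for each unit, which couples to its continuous output through min/max generation constraints.

```python
# Single-hour unit commitment for two hypothetical generators.
# Decision vector: [u1, u2, p1, p2], where u_i in {0, 1} is the
# commitment decision and p_i >= 0 is the dispatched output (MW).
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

demand = 120.0
pmin = np.array([20.0, 20.0])     # minimum stable output if committed
pmax = np.array([100.0, 80.0])    # capacity
fixed = np.array([50.0, 100.0])   # commitment (no-load) cost, $
var = np.array([1.0, 2.0])        # energy cost, $/MWh

# Objective: fixed cost * u + variable cost * p
c = np.concatenate([fixed, var])

A = np.array([
    [0.0,      0.0,      1.0,  1.0],   # p1 + p2 == demand
    [-pmax[0], 0.0,      1.0,  0.0],   # p1 <= pmax1 * u1
    [0.0,      -pmax[1], 0.0,  1.0],   # p2 <= pmax2 * u2
    [pmin[0],  0.0,      -1.0, 0.0],   # p1 >= pmin1 * u1
    [0.0,      pmin[1],  0.0,  -1.0],  # p2 >= pmin2 * u2
])
lb = [demand, -np.inf, -np.inf, -np.inf, -np.inf]
ub = [demand, 0.0, 0.0, 0.0, 0.0]

res = milp(c=c,
           constraints=LinearConstraint(A, lb, ub),
           integrality=[1, 1, 0, 0],  # u's integer, p's continuous
           bounds=Bounds([0, 0, 0, 0], [1, 1, np.inf, np.inf]))
print(res.x, res.fun)  # both units commit; p = [100, 20], cost = 290
```

With demand above either unit's capacity, the solver must commit both, then dispatch the cheaper unit to its limit. Dropping the integrality restriction (the LP relaxation) would allow fractional commitments like u = 0.5, which is physically meaningless — and closing the gap between that relaxation and the true integer optimum is the hard part that branch-and-cut solvers have spent two decades accelerating.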

So what does all of this mean for power modelers? The solver vendors' ability to keep improving LP/MIP performance will determine whether the most difficult questions can still be addressed and modeled. But it is just as crucial that the simulation models which seek to mimic real-world operations with those solvers implement the fastest possible optimization codes intelligently. As EPIS continues to enhance AURORAxmp, we understand that need and spend an enormous amount of time fine-tuning the LP/MIP implementations and seeking new ways to use the solvers to the greatest advantage. Users of AURORAxmp don't need to understand those implementation details (everything from keeping the LP constraint matrix numerically stable to choosing between the interior-point and dual simplex LP algorithms), but they can have confidence that we are committed to keeping pace with the incredible performance improvements of professional solvers. It is in large part due to that commitment that AURORAxmp has consistently improved its own simulation run time in significant ways in every major release of the past three years. With development currently under way to cut the run times of the most difficult DC SCOPF simulations in half, we are confident this trend will continue in coming years with future releases of AURORAxmp. As was said about the projected future development of mixed-integer programming, the performance improvement "shows no signs of stopping."

 
