The Algorithms at the Core of Power Market Modeling


In 2007, the U.S. government formed the Advanced Research Projects Agency-Energy (ARPA-E), which encourages research on emerging energy technologies. Last year the agency awarded about $3.1 million to the Pacific Northwest National Laboratory (PNNL) to work on a computational tool called High-Performance Power-Grid Operation (HIPPO) over the next few years. The research team, led by an applied mathematician at PNNL, is partnering with GE’s Grid Solutions, MISO, and Gurobi Optimization. The group will seek improved ways to solve the unit commitment problem, “one of the most challenging computational problems in the power industry.” The work highlights the general trend over the past twenty years in this and other industries to turn to mathematical optimization for answers to some of the most difficult scheduling and planning problems. What’s astounding is the rate at which commercial mathematical solvers have been able to respond to these needs, with enormous leaps in algorithmic efficiency over a relatively short period of time.

At the core of most of the mathematical optimization used in power modeling is linear programming (LP). Linear programs are problems in which some linear function is maximized or minimized subject to a set of linear constraints. The mathematician George Dantzig invented the simplex algorithm in 1947, well in advance of the day when computers could really take advantage of it. For example, in 1953 one implementation of the algorithm on a Card Programmable Calculator (CPC) could solve a certain 26-constraint, 71-variable instance of the classic Stigler Diet Problem in about eight hours. As computer technology advanced, though, the usefulness and power of the simplex algorithm specifically, and of linear programming in general, became apparent. Advances in the algorithm, combined with exponential computer speed improvements, made linear programming a staple of problem solving by the early 2000s. In fact, algorithmic progress in linear programming (i.e., independent of computer speed improvements) yielded a 3,300x improvement factor from 1988 to 2004. Coupled with actual machine improvements of 1,600x over that same horizon, this produced a 5,280,000x average improvement for solving linear programs!
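To make the idea concrete, here is a minimal LP sketch (illustrative only, not EPIS or HIPPO code) that dispatches two hypothetical generators against a 10 MW load at least cost using SciPy's linprog; all of the numbers are made up:

```python
# Minimal economic-dispatch LP: minimize generation cost subject to
# meeting load and respecting unit capacities (all data hypothetical).
from scipy.optimize import linprog

c = [30.0, 20.0]              # $/MWh cost of unit 1 and unit 2
A_ub = [[-1.0, -1.0]]         # -x1 - x2 <= -10  (i.e., x1 + x2 >= 10 MW of load)
b_ub = [-10.0]
bounds = [(0, 6), (0, 8)]     # capacity limits: unit 1 <= 6 MW, unit 2 <= 8 MW

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, res.fun)         # expected: cheaper unit 2 at 8 MW, unit 1 at 2 MW, cost $220
```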

While progress on linear programs has somewhat plateaued in recent years, improvements in mixed-integer programming (MIP) have continued at impressive rates. In its simplest form, a mixed-integer program is a linear program in which some of the variables are restricted to integer values. This integer restriction makes the problem NP-hard, meaning a guaranteed polynomial-time algorithm for all MIPs will most likely never be found. And yet the MIP is at the center of an ever-increasing number of practical problems, like the unit commitment problem that the HIPPO tool mentioned above is meant to address, and it is only relatively recently that it became a practical problem-solving tool. According to one expert and active participant in the field, Robert Bixby,

“In 1998 there was a fundamental change in our ability to solve real-world MIPs. With these developments it was possible, arguably for the first time, to use an out-of-the box solver together with default settings to solve a significant fraction of non-trivial, real-world MIP instances.”

He provided this chart showing the improvements in one MIP solver, CPLEX, from 1991 to 2007:

Figure 1. CPLEX Version-to-Version Pairs. Source

This chart shows that over approximately 16 years, the machine-independent speed improvement was roughly 29,000x! The progress on developing fast algorithms to solve (or at least find good solutions to) mixed-integer programs has been simply explosive.

The importance of this development is highlighted by extensive use of MIPs by regional reliability organizations in the United States. An independent review published by the National Academies Press states that:

In the day-ahead time frame, the CAISO, ERCOT, ISO-NE, MISO, PJM, and SPP markets employ a day-ahead reliability unit commitment process… The optimization for the day-ahead market uses a dc power flow and a mixed integer program for optimization.

In other words, the MIP is at the core of day-ahead market modeling for these major reliability organizations. A presentation given a few years back by PJM shows their increasing need to solve very difficult MIPs in a shorter time frame. The presentation highlights the fact that PJM has a “major computational need” for “better, faster MIP algorithms and software.” The short slide deck states three times, in different contexts, the need in PJM for “even faster dynamic MIP algorithms.” PJM must solve its day-ahead model for the security constrained unit commitment (SCUC) problem in a four-hour window towards the end of each day, and the presentation explains that they “have a hard time solving deterministic SCUC models in the time allotted.” So the need for ever-improving mixed-integer programming in the energy industry doesn’t seem to be going away any time soon. And with the increasing complexity of problems such as renewable integration, sub-hourly modeling, and the handling of stochastics, the push for “better, faster MIP algorithms” will only continue.
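To see where the integer variables come from, here is a toy commitment-and-dispatch MIP written with the PuLP library. It is only a sketch with made-up unit data, not PJM's SCUC formulation, which adds network security, reserve, ramp, and minimum up/down-time constraints:

```python
# Toy unit commitment: binary on/off decisions plus dispatch for two
# hypothetical units over three hours (all data made up).
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, value

hours = range(3)
load = [60, 100, 80]  # MW in each hour
units = {
    "coal": {"cap": 80, "energy_cost": 20, "commit_cost": 500},
    "gas":  {"cap": 60, "energy_cost": 35, "commit_cost": 100},
}

prob = LpProblem("toy_unit_commitment", LpMinimize)
on  = {(u, h): LpVariable(f"on_{u}_{h}", cat=LpBinary) for u in units for h in hours}
gen = {(u, h): LpVariable(f"gen_{u}_{h}", lowBound=0) for u in units for h in hours}

# objective: energy cost plus a per-hour commitment (no-load) cost
prob += lpSum(units[u]["energy_cost"] * gen[u, h] + units[u]["commit_cost"] * on[u, h]
              for u in units for h in hours)

for h in hours:
    prob += lpSum(gen[u, h] for u in units) == load[h]       # meet load each hour
    for u in units:
        prob += gen[u, h] <= units[u]["cap"] * on[u, h]       # generate only if committed

prob.solve()
print({f"{u}@h{h}": int(value(on[u, h])) for u in units for h in hours})
```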

So what does all of this mean for power modelers? Professional solvers’ ability to keep improving LP/MIP performance will determine whether the most difficult questions can still be addressed and modeled. But, in addition to that, it is crucial that the simulation models that seek to mimic real-world operations with those solvers are able to intelligently implement the fastest possible optimization codes. As EPIS continues to enhance AURORAxmp, we understand that need and spend an enormous amount of time fine-tuning the LP/MIP implementations and seeking new ways to use the solvers to the greatest advantage. Users of AURORAxmp don’t need to understand those implementation details (everything from how to keep the LP constraint matrix numerically stable to how to choose between the interior point and dual simplex LP algorithms), but they can have confidence that we are committed to keeping pace with the incredible performance improvements of professional solvers. It is in large part due to that commitment that AURORAxmp has consistently improved its own simulation run time in significant ways in every major release of the past three years. With development currently underway to cut run times for the most difficult DC SCOPF simulations in half, we are confident that this trend will continue in the coming years with future releases of AURORAxmp. As was said about the projected future development of mixed-integer programming, the performance improvement “shows no signs of stopping.”

 

Filed under: Data Management, Power Market Insights

Living in the Past?


Living in the past is not healthy. Is your database up-to-date? EPIS just launched the latest update to the North American Database, version 2016_v4, marking the fourth North American data update this year! Recent changes in the power industry present challenges to database management, which are discussed in this post.

In general, the transformation in power generation sources in the U.S. coupled with evolving electricity demand and grid management represents a paradigm shift in the power sector. In order to accurately model power prices in the midst of such change, one must have a model built on fundamentals and a database that is up-to-date, has reasonable assumptions, is transparent and is flexible. A recent post described the technical side of working with databases in power modeling. This entry outlines important changes in the East Interconnect, the impacts those changes have on data assumptions and configuration and the steps we are taking to provide excellent databases to our clients.

Recent shifts in power generation sources challenge database assumptions and management. New plant construction and generation in the U.S. are heavily weighted towards renewables, mostly wind and solar, and as a result record generation from renewables has been reported across the East Interconnect. Specifically, on April 6, 2016, the Southwest Power Pool (SPP) set the record for wind penetration:

Figure 1. Record wind penetration levels in Eastern ISOs compared with average penetration in 2015. SPP holds the record, which was reported on April 6, 2016. Record sources: NYISO, SPP, MISO, ISO-NE, PJM. 2015 averages compiled from ISO reports, for example: NYISO, SPP, MISO, ISO-NE, PJM. *Average 2015 generation used to calculate penetration.

Similarly, the New York City area reached a milestone of over 100 MW of installed solar distributed resources. Accompanying the increase in renewables are increases in natural gas generation and reductions in coal generation. In ISO-NE, natural gas' share of generation has increased 34 percentage points and coal's share has decreased 14 percentage points since 2000, as highlighted in its 2016 Regional Electricity Outlook. These rapid changes in power generation sources require frequent and rigorous database updates.

Continued changes in electric grid management in the East Interconnect also require flexibility in databases. One recent change in grid management was the Integrated System joining the Southwest Power Pool, resulting in Western Area Power Administration’s Heartland Consumers Power District, Basin Electric Power Cooperative and Upper Great Plains Region joining the RTO. Full operational control changed on October 1, 2015, expanding SPP’s footprint to 14 states, increasing load by approximately 10 percent and tripling hydro capacity. Grid management change is not new; the integration of MISO South in 2013 is another example. Changes such as these require flexibility in data configuration that allows for easy restructuring of areas, systems and transmission connections.

Variability in parameters such as fuel prices and demand introduces further difficulty in modeling power markets. The so-called “Polar Vortex” weather phenomenon shocked northeastern power markets in the winter of 2013/2014 with cold temperatures and high natural gas prices, resulting in average January 2014 energy prices exceeding $180/MWh in ISO-NE. It seemed like the polar opposite situation occurred this last winter. December 2015 was the mildest since 1960, and together with low natural gas prices, the average wholesale power price hit a 13-year low at $21/MWh. The trend continued into Q1 of 2016:

Figure 2. Monthly average power price in ISO-NE in Q1 2014 and 2016. Variability between years is a result of high natural gas prices and cold weather in 2014 versus low natural gas prices and mild weather in 2016.

Whether due to extreme events, evolving demand or volatile markets, capturing uncertainty in power modeling databases is challenging. In AURORAxmp, users can go one step further by performing risk simulations, specifying parameters such as fuel prices and demand to vary across a range of simulations. This is a very powerful approach to understanding the implications of uncertainty within the input data.
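The risk feature itself is configured inside AURORAxmp; purely to illustrate the general idea of sampling uncertain inputs, here is a small sketch in which the distributions and parameter names are assumptions, not EPIS data:

```python
# Illustrative only: draw sampled values for two uncertain inputs
# (gas price and a demand multiplier); each draw would parameterize one run.
import random

random.seed(42)
draws = []
for _ in range(100):
    gas_price = random.lognormvariate(1.0, 0.3)   # $/mmbtu, hypothetical distribution
    demand_scale = random.gauss(1.0, 0.05)        # multiplier on the base load forecast
    draws.append((gas_price, demand_scale))

print(draws[:3])   # first few sampled (gas price, demand multiplier) pairs
```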

The aforementioned changes in generation, grid management and demand offer exciting new challenges to test power market models and data assumptions. To test our platform, EPIS performs a historical analysis as part of each database release. Inputs of historical demand and fuel prices are used to ensure basic drivers are captured, and model output is evaluated not only in terms of capacity, but also monthly generation, fuel usage and power prices. The result of this process is a default database that is accurate, current, transparent, flexible and built on reasonable assumptions, giving you the proper starting point for analysis and a springboard for success.

With the release of North_American_DB_2016_v4, EPIS continues to provide clients with superb data for rigorous power modeling. The 2016_v4 update focuses on the East Interconnect and includes updates to demand, fuels, resources, DSM and other miscellaneous items. Clients can log in to our support site now to download the database and full release notes. Other interested parties can contact us for more information.

Filed under: Data Management, Power Market Insights

The Fundamentals of Energy Efficiency and Demand Response


What are Energy Efficiency & Demand Response Programs?

Though the Energy Information Administration states, “there does not seem to be a single commonly-accepted definition of energy efficiency,” efficient energy use, sometimes simply called energy efficiency, refers to reducing the amount of energy required to provide the same quality of products and services. Examples include improving home insulation, installing fluorescent lighting and efficient appliances, and improving building design to minimize energy waste.

Demand response, according to the Department of Energy, is defined as, “a tariff or program established to motivate changes in electric use by end-use customers in response to changes in the price of electricity over time, or to give incentive payments designed to induce lower electricity use at times of high market prices or when grid reliability is jeopardized.” Utilities can signal demand reduction to consumers, either through price-based incentives or through explicit requests. Unlike energy efficiency, which reduces energy consumption at all times, demand response programs aim to shift load away from peak hours towards hours where demand is lower.

What are the Benefits of Energy Efficiency & Demand Response Programs?

Decreasing and ‘flattening’ the demand curve can directly contribute to improved system and grid reliability. This ultimately translates to lower energy costs for consumers, assuming the energy savings are greater than the cost of implementing these programs and policies. In 2010, Dan Delurey, then president of the Demand Response and Smart Grid Coalition, pointed out that the top 100 hours (just over 1% of the hours in a year) account for 10-20% of total electricity costs in the United States. Slashing energy consumption during these peak hours, or at least shifting demand to off-peak hours, relieves stress on the grid and should make electricity cheaper.

Additionally, decreasing energy consumption directly contributes to the reduction of greenhouse gas emissions. According to the International Energy Agency, improved energy efficiency in buildings, industrial processes and transportation prevented the emission of 10.2 gigatonnes of CO2, helping to minimize global emissions of greenhouse gases.

Lastly, reductions in energy consumption can provide domestic benefits in the forms of avoided energy capital expenditure and increased energy security. The chart below displays the value of avoided imports by country in 2014 due to the investments in energy efficiency since 1990:

Figure 1: Avoided volume and value of imports in 2014 from efficiency investments in IEA countries since 1990. Source

Based on these estimated savings, energy efficiency not only benefits a country’s trade balance, but also reduces its reliance on foreign countries to meet energy needs.

Modeling the Impacts of Energy Efficiency and Demand Response

Using AURORAxmp, we are able to quantify the impact of energy efficiency and demand response programs. In this simple exercise, we compare a California case with 2 GW of energy efficiency and 2 GW of demand response against a case with neither, from 2016 to 2030. The charts below show the average wholesale electricity prices and system production costs:

Figure 2: Average wholesale electricity price ($/MWh) and average system production cost (in thousands of dollars). Note these are 2014 real dollars.

Holding all else equal, adding demand response and energy efficiency programs to the system decreased average wholesale electricity prices by about $2.88/MWh (5.4%), and average system production costs fell by $496,000,000 (5.1%). This is a simple example in one part of the country, but one can easily include additional assumptions about the grid, resource characteristics, and load shape as desired.
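For readers who want the base values implied by those percentages, a quick back-calculation from the figures quoted above (rounded):

```python
# Back-of-the-envelope check using only the numbers reported above.
price_drop, price_pct = 2.88, 0.054          # $/MWh and fractional change
cost_drop, cost_pct = 496_000_000, 0.051     # $ and fractional change

print(round(price_drop / price_pct, 1))      # implied base wholesale price, ~53.3 $/MWh
print(round(cost_drop / cost_pct / 1e9, 2))  # implied base production cost, ~9.73 $billion
```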

Both demand response and energy efficiency programs are intended to be more cost effective and efficient mechanisms of meeting power needs than adding generation. Emphasis on the demand side can lead to lower system production costs, increased grid reliability, and cheaper electric bills; all of which lie in the best interest of governments, utilities, and consumers.

Filed under: Energy Efficiency, Power Market Insights

Integrated Gas-Power Modeling


Quantifying the Impacts of the EPA’s Clean Power Plan

Notwithstanding the recent legal stay from the U.S. Supreme Court, it is still important to understand the U.S. EPA’s Clean Power Plan (CPP) and its impact in the larger context of natural gas markets and its role in electric power generation. Because these two markets are becoming even more highly interrelated, integrated gas-power modeling is the most realistic approach for such analyses. EPIS has tested interfacing AURORAxmp® with GPCM®, a calibrated NG model developed by RBAC, Inc. The following is a brief discussion of our experimental setup as well as some of our findings.

Integration Approach

Monthly prices for 39 major natural gas hubs over the next 20 years are represented in AURORAxmp as an input. They were developed using GPCM’s market model (as an output) run in pipeline capacity expansion mode. AURORAxmp then simulates a long-term capacity expansion that uses the GPCM-generated gas prices and produces many results: power prices, transmission flows, and generation by resource/resource type, including gas-consumption data. This gas consumption (an output from AURORAxmp) is fed back into GPCM as gas demand by the electricity sector (an input to GPCM) for a subsequent market-balancing and pipeline capacity expansion simulation, which generates a new set of monthly gas hub prices. The iterative process begins at an arbitrary, but plausible, starting point and continues until the solution has converged. Convergence is measured in terms of changes in the gas-burn figures and monthly gas-hub prices between subsequent iterations.
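The structure of that feedback loop can be sketched as follows. The two "run" functions below are trivial stand-ins, not the real GPCM or AURORAxmp interfaces; they only mimic a damped demand/price response so the loop converges and the shape of the iteration is visible:

```python
# Conceptual sketch of the two-model feedback loop (all functions and numbers hypothetical).
def run_power_model(gas_price):
    # higher gas price -> less gas burned for power generation (toy elasticity)
    return 1000.0 / gas_price

def run_gas_model(gas_burn):
    # more electric-sector gas demand -> higher hub price (toy supply curve)
    return 2.0 + 0.005 * gas_burn

def iterate_gas_power(start_price, tol=1e-4, max_iters=50):
    price, prev_burn = start_price, None
    for i in range(max_iters):
        burn = run_power_model(price)        # power model consumes gas at the current price
        price = run_gas_model(burn)          # gas market re-balances, producing a new price
        if prev_burn is not None and abs(burn - prev_burn) / prev_burn < tol:
            return i + 1, price, burn        # converged: gas burn stopped changing
        prev_burn = burn
    return max_iters, price, burn

print(iterate_gas_power(4.0))                # (iterations, convergent price, convergent burn)
```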

This two-model feedback loop can be utilized as a tool to evaluate energy policies and regulations. To quantify the impact of an energy policy, we need two sets of integrated gas-power runs which are identical in all respects except the specific policy being evaluated. For example, to understand the likely impacts of an emission regulation such as the CPP, we need two integrated gas-power models with identical setups except for the implementation of the CPP.

Before presenting our findings on the impact of “CPP vs No CPP”, we first provide some further details on the setup of the GPCM and AURORAxmp models.

GPCM Setup Details

• Footprint: All of North America (Alaska, Canada, contiguous USA, and Mexico), including liquefied natural gas terminals for imports, and exports to rest-of-world.
• Time Period: 2016-2036 (monthly)
• CPP Program: All the effects of the CPP on the gas market are derived from changes to gas demand in the power generation sector.
• Economics: Competitive market produces economically efficient levels of gas production, transmission, storage and consumption, as well as pipeline capacity expansion where needed.

AURORAxmp Setup Details

  • Footprint: All three major interconnections in North America (WECC, ERCOT, and the East Interconnect; which includes the contiguous U.S., most Canadian provinces and Baja California).
  • Time Period: 2016 – 2036 (CPP regulatory period + 6 years to account for economic evaluation)
  • CPP Program: mass-based with new source complement for all U.S. states
    • Mass limits for the CPP were applied using the Constraint table
    • Mass limits were set to arbitrarily high values in the Constraint table for the “No CPP” case.
  • RPS targets were not explicitly enforced in this particular experiment. Future studies will account for these.
  • LT Logic: MIP Maximize Value objective function

Notations

  1. “CPP” – Convergent result from integrated gas-power model with CPP mass limits.
  2. “No CPP” – Convergent result from integrated gas-power model with arbitrarily high mass limits.
  3. “Starting Point” – Gas prices used in the first iteration of integrated gas-power modeling.
    • This is the same for both “CPP” and “No CPP” case.

Quantifying the CPP vs. No CPP

Impact on Gas and Electricity Prices

  1. Both No CPP and CPP cases have generally lower prices than the Starting Point case in our experiment. However, post-2030, CPP prices are higher than the Starting Point.
    • This happens due to capacity expansion in both markets.
    • We stress that the final convergent solutions are independent of the Starting Point case. The lower prices in CPP and No CPP cases compared to the Starting Point case are a feature of our particular setup. If we had selected any other starting price trajectories, the integrated NG-power feedback model would have converged on the same CPP and No CPP price trajectories.
  2. CPP prices are always higher than the No CPP case.
    • This is likely driven by increased NG consumption in the CPP case over the No CPP case.

This behavior was observed in all major gas hubs. Figure 1 shows the average monthly Henry Hub price (in $/mmBTU) for the three cases.

Figure 1: Monthly gas prices at Henry Hub for all three cases.

Figure 2 presents the monthly average power prices in a representative AURORAxmp zone.

Figure 2: Average monthly price in AURORAxmp zone PJM_Dominion_VP with and without CPP.

Figure 3 shows the impact of CPP as a ratio of average monthly prices in AURORAxmp’s zones for the CPP case over No CPP case. As expected, power prices with the additional CPP constraints are at the same level or higher than those in the No CPP case. However, it is interesting to note that the increase in power prices happens largely in the second half of CPP regulatory period (2026 onwards). It appears that while gas prices go up as soon as the CPP regulation is effective, there is latency in the increase in power prices.

Figure 3: Impact of CPP on electricity prices expressed as a ratio of CPP prices over No CPP prices.

Figure 4 presents a comparison of total annual production cost (in $billions) for each of the three regions.

Figure 4: Total annual production costs by region for the CPP and No CPP cases.

Figure 5 presents the same comparison as a percentage increase in production cost for the CPP case. The results show that while the CPP drives up the cost of production in all regions, the most dramatic increase is likely to occur in the Eastern Interconnect.

Figure 5: Percent increase in production cost for the CPP case.

Electricity Capacity Expansions

Comparing the power capacity expansions in Figure 6 and Figure 7, we see that AURORAxmp projected building more SCCTs in the CPP case than in the No CPP case in the Eastern Interconnect. We believe this is primarily driven by the higher gas prices in the CPP case. SCCTs typically face slightly higher delivered fuel prices than CCCTs, which for the most part get their fuel directly from the gas hub. In this long-term analysis, AURORAxmp seeks to create the mix of new resources that is most profitable while adhering to all of the constraints, and the higher gas prices in the CPP case are just high enough to make the SCCTs’ return on investment whole.

Figure 6: Capacity expansion for the Eastern Interconnect – No CPP case.

Figure 7: Capacity expansion for the Eastern Interconnect – CPP case.

Table 1: Capacity expansion by fuel type in total MW.

Build (MW)   East Int.              ERCOT                 WECC
             CPP       No CPP       CPP       No CPP      CPP       No CPP
CCCT         206,340   207,940      45,960    29,850      25,040    23,400
SCCT         49,082    1,932        1,030     630         2,435     2,530
Solar        200       300          200       100         200       400
Wind         6,675     0            400       100         1,400     0
Retired      54,563    8,899        16,051    10          10,669    8,417

Table 1 shows the details of power capacity expansion in the three regions with and without CPP emission constraints. In addition to increasing the expansion of SCCTs, we can see that CPP implementation incentivizes growth of wind generation, as well as accelerates retirements. Coal and Peaking Fuel Oil units form the majority of economic retirements in the CPP case.

Fuel Share Displacement

Figure 8 shows the percent share of the three dominant fuels used for power generation: coal, gas, and nuclear. Figure 9 shows the same data as the change in the fuel percentage share between the CPP and No CPP cases. Looking at North America as a whole, we see that coal-fired generation is essentially being replaced by gas-fired generation. Our regional data shows that this is most prominent in the Eastern Interconnect and ERCOT regions.

Figure 8: Percentage share of dominant fuel type.

Figure 9: Change in fuel share for power generation (CPP – No CPP).

Natural Gas Pipeline Expansions

The following chart presents a measure of needed additional capacity for the two cases. The needed capacity is highly seasonal, so the real expansion need would follow the upper boundary for both cases.

 

Figure 10: Pipeline capacity needed for the CPP and No CPP cases.

Our analysis shows that the CPP will drive an increase in natural gas consumption for electricity generation. The following chart quantifies the additional capacity required to meet CPP demand for NG.
Additional NG Capacity Required CPP vs No-CPP (bcf/day)

While the analysis presented here assumes a very specific CPP scenario, we stress that the integrated gas-power modeling is an apt tool for obtaining key insights into the potential impacts of CPP on both electricity and gas markets. We are continuously refining the AURORAxmp®-GPCM® integration process as well as performing impact studies for different CPP scenarios. We plan to publish additional findings as they become available.

Filed under: Clean Power Plan, Natural Gas, Power Market Insights, Uncategorized

Working With Data in Power Modeling


How Much Data Are We Talking About?

When planning the deployment of a power modeling and forecasting tool in a corporate environment, one of the most important considerations prior to implementation is the size of the data that will be used. IT personnel want to know how much data they are going to be storing, maintaining, backing up, and archiving so they can plan for the hardware and software resources to handle it. The answer varies widely depending on the types of analysis to be performed. Input databases may be relatively small (e.g. 100 megabytes), or they can be several gigabytes if many assumptions require information to be defined on the hourly or even sub-hourly level. Output databases can be anywhere from a few megabytes to several hundred gigabytes or even terabytes depending on what information needs to be reported and the required granularity of the reports. The data managed and stored by the IT department can quickly add up and become a challenge to maintain.

Here are a couple example scenarios:

A single planning analyst does a one-year hourly run (8760 hours) with modest reporting, which produces an output database of 40 MB. On average, the analyst runs about six studies per day over 50 weeks and the total space generated by this analyst is a modest 75GB. This is totally manageable for an IT department using inexpensive disk space.

Now, let’s say there are five analysts, they need more detailed reporting, they are looking at multiple years, and a regulatory agency states that they have to retain all of their data for 10 years. In this scenario, the total data size jumps to 500 MB for a single study. Given the same six studies per day those analysts would accumulate 3.75 TB of output data in a year, all needing to be backed up and archived for the auditors, which will take a considerable amount of hardware and IT resources.
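As a rough check of the multi-analyst scenario, here is the arithmetic, assuming 250 working days per year (that day count is our assumption, not a hard rule):

```python
# Rough storage sizing for the five-analyst scenario described above.
analysts = 5
studies_per_day = 6
working_days = 250          # assumed working days per year
mb_per_study = 500

total_mb = analysts * studies_per_day * working_days * mb_per_study
print(total_mb / 1_000_000, "TB of output per year")   # 3.75 TB
```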

What Are My Database Options?

There are dozens of database management systems available. Many power modeling tools support just one database system natively, so it’s important to know the data limitations of the different modeling tools when selecting one.

Some database systems are file-based. For example, one popular file-based database system is called SQLite. SQLite is fast, free, and flexible. This file-based database system is very efficient and is fairly easy to work with, but is best suited for individual users, as are many other file-based systems. These systems are great options for a single analyst working on a single machine.
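As an illustration of how approachable a file-based database can be, the short Python snippet below reads a SQLite file with nothing but the standard library. The file name, table, and column names are hypothetical, not the actual AURORAxmp output schema:

```python
# Query a hypothetical SQLite output file using only Python's standard library.
import sqlite3

conn = sqlite3.connect("study_output.db")                            # hypothetical file name
rows = conn.execute(
    "SELECT zone, AVG(price) FROM zone_hourly_prices GROUP BY zone"  # hypothetical table
).fetchall()
for zone, avg_price in rows:
    print(zone, round(avg_price, 2))
conn.close()
```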

As mentioned earlier, groups of analysts might decide to all share a common input database and write simultaneously to many output databases. Typically, this requires a dedicated server to handle all of the interaction between the forecasting systems and the source or destination databases. Microsoft SQL Server is one of the most popular database systems available in corporate environments, and the technical resources for it are usually available in most companies. Once your modeling database is saved in SQL Server, and assuming your modeling tool supports it, you can read from input databases and write to output databases simultaneously, and share the data with other departments using tools they are already familiar with.

Here is a quick comparison of some of the more popular database systems used in power modeling:

Database System          DB Size Limit (GB)    Supported Hardware         Client/Server   Cost
MySQL                    Unlimited             64-bit or 32-bit           Yes             Free
Oracle                   Unlimited             64-bit or 32-bit           Yes             High
MS SQL Server            536,854,528           64-bit only (as of 2016)   Yes             High
SQLite                   131,072               64-bit or 32-bit           No              Free
XML / Text File          OS file size limit    64-bit or 32-bit           No              Free
MS SQL Server Express    10                    64-bit or 32-bit           Yes             Free
MS Access (JET)*         2                     32-bit only                No              Low

A Word About MS Access (JET)*

In the past, many Windows desktop applications requiring an inexpensive desktop database system used the MS Access database (more formally known as the Microsoft JET Database Engine). As hardware and operating systems have transitioned to 64-bit architectures, the use of MS Access has become less popular due to some of its limitations (2 GB maximum database size, 32,768 objects, etc.), as well as the growing number of alternatives. Microsoft has not produced a 64-bit version of JET and does not plan to do so. Several other free desktop database engines serve the same needs as JET but run natively on 64-bit systems, including Microsoft SQL Server Express, SQLite, and MySQL, which offer many more features.

Which Databases Does AURORAxmp Support?

There are several input and output database options when using AURORAxmp for power modeling. Those options, coupled with some department workflow policies, will go a long way in making sure your data is manageable and organized.

EPIS delivers its native AURORAxmp databases in a SQLite format which we call xmpSQL. No external management tools are required to work with these database files – everything you need is built into AURORAxmp. You can read, write, view, change, query, etc., all within the application. Other users with AURORAxmp can also utilize these database files, but xmpSQL doesn’t really lend itself to a team of users all writing to it at the same time. Additionally, some of our customers have connected departments that would like to use the forecast data outside of the model, and that usually leads them to Microsoft SQL Server.

For groups of analysts collaborating on larger studies, AURORAxmp supports SQL Server database, although its use isn’t required. Rather than use SQL Server as the database standard for AURORAxmp (which might be expensive for some customers), the input databases are delivered in a low cost format (xmpSQL), but AURORAxmp offers the tools to easily change the format. Once the database is saved in SQL Server, you are using one of the most powerful, scalable, accessible database formats on the planet with AURORAxmp. Some of our customers also use the free version of SQL Server – called SQL Server Express Edition – which works the same way as the full version, but has a database size limit of 10GB.

Some additional options for output databases within AURORAxmp are:

MySQL: An open-source, free, server-based platform that supports simultaneous users and is only slightly less popular than SQL Server.
XML/Zipped XML: A simple file-based system that makes it easy to import and export data. Many customers like using this database type because the data is easily accessed and is human readable without additional expensive software.
MS Access (JET): The 32-bit version of AURORAxmp will read from and write to MS Access databases. EPIS, however, does not recommend using it given the other database options available and its 2 GB size limitation. MS Access was largely designed to be an inexpensive desktop database system; given its limitations, as previously discussed, we recommend choosing another option such as xmpSQL, SQL Server Express or MySQL, which offer far more features.

Where Do We Go From Here?

AURORAxmp is a fantastic tool for power system modeling and forecasting wholesale power market prices. It has been in the marketplace for over twenty years, and is relied upon by many customers to provide accurate and timely information about the markets they model. However, it really can’t do anything without an input database.

EPIS has a team of market analysts that are dedicated to researching, building, testing, and delivering databases for many national and international power markets. We provide these databases as part of the license for AURORAxmp. We have many customers that use our delivered databases and others who choose to model their own data. Either way, AURORAxmp has the power and the flexibility to utilize input data from many different database types.

If you are just finding AURORAxmp and want to see how all of this works, we have a team here that would love to show you the interface, speed and flexibility of our product. If you are already using our model but would like guidance on which database system is best for your situation, contact our EPIS Support Team and we’ll be glad to discuss it with you.

Filed under: Data Management, Power Market Insights

Large Scale Battery Storage


Some emerging technologies pave a bright path for the future of Large Scale Batteries

As we move towards more renewable energy sources and away from fossil fuels, we will need new technologies to capture energy production as well as provide new ways to store and deliver power. An ongoing issue with solar and wind production is the inability to predict exactly when you can produce and dispatch power. Additionally, we are seeing more interest for generating, storing, and time-shifting power in other ways to meet environmental goals. Large Scale Batteries are an exciting step toward meeting, and supporting, some of those goals.

While we don’t know what the future will bring, some forecasts predict substantial drops in the cost of the various storage technologies as adoption increases. For example, in 2014 Citigroup analysts predicted a drop in battery storage costs to $230/kWh by 2020 and a further drop to $150/kWh in the years after that. Whether from reduced cost or simply increased need, Navigant Research forecasts worldwide battery storage to grow to almost 14 GW by 2023.

Graph 1: Worldwide forecast of battery storage capacity. IRENA.org Source

These potential reductions in costs could even lead to some ‘grid defection’ as the economics change and become less of a hindrance for adoption.

Graph 2: Lowest current and projected battery cell price by type. IRENA.org Source

Tesla’s Elon Musk has been working with lithium-ion technology for over five years, both for vehicle batteries and grid-level storage. Li-ion is a familiar battery type, typically a pair of solid electrodes and an electrolyte, and has long been used in smaller applications. Tesla (and other companies) is currently testing larger-scale battery installations, so far only for households and businesses, with the aim of becoming scalable for utility systems. Some advantages and disadvantages of Li-ion are:

Advantages

· High-energy density

· Low maintenance

· Low self-discharge

Disadvantages

· High cost to manufacture

· Limited number of charging cycles (They age and will need to be replaced.)

· Heat generated during use

Figure 1: EnergyStorage.org Source

Meanwhile, others are pursuing what is known as “flow battery” technology. The name is apt: two liquids flow next to each other, separated by a membrane, and create an electrical current as they move past each other. These batteries use two electrolytes in separate tanks, which are pumped into a central stack. The central stack contains an ion-conducting membrane; as the two electrolytes are pumped through the stack, ions cross the membrane while the electrons flow through the external circuit, producing the current. Currently, most new flow batteries use vanadium-based electrolytes. Some advantages and disadvantages of this technology include:

Advantages

· Electrolyte solutions are safe, non-flammable, and non-corrosive

· The two electrolytes are compatible and easily rechargeable

· Expected to handle many more cycles than Li-ion batteries

Disadvantages

· Maintenance cost of the tanks and pump system are high

· Overall cost ($/kWh) is higher than Li-ion

· Low energy density

· The volume of space that the tanks may take up

Figure 2: EnergyStorage.org Source

A promising hybrid of these two technologies is also being tested, using solid materials in two separate tanks with an electrolyte fluid that passes over them. The solid material can be lithium-based, while the flow of the liquid conducts the electrons to the cell stack. Although still in testing, this approach looks to combine the scalability of flow batteries with the power density of Li-ion batteries.
Figure 3: Electrochemical Society Source

The need for large-scale energy storage will continue to grow as renewable energy sources make up a larger portion of our energy generation. The variability of renewable generation and the need to maintain a stable grid will necessitate some form of storage. These are just a few of the most promising utility-scale battery technologies currently available. Skeptics could argue that technologies like GTs or hydro, which are currently used to firm up intermittent renewables, will continue to do so in the future. That may be likely in some cases, but those technologies have their own issues: there are environmental and locational issues with hydro, and pipeline access and saturation issues with small gas. Whatever technology moves us forward, it seems apparent that battery storage will be an integral part of that future.

Filed under: Power Market Insights, Power Storage

Integrated Modeling of Natural Gas & Power


Natural gas (NG) and electric power markets are becoming increasingly intertwined. The clean burning nature of NG, not to mention its low cost due to increases in discovery and extraction technologies over the past several years, has made it a very popular fuel for the generation of electricity. As a result, the power sector is consistently the largest NG consumer. For example, in 2014, 30.5% of the total NG consumption in the United States was used for the generation of electricity (Figure 1).

 

Figure 1: U.S. Natural Gas Consumption by Sector, 2014. Source

According to EIA’s Annual Energy Outlook (AEO) 2015 projections,

“…natural gas fuels more than 60% of the new generation needed from 2025 to 2040, and growth in generation from renewable energy supplies most of the remainder. Generation from coal and nuclear energy remains fairly flat, as high utilization rates at existing units and high capital costs and long lead times for new units mitigate growth in nuclear and coal-fired generation.”

Economic, environmental and technological changes have helped NG begin to displace coal from its dominant position in power production. Although it was just for a single month, NG surpassed coal for the first time as the most-used fuel for electricity generation in April 2015. The EIA also notes that considerable variation in the fuel mix can occur when fuel prices or economic conditions differ from those in the AEO 2015 reference case. The AEO reference case assumes adoption of the Environmental Protection Agency’s (EPA) Mercury and Air Toxics Standards (MATS) in 2016, but not the Clean Power Plan (CPP). Adoption of the CPP, along with favorable market forces, could change the projections of the AEO 2015 reference case significantly. There is a consensus within both the NG and power industries that NG-fired power generation will likely increase with the adoption of the CPP.

Quantifying such a trend is non-trivial, but it is crucial for stakeholders and regulators in both gas and power markets to fully understand what the future holds. Proper accounting of the interdependencies between NG and power markets is integral to the quality of any long-term predictions. Approaches to modeling an integrated NG-power capacity expansion that account for economics and market operations are the key to the most effective analysis.

The issue of gas-power integration has been a topic of active interest in the industry, and that interest is increasing. For example, the Eastern Interconnection Planning Collaborative coordinated a major study in 2013 – 2014 to evaluate the capability of NG infrastructure to satisfy the needs of electric generation, identify contingencies that could impact reliability in both directions, and review dual-fuel capability. Likewise, the notorious “polar vortex” during the winter of 2013-2014 brought unusually cold weather to the New England region, which “tested the ability of gas-fired generators to access fuel supplies” and caused ISO-NE and others to acknowledge the need to further investigate the issues affecting synchronization between gas and electric systems. More recently, companies like PIRA Energy are sharpening their focus on the interdependencies between gas and electric power.

There is a need for new and improved modeling approaches that realistically consider this growing gas-power market integration. An even greater need is to integrate the modeling of these markets in a way that is both efficient and practical for the end user, and still able to produce commercially viable results. EPIS has extensively tested interfacing AURORAxmp with GPCM, a calibrated NG model developed by RBAC, Inc. Several organizations and agencies have found this approach successful. Utilizing the two models allows us to develop projections for endogenously derived capacity additions (in both electric generation expansion and gas-pipeline expansion), electricity pricing, gas usage and pricing, etc. which are consistent between the two markets. This consistency leads to greater insight and confidence to aid decision-makers.

Figure 2: Abstract representation of integrated NG-power modeling using AURORAxmp and GPCM.

Although the industry is now anxiously waiting for the judiciary to weigh in on the legality of CPP regulations, there is a consensus that some form of carbon emission regulation will likely be in effect in the near future. Some states, such as Colorado, have already undertaken several regulatory initiatives and may implement a state-level CPP-like emissions regulation even if the federal plan is vacated by the courts.

As part of our ongoing research on the topic of gas-power modeling, we have designed and executed a series of test scenarios comparing the standard calibrated cases of AURORAxmp and GPCM against a potential implementation of the CPP. If the proposed form of the CPP is upheld in the courts, states have a number of implementation options, and at this early stage there has been no good evidence to indicate that one option would be more popular than another. This necessitated that we make some broad assumptions in our experimental gas-power integration process. In our test scenarios, we assumed that all states would adopt the mass-based goal with the new source complement option.

An integrated gas-power framework allows us to better understand the most probable direction for the two markets. Our integrated GPCM-AURORAxmp CPP test scenario for the Eastern Interconnect took 7 iterations to converge to a common solution that satisfied both markets. By comparing resulting capacity expansions, fuel share changes, and gas prices between the starting point (Iteration 0) and ending point (Iteration 6) we get a sense of how the markets will coevolve.
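One simple way to express the convergence test between successive iterations is sketched below; the metric and the 1% tolerance are illustrative choices, not necessarily the exact criterion used in our studies:

```python
# Treat the run as converged when no monthly hub price changes by more than 1%
# between successive iterations (prices below are hypothetical).
def max_relative_change(prev_prices, new_prices):
    return max(abs(new - old) / old for old, new in zip(prev_prices, new_prices))

iteration_5 = [2.95, 3.10, 3.40]   # hypothetical monthly Henry Hub prices, $/mmBTU
iteration_6 = [2.96, 3.09, 3.41]
print(max_relative_change(iteration_5, iteration_6) < 0.01)   # True -> converged
```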

Figure 3: Starting capacity expansion in the Eastern Interconnect for the GPCM-AURORAxmp model.

Figure 3 shows the capacity expansion resulting from Iteration 0, the starting point of the integrated iterations. Iteration 0 is essentially a standalone power model with no regard for the impact the capacity expansion would have on the gas market. Figure 4 shows the capacity expansion after Iteration 6.

Figure 4: Resulting capacity expansion in the Eastern Interconnect for the GPCM-AURORAxmp model.

The convergent prices of NG were lower in Iteration 6 than in Iteration 0 at all major gas hubs. Figure 5 shows the monthly prices at Henry Hub for both iterations. The lower gas prices are unintuitive, but plausible. The combined gas-power sector has several interdependent market forces, and we are currently working with gas experts to understand some of the mechanisms that could lead to lower gas prices. We hypothesize that our accounting for capacity expansion in both markets is one of the drivers of this behavior, and our findings will be reported in a future publication.

Figure 5: Comparison of starting and ending price trajectories with the integrated GPCM-AURORAxmp model.

The lower gas prices highlight one of the key benefits of integrated gas-power models. Standalone modeling frameworks are likely to misrepresent the impact of the complex cross-market mechanisms. Integrated models avoid this particular pitfall by explicitly modeling each market and are a more apt tool for evaluating policies such as the CPP. AURORAxmp provides the capability to model any of the implementation plans that states might adopt in the future – rate-based, mass-based, emission trading schemes and so forth. The ability to interface with widely used NG models, such as GPCM, provides a convenient option for analysts to confidently navigate the highly uncertain future of intertwined NG and power markets.

Filed under: Clean Power Plan, Natural Gas

Simple-Cycle Combustion Turbines in the CPP


The Environmental Protection Agency’s (EPA) Clean Power Plan (CPP) is full of interesting caveats and exceptions on many issues. One notable quirk is the exclusion of simple-cycle combustion turbines (SCCT) from the list of affected electricity generating units. States must detail how they intend to limit carbon emissions from combined-cycle combustion turbines (CCCT) and coal-powered steam generators, but carbon from SCCTs is not regulated under the CPP.

The EPA’s rationale is that SCCTs cannot meaningfully contribute to emission reductions because they run so rarely. In the full report, the EPA states that it does not expect this to change:

“In addition, while approximately one-fifth of overall fossil fuel-fired capacity (GW) consists of simple cycle turbines, these units historically have operated at capacity factors of less than 5 percent and only provide about 1 percent of the fossil fuel-fired generation (GWh)…the EPA expects existing simple cycle turbines to continue to operate as they historically have operated, as peaking units.”

Is this a realistic assumption? Simple-cycle units currently have low capacity factors, but that is mostly because they are relatively expensive to operate. Natural gas has historically been more expensive than coal, and among units burning natural gas, combined-cycle units are more efficient than simple-cycle units. As such, simple-cycle units are generally kept offline due to their higher operating costs. However, this is not a rule; it is a relationship. If you add costs to one set of generators and not another, the relationship may change.

To illustrate this point, let’s consider a few hypothetical units, operating in 2025, and see how they may respond to carbon pricing. One is a relatively modern and efficient simple-cycle gas plant, another is a typical combined-cycle gas plant, and the last is an older coal plant. Unit characteristics vary significantly within each of these technologies, but we will take a highly competitive simple-cycle and compare it to some of the least competitive coal generation to see where simple-cycle units may start to become cheaper than coal.

Operating characteristics for hypothetical units (2025)

Technology       Heat Rate     CO2 Emission Rate   Fuel Cost    VOM       Zero-Carbon Operating Cost
                 (Btu/kWh)     (lbs/mmbtu)         ($/mmbtu)    ($/MWh)   ($/MWh)
Efficient SCCT   10,000        —                   8.00         18.50     80.00
Typical CCCT     7,500         118                 7.00         6.50      52.50
Older Coal ST    12,000        210                 3.50         8.50      42.00

We exclude an emission rate for our simple-cycle unit, because they are not regulated under the CPP and will not experience an increase in operating costs due to carbon restrictions or pricing. If we add a carbon price ($/ton) to each of these units, their operating costs will shift accordingly.

Hypothetical Operating Costs by Source and Carbon Price

As the price of carbon reaches $10/ton, the coal unit starts to become more expensive to operate (per MWh of generation) than the combined-cycle unit (Point A). This is expected and intended by the CPP: one of the fundamental building blocks of emission reductions is a shift of generation from coal to combined-cycle units. However, by the time we reach a carbon price of around $30/ton, coal units also become more expensive to operate than simple-cycle generators! Because the SCCT unit is not subject to carbon regulations under the CPP, its costs remain constant, while the operating costs of the coal plant rise quickly as the carbon price increases.
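For reference, here is a minimal sketch of the carbon-cost adder behind that shift, using the table values above and assuming prices are in dollars per short ton; exact crossover points depend on which cost components are counted in the dispatch cost:

```python
# Carbon-price adder in $/MWh implied by a unit's heat rate and emission rate.
def carbon_adder(heat_rate_btu_per_kwh, lbs_co2_per_mmbtu, carbon_price_per_ton):
    mmbtu_per_mwh = heat_rate_btu_per_kwh / 1000.0
    tons_co2_per_mwh = mmbtu_per_mwh * lbs_co2_per_mmbtu / 2000.0   # short tons
    return tons_co2_per_mwh * carbon_price_per_ton

print(carbon_adder(7_500, 118, 30))    # typical CCCT at $30/ton  -> ~13.3 $/MWh
print(carbon_adder(12_000, 210, 30))   # older coal ST at $30/ton -> ~37.8 $/MWh
# The hypothetical SCCT gets no adder because the CPP does not regulate its emissions.
```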

A carbon price of $30/ton would be unprecedented in the U.S., but not inconceivable. Depending on which discount rate you prefer, the official social cost of carbon can exceed $30/ton. At EPIS, our modeling of mass-based compliance approaches to the CPP has shown that allowance prices greater than $30/ton may be needed for some states to meet their emission goals through a carbon market.

Of course, unit operation cannot be summed up by a single operating cost. Many factors can influence a generator’s decision to run, such as start costs, other environmental regulations, and participation in reserve or ancillary service markets. There may be reasons beyond per-MWh costs why an SCCT unit would continue to provide only peaking services in a high carbon price environment. However, some power providers may find that the strict emission limits placed on coal and combined-cycle plants opens up a unique opportunity for the relatively unregulated SCCT units. Anyone concerned with modeling the CPP would do well to carefully consider the potentially changing role of SCCTs in an uneven regulatory environment, which gives them a free pass while hindering coal and combined-cycle plants.

Will simple-cycle units increase their utilization if the CPP is implemented, becoming more than just peak power providers? Only time will tell. Let us know what you think in the comments.

Filed under: Clean Power Plan, Power Market Insights

The New Electric Market in Mexico


The Role of Zonal Resource Planning Analyses

On January 26, 2016, a once-in-a-lifetime event occurred that may have been overlooked by the casual observer: Mexico launched the first phase of its reformed, now competitive, electric market. The day-ahead market began for the Baja California interconnection and is the first component of a comprehensive change to the nation’s electric system.

Over the last few years, sweeping market reforms and designs were drafted, approved by the government, and are now beginning to be implemented in a fundamental shift for electricity in Mexico. The expectation is that incorporating a market structure will modernize a constrained and aging system, improve reliability, increase development of renewable generation and drive new investment.

A market shift like this underscores the critical need to produce meaningful and accurate analyses for long-term resource planning, in addition to participating in the day-ahead nodal market.

The importance of data availability to market participants cannot be overstated. As a result of the market reforms in Mexico, the sole utility, Comisión Federal de Electricidad (CFE), is being split into multiple entities, and government organizations are being restructured to address the change from a state-run system to a competitive marketplace. Yet the detailed data required for trading activities, such as those begun in January, and to support the proposed nodal market is difficult to obtain. Sources for much of this data are still being determined and, in some cases, the data is not yet available.

However, for typical generator development and economics, investment, and lifecycle forecasting – studies that require 30-40 year planning horizons – data is available. Resource planning analytics have become imperative to the development of new generation and transmission, informing investment in the energy sector, producing integrated resource plans for utilities, as well as numerous different studies for other stakeholders. Planning tools like AURORAxmp play a key role in these analyses, but so does the need for accurate market data.

Dispatch simulation models used for these studies typically define market topographies at the zonal (or control area) level. Mexico is currently divided into nine of these zones, or “control regions.”

New Electric Market Control Regions in Mexico

Each of these zones contains generator information, load/demand information, and aggregated transmission capacities to/from adjoining zones. This data can be used by the dispatch simulation to forecast prices, value, risk, etc. for the study period. In the case of resource planning, it can produce detailed capacity expansion analyses to:
-Understand the value and operation of existing units.
-Determine whether to retire uneconomic or obsolete generators.
-Consider the value and performance of new generation that may have been added by the simulation.

Analysts can specify additional information such as new generation technologies (e.g. renewable generator options), capital costs, return components and other financial information to produce results that will inform build/buy decisions.

AURORAxmp has been used in a variety of studies in Mexico since 2002. Consultants and IPPs have utilized the software to produce meaningful results used in long-term resource planning decisions, and the zonal topography has provided the advantage of demonstrating value in the current market.

Developing a solid fundamental outlook that allows the assessment of potential long-term risks and opportunities is imperative for decision making and sound financial planning, whether you are assessing the development of a new power plant or acquiring an existing asset in Mexico. The wholesale power market in Mexico is expected to grow from a day-ahead and real-time nodal market to include traded pricing hubs with a futures market. A zonal model using AURORAxmp can provide an invaluable tool for long-term price forecasting, scenario analysis and asset valuation for the new Mexican reality.
– Marcelo Saenz, Pace Global, A Siemens Business

Although the proposed market will eventually operate at the nodal level, long-term studies at the zonal level remove the effects of temporary events at the nodal level, thus providing a more stable result for financial decisions.

AURORAxmp has the robust ability to simulate both zonal and nodal markets. However, its leading capabilities in performing long-term resource planning analysis will continue to be especially important for markets, like Mexico, that will go through enormous changes and growth over the next few years.

Filed under: Power Market Insights, Uncategorized

US Supreme Court Issues Stay on CPP


On Tuesday, the U.S. Supreme Court issued a stay on the Environmental Protection Agency’s (EPA) Carbon Pollution Emission Guidelines for Existing Stationary Sources, more commonly known as the Clean Power Plan (CPP). This means that states will not be obligated to comply with any part of the CPP until a decision is reached on Chamber of Commerce, et al. v. EPA.

The D.C. Circuit Court will begin hearing oral arguments on the merits of the CPP on June 2 of this year. The lower court’s ruling is expected to be appealed to the U.S. Supreme Court, regardless of the outcome. If past regulations of a similar scale are any indication, the U.S. Supreme Court will hear the case.

The final impact of this stay will depend largely on the outcome of Chamber of Commerce, et al. v. EPA. Even if the courts uphold the CPP, it is likely that the initial state submittal deadline of September 6, 2016 will be affected. However, if the case is concluded swiftly in favor of the EPA, the agency may be able to hold onto its final submittal deadline in 2018, despite the stay.

If the courts rule against the EPA, the CPP may be revised, or it may need to be scrapped altogether. However, unless the court ruling overturns Massachusetts v. EPA (2007), the EPA will still be obligated to eventually regulate carbon dioxide as an air pollutant under the Clean Air Act.

In December of 2011, the D.C. Circuit Court issued a similar stay on the Cross-State Air Pollution Rule (CSAPR). That rule went through a series of revisions and court battles, but the stay was eventually lifted in October of 2014.

The future of the CPP remains uncertain, but most industry experts would agree that participants still need to prepare and plan for the eventual impact of some kind of federal limit on carbon emissions.

Filed under: Power Market Insights