EMFC Addresses Head-on the Tectonic Industry Changes

With record attendance in one of the most iconic tourist destinations in the world, the 20th Annual Electric Market Forecasting Conference (EMFC) took place September 6-8 in Las Vegas, NV. This industry-leading conference assembled top-notch speakers and gave attendees an exclusive networking experience from start to finish.

The pre-conference day featured in-depth sessions designed to maximize the value of the Aurora software for its users. Advanced sessions included discussions on resource modeling and improving model productivity, recent database enhancements including the disaggregation of U.S. resources, an update on the nodal capability and data, and other model enhancements.


Michael Soni, Economist, Support | EPIS

Before the afternoon Users’ Group meeting started, EPIS announced that it was dropping “xmp” from the name of its flagship product, now simply Aurora, and unveiled a fresh logo. Ben Thompson, CEO of EPIS, said, “The new logo reflects our core principles of being solid and dependable, of continuously improving speed and performance, and of our commitment to helping our customers be successful well into this more complex future.”

That evening, attendees kicked off the main conference with a night under the stars at Eldorado Canyon for drinks, a BBQ dinner and a tour of the Techatticup Mine, the oldest, richest and most famous gold mine in Southern Nevada.


Eldorado Canyon, Techatticup Mine

On Thursday, thought leaders from across the industry presented various perspectives on the complex implications that recent industry changes will have on grid operations, future planning and investments. The forum session opened with Arne Olson, a partner with E3 Consulting in San Francisco, discussing California’s proposed legislation SB-100, which would mandate that 100% of California’s energy be met by renewable sources by 2045, along with the bill’s implications for Western power markets and systems. He pointed out that SB-32, last year’s expansion of earlier legislation, which mandates a 40% reduction in GHG emissions below 1990 levels by 2030, is actually more binding than SB-100. He explained the economics of negative prices, described why solar output will be increasingly curtailed, and posited that CAISO’s famous “duck curve” is becoming more of an economic issue than the reliability issue it was originally intended to illustrate.

Other Thursday morning presentations included “The Rise of Utility-Scale Storage: Past, Present, and Future” by Cody Hill, energy storage manager for IPP LS Power, who outlined the advances in utility-scale lithium-ion batteries and their expected contributions to reserves as well as energy; Masood Parvania, Ph.D., professor of electrical and computer engineering at the University of Utah, who described recent advances in continuous-time operation and pricing models that more accurately capture and compensate for the fast-ramping capability of demand response (DR) and energy storage devices; and Mahesh Morjaria, Ph.D., vice president of PV systems for First Solar, who discussed innovations in PV solar module technology, plant capabilities and integration with storage.


Masood Parvania, Ph.D., Director – Utah Smart Energy Lab | The University of Utah

The afternoon proceeded with Mark Cook, general manager of Hoover Dam, who gave a fascinating glimpse into the operations and improvements of one of the most iconic sources of hydro power in the country; and concluded with Lee Alter, senior resource planning analyst and policy expert for Tucson Electric Power, who shared some of the challenges and lessons learned in integrating renewables at a mid-sized utility.

Networking continued Thursday afternoon with a few of the unique opportunities Las Vegas offers. In smaller groups, attendees were able to connect with one another while enjoying one of three options: a foodie tour, swinging clubs at TopGolf, or solving a mystery at the Mob Museum.

The final day of the conference was devoted to giving Aurora clients the opportunity to see how their peers are using the software to solve complex power market issues. It featured practical discussions on modeling battery storage, ancillary services and the integration of renewables, as well as an analysis of the impact of clean energy policies, all using Aurora.

The conference adjourned and attendees headed out for a special tour of the Hoover Dam which included a comprehensive view of the massive dam and its operations, and highlighted many of the unique features around the site.


Hoover Dam, Power Plant Tour

The EMFC is a once-a-year opportunity for industry professionals. The 20th Annual EMFC addressed head-on the tectonic industry changes (occurring and expected) from deep renewable penetration, advances in storage technologies, and greater uncertainty. Join EPIS next year for the 21st Annual EMFC!

For more information on the 2017 speakers, please visit http://epis.com/events/2017-emfc/speakers.html
To obtain a copy of any or all of the presentations from this year’s EMFC, Aurora clients can go to EPIS’s Knowledge Base website using their login credentials here. If you do not have login credentials, please email info@epis.com to request copies.


Power Market Insights Finishes Strong in 2016

2017 promises to be an even better year of delivering valuable market insight and expertise

The EPIS blog, Power Market Insights, is nearly one year old and in that time has published articles containing a great deal of practical information. The articles, authored by EPIS domain experts, are carefully researched and deliver valuable intelligence to the industry.

For example, an article on large-scale battery storage discussed technology issues and advances that affect the rapidly growing wind and solar market. The article quotes analyst predictions that battery storage costs will drop to $230/kWh by 2020, with an eventual drop to $150/kWh. It goes on to state that worldwide battery storage may grow to almost 14 GW by 2023.

Power Market Insights delivered a perspective on the new electric market in Mexico, weeks after that country’s most recent industry reforms were launched. The article reported the fundamental shift in the market and outlined how these reforms would “modernize a constrained and aging system, improve reliability, increase development of renewable generation and drive new investment.” The author discussed the role of zonal resource planning analysis and the importance of data availability. Months later, EPIS announced its Mexico Database for use with AURORAxmp.

Data played a large role in articles on European power market reporting changes and the EIA’s easing of data accessibility. Both articles rely on the expertise of EPIS’s Market Research team. The EIA data accessibility article discussed how improvements to the management and delivery of EIA datasets expand the list of tasks for which the data may be useful. For the many power modelers who were unaware of these changes, this information offers important insight that can make their jobs easier. Likewise, the discussion on European power market reporting changes informed readers of ways the available data, while improved, may differ among sources, and offered an example of the importance of cross-checking sources.

Two articles lifted the hood to give readers a peek into the workings of algorithms and computing speed. The article on the algorithms at the core of power market modeling offered readers a foundational overview of the mathematical optimizations used in forecasting and analyzing power markets. The computing speed article explained Moore’s Law, discussed how maxed-out processors are shifting focus to more cores and how software architecture will soon lose its “free ride.” All of this was put into the perspective of computing data such as hourly dispatch and commitment decisions. Both articles enable readers to discuss intelligently the computing parameters that affect their daily performance.

Articles on the water-energy nexus, nuclear retirements, the California hydropower comeback, uncertainty in ERCOT markets and several pieces on the CPP delved into key industry issues. The writers lent considerable expertise to these articles; the author of the CPP pieces, for example, read the entire 304-page filing in the Federal Register before distilling it for readers to quickly digest.

A number of articles discussed issues faced by modelers as they work to forecast and analyze the market. Pieces on integrated modeling of natural gas and power, working with data in power modeling, the fundamentals of energy efficiency and demand response, and reserve margins offered real-world discussions designed to help AURORAxmp users and other industry professionals do their jobs better.

The blog’s 2017 editorial calendar is being finalized right now, and the blog will continue to deliver high-quality articles designed to be of interest to energy and power market professionals. Look for feature editorials next year written by leading analysts and experts from across the industry. Put Power Market Insights on your must-read list.


Reserve Margins

Discussing reserve margins is often convoluted because of the various definitions and intricacies. The basic principle is that reserve capacity is used to ensure adequate power supply. Different types of reserves are defined on different time scales. In the short term, operating reserves provide adequate supply in the case of sudden plant or transmission outages. In the long term, planning reserves ensure adequate power supply given forecasted load in the years ahead. Both types of reserves are often expressed as a ratio of excess capacity (i.e., available capacity less demand) to demand. In this blog post, we will discuss planning reserves: typical values, historical trends, market-to-market differences, and modeling within AURORAxmp.
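The ratio itself is simple to compute. The sketch below walks through the calculation with hypothetical capacity and demand figures:

```python
def planning_reserve_margin(available_capacity_mw, peak_demand_mw):
    """Reserve margin: excess capacity (available capacity less demand)
    expressed as a fraction of demand."""
    return (available_capacity_mw - peak_demand_mw) / peak_demand_mw

# Hypothetical system: 115 GW of available capacity against a
# 100 GW forecasted peak gives a 15% planning reserve margin.
margin = planning_reserve_margin(115_000, 100_000)
print(f"{margin:.1%}")  # 15.0%
```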

Planning Reserves

Without adequate planning reserves, new generation may not be built in time, ultimately causing power disruptions. But what is adequate? In 2005, Congress passed the Energy Policy Act of 2005, which requires the North American Electric Reliability Corporation (NERC) to assess the reliability of the bulk power system in North America. Part of NERC's responsibility is to periodically publish Long-Term Reliability Assessments (LTRAs), which include planning reserve targets, or reference margins. Usually these are based on information provided by each governing body (e.g., ISO, RTO, etc.) in the assessment area. If no such information is available, NERC sets the reference margin to 15% for thermal-dominated systems and 10% for hydro-dominated systems. For the 2015 LTRA, the NERC reference margins range from 11% to 20% across the assessment areas, as shown in Figure 1. The highest reference margin, 20% for NPCC Maritimes, is due to a disproportionate amount of load being served by large generating units.

NERC reference margins graph

Figure 1. 2016 Planning reserve margins by NERC assessment area from the 2015 LTRA.
The gold bars represent assessment areas with capacity markets.

In addition to providing reference margins, or published targets from other entities, NERC publishes yearly anticipated planning reserve margins, looking out 10 years, for 21 assessment areas in North America. To do this, NERC collects data on peak demand and energy, capacity, transmission and demand response from NERC regional entities. Data submission is usually due in the first quarter of the report year. This strategy represents a bottom-up approach to understanding reliability.

Forecasting Anticipated Planning Reserve Margins

Forecasted anticipated planning reserve margins can vary substantially from assessment year to assessment year, from area to area, and as a function of market structure. To illustrate this, one-, five-, and 10-year forecasted anticipated planning reserve margins for PJM and ERCOT are shown in Figure 2. The variability in anticipated planning reserve margin is similar between the two assessment areas and increases with the length of the forecast, presumably due to increasing uncertainty in forecasts as a function of time. Interestingly, the number of years with shortfalls (fewer reserves than the target) is much larger in ERCOT than in PJM. PJM has a three-year forward capacity market, whereas ERCOT is an energy-only market; there is therefore more incentive for long-term excess capacity in PJM.

reserve margins

Figure 2. Planning reserve margins targets (dashed line) and one-, five-, and 10-year anticipated planning reserve margin from the 2011 to 2015 NERC LTRAs.

As shown above, the year-ahead anticipated planning reserve margins are adequate in both ERCOT and PJM, suggesting long-term planning approaches are working in both markets. However, regional complexities can pose problems. For example, MISO recently published the 2016 Organization of MISO States (OMS) Survey to assess planning reserve margins. In 2017, shortfalls are predicted in three zones – IL, MO, and Lower MI. Excess capacity from other zones will be transferred to make up for the shortfall in the short term. As with the NERC forecasts, uncertainty in the regional forecasted load is key to this issue and may increase or decrease the shortfall.

In addition to regional issues, the rapidly changing generation mix poses challenges for quantifying adequate planning reserves. NERC has recognized this and has called for new approaches to assessing reliability in both the 2014 and 2015 LTRAs. One specific issue is the disruption of traditional load shapes as solar resources are added. A typical summer-peaking system may face reliability issues in the winter or other expected off-peak months when demand is still high but solar output is low. Considering secondary demand peaks, and thus secondary planning reserve margins, may be prudent in these situations.
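To see how a summer-peaking system can develop a winter reliability issue, compare gross load with net load (demand minus solar output). The monthly figures below are purely hypothetical:

```python
# Hypothetical monthly peak demand and coincident solar output (MW).
# Net load = demand - solar; solar contributes little in winter.
demand = {"Jan": 9200, "Apr": 8000, "Jul": 10000, "Oct": 8200}
solar = {"Jan": 400, "Apr": 1500, "Jul": 2500, "Oct": 1200}

net_load = {m: demand[m] - solar[m] for m in demand}
gross_peak = max(demand, key=demand.get)     # month of highest gross demand
net_peak = max(net_load, key=net_load.get)   # month of highest net load
print(gross_peak, net_peak)  # gross peak in Jul, but net peak in Jan
```

With these numbers the system looks summer-peaking on gross demand, yet the hours that stress reserves most fall in January, where solar offsets little of the load.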

AURORAxmp and Planning Reserve Margins

In AURORAxmp, planning reserve margins are used in the long-term capacity-expansion logic to guide new resource builds. Our Market Research and Analysis team updates planning reserve margins annually based on the latest NERC LTRA. Planning reserve margins can be specified at the pool or zone level, easily facilitating studies at varying spatial scales. Risk studies can be conducted to quantify the impact of uncertainty in each aspect of planning reserve margins on long-term resource builds. Together these features support cutting-edge analysis of the complexities of reserves.


19th Annual Electric Market Forecasting Conference to Focus on the Future of Energy Markets

The 2016 Electric Market Forecasting Conference (EMFC), a leading gathering of industry strategists and executives, will feature in-depth discussions on the driving forces of today’s energy markets. The 19th annual conference, organized by EPIS, LLC, will bring together a stellar lineup of speakers as well as senior executives in the industry.  The EMFC will be held at the Atlanta Evergreen Marriott Conference Resort in Atlanta, Georgia, September 14-16, 2016.


The EMFC features an optional one-day pre-conference training for both new and advanced power market modelers, as well as an AURORAxmp Users’ Group Meeting. Both clients and non-clients are welcome to attend. The two-day conference will include presentations and case studies from industry experts, as well as special events and networking opportunities. Speakers include Larry Kellerman, managing partner of Twenty First Century Utilities; Morris Greenberg, managing director of gas and power modeling at PIRA Energy Group; and Jeff Burleson, VP of system planning at Southern Company. A full list of speakers is available at http://epis.com/events/2016-emfc/speakers.html.

“Over the past 19 years, the Electric Market Forecasting Conference has become established as a valuable, strategic gathering for clients and non-clients alike,” said Ben Thompson, CEO of EPIS. “It is an event where executives and peers in the industry gather to share market intelligence and discuss the future of the industry.”

EMFC has developed a reputation for being an event that delivers real, actionable intelligence, not just abstract concepts. The organizers focus on an agenda filled with speakers who can share experience and takeaways that can be used to have a positive impact on attendees’ organizations. The conference’s intimate environment allows participants to create lasting relationships with peers and luminaries alike.

Now in its 19th year, EMFC is an essential conference for power industry professionals to come together to share best practices and market intelligence. The one-day pre-conference allows AURORAxmp users to learn techniques to master the AURORAxmp application and maximize ROI. More information can be found at: http://epis.com/events/2016-emfc/index.html.


The Algorithms at the Core of Power Market Modeling

In 2007, the U.S. government formed the Advanced Research Projects Agency-Energy (ARPA-E), which encourages research on emerging energy technologies. Last year the agency awarded about $3.1 million to the Pacific Northwest National Laboratory (PNNL) to work on a computational tool called High-Performance Power-Grid Operation (HIPPO) over the next few years. The research team will be led by an applied mathematician at PNNL and will partner with GE’s Grid Solutions, MISO, and Gurobi Optimization. The group will seek improved ways to solve the unit commitment problem, “one of the most challenging computational problems in the power industry.” The work highlights the general trend over the past twenty years, in this and other industries, of turning to mathematical optimization for answers to some of the most difficult scheduling and planning problems. What is astounding is the rate at which commercial mathematical solvers have responded to these needs, with enormous leaps in algorithmic efficiency over a relatively short period of time.

At the core of most of the mathematical optimization used in power modeling is linear programming (LP). Linear programs are problems in which some linear function is maximized or minimized subject to a set of linear constraints. The mathematician George Dantzig invented the simplex algorithm in 1947, well before computers could really take advantage of it. In 1953, for example, one implementation of the algorithm on a Card Programmable Calculator (CPC) took about eight hours to solve a 26-constraint, 71-variable instance of the classic Stigler Diet Problem. As computer technology advanced, though, the usefulness and power of the simplex algorithm specifically, and linear programming in general, became apparent. Advances in the algorithm, combined with exponential computer speed improvements, made linear programming a staple of problem solving by the early 2000s. In fact, algorithmic progress in linear programming (i.e., independent of computer speed improvements) yielded a 3,300x improvement factor from 1988 to 2004. Coupled with actual machine improvements of 1,600x over the same horizon, this produced a 5,280,000x average improvement in solving linear programs!
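To give a flavor of what these linear programs look like in a power context, consider a toy economic dispatch: minimize total cost subject to meeting demand, with each unit bounded by its capacity. For this single-constraint structure the LP optimum coincides with simple merit-order loading, so the sketch below (all units and costs are hypothetical) solves it greedily; production models hand vastly larger versions of the same problem to an LP solver.

```python
# Toy economic-dispatch LP: min sum(cost_i * g_i)
#   s.t. sum(g_i) = demand, 0 <= g_i <= cap_i.
# With one balance constraint, loading units in merit (cost)
# order reproduces the LP optimum.
units = [  # (name, marginal cost $/MWh, capacity MW) -- hypothetical
    ("coal", 25.0, 400),
    ("ccgt", 35.0, 300),
    ("peaker", 80.0, 200),
]
demand = 600  # MW

dispatch, cost = {}, 0.0
remaining = demand
for name, mc, cap in sorted(units, key=lambda u: u[1]):
    mw = min(cap, remaining)  # load the cheapest available capacity first
    dispatch[name] = mw
    cost += mc * mw
    remaining -= mw
assert remaining == 0, "insufficient capacity to meet demand"
print(dispatch, cost)  # coal at 400 MW, ccgt at 200 MW, peaker idle; $17,000
```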

While progress on linear programs has somewhat plateaued in recent years, improvements in mixed-integer programming (MIP) have continued at impressive rates. In its simplest form, a mixed-integer program is a linear program in which some of the variables are restricted to integer values. This integer-value restriction makes the problem NP-hard, meaning a guaranteed polynomial-time algorithm for all MIPs will most likely never be found. And yet the MIP is at the center of an ever-increasing number of practical problems, like the unit commitment problem that the HIPPO tool mentioned above is meant to address, and it is only relatively recently that it became a practical problem-solving tool. According to one expert and active participant in the field, Robert Bixby,

“In 1998 there was a fundamental change in our ability to solve real-world MIPs. With these developments it was possible, arguably for the first time, to use an out-of-the box solver together with default settings to solve a significant fraction of non-trivial, real-world MIP instances.”

He provided this chart showing the improvements in one MIP solver, CPLEX, from 1991 to 2007:

Cplex version-to-version pairs
Figure 1. CPLEX Version-to-Version Pairs. Source

This chart shows that over approximately 16 years, the machine-independent speed improvement was roughly 29,000x! The progress on developing fast algorithms to solve (or at least find good solutions to) mixed-integer programs has been simply explosive.

The importance of this development is highlighted by extensive use of MIPs by regional reliability organizations in the United States. An independent review published by the National Academies Press states that:

In the day-ahead time frame, the CAISO, ERCOT, ISO-NE, MISO, PJM, and SPP markets employ a day-ahead reliability unit commitment process… The optimization for the day-ahead market uses a dc power flow and a mixed integer program for optimization.

In other words, the MIP is at the core of day-ahead market modeling for these major reliability organizations. A presentation given a few years back by PJM shows their increasing need to solve very difficult MIPs in a shorter time frame. The presentation highlights the fact that PJM has a “major computational need” for “better, faster MIP algorithms and software.” The short slide deck states three times in different contexts the need in PJM for “even faster dynamic MIP algorithms.” The entity must solve their day-ahead model for the security constrained unit commitment (SCUC) problem in a four-hour window towards the end of each day, and the presentation explains that they “have a hard time solving deterministic SCUC models in the time allotted.” So the need for ever-improving mixed-integer programs in the energy industry doesn’t seem to be going away any time soon. And with the increasing complexity of problems such as renewable integration, sub-hourly modeling, and the handling of stochastics, the push for “better, faster MIP algorithms” will only continue.
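A toy version of the unit commitment problem makes the difficulty concrete: each unit carries a binary on/off decision plus a startup cost and a minimum output, so the number of commitment combinations doubles with every unit added. The sketch below (hypothetical units, a single hour) simply enumerates all 2^n on/off patterns, which is exactly the explosion that branch-and-bound MIP solvers exist to avoid.

```python
from itertools import product

units = [  # (startup cost $, marginal cost $/MWh, min MW, max MW) -- hypothetical
    (500, 25.0, 100, 400),
    (300, 35.0, 80, 300),
    (100, 80.0, 20, 200),
]
demand = 450  # MW for this single hour

best_cost, best_commit = float("inf"), None
for on in product([0, 1], repeat=len(units)):  # all 2^n commitments
    lo = sum(u[2] for u, x in zip(units, on) if x)
    hi = sum(u[3] for u, x in zip(units, on) if x)
    if not (lo <= demand <= hi):
        continue  # this commitment cannot feasibly meet demand
    # Start every committed unit at its minimum, then dispatch the
    # remaining MW in merit (cost) order up to each unit's maximum.
    cost = sum(u[0] + u[1] * u[2] for u, x in zip(units, on) if x)
    slack = demand - lo
    for u, x in sorted(zip(units, on), key=lambda p: p[0][1]):
        if x and slack > 0:
            add = min(u[3] - u[2], slack)
            cost += u[1] * add
            slack -= add
    if cost < best_cost:
        best_cost, best_commit = cost, on

print(best_commit, best_cost)  # cheapest feasible on/off pattern and its cost
```

Here three units mean only eight combinations, but a realistic SCUC with hundreds of units and 24+ hourly periods makes enumeration hopeless, which is why the solver advances described above matter so much.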

So what does all of this mean for power modelers? Professional solvers’ ability to keep improving LP/MIP algorithm performance will determine whether the most difficult questions can still be addressed and modeled. But, in addition, it is crucial that the simulation models that seek to mimic real-world operations with those solvers intelligently implement the fastest possible optimization codes. As EPIS continues to enhance AURORAxmp, we understand that need and spend an enormous amount of time fine-tuning the LP/MIP implementations and seeking new ways to use the solvers to the greatest advantage. Users of AURORAxmp don’t need to understand those implementation details—everything from how to keep the LP constraint matrix numerically stable to how to choose between the interior-point and dual simplex LP algorithms—but they can have confidence that we are committed to keeping pace with the incredible performance improvements of professional solvers. It is in large part due to that commitment that AURORAxmp has consistently improved its own simulation run time in significant ways in every major release of the past three years. With development currently in the works to cut in half the run times of the most difficult DC SCOPF simulations, we are confident this trend will continue in the coming years with future releases of AURORAxmp. As was said about the projected future development of mixed-integer programs, the performance improvement “shows no signs of stopping.”

 


Living in the Past?

Living in the past is not healthy. Is your database up to date? EPIS just launched the latest update to the North American Database, version 2016_v4, marking the fourth North American data update this year! Recent changes in the power industry present challenges to database management, which are discussed in this post.

In general, the transformation in power generation sources in the U.S. coupled with evolving electricity demand and grid management represents a paradigm shift in the power sector. In order to accurately model power prices in the midst of such change, one must have a model built on fundamentals and a database that is up-to-date, has reasonable assumptions, is transparent and is flexible. A recent post described the technical side of working with databases in power modeling. This entry outlines important changes in the East Interconnect, the impacts those changes have on data assumptions and configuration and the steps we are taking to provide excellent databases to our clients.

Recent shifts in power generation sources challenge database assumptions and management. New plant construction and generation in the U.S. are heavily weighted toward renewables, mostly wind and solar, and as a result record generation from renewables has been reported across the East Interconnect. Specifically, on April 6, 2016, the Southwest Power Pool (SPP) set the record for wind penetration:

Record Wind Penetration Levels 2015

Figure 1. Record wind penetration levels in Eastern ISOs compared with average penetration in 2015. SPP holds the record which was reported on April 6, 2016. Record sources: NYISO, SPP, MISO, ISO-NE, PJM. 2015 Averages compiled from ISO reports, for example: NYISO, SPP, MISO, ISO-NE, PJM. *Average 2015 generation used to calculate penetration.

Similarly, the New York City area reached a milestone of over 100 MW of installed distributed solar resources. Accompanying the increase in renewables are increases in natural gas generation and reductions in coal generation. In ISO-NE, natural gas production has increased 34 percent and coal has decreased 14 percent since 2000, as highlighted in the ISO’s 2016 Regional Electricity Outlook. These rapid changes in power generation sources require frequent and rigorous database updates.

Continued changes in electric grid management in the East Interconnect also require flexibility in databases. One recent change was the Integrated System joining the Southwest Power Pool, with Western Area Power Administration’s Upper Great Plains Region, Basin Electric Power Cooperative and Heartland Consumers Power District joining the RTO. Full operational control changed on October 1, 2015, expanding SPP’s footprint to 14 states, increasing load by approximately 10 percent and tripling hydro capacity. Grid management change is not new; the integration of MISO South in 2013 is another example. Changes such as these require flexibility in data configuration that allows for easy restructuring of areas, systems and transmission connections.

Variability in parameters such as fuel prices and demand introduces further difficulty in modeling power markets. The so-called “Polar Vortex” weather phenomenon shocked Northeastern power markets in the winter of 2013/2014 with cold temperatures and high natural gas prices, resulting in average January 2014 energy prices exceeding $180/MWh in ISO-NE. The polar opposite occurred this past winter: December 2015 was the mildest since 1960, and together with low natural gas prices, the average wholesale power price hit a 13-year low of $21/MWh. The trend continued into Q1 of 2016:

Monthly average power price in ISO-NE Q1 2014 and 2016

Figure 2. Monthly average power price in ISO-NE in Q1 2014 and 2016. Variability between years is a result of high natural gas prices and cold weather in 2014 versus low natural gas prices and mild weather in 2016.

Whether facing extreme events, evolving demand or volatile markets, capturing uncertainty in power modeling databases is challenging. In AURORAxmp, users can go one step further by performing risk simulations, specifying parameters such as fuel prices and demand to vary across a range of simulations. This is a very powerful approach to understanding the implications of uncertainty within the input data.
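A stripped-down sketch of that idea: sample the uncertain inputs, run the model once per draw, and study the distribution of outputs. The price relationship and the distributions below are hypothetical placeholders for a full market simulation.

```python
import random
import statistics

random.seed(42)  # reproducible draws

def toy_price_model(gas_price, demand_factor):
    # Hypothetical stand-in for a full market simulation:
    # power price rises with gas price and demand.
    return 10 + 7 * gas_price * demand_factor

prices = []
for _ in range(1000):
    gas = random.lognormvariate(1.0, 0.25)    # $/MMBtu draw
    demand = random.normalvariate(1.0, 0.05)  # demand scaling draw
    prices.append(toy_price_model(gas, demand))

mean_price = statistics.mean(prices)
p95 = statistics.quantiles(prices, n=20)[18]  # roughly the 95th percentile
print(round(mean_price, 2), round(p95, 2))
```

Instead of a single point forecast, the analyst gets a distribution of outcomes and can reason about tail risk, which is the essence of the risk-simulation capability described above.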

The aforementioned changes in generation, grid management and demand offer exciting new challenges to test power market models and data assumptions. To test our platform, EPIS performs a historical analysis as part of each database release. Inputs of historical demand and fuel prices are used to ensure basic drivers are captured, and model output is evaluated not only in terms of capacity but also monthly generation, fuel usage and power prices. The result of this process is a default database that is accurate, current, contains reasonable assumptions, is transparent and is flexible, ensuring you have the proper starting point for analysis and a springboard for success.

With the release of North_American_DB_2016_v4, EPIS continues to provide clients with superb data for rigorous power modeling. The 2016_v4 update focuses on the East Interconnect and includes updates to demand, fuels, resources, DSM and other miscellaneous items. Clients can log in to our support site now to download the database and full release notes. Other interested parties can contact us for more information.


The Fundamentals of Energy Efficiency and Demand Response

What are Energy Efficiency & Demand Response Programs?

Though the Energy Information Administration states that “there does not seem to be a single commonly-accepted definition of energy efficiency,” efficient energy use, often simply called energy efficiency, refers to reducing the amount of energy required to provide the same quality of products and services. Examples include improving home insulation, installing fluorescent lighting and efficient appliances, and improving building design to minimize energy waste.

Demand response, according to the Department of Energy, is defined as, “a tariff or program established to motivate changes in electric use by end-use customers in response to changes in the price of electricity over time, or to give incentive payments designed to induce lower electricity use at times of high market prices or when grid reliability is jeopardized.” Utilities can signal demand reduction to consumers, either through price-based incentives or through explicit requests. Unlike energy efficiency, which reduces energy consumption at all times, demand response programs aim to shift load away from peak hours towards hours where demand is lower.
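The distinction can be sketched in a few lines: energy efficiency scales the whole load profile down, while demand response clips the peak and shifts that energy into off-peak hours, leaving total energy unchanged. All numbers below are hypothetical.

```python
load = [60, 55, 58, 70, 90, 100, 95, 75]  # MW across 8 periods (hypothetical)

# Energy efficiency: reduce consumption in every period.
ee_load = [mw * 0.95 for mw in load]

# Demand response: clip load above a threshold, move it to off-peak hours.
def shift_peak(load, threshold):
    shifted = sum(max(mw - threshold, 0) for mw in load)  # MW clipped off peaks
    out = [min(mw, threshold) for mw in load]
    for i, mw in enumerate(out):  # refill off-peak headroom
        take = min(threshold - mw, shifted)
        out[i] += take
        shifted -= take
    assert shifted == 0, "not enough off-peak headroom"
    return out

dr_load = shift_peak(load, 85)
print(max(load), max(dr_load), sum(load) == sum(dr_load))  # 100 85 True
```

Note that the demand-response case consumes exactly the same energy as the original profile; only its timing changes, which is what distinguishes it from efficiency.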

What are the Benefits of Energy Efficiency & Demand Response Programs?

Decreasing and ‘flattening’ the demand curve directly contributes to improved system and grid reliability. It also translates to lower energy costs, and thus financial savings for consumers, assuming the energy savings are greater than the cost of implementing these programs and policies. In 2010, Dan Delurey, then president of the Demand Response and Smart Grid Coalition, pointed out that the top 100 hours (just over 1% of the hours in a year) account for 10-20% of total electricity costs in the United States. Slashing energy consumption during these high-peak hours, or at least shifting demand to off-peak hours, relieves stress on the grid and should make electricity cheaper.
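That concentration of costs in a handful of hours is easy to reproduce with synthetic data: give a year of hours occasional scarcity price spikes and check the cost share of the 100 most expensive hours. Everything below is fabricated for illustration, not market data.

```python
import random

random.seed(7)

hourly_costs = []
for _ in range(8760):
    price = 30 + 20 * random.random()   # $/MWh baseline (hypothetical)
    if random.random() < 0.02:          # rare scarcity spikes
        price += 400 * random.random()
    load = 800 + 300 * random.random()  # MW (hypothetical)
    hourly_costs.append(price * load)   # cost of serving that hour

hourly_costs.sort(reverse=True)
top_share = sum(hourly_costs[:100]) / sum(hourly_costs)
print(f"top 100 hours: {top_share:.1%} of total cost")
```

Even with this crude toy model, the 100 priciest hours carry a share of total cost far out of proportion to their 1.1% share of the year, which is why trimming those hours is so valuable.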

Additionally, decreasing energy consumption directly reduces greenhouse gas emissions. According to the International Energy Agency, improved energy efficiency in buildings, industrial processes and transportation has prevented the emission of 10.2 gigatonnes of CO2.

Lastly, reductions in energy consumption can provide domestic benefits in the form of avoided energy capital expenditure and increased energy security. The chart below displays the value of avoided imports by country in 2014 due to investments in energy efficiency since 1990:

Figure 1: Avoided volume and value of imports in 2014 from efficiency investments in IEA countries since 1990. Source

Based on these estimated savings, energy efficiency not only benefits a country’s trade balance, but also reduces its reliance on foreign countries to meet energy needs.

Modeling the Impacts of Energy Efficiency and Demand Response

Using AURORAxmp, we are able to quantify the impact of energy efficiency and demand response programs. In this simple exercise, we compare a California case with 2 GW of energy efficiency and 2 GW of demand response against a case without either, from 2016 to 2030. The charts below show the average wholesale electricity prices and system production costs:

Figure 2: Average wholesale electricity price ($/MWh) and average system cost (in thousands of dollars). Note these are 2014 real dollars.

Holding all else equal, adding demand response and energy efficiency programs into the system decreased average wholesale electricity prices by about $2.88 (5.4%), while the average system production cost fell by $496,000,000 (5.1%). This is a simple example in one part of the country, but one can readily layer in additional assumptions about the grid, resource characteristics, and load shape.
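As a quick consistency check, the dollar and percentage deltas quoted above can be divided to recover the implied base-case levels (these numbers come from the text, not from a model run):

```python
# Reported deltas from the AURORAxmp comparison above.
price_drop, price_pct = 2.88, 0.054        # $/MWh, 5.4%
cost_drop, cost_pct = 496_000_000, 0.051   # $, 5.1%

base_price = price_drop / price_pct        # implied base-case avg price
base_cost = cost_drop / cost_pct           # implied base-case system cost

print(f"Implied base-case price: ${base_price:.2f}/MWh")   # about $53/MWh
print(f"Implied base-case cost:  ${base_cost / 1e9:.2f} billion")
```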

Both demand response and energy efficiency programs are intended to be more cost-effective and efficient mechanisms of meeting power needs than adding generation. Emphasis on the demand side can lead to lower system production costs, increased grid reliability, and cheaper electric bills, all of which lie in the best interest of governments, utilities, and consumers.


Integrated Gas-Power Modeling

Quantifying the Impacts of the EPA’s Clean Power Plan

Notwithstanding the recent legal stay from the U.S. Supreme Court, it is still important to understand the U.S. EPA’s Clean Power Plan (CPP) and its impact in the larger context of natural gas markets and the role of gas in electric power generation. Because these two markets are becoming ever more interrelated, integrated gas-power modeling is the most realistic approach for such analyses. EPIS has tested interfacing AURORAxmp® with GPCM®, a calibrated natural gas (NG) model developed by RBAC, Inc. The following is a brief discussion of our experimental setup as well as some of our findings.

Integration Approach

Monthly prices for 39 major natural gas hubs over the next 20 years are represented in AURORAxmp (as an input). They were developed using GPCM’s market model (as an output) in pipeline capacity expansion mode. AURORAxmp then simulates a long-term capacity expansion using the GPCM-generated gas prices and produces many results: power prices, transmission flows, and generation by resource and resource type, including gas-consumption data. This gas consumption (output from AURORAxmp) is fed back into GPCM as gas demand from the electricity sector (input to GPCM) for a subsequent market-balancing and pipeline capacity expansion simulation, which generates a new set of monthly gas hub prices. The iterative process begins at some arbitrary, but plausible, starting point and continues until the solution has converged. Convergence is measured in terms of changes in the gas-burn figures and monthly gas-hub prices between subsequent iterations.
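The iterative process above can be sketched as a fixed-point loop. In the sketch below, run_aurora() and run_gpcm() are hypothetical stand-ins (simple linear responses), not the real AURORAxmp or GPCM interfaces; the structure of the loop is the point:

```python
def run_aurora(gas_prices):
    """Power model stand-in: higher hub prices -> less gas burn."""
    return [9_000 - 800 * p for p in gas_prices]

def run_gpcm(gas_burn):
    """Gas model stand-in: higher power-sector demand -> higher hub prices."""
    return [2.5 + 0.0001 * b for b in gas_burn]

def converged(old, new, tol=1e-3):
    """Convergence measured by the largest month-to-month price change."""
    return max(abs(a - b) for a, b in zip(old, new)) < tol

prices = [3.0] * 12                # arbitrary but plausible starting point
for iteration in range(50):
    gas_burn = run_aurora(prices)  # prices in, monthly gas burn out
    new_prices = run_gpcm(gas_burn)
    if converged(prices, new_prices):
        break
    prices = new_prices

print(f"Converged in {iteration} iterations at ~${prices[0]:.2f}/mmBTU")
```

Because this toy mapping is a strong contraction (each pass damps the change substantially), convergence is quick; the real two-model loop behaves similarly when the cross-market responses are moderate.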

This two-model feedback loop can be utilized as a tool to evaluate energy policies and regulations. To quantify the impact of an energy policy, we need two sets of integrated gas-power runs which are identical in all respects except the specific policy being evaluated. For example, to understand the likely impacts of emission regulation such as CPP, we need two integrated gas-power models with the identical setup, except the implementation of CPP.

Before presenting our findings on the impact of “CPP vs No CPP”, we first provide some further details on the setup of the GPCM and AURORAxmp models.

GPCM Setup Details

• Footprint: All of North America (Alaska, Canada, contiguous USA, and Mexico), including liquefied natural gas terminals for imports, and exports to rest-of-world.
• Time Period: 2016-2036 (monthly)
• CPP Program: All the effects of the CPP on the gas market are derived from changes to gas demand in the power generation sector.
• Economics: Competitive market produces economically efficient levels of gas production, transmission, storage and consumption, as well as pipeline capacity expansion where needed.

AURORAxmp Setup Details

  • Footprint: All three major interconnections in North America (WECC, ERCOT, and the East Interconnect), which together cover the contiguous U.S., most Canadian provinces and Baja California.
  • Time Period: 2016 – 2036 (CPP regulatory period + 6 years to account for economic evaluation)
  • CPP Program: mass-based with new source complement for all U.S. states
    • Mass limits for the CPP were applied using the Constraint table
    • Mass limits were set to arbitrarily high values in the Constraint table for the “No CPP” case.
  • RPS targets were not explicitly enforced in this particular experiment. Future studies will account for these.
  • LT Logic: MIP Maximize Value objective function

Notations

  1. “CPP” – Convergent result from integrated gas-power model with CPP mass limits.
  2. “No CPP” – Convergent result from integrated gas-power model with arbitrarily high mass limits.
  3. “Starting Point” – Gas prices used in the first iteration of integrated gas-power modeling.
    • This is the same for both “CPP” and “No CPP” case.

Quantifying the CPP vs. No CPP

Impact on Gas and Electricity Prices

  1. Both No CPP and CPP cases have generally lower prices than the Starting Point case in our experiment. However, post-2030, CPP prices are higher than the Starting Point.
    • This happens due to capacity expansion in both markets.
    • We stress that the final convergent solutions are independent of the Starting Point case. The lower prices in CPP and No CPP cases compared to the Starting Point case are a feature of our particular setup. If we had selected any other starting price trajectories, the integrated NG-power feedback model would have converged on the same CPP and No CPP price trajectories.
  2. CPP prices are always higher than the No CPP case.
    • This is likely driven by increased NG consumption in CPP over No CPP case.

This behavior was observed in all major gas hubs. Figure 1 shows the average monthly Henry Hub price (in $/mmBTU) for the three cases.

Figure 1: Monthly gas prices at Henry Hub for all three cases.

Figure 2 presents the monthly average power prices in a representative AURORAxmp zone.

Figure 2: Average monthly price in AURORAxmp zone PJM_Dominion_VP with and without CPP.

Figure 3 shows the impact of CPP as a ratio of average monthly prices in AURORAxmp’s zones for the CPP case over No CPP case. As expected, power prices with the additional CPP constraints are at the same level or higher than those in the No CPP case. However, it is interesting to note that the increase in power prices happens largely in the second half of CPP regulatory period (2026 onwards). It appears that while gas prices go up as soon as the CPP regulation is effective, there is latency in the increase in power prices.

Figure 3: Impact of CPP on electricity prices expressed as a ratio of CPP prices over No CPP prices.

Figure 4 presents a comparison of total annual production cost (in $billions) for each of the three regions.

Figure 4: Total annual production costs (in $billions) by region for the CPP and No CPP cases.

Figure 5 presents the same comparison as a percentage increase in production cost for the CPP case. The results show that while the CPP drives up the cost of production in all regions, the most dramatic increase is likely to occur in the Eastern Interconnect.

Figure 5: Percent increase in production cost for the CPP case.

Electricity Capacity Expansions

Comparing the power capacity expansions in Figure 6 and Figure 7, we see that AURORAxmp projected building more SCCTs in the CPP case than in the No CPP case in the Eastern Interconnect. We believe this is primarily driven by the higher gas prices in the CPP case. SCCTs typically face slightly higher fuel prices than CCCTs, which for the most part get their fuel directly from the gas hub. In this long-term analysis, AURORAxmp seeks the mix of new resources that is most profitable while adhering to all of the constraints. The higher gas prices in the CPP case are just high enough to make the SCCTs’ return on investment whole.

Figure 6: Capacity expansion for Eastern Interconnect – No CPP case.

Figure 7: Capacity expansion for Eastern Interconnect – CPP case.

Table 1: Capacity expansion by fuel type in total MW.

Build (MW)    East Int.             ERCOT               WECC
              CPP       No CPP     CPP       No CPP    CPP       No CPP
CCCT          206,340   207,940    45,960    29,850    25,040    23,400
SCCT          49,082    1,932      1,030     630       2,435     2,530
Solar         200       300        200       100       200       400
Wind          6,675     0          400       100       1,400     0
Retired       54,563    8,899      16,051    10        10,669    8,417

Table 1 shows the details of power capacity expansion in the three regions with and without CPP emission constraints. In addition to expanding SCCTs, CPP implementation incentivizes growth in wind generation and accelerates retirements. Coal and peaking fuel-oil units make up the majority of economic retirements in the CPP case.

Fuel Share Displacement

Figure 8 shows the percent share of the three dominant fuels used for power generation: coal, gas, and nuclear. Figure 9 shows the same data as the change in fuel percentage share between the CPP and No CPP cases. Looking at North America as a whole, we see that coal-fired generation is essentially being replaced by gas-fired generation. Our regional data show that this is most prominent in the Eastern Interconnect and ERCOT regions.

Figure 8: Percentage share of dominant fuel type.

Figure 9: Change in fuel share for power generation (CPP – No CPP).

Natural Gas Pipeline Expansions
The following chart presents a measure of needed additional capacity for the two cases. The needed capacity is highly seasonal, so the real expansion need would follow the upper boundary for both cases.

 

Figure 10: Additional NG pipeline capacity needed for the CPP and No CPP cases.
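The "upper boundary" logic amounts to a running maximum: pipeline capacity, once built, stays in service, so the build-out must track the highest seasonal need seen so far. A minimal sketch with invented monthly figures (in bcf/day, not values from this study):

```python
# Invented monthly incremental capacity need, in bcf/day.
monthly_need = [0.2, 0.5, 0.3, 0.1, 0.0, 0.0, 0.1, 0.2, 0.4, 0.8, 0.6, 0.9]

built, peak = [], 0.0
for need in monthly_need:
    peak = max(peak, need)  # capacity is added, never removed
    built.append(peak)

print(built)  # the non-decreasing "upper boundary" of the seasonal need
```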

Our analysis shows that the CPP will drive an increase in natural gas consumption for electricity generation. The following chart quantifies the additional capacity required to meet CPP demand for NG.
Additional NG capacity required, CPP vs. No CPP (bcf/day).

While the analysis presented here assumes a very specific CPP scenario, we stress that the integrated gas-power modeling is an apt tool for obtaining key insights into the potential impacts of CPP on both electricity and gas markets. We are continuously refining the AURORAxmp®-GPCM® integration process as well as performing impact studies for different CPP scenarios. We plan to publish additional findings as they become available.


Integrated Modeling of Natural Gas & Power

Natural gas (NG) and electric power markets are becoming increasingly intertwined. The clean burning nature of NG, not to mention its low cost due to increases in discovery and extraction technologies over the past several years, has made it a very popular fuel for the generation of electricity. As a result, the power sector is consistently the largest NG consumer. For example, in 2014, 30.5% of the total NG consumption in the United States was used for the generation of electricity (Figure 1).

 

Figure 1: U.S. Natural Gas Consumption by Sector, 2014. Source

According to EIA’s Annual Energy Outlook (AEO) 2015 projections,

“…natural gas fuels more than 60% of the new generation needed from 2025 to 2040, and growth in generation from renewable energy supplies most of the remainder. Generation from coal and nuclear energy remains fairly flat, as high utilization rates at existing units and high capital costs and long lead times for new units mitigate growth in nuclear and coal-fired generation.”

Economic, environmental and technological changes have helped NG begin to displace coal from its dominant position in power production. Although it was just for a single month, NG surpassed coal for the first time as the most used fuel for electricity generation in April 2015. The EIA also notes that considerable variation in the fuel mix can occur when fuel prices or economic conditions differ from those in the AEO 2015 reference case. The AEO reference case assumes adoption of the Environmental Protection Agency’s (EPA) implementation of the Mercury and Air Toxics Standards (MATS) in 2016, but not the Clean Power Plan (CPP). Adoption of the CPP, along with favorable market forces, could change the projections of the AEO 2015 reference case significantly. There is a consensus within both the NG and power industries that NG-fired power generation will likely increase with the adoption of the CPP.

Quantifying such a trend is non-trivial, but it is crucial for stakeholders and regulators in both gas and power markets to fully understand what the future holds. Proper accounting of the interdependencies between NG and power markets is integral to the quality of any long-term predictions. Approaches for modeling an integrated NG-power capacity expansion that account for economics and market operations are key to the most effective analysis.

The issue of gas-power integration has been a topic of active interest in the industry, and that interest is increasing. For example, the Eastern Interconnection Planning Collaborative coordinated a major study in 2013-2014 to evaluate the capability of NG infrastructure to satisfy the needs of electric generation, identify contingencies that could impact reliability in either direction, and review dual-fuel capability. Likewise, the notorious “polar vortex” during the winter of 2013-2014 caused unusually cold weather in the New England region, which “tested the ability of gas-fired generators to access fuel supplies,” and caused ISO-NE and others to acknowledge the need to further investigate the issues affecting synchronization between gas and electric systems. More recently, companies like PIRA Energy are sharpening their focus on the interdependencies between gas and electric power.

There is a need for new and improved modeling approaches that realistically consider this growing gas-power market integration. An even greater need is to integrate the modeling of these markets in a way that is both efficient and practical for the end user, and still able to produce commercially viable results. EPIS has extensively tested interfacing AURORAxmp with GPCM, a calibrated NG model developed by RBAC, Inc. Several organizations and agencies have found this approach successful. Utilizing the two models allows us to develop projections for endogenously derived capacity additions (in both electric generation expansion and gas-pipeline expansion), electricity pricing, gas usage and pricing, etc. which are consistent between the two markets. This consistency leads to greater insight and confidence to aid decision-makers.

Figure 2: Abstract representation of integrated NG-power modeling using AURORAxmp and GPCM.

Although the industry is now anxiously waiting for the judiciary to weigh in on the legality of CPP regulations, there is a consensus that some form of carbon emission regulation will likely be in effect in the near future. Some states, such as Colorado, have already undertaken several regulatory initiatives and may implement a state-level CPP-like emissions regulation even if the federal plan is vacated by the courts.

As part of our ongoing research on the topic of gas-power modeling, we have designed and executed a series of test scenarios comparing the standard calibrated cases of AURORAxmp and GPCM against a potential implementation of the CPP. If the proposed form of the CPP is upheld in the courts, states have a number of implementation options. At this early stage, there is no good evidence that one option will be more popular than another, which necessitated some broad assumptions in our experimental gas-power integration process. In our test scenarios, we assumed that all states would adopt the mass-based goal with the new resource complement option.

An integrated gas-power framework allows us to better understand the most probable direction for the two markets. Our integrated GPCM-AURORAxmp CPP test scenario for the Eastern Interconnect took 7 iterations to converge to a common solution that satisfied both markets. By comparing resulting capacity expansions, fuel share changes, and gas prices between the starting point (Iteration 0) and ending point (Iteration 6) we get a sense of how the markets will coevolve.

Starting capacity expansion in the Eastern Interconnect for GPCM-AURORAxmp model.

Figure 3: Starting capacity expansion in the Eastern Interconnect for GPCM-AURORAxmp model.

Figure 3 shows the capacity expansion resulting from Iteration 0, the starting point of the integrated iterations. Iteration 0 is essentially a standalone power model with no regard for the impact the capacity expansion would have on the gas market. Figure 4 shows the capacity expansion after Iteration 6.

Resulting capacity expansion in the Eastern Interconnect for GPCM-AURORAxmp model.

Figure 4: Resulting capacity expansion in the Eastern Interconnect for GPCM-AURORAxmp model.

The convergent NG prices were lower in Iteration 6 than in Iteration 0 at all major gas hubs. Figure 5 shows the monthly prices at Henry Hub for both iterations. The lower gas prices are unintuitive, but plausible: the combined gas-power sector has several interdependent market forces. We are currently working with gas experts to understand the mechanisms that could lead to lower gas prices. We hypothesize that accounting for capacity expansion in both markets is one of the drivers of this behavior; our findings will be reported in a future publication.

Comparison of starting and ending price trajectories with integrated GPCM-AURORAxmp model.

Figure 5: Comparison of starting and ending price trajectories with integrated GPCM-AURORAxmp model.

The lower gas prices highlight one of the key benefits of integrated gas-power models. Standalone modeling frameworks are likely to misrepresent the impact of complex cross-market mechanisms. Integrated models avoid this pitfall by explicitly modeling each market, making them a more apt tool for evaluating policies such as the CPP. AURORAxmp provides the capability to model any of the implementation plans that states might adopt in the future – rate-based, mass-based, emission trading schemes and so forth. The ability to interface with widely used NG models, such as GPCM, provides a convenient option for analysts to confidently navigate the highly uncertain future of intertwined NG and power markets.


The New Electric Market in Mexico

The Role of Zonal Resource Planning Analyses

On January 26, 2016 a once-in-a-lifetime event occurred that may have been overlooked by the casual observer: Mexico launched the first phase of its reformed, now competitive, electric market. The day-ahead market began for the Baja Mexico interconnection and is the first component of a comprehensive change to the nation’s electric system.

Over the last few years, sweeping market reforms and designs were drafted, approved by the government, and are now beginning to be implemented in a fundamental shift for electricity in Mexico. The expectation is that incorporating a market structure will modernize a constrained and aging system, improve reliability, increase development of renewable generation and drive new investment.

A market shift like this underscores the critical need to produce meaningful and accurate analyses for long-term resource planning, in addition to participating in the day-ahead nodal market.

The importance of data availability to market participants cannot be overstated. As a result of the market reforms in Mexico, the sole utility, Comisión Federal de Electricidad (CFE), is being split into multiple entities, and government organizations are being restructured to address the change from a state-run system to a competitive marketplace. Yet the detailed data required for trading activities, such as those begun in January, and to support the proposed nodal market is difficult to obtain. Sources for much of this data are still being determined, and in some cases the data is not yet available.

However, for typical generator development and economics, investment, and lifecycle forecasting – studies that require 30-40 year planning horizons – data is available. Resource planning analytics have become imperative to the development of new generation and transmission, informing investment in the energy sector, producing integrated resource plans for utilities, as well as numerous different studies for other stakeholders. Planning tools like AURORAxmp play a key role in these analyses, but so does the need for accurate market data.

Dispatch simulation models used for these studies typically define market topographies at the zonal (or control area) level. Mexico is currently divided into nine of these zones, or “control regions.”

Figure: New electric market control regions in Mexico.

Each of these zones contains generator information, load/demand information, and aggregated transmission capacities to/from adjoining zones. This data can be used by the dispatch simulation to forecast prices, value, risk, etc. for the study period. In the case of resource planning, it can produce detailed capacity expansion analyses to:
- Understand the value and operation of existing units.
- Determine whether to retire uneconomic or obsolete generators.
- Consider the value and performance of new generation that may have been added by the simulation.

Analysts can specify additional information such as new generation technologies (e.g. renewable generator options), capital costs, return components and other financial information to produce results that will inform build/buy decisions.
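Those inputs feed a discounted cash-flow screen for each candidate unit. The sketch below is a hypothetical illustration of such a build/buy screen (the capital cost, margin, and discount rate are invented, not AURORAxmp defaults or outputs):

```python
def npv(cash_flows, rate):
    """Net present value of end-of-year cash flows at the given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Hypothetical 100 MW candidate unit.
capital_cost = 90_000_000    # $ overnight cost
annual_margin = 9_500_000    # $ per year of energy + capacity margin
life_years = 30
discount_rate = 0.08

value = npv([annual_margin] * life_years, discount_rate) - capital_cost
decision = "build" if value > 0 else "do not build"
print(f"NPV: ${value:,.0f} -> {decision}")
```

In practice, the margin stream would come from the dispatch simulation's forecast of zonal prices and unit operation rather than a flat annuity.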

AURORAxmp has been used in a variety of studies in Mexico since 2002. Consultants and IPPs have utilized the software to produce meaningful results used in long-term resource planning decisions, and the zonal topography has provided the advantage of demonstrating value in the current market.

Developing a solid fundamental outlook that allows the assessment of potential long-term risks and opportunities is imperative for decision making and sound financial planning, whether you are assessing the development of a new power plant or acquiring an existing asset in Mexico. The wholesale power market in Mexico is expected to grow from a day-ahead and real-time nodal market to include traded pricing hubs with a futures market. A zonal model using AURORAxmp can provide an invaluable tool for long-term price forecasting, scenario analysis and asset valuation for the new Mexican reality.
– Marcelo Saenz, Pace Global, A Siemens Business

Although the proposed market will eventually operate at the nodal level, long-term studies at the zonal level remove the effects of temporary events at the nodal level, thus providing a more stable result for financial decisions.

AURORAxmp can robustly simulate both zonal and nodal markets. However, its capabilities in long-term resource planning analysis will continue to be especially important for markets, like Mexico, that will undergo enormous change and growth over the next few years.
