In-Home Storage: The Virtual Power Plant

Rapid Growth

Solar and wind are the most popular renewable resources worldwide, but because of their intermittent and unpredictable output, utilities still rely heavily on natural gas and coal. When renewable technologies are paired with energy storage, however, they can smooth out load fluctuations and have the potential to significantly change the generation mix.
Total energy storage deployment has increased dramatically in the past few years because of low-carbon, clean energy policies, and it is anticipated to grow even more in the near term. GTM Research expects the U.S. energy storage market to reach 2.5 GW of annual deployments by 2022, with residential installations contributing around 800 MW.

Figure: U.S. energy storage deployment forecast. Source: GTM Research

How Does It Work?

Energy storage is essentially a three-step process: power is drawn from the grid, solar panels, or wind turbines; stored (the charging phase) during off-peak periods when power prices are lower; and returned to the system (the discharging phase) later, during on-peak periods when prices are much higher.
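
As a rough illustration of the charge/discharge economics (all prices and the round-trip efficiency below are illustrative assumptions, not figures from this article):

```python
# Simple storage arbitrage estimate: charge off-peak, discharge on-peak.
# Prices and efficiency are illustrative assumptions, not article data.
off_peak_price = 25.0          # $/MWh paid to charge
on_peak_price = 90.0           # $/MWh earned when discharging
round_trip_efficiency = 0.88   # fraction of charged energy returned
energy_charged_mwh = 10.0      # energy drawn from grid/PV during off-peak

energy_discharged_mwh = energy_charged_mwh * round_trip_efficiency
cost = energy_charged_mwh * off_peak_price
revenue = energy_discharged_mwh * on_peak_price
print(f"Charging cost:     ${cost:.2f}")
print(f"Discharge revenue: ${revenue:.2f}")
print(f"Arbitrage margin:  ${revenue - cost:.2f}")
```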


For electric vehicles (EVs), most charging happens overnight and on weekends, when prices are comparatively low and vehicles sit idle. As EVs continue to enter the mainstream market, they will raise off-peak demand (and prices) and contribute to load shifting.
Energy storage devices and EVs can either complement or compete with each other, but energy storage is the key enabler for EV charging during on-peak hours.

Different Market Players

Residential energy storage has been a holy grail for companies like Tesla, Panasonic, LG, Sunverge Energy, and Orison, with lithium-ion (Li-ion) batteries as the leading technology. Now, with plug-in electric and hybrid vehicles on the rise, automakers such as Tesla, Nissan, Mercedes-Benz, BMW, Renault, and Audi have also entered the residential market, integrating EV charging stations, battery storage, and rooftop solar so that, in essence, a residence operates as a virtual power plant.
Beginning in December of last year, Arizona Public Service Company deployed Sunverge Energy's energy storage hardware coupled with advanced, intelligent energy management systems that predict future load requirements and solar generation. Additionally, Tesla is enjoying significant market share, shown recently by Vermont-based Green Mountain Power's launch of a comprehensive solution to reduce customer electricity bills using Tesla's cutting-edge Powerwall 2 and GridLogic software.
A few other utility companies, especially in Florida and California, are also exploring residential energy storage programs, as shown in the figure below.

Figure: Utility residential energy storage programs. Source: Hawaii PUC; General Assembly of Maryland

So, what are some other current thoughts about the pros and cons of in-home energy storage?

Advantages

  • Energy storage reduces load fluctuations by providing localized ramping services for PV and ensuring constant, combined output (PV plus storage).
  • Improves demand response and reduces peak demand.
  • Extra savings for customers through net metering and end-user bill management.
  • Reduces reliance on the grid; customers can generate and store energy even during severe outages.

Drawbacks

  • Disposal of Li-ion batteries is not easy, and they are difficult to recycle.
  • Automakers such as Nissan and BMW are deploying second-life batteries, which come with reduced durability and reliability.


Concluding Thoughts

Clearly, wider acceptance of energy storage resources would be a game changer in the U.S. power sector. Utilities, consumers, and automakers all stand to benefit from the rapid growth of energy storage. With an increasing number of companies applying artificial intelligence and machine learning to energy management systems, the synergy with energy storage creates a smart, personal power plant that has tremendous potential to change the landscape of the energy industry.


Power Market Insights Finishes Strong in 2016

2017 promises to be an even better year of delivering valuable market insight and expertise

The EPIS blog, Power Market Insights, is nearly one year old and in that time has published editorial content with a great deal of practical information. The articles, authored by EPIS domain experts, were carefully researched and delivered valuable intelligence to the industry.

For example, an article on large-scale battery storage discussed technology issues and advances that affect the rapidly growing wind and solar market. The article quotes analyst predictions that battery storage costs will drop to $230/kWh by 2020, with an eventual drop to $150/kWh. It goes on to state that worldwide battery storage may grow to almost 14 GW by 2023.

Power Market Insights delivered a perspective on the new electric market in Mexico, weeks after that country’s most recent industry reforms were launched. The article reported the fundamental shift in the market and outlined how these reforms would “modernize a constrained and aging system, improve reliability, increase development of renewable generation and drive new investment.” The author discussed the role of zonal resource planning analysis and the importance of data availability. Months later, EPIS announced its Mexico Database for use with AURORAxmp.

Data played a large role in articles on European power market reporting changes and the EIA's easing of data accessibility. Both articles rely on the expertise of EPIS's Market Research team. The EIA data accessibility article discussed how improvements to the management and delivery of their datasets expand the list of tasks for which EIA data may be useful. For power modelers who were unaware of these changes, this information provides important insight that can make their jobs easier. Likewise, the discussion on European power market reporting changes informed readers of ways the available data, while improved, may differ among sources and offered an example of the importance of cross-checking sources.

Two articles lifted the hood to give readers a peek into the workings of algorithms and computing speed. The article on the algorithms at the core of power market modeling offered readers a foundational overview of the mathematical optimizations used in forecasting and analyzing power markets. The computing speed article explained Moore's Law, discussed how maxed-out processors are shifting focus to more cores, and how software architecture will soon lose its "free ride." All of this was put into the perspective of computing data like hourly dispatch and commitment decisions. Both articles enable readers to intelligently discuss the computing parameters that affect their daily performance.

Articles delved into industry issues such as the water-energy nexus, nuclear retirements, the California hydropower comeback, uncertainty in ERCOT markets, and the CPP. The writers brought considerable expertise to these pieces; for example, the author of the CPP articles read the entire 304-page filing in the Federal Register before distilling it down for readers to quickly digest.

A number of articles discussed issues faced by modelers as they work to forecast and analyze the market. Pieces on integrated modeling of natural gas and power, working with data in power modeling, the fundamentals of energy efficiency and demand response, and reserve margins offered real-world discussions designed to help AURORAxmp users and other industry professionals do their jobs better.

The blog's 2017 editorial calendar is being finalized now, and Power Market Insights will continue to publish high-quality articles of interest to energy and power market professionals. Look for feature editorials next year written by leading analysts and experts from across the industry. Put Power Market Insights on your must-read list.


European Power Market Reporting Changes

Data Transparency Doesn’t Always Mean Ease of Use

The ENTSO-E Transparency Platform has increased the amount of European power market data publicly available in recent years.  While not completely comprehensive, it does help consolidate a vast amount of information in a single location.  ENTSO-E (European Network of Transmission System Operators for Electricity) was established in 2009 for the purpose of "further liberalising the gas and electricity markets in the EU."  ENTSO-E represents 42 TSOs from 35 countries, including EU countries and non-EU countries such as Iceland, Norway and Turkey, among others.

Diverse Levels of Compliance

Unfortunately, the various TSOs have diverse levels of compliance in reporting data completely, or in some cases on a regular basis, as they follow their own time schedules and levels of detail.  Some appear to report only units with installed capacity above 10 MW, while others also report smaller units.  ENTSO-E provides data at two levels of detail: by unit and by country.  The by-country values are totals for the entire country for units above 1 MW.  By unit, ENTSO-E only asks that its members report details on units above 100 MW, but the actual minimum size for reported unit detail varies by country, as does the fuel type.  Some countries identify the fuel explicitly, while others simply identify units as thermal, which might be coal, natural gas, fuel oil, or a combination of fuels.  When comparing old data sources to each TSO's publicly released data, a complete and exact unit-by-unit match with ENTSO-E reported data is nearly impossible.

Reviewing ENTSO-E Data by Country

For example, EPIS recently performed an update to resources in Italy.  While gathering data from ENTSO-E at the country level, we found the following year-over-year comparison provided by ENTSO-E.

Figure 1: ENTSO-E installed capacity by fuel type, by country.

Note that the 2014 total of 102,547 MW is only a five percent variance from the 2015 total of 97,794 MW. The interesting values in this report, however, are the variances in the different fuel categories.  A number of Production Types are relatively close year over year, but notice that the "Other" category was ~37k MW in 2014 and ~14k MW in 2015, a 63% decrease for that fuel type.  Another set of values should also jump out at the casual observer: "Fossil Hard coal" increased from 1,360 MW to 6,386 MW.  Was Italy introducing new coal units?  No. They were simply modifying their reported fuel type to be more in line with ENTSO-E reporting policies.
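
The kind of category shift described above is easy to flag programmatically; the sketch below uses the two categories quoted in the text (approximate values in MW) and is illustrative only:

```python
# Year-over-year check of ENTSO-E installed capacity by Production Type.
# The two categories below echo the figures discussed above.
capacity_2014 = {"Other": 37_000, "Fossil Hard coal": 1_360}
capacity_2015 = {"Other": 14_000, "Fossil Hard coal": 6_386}

for fuel in capacity_2014:
    old, new = capacity_2014[fuel], capacity_2015[fuel]
    change_pct = (new - old) / old * 100
    print(f"{fuel}: {old:,} MW -> {new:,} MW ({change_pct:+.0f}%)")
```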

Differences in ENTSO-E Data by Unit

Next, we reviewed the ENTSO-E data by unit, which is required for units above 100 MW.

Figure 2: ENTSO-E 2015 installed capacity by fuel type, by unit.

The most striking item in this analysis is that even though the data is now at a finer level of detail (i.e., by unit), the "Other" category has grown to ~43k MW, larger than the by-country values of ~14k MW in 2015 and ~37k MW in 2014.

In other words, ENTSO-E's own by-unit data does not match its reported country-level totals. What is going on here?  When we researched further, we found that a large number of units that can rely on multiple fuels are categorized as "Other" in the by-unit report.  When we then condensed the Production Type detail a little further and compared the 2014 and 2015 by-country data to the 2015 by-unit data, we found this:

Figure 3: ENTSO-E capacity differences reported by country versus by unit.

After reviewing these summaries, we saw that the renewable fuels are fairly close when comparing by-unit to by-country totals: wind is comparable and GST is also very close, but solar does not compare well, since many units are under 1 MW and not included in the by-unit report.  This comparison also showed that the thermal and "Other" fuel totals together are fairly similar and make up over 60% of the total installed capacity in each report.

Moving Forward & Cross-checking

So where to go from here in making sense of reporting variability?  ENTSO-E is currently compiling data submitted by TERNA, the TSO in Italy, and we took a look at what data is available in that report.

Figure 4: TERNA 2015 installed capacity by fuel.

Two things to note here: the TERNA resource database reports only units 100 MW and larger, and it uses an even smaller set of Production Type groups.  Again, the total capacity reported by unit, ~73k MW, is very different from the ~93k MW in the previous report, but this is explainable because renewable sources generally have smaller installed capacities and are therefore excluded from this report.  Of note, no solar is reported here, only two wind units totaling 243 MW are included, and the reported hydro is approximately 60% of the total MW reported to ENTSO-E.  However, the thermoelectric total matches the ENTSO-E data fairly well at ~60k MW.

So, what have we seen in reviewing these three sets of data from these two sources?  ENTSO-E and TERNA have come a long way in providing data transparency, but as the details here show, there is still a long way to go before the data can be easily adopted without a lot of scrubbing.


Reserve Margins

Discussing reserve margins is often convoluted because of the various definitions and intricacies involved.  The basic principle is that reserve capacity is used to ensure adequate power supply.  Different types of reserves are defined on different time scales.  In the short term, operating reserves provide adequate supply in the case of sudden plant or transmission outages.  In the long term, planning reserves ensure adequate power supply given a forecasted load in the years ahead.  Both types of reserves are often expressed as a ratio of excess capacity (i.e., available capacity less demand) to demand.  In this blog post, we will discuss planning reserves: typical values, historical trends, market-to-market differences, and modeling within AURORAxmp.
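
As a worked example of that ratio (the capacity and demand figures are illustrative, not from any actual assessment area):

```python
# Planning reserve margin = (available capacity - peak demand) / peak demand
available_capacity_mw = 115_000   # anticipated resources (illustrative)
forecasted_peak_mw = 100_000      # forecasted peak demand (illustrative)

reserve_margin = (available_capacity_mw - forecasted_peak_mw) / forecasted_peak_mw
print(f"Planning reserve margin: {reserve_margin:.1%}")   # 15.0%
```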

Planning Reserves

Without adequate planning reserves, new generation may not be built in time, ultimately causing power disruptions.  But what is adequate?  In 2005, Congress passed the Energy Policy Act of 2005, which requires the North American Electric Reliability Corporation (NERC) to assess the reliability of the bulk power system in North America.  Part of NERC's responsibility is to periodically publish Long-Term Reliability Assessments (LTRAs), which include planning reserve targets, or reference margins.  Usually these are based on information provided by each governing body (e.g., ISO or RTO) in the assessment area.  If no such information is available, NERC sets the reference margin to 15% for thermal-dominated systems and 10% for hydro-dominated systems.  For the 2015 LTRA, the NERC reference margins range from 11% to 20% across the assessment areas, as shown in Figure 1.  The highest reference margin, 20% for NPCC Maritimes, is due to a disproportionate amount of load being served by large generating units.

Figure 1. 2016 planning reserve margins by NERC assessment area from the 2015 LTRA. The gold bars represent assessment areas with capacity markets.

In addition to reference margins and targets published by other entities, NERC publishes yearly anticipated planning reserve margins, out 10 years, for 21 assessment areas in North America.  To do this, NERC collects data on peak demand and energy, capacity, transmission and demand response from its regional entities.  Data submission is usually due in the first quarter of the report year.  This strategy represents a bottom-up approach to understanding reliability.

Forecasting Anticipated Planning Reserve Margins

Forecasted anticipated planning reserve margins can vary substantially from assessment year to assessment year, from area to area, and as a function of market structure.  To illustrate this, one-, five-, and 10-year forecasted anticipated planning reserve margins for PJM and ERCOT are shown in Figure 2.  The variability in anticipated planning reserve margin is similar in the two assessment areas and increases with the length of the forecast, presumably because uncertainty grows the further out the forecast extends.  Interestingly, the number of years with shortfalls (fewer reserves than the target) is much larger in ERCOT than in PJM.  PJM has a three-year forward capacity market while ERCOT is an energy-only market, so there is more incentive for long-term excess capacity in PJM.

Figure 2. Planning reserve margin targets (dashed line) and one-, five-, and 10-year anticipated planning reserve margins from the 2011 to 2015 NERC LTRAs.

As shown above, the year-ahead anticipated planning reserve margins are adequate in both ERCOT and PJM, suggesting long-term planning approaches are working in both markets; however, regional complexities can pose problems.  For example, MISO recently published the 2016 Organization of MISO States (OMS) survey to assess planning reserve margins.  In 2017, shortfalls are predicted in three zones: IL, MO, and Lower MI.  Excess capacity from other zones will be transferred to make up for the shortfall in the short term.  As with the NERC forecasts, uncertainty in the regional forecasted load is key to this issue and may increase or decrease the shortfall.

In addition to regional issues, the rapidly changing generation mix also poses challenges for quantifying adequate planning reserves.  NERC has recognized this and called for new approaches to assessing reliability in both the 2014 and 2015 LTRAs.  One specific issue is the disruption of traditional load shapes as solar resources are added.  A typical summer-peaking system may face reliability issues in the winter or other traditionally off-peak months, when demand is still high but solar output is low.  Considering secondary demand peaks, and thus secondary planning reserve margins, may be prudent in these situations.
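
One simple way to screen for the secondary peaks described above is to compute net load (demand less solar output) by month; the values below are made up for illustration:

```python
# Net load = demand - solar output at the time of the demand peak.
# A winter month can become the binding peak on a summer-peaking system.
# All values below are illustrative.
monthly_peak_demand_mw = {"Jan": 9_200, "Jul": 10_500, "Dec": 9_400}
monthly_solar_at_peak_mw = {"Jan": 300, "Jul": 1_900, "Dec": 200}

net_load = {m: monthly_peak_demand_mw[m] - monthly_solar_at_peak_mw[m]
            for m in monthly_peak_demand_mw}
binding_month = max(net_load, key=net_load.get)
print(net_load)                          # {'Jan': 8900, 'Jul': 8600, 'Dec': 9200}
print("Binding net-load peak:", binding_month)
```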

AURORAxmp and Planning Reserve Margins

In AURORAxmp, planning reserve margins are used in the long-term capacity expansion logic to guide new resource builds.  Our Market Research and Analysis team updates planning reserve margins annually based on the latest NERC LTRA.  Planning reserve margins can be specified at the pool or zone level, easily facilitating studies at varying spatial scales.  Risk studies can be conducted to quantify the impacts of uncertainty in each aspect of planning reserve margins on long-term resource builds.  Together, these features support cutting-edge analysis of the complexities of reserves.


The Algorithms at the Core of Power Market Modeling

In 2007, the U.S. government formed the Advanced Research Projects Agency-Energy (ARPA-E), which encourages research on emerging energy technologies. Last year the agency awarded about $3.1 million to the Pacific Northwest National Laboratory (PNNL) to work on a computational tool called High-Performance Power-Grid Operation (HIPPO) over the next few years. The research team will be led by an applied mathematician at PNNL and partnered with GE's Grid Solutions, MISO, and Gurobi Optimization. The group will seek improved ways to solve the unit commitment problem, "one of the most challenging computational problems in the power industry." The work highlights the general trend over the past twenty years, in this and other industries, of turning to mathematical optimization for answers to some of the most difficult scheduling and planning problems. What's astounding is the rate at which commercial mathematical solvers have responded to these needs with enormous leaps in algorithmic efficiency over a relatively short period of time.

At the core of most of the mathematical optimization used in power modeling is linear programming (LP). Linear programs are problems in which some linear function is maximized or minimized subject to a set of linear constraints. The mathematician George Dantzig invented the simplex algorithm in 1947, in advance of the day when computers could really take advantage of it. For example, in 1953 one implementation of the algorithm on a Card Programmable Calculator (CPC) could solve a certain 26-constraint, 71-variable instance of the classic Stigler Diet Problem in about eight hours. As computer technology advanced, though, the usefulness and power of the simplex algorithm specifically, and linear programming in general, became apparent. Advances in the algorithm combined with exponential computer speed improvements made linear programming a staple of problem solving by the early 2000s. In fact, algorithmic progress in linear programming (i.e., independent of computer speed improvements) gave a 3,300x improvement factor from 1988 to 2004. Coupled with actual machine improvements of 1,600x over the same horizon, this produced a 5,280,000x average improvement in solving linear programs!
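
As a concrete illustration, here is a tiny two-unit economic dispatch LP solved with SciPy's linprog; the costs, demand, and capacity limits are invented for the example:

```python
from scipy.optimize import linprog

# Minimize generation cost 20*x1 + 35*x2 ($/MWh times MW)
# subject to x1 + x2 = 150 MW of demand and unit capacity limits.
c = [20.0, 35.0]                # marginal cost of each unit ($/MWh)
A_eq = [[1.0, 1.0]]             # x1 + x2 must equal demand
b_eq = [150.0]                  # demand in MW
bounds = [(0, 100), (0, 120)]   # output limits of each unit

result = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(result.x)     # [100.  50.] -> cheap unit at max, balance from unit 2
print(result.fun)   # 3750.0 total cost
```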

While progress on linear programs has somewhat plateaued in recent years, improvements in mixed-integer programming (MIP) have continued at impressive rates. In its simplest form, a mixed-integer program is a linear program in which some of the variables are restricted to integer values. This integer restriction makes the problem NP-hard, meaning that a guaranteed polynomial-time algorithm for all MIPs will most likely never be found. And yet the MIP is at the center of an ever-increasing number of practical problems, like the unit commitment problem that the HIPPO tool mentioned above is meant to address, and it is only relatively recently that it became a practical problem-solving tool. According to one expert and active participant in the field, Robert Bixby,

“In 1998 there was a fundamental change in our ability to solve real-world MIPs. With these developments it was possible, arguably for the first time, to use an out-of-the box solver together with default settings to solve a significant fraction of non-trivial, real-world MIP instances.”

He provided this chart showing the improvements in one MIP solver, CPLEX, from 1991 to 2007:

Figure 1. CPLEX version-to-version pairs, 1991 to 2007.

This chart shows that over approximately 16 years, the machine-independent speed improvement was roughly 29,000x! The progress on developing fast algorithms to solve (or at least find good solutions to) mixed-integer programs has been simply explosive.

The importance of this development is highlighted by extensive use of MIPs by regional reliability organizations in the United States. An independent review published by the National Academies Press states that:

In the day-ahead time frame, the CAISO, ERCOT, ISO-NE, MISO, PJM, and SPP markets employ a day-ahead reliability unit commitment process… The optimization for the day-ahead market uses a dc power flow and a mixed integer program for optimization.

In other words, the MIP is at the core of day-ahead market modeling for these major reliability organizations. A presentation given a few years back by PJM shows their increasing need to solve very difficult MIPs in a shorter time frame. The presentation highlights the fact that PJM has a “major computational need” for “better, faster MIP algorithms and software.” The short slide deck states three times in different contexts the need in PJM for “even faster dynamic MIP algorithms.” The entity must solve their day-ahead model for the security constrained unit commitment (SCUC) problem in a four-hour window towards the end of each day, and the presentation explains that they “have a hard time solving deterministic SCUC models in the time allotted.” So the need for ever-improving mixed-integer programs in the energy industry doesn’t seem to be going away any time soon. And with the increasing complexity of problems such as renewable integration, sub-hourly modeling, and the handling of stochastics, the push for “better, faster MIP algorithms” will only continue.
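
To make the structure of such a problem concrete, here is a toy single-hour unit commitment MIP written with the open-source PuLP package. The three units and their parameters are invented for illustration; this is a minimal sketch, not PJM's actual SCUC formulation.

```python
import pulp

# Toy single-hour unit commitment: binary on/off decision plus dispatch level.
units = {               # name: (min MW, max MW, $/MWh, no-load $)
    "coal":   (150, 400, 22.0, 900.0),
    "ccgt":   (100, 300, 30.0, 400.0),
    "peaker": (20, 120, 65.0, 100.0),
}
demand = 520  # MW to serve in this hour

prob = pulp.LpProblem("unit_commitment", pulp.LpMinimize)
on = {u: pulp.LpVariable(f"on_{u}", cat="Binary") for u in units}
gen = {u: pulp.LpVariable(f"gen_{u}", lowBound=0) for u in units}

# Objective: energy cost plus no-load cost for committed units
prob += pulp.lpSum(units[u][2] * gen[u] + units[u][3] * on[u] for u in units)

# Meet demand, and respect min/max output only when a unit is committed
prob += pulp.lpSum(gen[u] for u in units) == demand
for u, (pmin, pmax, _, _) in units.items():
    prob += gen[u] >= pmin * on[u]
    prob += gen[u] <= pmax * on[u]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for u in units:
    print(u, int(pulp.value(on[u])), round(pulp.value(gen[u]), 1), "MW")
```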

So what does all of this mean for power modelers? Professional solvers’ ability to continue to improve LP/MIP algorithms’ performance will determine whether the most difficult questions can still be addressed and modeled. But, in addition to that, it is crucial that the simulation models that seek to mimic real-world operations with those solvers are able to intelligently implement the fastest possible optimization codes. As EPIS continues to enhance AURORAxmp, we understand that need and spend an enormous amount of time fine-tuning the LP/MIP implementations and seeking new ways to use the solvers to the greatest advantage. Users of AURORAxmp don’t need to understand those implementation details—everything from how to keep the LP constraint matrix numerically stable to how to pick between the interior point and dual simplex LP algorithms—but they can have confidence that we are committed to keeping on pace with the incredible performance improvements of professional solvers. It is in large part due to that commitment that AURORAxmp has also consistently improved its own simulation run time in significant ways in all major releases of the past three years. With development currently in the works to cut run times in half of the most difficult DC SCOPF simulations, we are confident that this trend will only continue in the coming years with future releases of AURORAxmp. As was said about the projected future development of mixed-integer programs, the performance improvement “shows no signs of stopping.”



Living in the Past?

Living in the past is not healthy. Is your database up-to-date? EPIS just launched the latest update to the North American Database, version 2016_v4, marking the fourth North American data update this year! Recent changes in the power industry present challenges to database management, which will be discussed in this post.

In general, the transformation in power generation sources in the U.S., coupled with evolving electricity demand and grid management, represents a paradigm shift in the power sector. In order to accurately model power prices in the midst of such change, one must have a model built on fundamentals and a database that is up-to-date, has reasonable assumptions, is transparent and is flexible. A recent post described the technical side of working with databases in power modeling. This entry outlines important changes in the East Interconnect, the impacts those changes have on data assumptions and configuration, and the steps we are taking to provide excellent databases to our clients.

Recent shifts in power generation sources challenge database assumptions and management. New plant construction and generation in the U.S. are heavily weighted towards renewables, mostly wind and solar, and as a result record generation from renewables has been reported across the East Interconnect. Specifically, on April 6, 2016, the Southwest Power Pool (SPP) set the record for wind penetration:

Figure 1. Record wind penetration levels in Eastern ISOs compared with average penetration in 2015. SPP holds the record, reported on April 6, 2016. Record sources: NYISO, SPP, MISO, ISO-NE, PJM. 2015 averages compiled from ISO reports, for example: NYISO, SPP, MISO, ISO-NE, PJM. *Average 2015 generation used to calculate penetration.

Similarly, the New York City area reached a milestone of over 100 MW of installed distributed solar resources. Accompanying the increase in renewables are increases in natural gas generation and reductions in coal generation. In ISO-NE, natural gas generation has increased 34 percentage points and coal has decreased 14 percentage points since 2000, as highlighted in its 2016 Regional Electricity Outlook. These rapid changes in power generation sources require frequent and rigorous database updates.

Continued changes in electric grid management in the East Interconnect also require flexibility in databases. One recent change was the Integrated System joining the Southwest Power Pool, with Western Area Power Administration's Heartland Consumers Power District, Basin Electric Power Cooperative and Upper Great Plains Region joining the RTO. Full operational control changed on October 1, 2015, expanding SPP's footprint to 14 states, increasing load by approximately 10 percent and tripling hydro capacity. Grid management change is not new; the integration of MISO South in 2013 is one example. Changes such as these require flexibility in data configuration that allows for easy restructuring of areas, systems and transmission connections.

Variability in parameters such as fuel prices and demand introduces further difficulty in modeling power markets. The so-called "Polar Vortex" weather phenomenon shocked northeastern power markets in the winter of 2013/2014 with cold temperatures and high natural gas prices, resulting in average January 2014 energy prices exceeding $180/MWh in ISO-NE. The polar opposite occurred this past winter: December 2015 was the mildest since 1960, and together with low natural gas prices, the average wholesale power price hit a 13-year low at $21/MWh. The trend continued into Q1 of 2016:

Figure 2. Monthly average power price in ISO-NE in Q1 2014 and 2016. Variability between the years is a result of high natural gas prices and cold weather in 2014 versus low natural gas prices and mild weather in 2016.

Whether from extreme events, evolving demand or volatile markets, capturing uncertainty in power modeling databases is challenging. In AURORAxmp, users can go one step further by performing risk simulations, specifying parameters such as fuel prices and demand to vary across a range of simulations. This is a powerful approach to understanding the implications of uncertainty within the input data.

The aforementioned changes in generation, grid management and demand offer exciting new challenges to test power market models and data assumptions. To test our platform, EPIS performs a historical analysis as part of each database release. Inputs of historical demand and fuel prices are used to ensure basic drivers are captured, and model output is evaluated not only in terms of capacity, but also monthly generation, fuel usage and power prices. The result of this process is a default database that is accurate, current, contains reasonable assumptions, is transparent and is flexible, ensuring you have the proper starting point for analysis and a springboard for success.

With the release of North_American_DB_2016_v4, EPIS continues to provide clients with superb data for rigorous power modeling. The 2016_v4 update focuses on the East Interconnect and includes updates to demand, fuels, resources, DSM and other miscellaneous items. Clients can log in to our support site now to download the database and full release notes. Other interested parties can contact us for more information.


The Fundamentals of Energy Efficiency and Demand Response

What are Energy Efficiency & Demand Response Programs?

Though the Energy Information Administration states that "there does not seem to be a single commonly-accepted definition of energy efficiency," efficient energy use, often simply called energy efficiency, refers to reducing the amount of energy required to provide the same quality of products and services. Examples include improving home insulation, installing fluorescent lighting and efficient appliances, and improving building design to minimize energy waste.

Demand response, according to the Department of Energy, is defined as, “a tariff or program established to motivate changes in electric use by end-use customers in response to changes in the price of electricity over time, or to give incentive payments designed to induce lower electricity use at times of high market prices or when grid reliability is jeopardized.” Utilities can signal demand reduction to consumers, either through price-based incentives or through explicit requests. Unlike energy efficiency, which reduces energy consumption at all times, demand response programs aim to shift load away from peak hours towards hours where demand is lower.

What are the Benefits of Energy Efficiency & Demand Response Programs?

Decreasing and 'flattening' the demand curve can directly contribute to improved system and grid reliability. This ultimately translates to lower energy costs and financial savings for consumers, assuming the energy savings are greater than the cost of implementing these programs and policies. In 2010, Dan Delurey, then president of the Demand Response and Smart Grid Coalition, pointed out that the top 100 hours (just over 1% of the hours in a year) account for 10-20% of total electricity costs in the United States. Slashing energy consumption during these peak hours, or at least shifting demand to off-peak hours, relieves stress on the grid and should make electricity cheaper.
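
As a sketch of how that concentration can be measured from hourly data (the synthetic prices below are illustrative and much less spiky than real market prices, so the computed share understates the 10-20% figure quoted above):

```python
import numpy as np

# Share of annual energy cost concentrated in the top 100 priced hours.
# Synthetic, illustrative data: real market prices are far spikier.
rng = np.random.default_rng(0)
prices = rng.gamma(shape=2.0, scale=20.0, size=8760)   # hourly $/MWh
load = 1_000 + 400 * rng.random(8760)                  # hourly MW

hourly_cost = prices * load
top_100_share = np.sort(hourly_cost)[-100:].sum() / hourly_cost.sum()
print(f"Top 100 hours carry {top_100_share:.1%} of annual energy cost")
```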

Additionally, decreasing energy consumption directly contributes to the reduction of greenhouse gas emissions. According to the International Energy Agency, improved energy efficiency in buildings, industrial processes and transportation prevented the emission of 10.2 gigatonnes of CO2, helping to minimize global emissions of greenhouse gases.

Lastly, reductions in energy consumption can provide domestic benefits in the forms of avoided energy capital expenditure and increased energy security. The chart below displays the value of avoided imports by country in 2014 due to the investments in energy efficiency since 1990:

Figure 1: Avoided volume and value of imports in 2014 from efficiency investments in IEA countries since 1990.

Based on these estimated savings, energy efficiency not only benefits a country's trade balance but also reduces its reliance on foreign suppliers to meet energy needs.

Modeling the Impacts of Energy Efficiency and Demand Response

Using AURORAxmp, we can quantify the impact of energy efficiency and demand response programs. In this simple exercise, we compare a case for California with 2 GW of energy efficiency and 2 GW of demand response against a case without either, from 2016 to 2030. The charts below show the average wholesale electricity prices and system production costs:

Figure 2: Average wholesale electricity price ($/MWh) and average system production cost (in thousands of dollars). Note these are 2014 real dollars.

Holding all else equal, adding demand response and energy efficiency programs into the system decreased average wholesale electricity prices by about $2.88/MWh (5.4%), while the average system production cost fell by $496 million (5.1%). This is a simple example in one part of the country, but one can easily layer in additional assumptions about the grid, resource characteristics, and load shape as desired.

Both demand response and energy efficiency programs are intended to be more cost effective and efficient mechanisms of meeting power needs than adding generation. Emphasis on the demand side can lead to lower system production costs, increased grid reliability, and cheaper electric bills; all of which lie in the best interest of governments, utilities, and consumers.


Integrated Gas-Power Modeling

Quantifying the Impacts of the EPA’s Clean Power Plan

Notwithstanding the recent legal stay from the U.S. Supreme Court, it is still important to understand the U.S. EPA's Clean Power Plan (CPP) and its impact in the larger context of natural gas markets and their role in electric power generation. Because these two markets are becoming ever more interrelated, integrated gas-power modeling is the most realistic approach for such analyses. EPIS has tested interfacing AURORAxmp® with GPCM®, a calibrated natural gas (NG) market model developed by RBAC, Inc. The following is a brief discussion of our experimental setup as well as some of our findings.

Integration Approach

Monthly prices for 39 major natural gas hubs over the next 20 years are represented in AURORAxmp (as an input). They were developed using GPCM's market model (as an output) in pipeline capacity expansion mode. AURORAxmp then simulates a long-term capacity expansion using the GPCM-generated gas prices and produces many results: power prices, transmission flows, and generation by resource and resource type, including gas-consumption data. This gas consumption (an output from AURORAxmp) is fed back into GPCM as gas demand from the electricity sector (an input to GPCM) for a subsequent market-balancing and pipeline capacity expansion simulation, which generates a new set of monthly gas hub prices. The iterative process begins at an arbitrary but plausible starting point and continues until the solution converges. Convergence is measured in terms of the changes in gas-burn figures and monthly gas hub prices between subsequent iterations.
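
As a rough sketch of the feedback loop just described (run_aurora and run_gpcm below are hypothetical stand-ins for the two models, not actual AURORAxmp or GPCM APIs, and the convergence tolerance is an arbitrary choice):

```python
# Hedged sketch of the iterative gas-power convergence loop described above.
def iterate_gas_power(run_aurora, run_gpcm, initial_gas_prices,
                      tol=0.01, max_iters=20):
    gas_prices = initial_gas_prices        # monthly $/mmBTU by hub
    prev_gas_burn = None
    for _ in range(max_iters):
        power_results = run_aurora(gas_prices)        # LT capacity expansion
        gas_burn = power_results["gas_consumption"]   # by region and month
        gas_prices = run_gpcm(electric_sector_demand=gas_burn)
        if prev_gas_burn is not None:
            change = max(abs(gas_burn[k] - prev_gas_burn[k]) / prev_gas_burn[k]
                         for k in gas_burn)
            if change < tol:               # gas burn has stabilized: converged
                return gas_prices, power_results
        prev_gas_burn = gas_burn
    raise RuntimeError("Gas-power iteration did not converge")
```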

This two-model feedback loop can be utilized as a tool to evaluate energy policies and regulations. To quantify the impact of an energy policy, we need two sets of integrated gas-power runs which are identical in all respects except the specific policy being evaluated. For example, to understand the likely impacts of emission regulation such as CPP, we need two integrated gas-power models with the identical setup, except the implementation of CPP.

Before presenting our findings on the impact of “CPP vs No CPP”, we first provide some further details on the setup of the GPCM and AURORAxmp models.

GPCM Setup Details

• Footprint: All of North America (Alaska, Canada, contiguous USA, and Mexico), including liquefied natural gas terminals for imports, and exports to rest-of-world.
• Time Period: 2016-2036 (monthly)
• CPP Program: All the effects of CPP on the gas market derived from changes to gas demand in the power generation sector.
• Economics: Competitive market produces economically efficient levels of gas production, transmission, storage and consumption, as well as pipeline capacity expansion where needed.

AURORAxmp Setup Details

  • Footprint: All three major interconnections in North America (WECC, ERCOT, and the East Interconnect; which includes the contiguous U.S., most Canadian provinces and Baja California).
  • Time Period: 2016 – 2036 (CPP regulatory period + 6 years to account for economic evaluation)
  • CPP Program: mass-based with new source complement for all U.S. states
    • Mass limits for the CPP were applied using the Constraint table
    • Mass limits were set to arbitrarily high values in the Constraint table for the “No CPP” case.
  • RPS targets were not explicitly enforced in this particular experiment. Future studies will account for these.
  • LT Logic: MIP Maximize Value objective function

Notations

  1. “CPP” – Convergent result from integrated gas-power model with CPP mass limits.
  2. “No CPP” – Convergent result from integrated gas-power model with arbitrarily high mass limits.
  3. “Starting Point” – Gas prices used in the first iteration of integrated gas-power modeling.
    • This is the same for both “CPP” and “No CPP” case.

Quantifying the CPP vs. No CPP

Impact on Gas and Electricity Prices

  1. Both No CPP and CPP cases have generally lower prices than the Starting Point case in our experiment. However, post-2030, CPP prices are higher than the Starting Point.
    • This happens due to capacity expansion in both markets.
    • We stress that the final convergent solutions are independent of the Starting Point case. The lower prices in CPP and No CPP cases compared to the Starting Point case are a feature of our particular setup. If we had selected any other starting price trajectories, the integrated NG-power feedback model would have converged on the same CPP and No CPP price trajectories.
  2. CPP prices are always higher than the No CPP case.
    • This is likely driven by increased NG consumption in CPP over No CPP case.

This behavior was observed in all major gas hubs. Figure 1 shows the average monthly Henry Hub price (in $/mmBTU) for the three cases.

Figure 1: Monthly gas prices at Henry Hub for all three cases.

Figure 2 presents the monthly average power prices in a representative AURORAxmp zone.

Figure 2: Average monthly price in AURORAxmp zone PJM_Dominion_VP with and without CPP.

Figure 3 shows the impact of CPP as a ratio of average monthly prices in AURORAxmp's zones for the CPP case over the No CPP case. As expected, power prices with the additional CPP constraints are at the same level or higher than those in the No CPP case. However, it is interesting to note that the increase in power prices happens largely in the second half of the CPP regulatory period (2026 onwards). It appears that while gas prices go up as soon as the CPP regulation takes effect, there is latency in the increase in power prices.

Figure 3: Impact of CPP on electricity prices, expressed as a ratio of CPP prices to No CPP prices.

Figure 4 presents a comparison of total annual production cost (in $billions) for each of the three regions.

Figure 4: Total annual production costs by region for the CPP and No CPP cases.

Figure 5 presents the same comparison as a percentage increase in production cost for the CPP case. The results show that while the CPP drives up the cost of production in all regions, the most dramatic increase is likely to occur in the Eastern Interconnect.

Figure 5: Percent increase in production cost for the CPP case.

Electricity Capacity Expansions

Comparing the power capacity expansions in Figure 6 and Figure 7, we see that AURORAxmp projected building more SCCTs in the CPP case than in the No CPP case in the Eastern Interconnect. We believe this is primarily driven by the higher gas prices in the CPP case. SCCTs typically face slightly higher fuel prices than CCCTs, which for the most part get their fuel directly from the gas hub. In this long-term analysis, AURORAxmp seeks to create the mix of new resources that is most profitable while adhering to all of the constraints. The higher gas prices in the CPP case are just high enough to make the SCCTs' return on investment whole.

Figure 6: Capacity expansion for the Eastern Interconnect, No CPP case.

Figure 7: Capacity expansion for the Eastern Interconnect, CPP case.

Table 1: Capacity expansion by fuel type in total MW.

Build (MW) | East Int. CPP | East Int. No CPP | ERCOT CPP | ERCOT No CPP | WECC CPP | WECC No CPP
CCCT       | 206,340       | 207,940          | 45,960    | 29,850       | 25,040   | 23,400
SCCT       | 49,082        | 1,932            | 1,030     | 630          | 2,435    | 2,530
Solar      | 200           | 300              | 200       | 100          | 200      | 400
Wind       | 6,675         | 0                | 400       | 100          | 1,400    | 0
Retired    | 54,563        | 8,899            | 16,051    | 10           | 10,669   | 8,417

Table 1 shows the details of power capacity expansion in the three regions with and without CPP emission constraints. In addition to increasing the expansion of SCCTs, CPP implementation incentivizes growth of wind generation and accelerates retirements. Coal and peaking fuel oil units form the majority of economic retirements in the CPP case.

Fuel Share Displacement

Figure 8 shows the percent share of the three dominant fuels used for power generation: coal, gas, and nuclear. Figure 9 shows the same data as the change in fuel percentage share between the CPP and No CPP cases. Looking at North America as a whole, we see that coal-fired generation is essentially being replaced by gas-fired generation. Our regional data show that this is most prominent in the Eastern Interconnect and ERCOT.

Figure 8: Percentage share of dominant fuel types.

Figure 9: Change in fuel share for power generation (CPP minus No CPP).

Natural Gas Pipeline Expansions

The following chart presents a measure of the additional pipeline capacity needed in the two cases. The needed capacity is highly seasonal, so the real expansion need would follow the upper boundary in both cases.


Figure 10: Pipeline capacity needed for the CPP and No CPP cases.

Our analysis shows that the CPP will drive an increase in natural gas consumption for electricity generation. The following chart quantifies the additional capacity required to meet CPP demand for NG.
Figure: Additional NG capacity required to meet CPP demand, CPP vs. No CPP (bcf/day).

While the analysis presented here assumes a very specific CPP scenario, we stress that integrated gas-power modeling is an apt tool for obtaining key insights into the potential impacts of the CPP on both electricity and gas markets. We are continuously refining the AURORAxmp®-GPCM® integration process and performing impact studies for different CPP scenarios. We plan to publish additional findings as they become available.


Working With Data in Power Modeling

How Much Data Are We Talking About?

When planning the deployment of a power modeling and forecasting tool in a corporate environment, one of the most important considerations prior to implementation is the size of the data that will be used. IT personnel want to know how much data they are going to be storing, maintaining, backing up, and archiving so they can plan for the hardware and software resources to handle it. The answer varies widely depending on the types of analysis to be performed. Input databases may be relatively small (e.g. 100 megabytes), or they can be several gigabytes if many assumptions require information to be defined on the hourly or even sub-hourly level. Output databases can be anywhere from a few megabytes to several hundred gigabytes or even terabytes depending on what information needs to be reported and the required granularity of the reports. The data managed and stored by the IT department can quickly add up and become a challenge to maintain.

Here are a couple of example scenarios:

A single planning analyst does a one-year hourly run (8,760 hours) with modest reporting, which produces an output database of 40 MB. On average, the analyst runs about six studies per day over 50 weeks, and the total space generated by this analyst is a modest 75 GB. This is totally manageable for an IT department using inexpensive disk space.

Now, let's say there are five analysts, they need more detailed reporting, they are looking at multiple years, and a regulatory agency requires that they retain all of their data for 10 years. In this scenario, the data size jumps to 500 MB for a single study. Given the same six studies per day, those analysts would accumulate 3.75 TB of output data in a year, all needing to be backed up and archived for the auditors, which will take a considerable amount of hardware and IT resources.
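
Using round figures, the multi-analyst estimate above can be reproduced in a few lines (250 working days per year is our assumption, not a number from the scenario):

```python
# Rough output-storage estimate for the multi-analyst scenario above.
analysts = 5
studies_per_day = 6
working_days_per_year = 250     # assumed: ~50 weeks x 5 days
mb_per_study = 500              # detailed, multi-year reporting

total_mb = analysts * studies_per_day * working_days_per_year * mb_per_study
print(f"{total_mb / 1_000_000:.2f} TB of output per year")   # 3.75 TB
```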

What Are My Database Options?

There are dozens of database management systems available. Many power modeling tools support just one database system natively, so it’s important to know the data limitations of the different modeling tools when selecting one.

Some database systems are file-based. One popular file-based system is SQLite, which is fast, free, and flexible. It is very efficient and fairly easy to work with, but, like many other file-based systems, it is best suited for individual users. These systems are great options for a single analyst working on a single machine.
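
Because SQLite databases are single files, a script can inspect one with nothing but Python's standard library. The file name and table name below are hypothetical; any real output schema will differ.

```python
import sqlite3

# Open a file-based output database and list its tables.
# "study_output.db" and the "ZoneHour" table are hypothetical examples.
conn = sqlite3.connect("study_output.db")
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
print(tables)

# Query a hypothetical hourly zone-price table for average prices by zone.
rows = conn.execute(
    "SELECT zone, AVG(price) FROM ZoneHour GROUP BY zone").fetchall()
print(rows)
conn.close()
```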

Groups of analysts, on the other hand, might decide to share a common input database and write simultaneously to many output databases. Typically, this requires a dedicated server to handle all of the interaction between the forecasting systems and the source or destination databases. Microsoft SQL Server is one of the most popular database systems in corporate environments, and the technical resources for it are usually available in most companies. Once your modeling database is saved in SQL Server, assuming your modeling tool supports it, you can read from input databases and write to output databases simultaneously, and share the data with other departments using tools they are already familiar with.

Here is a quick comparison of some of the more popular database systems used in power modeling:

Database System | DB Size Limit (GB) | Supported Hardware | Client/Server | Cost
MySQL | Unlimited | 64-bit or 32-bit | Yes | Free
Oracle | Unlimited | 64-bit or 32-bit | Yes | High
MS SQL Server | 536,854,528 | 64-bit only (as of 2016) | Yes | High
SQLite | 131,072 | 64-bit or 32-bit | No | Free
XML / Text File | OS file size limit | 64-bit or 32-bit | No | Free
MS SQL Server Express | 10 | 64-bit or 32-bit | Yes | Free
MS Access (JET)* | 2 | 32-bit only | No | Low

A Word About MS Access (JET)*

In the past, many Windows desktop applications requiring an inexpensive desktop database used MS Access (more formally, the Microsoft JET Database Engine). As hardware and operating systems have transitioned to 64-bit architectures, MS Access has become less popular due to some of its limitations (2 GB maximum database size, 32,768 objects, etc.) as well as the growing number of alternatives. Microsoft has not produced a 64-bit version of JET and does not plan to. Several other free desktop database engines serve the same needs as JET but run natively on 64-bit systems, including Microsoft SQL Server Express, SQLite and MySQL, which offer many more features.

Which Databases Does AURORAxmp Support?

There are several input and output database options when using AURORAxmp for power modeling. Those options, coupled with some department workflow policies, will go a long way in making sure your data is manageable and organized.

EPIS delivers its native AURORAxmp databases in a SQLite format which we call xmpSQL. No external management tools are required to work with these database files – everything you need is built into AURORAxmp. You can read, write, view, change, query, etc., all within the application. Other users with AURORAxmp can also utilize these database files, but xmpSQL doesn’t really lend itself to a team of users all writing to it at the same time. Additionally, some of our customers have connected departments that would like to use the forecast data outside of the model, and that usually leads them to Microsoft SQL Server.

For groups of analysts collaborating on larger studies, AURORAxmp supports SQL Server databases, although their use isn't required. Rather than making SQL Server the database standard for AURORAxmp (which might be expensive for some customers), the input databases are delivered in a low-cost format (xmpSQL), and AURORAxmp offers the tools to easily change the format. Once the database is saved in SQL Server, you are using one of the most powerful, scalable, accessible database formats on the planet with AURORAxmp. Some of our customers also use the free version of SQL Server, called SQL Server Express Edition, which works the same way as the full version but has a database size limit of 10 GB.

Some additional options for output databases within AURORAxmp are:

MySQL: Open source, free, server-based, simultaneous database platform that is only slightly less popular than SQL Server.
XML/Zipped XML: A simple file-based system that makes it easy to import and export data. Many customers like using this database type because the data is easily accessed and is human readable without additional expensive software.
MS Access (JET): The 32-bit version of AURORAxmp will read from and write to MS Access databases. EPIS, however, does not recommend using it, given the other database options available and its 2 GB size limitation. MS Access was largely designed to be an inexpensive desktop database system and, given the limitations discussed above, we recommend choosing another option such as xmpSQL, SQL Server Express or MySQL, which offer far more features.

Where Do We Go From Here?

AURORAxmp is a fantastic tool for power system modeling and forecasting wholesale power market prices. It has been in the marketplace for over twenty years, and is relied upon by many customers to provide accurate and timely information about the markets they model. However, it really can’t do anything without an input database.

EPIS has a team of market analysts that are dedicated to researching, building, testing, and delivering databases for many national and international power markets. We provide these databases as part of the license for AURORAxmp. We have many customers that use our delivered databases and others who choose to model their own data. Either way, AURORAxmp has the power and the flexibility to utilize input data from many different database types.

If you are just finding AURORAxmp and want to see how all of this works, we have a team here that would love to show you the interface, speed and flexibility of our product. If you are already using our model but would like guidance on which database system is best for your situation, contact our EPIS Support Team and we’ll be glad to discuss it with you.


Large Scale Battery Storage

Some emerging technologies pave a bright path for the future of Large Scale Batteries

As we move towards more renewable energy sources and away from fossil fuels, we will need new technologies to capture energy production as well as new ways to store and deliver power. An ongoing issue with solar and wind production is the inability to predict exactly when power can be produced and dispatched. Additionally, we are seeing more interest in generating, storing, and time-shifting power in other ways to meet environmental goals. Large-scale batteries are an exciting step toward meeting, and supporting, some of those goals.

While we don't know what the future will bring, some forecasts predict substantial drops in the cost of various storage technologies as adoption in the marketplace grows. Among these, in 2014 Citigroup analysts predicted a drop in battery storage costs to $230/kWh by 2020 and a further drop to $150/kWh in the years after that. Whether from reduced cost or simply increased need, Navigant Research forecasts worldwide battery storage to grow to almost 14 GW by 2023.

Graph 1: Worldwide forecast of battery storage capacity. Source: IRENA.org

These potential cost reductions could even lead to some 'grid defection' as the economics change and become less of a hindrance to adoption.

Graph 2: Lowest current and projected battery cell price by type. Source: IRENA.org

Tesla founder Elon Musk has been working with lithium-ion technology, both for vehicle batteries and grid-level storage, for over five years. Li-ion is a familiar battery type, typically a pair of solid electrodes and an electrolyte, and has long been used in smaller applications. Tesla (and other companies) is currently testing larger-scale battery installations, currently only for households and businesses, with the aim of becoming scalable for utility systems. Some advantages and disadvantages of Li-ion are:

Advantages

· High-energy density

· Low maintenance

· Low self-discharge

Disadvantages

· High cost to manufacture

· Limited number of charging cycles (They age and will need to be replaced.)

· Heat generated during use

Figure 1: Source: EnergyStorage.org

Meanwhile, others are pursuing what is known as "flow battery" technology. The name is apt: the battery contains two liquids that flow next to each other, separated by a membrane, and create an electrical current as they move past each other. These batteries use two electrolytes stored in separate tanks, which are pumped into a central stack. The central stack has an ion-conducting membrane that captures the electrons as the two liquids are pumped through it. Currently, most new flow batteries use vanadium-based electrolytes. Some advantages and disadvantages of this technology include:

Advantages

· Electrolyte solutions are safe, non-flammable, and non-corrosive

· The two electrolytes are compatible and easily rechargeable

· Expected to handle many more cycles than Li-ion batteries

Disadvantages

· Maintenance cost of the tanks and pump system are high

· Overall cost in $/kWh is higher than Li-ion

· Low energy density

· The volume of space that the tanks may take up

Figure 2: Source: EnergyStorage.org

A promising hybrid of these two technologies is also being tested, using solid materials in two separate tanks with an electrolyte fluid that passes over them. The solid material can be lithium-based, while the flowing liquid conducts the electrons to the cell stack. Although still in testing, this approach looks to combine the scalability of flow batteries with the power density of Li-ion batteries.

Figure 3: Source: Electrochemical Society

The need for large-scale energy storage will continue to grow as renewable energy sources make up a larger portion of our generation mix. The intermittency of renewable generation and the need to maintain a stable grid will necessitate some form of storage. These are just a few of the most promising utility-scale battery technologies currently available. Skeptics could argue that technologies such as gas turbines or hydro, which are currently used to firm up intermittent renewables, will continue to do so in the future. That may be likely in some cases, but those technologies have their own issues: environmental and siting constraints for hydro, and pipeline access and saturation issues for small gas. Whatever technology moves us forward, it seems apparent that battery storage will be an integral part of that future.
