Contemplating the Water-Energy Nexus

The concept of the water-energy nexus is a broad, over-arching term for the relationship between water and energy production systems, including both electricity generation and fuel extraction. Water plays an important role in all phases of fuel extraction and electricity generation. In turn, energy is required to extract, convey, and deliver water of appropriate quality for its diverse uses. In 2014, the U.S. Department of Energy (DOE) released its report on the water-energy nexus, citing the need for joint water-energy policies and a better understanding of the interconnection, and of its susceptibility to climate change, as a matter of national security.


Figure 1: The complex dependence of water and energy systems. Source: US DOE

Power Generation and Water Consumption

As shown in the Sankey diagram above, thermoelectric power generation is the single largest user of water, mainly for cooling. Agriculture competes directly with the energy sector for water resources, putting the nation’s food security in competition with its energy sufficiency. Changes in climate, shifting precipitation patterns, and a greater frequency of extreme weather events all have the potential to alter the availability of water resources. These effects, combined with population growth, could intensify existing competition for water resources and impact energy production and distribution. Furthermore, both water and energy systems depend on weather. Loosely speaking, warmer and drier conditions tend to increase demand for electricity while reducing the availability of water for hydropower and cooling. Acknowledging the interactions between water and energy can help us understand how scarcity in one system tends to coincide with scarcity in the other and cause compounding issues.

Citing EIA Form 860 (EIA 2013a), the DOE report on the water-energy nexus notes that over 90 percent of plants planned for retirement use a water-intensive cooling process. Looking forward, only 45 percent of planned additions would not need cooling, meaning that the majority of new plants will still require water-intensive cooling. This complex interdependence of water and energy systems will have an impact on which energy technologies remain viable in the future. Changes in water resource availability would, of course, have the biggest influence on hydropower plants. The EIA states that, “In 2015, hydropower accounted for about 6% of total U.S. electricity generation and 46% of electricity generation from all renewables.” Beyond hydropower, the water-energy nexus may also affect the future of emerging technologies. The U.S. DOE report notes that while cost is the biggest hurdle today for Carbon Capture and Sequestration (CCS) technology, water usage may prove to be an equally large hurdle to overcome: water consumption, measured in gallons per kWh of electricity generated, is estimated to double if CCS is adopted in its current form. The water-energy nexus is also expected to have an impact on oil and gas exploration. Low-water technologies such as solar power may prove to be crucial in the near future.

The Importance of Strategic Planning

Situations like the recent drought in California will bring ever more scrutiny to water usage and its prioritization. Proper strategic planning will be paramount in dealing with water shortages and competing priorities, and a holistic modeling framework will be vital for such planning.

EPIS, LLC has been actively engaged in understanding the key challenges posed by the water-energy nexus. Recently, EPIS participated in the “Understanding the water-energy nexus: integrated water and power system modelling” workshop organized by the U.S. DOE and the European Union’s Joint Research Centre (JRC). The workshop, which brought together academics, industry experts, regulators and model developers, focused on developing a framework for an integrated water-energy model that captures the critical factors in a tractable manner. AURORAxmp’s ability to explicitly model energy conversion capabilities was seen as a possible approach for representing the complexities of the water-energy nexus. While the event provided us with insights into various subtleties of this problem, one thing became clear: as water demand and prioritization become a larger issue, further research and development are needed.


Nuclear Retirements – The Unknown Future of Nuclear Power in the United States

Nuclear Plants Nearing Retirement

The U.S. currently has over 2 GW of nuclear capacity scheduled to be retired within the next four years.  The three planned closures are the 678 MW Pilgrim Nuclear Power Station, the 610 MW Oyster Creek Generating Station, and the 852 MW James A. FitzPatrick Power Station.  The operators of these plants determined that, while they had received extensions to their initial licenses, remaining operational was not economically viable.


Figure 1: U.S. Nuclear Capacity Source

As of August 2016, announced retirements reaching even further into the future total above 7 GW, and with a few other plants politically tenuous, the uncertainty within the nuclear fleet is further compounded. Included in this 7 GW is the Fort Calhoun plant in Nebraska, which Omaha Public Power District shut down on October 24 of this year. However, this is just the tip of the iceberg when you consider the remaining plants and their need for future license extensions.

The Arduous Licensing Process

Nuclear plants are initially licensed for up to forty years by the U.S. Nuclear Regulatory Commission (NRC).  The operator may then apply for an additional twenty-year renewal; following that they can apply for a further extension of twenty more years.  All extensions are initiated by the operator and must be started sufficiently ahead of the expiration of their current license for the NRC to evaluate the safety and environmental impacts of an extension.  When operators apply more than five years prior to expiration, they can usually continue to operate while under this review.  If they don’t apply until within five years of the expiration, they may be forced to stop operating until they are approved.  The renewal process contains multiple cumbersome steps as shown in the diagram below.


Figure 2: License Renewal Process Source


Current Operating Nuclear Plants

The U.S. has 100 operating nuclear power plants; 45, or nearly half, have already operated through their forty-year operating license and are on their initial twenty-year extension.  Two of these are approaching the need to apply for their second extension: Peach Bottom in Pennsylvania and Surry in Virginia.


Figure 3: Active Nuclear Reactors  Source

To look at it another way, 81 plants have received their first renewal, an aging fleet in its own right.  But this means up to 30 GW of nuclear power has an unknown fate resting on a not-yet-granted second license extension alone.  To date, no renewal application has been permanently rejected, but several plants have needed to make extensive improvements to gain approval.


Figure 4: Licensed Nuclear Plants Source

According to a recent Moody’s report, today’s low gas-price environment is making it difficult for some smaller nuclear units to survive competitively in the power market.  Gas prices will likely play a key role in the viability of nuclear going forward, as even without costly improvements some nuclear generators are struggling to stay afloat.

Nuclear Plants Coming Online

Interestingly, there are still a number of newly constructed plants currently in the process of becoming licensed that will bring over 5,000 MW online by 2020; these include plants in Tennessee, South Carolina and Georgia.  Additionally, up to six more applications for a combined 10 new reactors are currently under review by the NRC.  A few companies are also looking into new designs that are smaller in scale, under 500 MW as opposed to 1,000+ MW, and more modular.  This new technology would give operators the flexibility to place plants on more urban sites as needed to accommodate grid needs.

The Future Role of Nuclear Power

While a few sites are in the process of retiring their reactors, nuclear power is likely to remain part of the energy solution for some time.  The minutiae of the policies may change, but one thing is certain: nuclear power will play a significant role in meeting U.S. electricity needs while curbing carbon pollution.  The U.S. Department of Energy reports that nuclear power has supplied about 20 percent of the country’s generation; the question is what hurdles nuclear owners and operators will have to overcome to maintain that level.


EPIS Releases Mexico Database for Use with AURORAxmp

Database will provide power market simulation, forecasting and analysis for Mexico and borders

Salt Lake City, Utah – October 26, 2016

http://www.globenewswire.com/news-release/2016/10/26/883166/0/en/EPIS-Releases-Mexico-Database-for-Use-with-AURORAxmp.html

EPIS, the market leader in power market simulation, forecasting and analysis, has released the Mexico Wholesale Market (Mercado Eléctrico Mayorista – MEM) database.  The database will be offered as an upgrade or add-in to its industry-leading AURORAxmp software.

Users of the AURORAxmp software, which is known for delivering unparalleled forecasting and analytical productivity, ease of use and support, will now have access to high-quality MEM data, pulled from trusted sources. The AURORAxmp MEM database will be regularly updated to reflect the most recent PRODESEN assumptions from SENER and other key sources, including CENACE data and analyst experience with CFE and other IPPs in Mexico.

“Recent and ongoing energy market reforms in Mexico, coupled with growth expectations, are creating significant investment opportunities in electric power generation and transmission infrastructure. The most recent PRODESEN (2016-2030) report estimates approximately $90B (USD) in generation investment opportunities and $25B (USD) in transmission and distribution investment opportunities,” said Ben Thompson, CEO of EPIS. “Our MEM database allows users of AURORAxmp to forecast and do market simulations, taking into account this important market.”

It is critical that data sources represent the current state of the National Electricity System and its expected evolution over the next 15 or 20 years. These sources need to be updated regularly, scrubbed to fill in gaps and reflect operational realities, and tested and calibrated in models so that the data are trustworthy and commercially reliable. The MEM database offers this needed level of quality.

The AURORAxmp MEM database is formatted, tested, and immediately ready to use for high-quality valuations, market analysis (including energy and capacity), as well as congestion and risk analysis of Mexican power markets. It offers cross-border analysis with boundary zones, including Belize, Guatemala, ERCOT (TX), WECC (AZ) and WECC (CAISO).

The AURORAxmp MEM Database includes primary Mexican power grids, including:

  • Sistema Interconectado Nacional (SIN)
  • Baja California (BCA)
  • Baja California Sur (BCS)

The systems are fully represented by 53 zones that align with PRODESEN and include “proxies” for transmission with boundary zones like Belize, Guatemala, ERCOT (TX), WECC (AZ) and WECC (CAISO).

Our product contains the best available data, refined to represent the current system’s operational and market realities, including:

  • Gas constraints
  • Hydro conditions
  • Policy initiatives, including clean energy goals
  • Well-documented sources

Highlights include:

  • Generation: Approximately 800 operational generators, with another 150 in advanced development (construction or LT auction winners), including supporting hourly wind and solar profiles for each zone
  • Fuel prices: Mexico natural gas hubs and Mexico diesel prices (driven to an extent by U.S. imports); Houston Ship Channel, Henry Hub, South Texas, Waha and SoCal Border gas prices; and distillate/residual fuel oil (FO2/FO6), coal and diesel from the U.S. EIA, adjusted for Mexican transport costs
  • Transmission: inter-zonal transfer limits (links) and underlying physical lines, with resistance values, from which loss assumptions can be derived

As with any AURORAxmp database, users can expect the highest level of software integration, model control and easy data exchange. Users can easily import and overlay their own assumptions and other data sources for more powerful, customized insights.

About EPIS

EPIS, LLC (www.epis.com) is the developer of AURORAxmp, the leading-edge software for forecasting wholesale power market prices. The company also provides ready-to-use data for North America and Europe, and unrivaled customer support to its growing body of customers worldwide. A variety of organizations, including utilities (large and small), independent power producers (IPPs), developers, traders, energy consultants, regulatory agencies and universities, use AURORAxmp to model power system dispatch and the formation of both nodal and zonal wholesale power prices, and to perform a wide range of associated analytics over the short- and long-term. AURORAxmp is a comprehensive solution to power market modeling needs. Offices are located in Salt Lake City, UT, Tigard, OR and Sandpoint, ID.


EIA Eases Data Accessibility for Power Modelers

The U.S. Energy Information Administration (EIA) has long been a key source for electrical market data. In the past, much of the EIA’s data have been useful for long-term planning, but have suffered from long lag times and cumbersome manual downloads. Some data have not been published until months or even years after the time period they describe. For example, a generator which began operating in May of 2012 might not have appeared in the EIA’s primary resource list (the EIA-860) until October or November of 2013. Historically, these issues have limited the usefulness of EIA data for many modeling purposes.

However, over the last two years, the EIA has made several improvements to the management and delivery of its datasets which some longtime modelers may not be aware of. These enhancements include the EIA-860M, the new Excel Add-in, and the U.S. Electric System Operating Data application. Together, they greatly expand the list of tasks for which EIA data may be useful.

Form 860M

The EIA-860 is a comprehensive list of grid-connected generators in the U.S. with capacity greater than 1 MW. No data set is perfect, but the EIA-860 has characteristics which are attractive to anyone concerned with data quality: the data are collected directly from plant owners who are legally required to respond, they are expressed in consistent terms nationwide, and they are vetted by EIA staff prior to release. While thorough and generally accurate, this process is slow and has only been conducted once each year, leading to lag times of 10-22 months.

In July of 2015, the EIA quietly started publishing data from a new monthly survey, the EIA-860M. This survey is sent to plant owners who reported capacity coming online or retiring in the near future in the most recent EIA-860. The EIA-860M keeps track of these expected changes and gives plant owners a chance to update the EIA on their progress mid-year. Much of this information has previously been available through the Electric Power Monthly reports, but the EIA-860M combines these data with similar information from the full EIA-860 to create a comprehensive list of active generators. Here are a few things to keep in mind when working with the EIA-860M:

  • It includes a smaller set of unit characteristics than the full EIA-860
  • It has a lag of 2-3 months, so responses for May are posted in late July
  • Like the EIA-860, the Retired list for the EIA-860M is not comprehensive. Only entities with operating plants are required to file with the EIA. So, if a company shuts down its last plant, it no longer responds to the EIA-860 or EIA-860M surveys, and its retired plants will not show up in the Retired list
  • Unlike the EIA-860, the EIA-860M is not vetted prior to release. In order to maintain a timely publishing schedule, the EIA-860M is posted “as-is” and is subject to update without notification

Despite these limitations, the EIA-860M is a relatively thorough and current census of existing and planned generating capacity in the US. It is a welcome addition to the EIA’s current offerings.
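For modelers who script their data preparation, the monthly file can be folded into an existing EIA-860 workflow with a few lines of pandas. Below is a minimal sketch; the file, sheet, and column names are illustrative placeholders, since the actual spreadsheet layouts vary by release:

```python
import pandas as pd

# Hypothetical local copies of the two surveys; real EIA file names,
# sheet names, and column headers vary by release and must be verified.
annual = pd.read_excel("eia860_generators.xlsx", sheet_name="Operable")
monthly = pd.read_excel("eia860m_latest.xlsx", sheet_name="Operating")

# Keep a common subset of columns for a combined active-generator list.
cols = ["Plant Code", "Generator ID", "Nameplate Capacity (MW)", "Status"]
combined = pd.concat([annual[cols], monthly[cols]], ignore_index=True)

# The monthly survey updates a subset of the annual records, so keep
# the most recent row for each plant/generator pair.
combined = combined.drop_duplicates(
    subset=["Plant Code", "Generator ID"], keep="last"
)
print(f"{len(combined):,} active generators after combining the surveys")
```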

Electric System Operating Data

The EIA has taken their first step into the world of intra-day reporting with the new U.S. Electric System Operating Data viewer. While the tool is still in Open Beta, and comes with a fair number of known issues, it promises to be an excellent source for very near-term information about the bulk electrical grid of the U.S.


Figure 1: EIA Operating Data – Status Map

Since July of 2015, the EIA has been collecting hourly data from all 66 Balancing Authorities operating in the U.S., including:

  • Day-ahead demand forecasts
  • Actual demand
  • Net generation
  • Interchange with surrounding Balancing Authorities

When everything is working smoothly, the EIA posts these data with a lag of only 80 minutes! These same data are available for download in table form and include API codes for pulling them directly into an Excel workbook using the add-in described below. The EIA also includes a series of pre-made charts and reports on daily supply-demand balance, discrepancies between forecast and actual demand, and much more.
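Those API codes are not limited to the Excel add-in. As an illustration, the same series IDs can be queried from any scripting language via the EIA open data API; the key and series ID below are placeholders, and the response layout reflects the v1 “series” endpoint as of this writing:

```python
import requests

API_KEY = "YOUR_EIA_API_KEY"      # free registration at www.eia.gov/opendata
SERIES_ID = "EBA.NYIS-ALL.D.H"    # illustrative: hourly demand for NYISO

resp = requests.get(
    "https://api.eia.gov/series/",
    params={"api_key": API_KEY, "series_id": SERIES_ID},
)
resp.raise_for_status()

series = resp.json()["series"][0]
# Each entry is a (timestamp, value) pair, newest first.
for timestamp, demand_mw in series["data"][:5]:
    print(timestamp, demand_mw)
```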

Even for long-term planners, the new datasets collected by the EIA will likely be useful. Never before has the EIA published such granular demand and interchange data. The interchange data in particular have historically been very difficult to find from a publicly available source. Also, Balancing Authorities are much more useful footprints for modeling purposes than the states by which the EIA currently partitions much of its information. Although it is still in its infancy, the Electric System Operating Data tool promises to open many avenues of analysis which were previously infeasible.

Excel Add-in

Released in February of 2015, the EIA Excel Add-in is useful for importing frequently updated data series into an existing process. While the EIA Interactive Table Viewer is handy for browsing and pulling individual data series, the data almost always need some sort of manipulation or conversion before being input into production cost models such as AURORAxmp. Whether you are converting between nominal and real dollars, changing units, extrapolating growth rates, or combining EIA data with other sources, a series of computations is usually required between raw data and useful inputs. The new Excel add-in allows a user to construct an Excel workbook with all the necessary conversions which can be updated to the latest EIA data with a single click.


Figure 2: EIA Excel Add-in Ribbon

Economic data series from the St. Louis Federal Reserve are also available through the same add-in, allowing the user to pull in indicators such as inflation or exchange rates alongside energy-specific data from the EIA. Not only does this save time, it ensures that the correct data series is queried each time the data are updated.
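As a concrete example of the kind of computation such a workbook chains onto these series, the snippet below deflates a nominal price series into real dollars. The index levels are made-up placeholders standing in for an inflation series pulled through the add-in:

```python
# Convert a nominal $/MWh series into real 2015 dollars with a price index.
# Index levels here are illustrative placeholders, not actual CPI data.
price_index = {2013: 98.2, 2014: 99.5, 2015: 100.0}
nominal_prices = {2013: 35.2, 2014: 38.9, 2015: 33.1}   # $/MWh, nominal

BASE_YEAR = 2015
real_prices = {
    year: price * price_index[BASE_YEAR] / price_index[year]
    for year, price in nominal_prices.items()
}
print(real_prices)   # each year's price restated in 2015 dollars
```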

The EIA has always been a key data source for energy analysts, and they are rapidly evolving to become even better. Staying up to date with their latest offerings can reveal relatively easy solutions for some of the toughest data management and upkeep issues encountered by power system modelers.


EMFC Delivers Practical and Strategic Insight

The conference was held at the Atlanta Evergreen Marriott Conference Resort, September 14-16, 2016


(Photos courtesy of Sahabia Ahmed, Entergy; Cameron Porter, Robin Hood Studios; and EPIS employees.)


EPIS headed south to Georgia’s beautiful Stone Mountain to host the premier Electric Market Forecasting Conference (EMFC) for the 19th consecutive year, Sept. 14-16. The EMFC featured a stellar lineup of speakers and activities to facilitate “Expanding Perspectives on the Future of Energy Markets” and provide a unique networking opportunity for the more than 75 industry experts and professionals in attendance.

The conference kicked off with a fun and relaxing evening at Stone Mountain Park’s Memorial Hall with an impressive view of the mountain and a one-of-a-kind Mountainvision® laser show, inclusive of fireworks and musical scores.

Thursday’s speakers focused on industry-wide issues, opening with Jeff Burleson, vice president, system planning of Southern Company, who said that utilities couldn’t ignore what happens on the customer side of the meter. Burleson went on to state that past planning has focused on wholesale generation and transmission, but going forward, utilities will need to consider how customers are shaping and changing their load with new technologies.

Other Thursday morning presentations included “Outlook on Opportunities in Renewable Development” from Mark Herrmann, vice president, structuring of NRG Energy; “Market Evolution for Renewable Integration” from Todd Levin, Ph.D., energy systems engineer of Argonne National Laboratory; and “Advances and Opportunities for Internal Combustion Engine Power Plants” from Joe Ferrari, market development analyst of Wärtsilä North America.

The afternoon proceeded with Lakshmi Alagappan, director, and Jack Moore, director, market analysis, of Energy and Environmental Economics (E3), who presented “California Clean Energy Policy: Implications for Western Markets”. In the session, Alagappan stated that as California’s aggressive RPS comes to fruition, the EIM market may help alleviate some over-generation by reducing thermal dispatch across the Western Interconnection to make room for cheap exports to flow out of California. Alagappan went on to say that over-generation is not an abstract concern: already, roughly 10 percent of dispatch hours in CAISO this year have resulted in zero or negative prices.

Following Alagappan and Moore, Larry Kellerman, managing partner of Twenty First Century Utilities, wrapped up Thursday’s session by proposing a new paradigm for utilities, one that would allow these organizations to take advantage of a low cost of capital and play a role in developments on the customer side of the meter. Strategies included personalized rate structures and curated services and technologies. Kellerman said, “We talk about energy efficiency as a resource, but energy efficiency is only a resource when you can deploy the capital and make the investment.”

Networking continued outside the conference room during Thursday’s afternoon activities. Some attendees took in the scenic views of Stone Mountain on a championship golf course, while others enjoyed a breezy cruise on beautiful Stone Mountain Lake in a 1940s-era Army DUKW, followed by a guided tour through Stone Mountain’s Historic Square highlighting early Georgia life.

During Friday’s Electric Market Forum, speakers and expert users of AURORAxmp showcased effective examples of how to enhance modeling endeavors.  The morning began with Morris Greenberg, managing director, gas and power modeling of PIRA Energy Group. Greenberg, focusing on “Integrating Natural Gas and Power Modeling”, said that the electrical sector is one of the most price-elastic categories of natural gas demand, and that combining gas and electrical models can capture feedback loops between gas and power markets. Greenberg continued by saying that as the electrical market’s share of total gas consumption increases, the behavior of the electrical market will have a larger and larger impact on gas prices.

Switching gears, Eina Ooka, senior structure and pricing analyst of The Energy Authority, gave a very well received presentation on “Discovering Insights from Outputs – Exploratory Visualization and Reporting Through R”. Ooka said, “Interfacing AURORAxmp with other tools, such as R, allows users to quickly and effectively perform detailed analysis by automating almost all stages of the process.” Ooka concluded with a detailed discussion and demonstration on the visualization of data to make complex information easily digestible.

Additional Friday presentations included “Investing in Mexico Gas and Power” from Brett Blankenship, research director, Americas primary fuel fundamentals, of Wood Mackenzie, and “Challenges of Forecasting Reliability Prices – Capacity Price in PJM & ORDC in ERCOT” from Joo Hyun Jin, commercial analysis, of E.ON Climate & Renewables North America.

The EMFC is a once-a-year opportunity for industry professionals. Attendees of the 19th Annual EMFC gained new connections and an enriched market perspective.  As one attendee put it, “I really enjoyed the [presentations]… it was great to have exposure to such a wide range of topics from such qualified speakers. Congrats for doing such a great job with conference planning and execution.” Join EPIS next year for the 20th anniversary in Las Vegas!

For more information on this year’s speakers, please visit http://epis.com/events/2016-emfc/speakers.html

To obtain a copy of any or all of the presentations from this year’s EMFC, please go to EPIS’s Knowledge Base using your login credentials here. If you do not have login credentials, please email info@epis.com to request copies.


New Developments in Computing Speed

Moore’s Law Explained

Since the beginning of modern computing early last century, processing speed and power have grown at an amazing rate.  Intel co-founder Gordon Moore predicted that the number of transistors in computer processors would double every two years.  Over the last half-century this hypothesis, known as Moore’s Law, has proven remarkably accurate.  Due to continuous innovation in the industry, clock speeds in computer chips have improved at a dramatic rate since the early 1970s. If airplane travel times had improved at the same rate over the same period, we would be able to get anywhere in the world in a matter of seconds (and it would cost pennies).

One of the great advantages of this improvement in CPU performance over the years has been the fact that every piece of software benefits automatically from a higher processor clock speed.  A computer with a faster clock speed can run the exact same program more quickly with no code changes to the software required.

Maxed Out Processors Shift Focus to More Cores

But the story of computing power has started to change.  Over the last decade, the clock speed of computer processors has begun to top out.  Around 2003, processor developers like Intel and AMD began moving away from efforts to push clock speeds ever higher and shifted toward increasing the number of processor cores in their chips.

The following graph shows the relationship between transistor density (in red), processor clock speed (or frequency, in green) and the number of processor cores (in black) over time.  The slowing of clock speed increases is clearly visible, as well as the shift toward adding more cores to the processors that have been produced over the last ten years.


Figure 1: Computing speed developments.  Source

Software Architecture’s Free Ride Ending

These additional cores allow modern processors to perform more tasks simultaneously.  Today’s consumer PCs generally have processors with between two and eight cores, while some server processors have as many as 22 cores on a single chip.  However, unlike a clock speed boost, the performance improvements that come with multiple processor cores don’t come for free.  Software has to be significantly re-architected to take advantage of all those cores.  In order for software to run on more than one core at a time, it must be broken down into tasks that can run simultaneously on the available cores.  This can only be done when a particular task doesn’t require the result of a previous task as input.  Additionally, software must be designed so that shared resources, such as databases and hardware devices, can be properly accessed by multiple tasks running at the same time.

This has specific application to power market modeling software, such as AURORAxmp, that simulates the commitment and dispatch of power plants.  Suppose, for example, that we want to model one full year of 8760 dispatch hours using multiple processors, and assume that we know the hourly load, generator availability, fuel prices, transmission capability, etc. for every hour.  If we had more than 12 available cores to work with, we might break up the run into 12 simultaneous simulations that each run one month of the year.  We could even get all output data results in one database that allows concurrent access such as SQL Server, and the total time to run the 12 months would approach 1/12th the time required to run the full year on one core (though in reality it would not be quite that good because of the overhead managing all the cores).
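As a sketch of that decomposition (not AURORAxmp’s internal implementation), the snippet below fans twelve hypothetical month-long simulations out to a pool of worker processes; `simulate_month` is a stand-in for a real single-month commitment and dispatch run:

```python
from multiprocessing import Pool

def simulate_month(month):
    """Stand-in for a single-month commitment/dispatch simulation.

    A real worker would load that month's hourly load, fuel prices, and
    outage data, solve the dispatch, and write results to a database
    that supports concurrent access (e.g., SQL Server).
    """
    hours_solved = 730  # roughly 8760 / 12
    return month, hours_solved

if __name__ == "__main__":
    # Run the 12 monthly simulations on up to 12 cores at once.
    with Pool(processes=12) as pool:
        results = pool.map(simulate_month, range(1, 13))
    print(results)
```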

So what’s the problem?  The hourly dispatch and commitment decisions in the different months are not independent.  Because of constraints that tie one hour’s solution to the next—such as generator minimum up and minimum down times, ramp rates, storage contents, hydro reservoir levels, annual emission limits, etc.—the simulation needs to know what happened in the previous hours to properly model the current hour.  The simplifying assumption that the operations of the power plants in each month are independent might be acceptable in some types of scenarios, but for a precise solution we simply can’t solve one hour until we know the solution from the previous hours.

Utilizing Multicore Advancement

But that doesn’t mean that there aren’t still great gains to be had in power market modeling software with multicore processors.  Certainly there is much processing of data into and out of this type of model that, if built properly, can take advantage of multiple processors.

For example, the standard DC power flow approximation using shift factors (multipliers used to calculate and constrain power flows) can require an enormous amount of computation.  A large system such as the Eastern Interconnect may well have over one billion non-zero factors that must be used in each hour’s simulation to calculate power flow between buses.  Intelligently using multiple processors to calculate those flows can drastically reduce the run time of these types of simulations.
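In concrete terms, that hourly calculation is one large, very sparse matrix-vector product: a shift-factor matrix maps net injections at each bus to flows on each monitored branch. The sketch below uses toy dimensions and random data purely for illustration; a real Eastern Interconnect case is orders of magnitude larger:

```python
import numpy as np
from scipy import sparse

N_LINES, N_BUSES = 5_000, 20_000    # toy dimensions for illustration

# Shift factors are mostly zero, so store them in a sparse matrix.
ptdf = sparse.random(N_LINES, N_BUSES, density=0.01,
                     format="csr", random_state=0)
rng = np.random.default_rng(0)
injections = rng.normal(size=N_BUSES)   # net MW injection at each bus

# One sparse multiply yields the flow on every line for this hour; the
# work parallelizes naturally by splitting the rows across cores.
flows = ptdf @ injections
print(flows[:5])
```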

Another place where utilizing multiple cores will help in this kind of software is in the mathematical solvers that perform the core optimizations.  Those solvers (such as Gurobi, CPLEX, and MOSEK) continue to improve their internal use of threading in their LP (linear programming) and MIP (mixed-integer programming) algorithms.  As they continue to get better at exploiting multiple processors, the power market models that use them will be significant beneficiaries.
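From the modeling side, that parallelism is typically exposed as a single solver parameter. Here is a minimal sketch with Gurobi’s Python interface (which requires a Gurobi installation and license; parameter names and default threading behavior differ between solvers and versions):

```python
import gurobipy as gp

model = gp.Model("dispatch")
# Allow the solver to use up to 8 threads in its parallel LP barrier
# and MIP branch-and-bound algorithms.
model.Params.Threads = 8

# ... build variables, constraints, and the objective here ...

model.optimize()
```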

We don’t know for sure what the next decade of computer processor improvements will bring.  We can undoubtedly expect some single processor speed improvements, but to keep the 2x trend of Moore’s Law going, it will almost certainly take a major effort on the part of software developers to utilize the new threading paradigm.  The capability of power market models to continue to tackle the most complex optimization problems with reasonable solution times may very well depend on their ability to embrace our new environment of multiprocessor architectures.


European Power Market Reporting Changes

Data Transparency Doesn’t Always Mean Ease of Use

The ENTSO-E Transparency Platform has increased the amount of European power market data publicly available in recent years.  While not completely comprehensive, it does help consolidate a vast amount of information in a single location.  ENTSO-E (European Network of Transmission System Operators for Electricity) was established in 2009 for the purpose of “further liberalising the gas and electricity markets in the EU.”  ENTSO-E represents 42 TSOs from 35 countries, including both EU countries and non-EU countries such as Iceland, Norway and Turkey.

Diverse Levels of Compliance

Unfortunately, the various TSOs have diverse levels of compliance in reporting data completely, or in some cases regularly, as they follow their own time schedules and levels of detail.  Some appear to report only units with installed capacity above 10 MW, while others also report smaller units.  ENTSO-E provides data at two different levels of detail: by unit and by country.  The by-country values are totals for the entire country for units above 1 MW.  By unit, ENTSO-E asks only that its members report details on units above 100 MW, but the actual minimum size for reported unit detail varies by country, as does the fuel-type detail.  Some countries identify the fuel explicitly, while others simply identify units as thermal, which might be coal, natural gas, fuel oil, or a combination of fuels.  When comparing old data sources to each TSO’s publicly-released data, a complete and exact unit-by-unit match with ENTSO-E reported data is nearly impossible.

Reviewing ENTSO-E Data by Country

For example, EPIS recently performed an update to resources in Italy.  While gathering data from ENTSO-E at the country level, we found this year-over-year comparison provided by ENTSO-E.


Figure 1: ENTSO-E: installed capacity by fuel type, by country Source

Note that the 2014 total of 102,547 MW is within about five percent of the 2015 total of 97,794 MW. But the interesting values in this report are the variances reported in the different fuel categories.  For instance, a number of Production Types are relatively close year-over-year, but notice that the “Other” category in 2014 was ~37k MW, while 2015 was ~14k MW, a 63% decrease for that fuel type.  Another set of values should also jump out at the casual observer: “Fossil Hard coal” increased from 1,360 MW to 6,386 MW.  Was Italy introducing new coal units?  No. They were simply modifying their reported fuel type to be more in line with ENTSO-E reporting policies.

Differences in ENTSO-E Data by Unit

Next we reviewed the ENTSO-E data by unit, which is required for units above 100 MW.


Figure 2: ENTSO-E 2015: installed capacity by fuel type, by unit Source

In this analysis, the most striking item is that while the data are now at a finer granularity (i.e., by unit), the “Other” category has grown even larger, to ~43k MW, compared with the by-country values of ~14k MW in 2015 and ~37k MW in 2014.

In other words, the by-unit data does not match the reported country-level totals. What is going on here?  Primarily, when we researched further, we found that a large number of units that can rely on multiple fuels are categorized as “Other” in the by-unit report.  When we then condensed the Production Type detail a little further and compared 2014 and 2015 by country to the 2015 by-unit data, we found this:


Figure 3: ENTSO-E: capacity differences reported by country or by unit

After reviewing these summaries, we saw that the renewable fuels are fairly close when comparing by-unit to by-country totals: wind is comparable and GST is also very close, but solar does not compare well since many units are under 1 MW and thus not included in the by-unit report.  This comparison also showed that the totals of thermal and “Other” fuels together are fairly similar and make up over 60% of the total installed capacity in each report.
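The condensing step itself is simple once both reports are in tabular form. Below is a sketch in pandas; the category mapping and megawatt values are illustrative stand-ins loosely based on the figures discussed above, not the actual ENTSO-E export:

```python
import pandas as pd

# Illustrative by-unit records; a real ENTSO-E export has many more rows.
by_unit = pd.DataFrame({
    "production_type": ["Fossil Hard coal", "Fossil Gas", "Other",
                        "Wind Onshore", "Hydro Water Reservoir"],
    "installed_mw": [6_386, 3_200, 43_000, 243, 14_000],
})

# Condense the detailed production types into broader groups so the
# by-unit totals can be compared against the by-country report.
groups = {
    "Fossil Hard coal": "Thermal + Other",
    "Fossil Gas": "Thermal + Other",
    "Other": "Thermal + Other",
    "Wind Onshore": "Renewable",
    "Hydro Water Reservoir": "Renewable",
}
by_unit["group"] = by_unit["production_type"].map(groups)
print(by_unit.groupby("group")["installed_mw"].sum())
```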

Moving Forward & Cross-checking

So where to go from here in making sense of reporting variability?  ENTSO-E is currently compiling data submitted by TERNA, the TSO in Italy, and we took a look at what data is available in that report.


Figure 4: TERNA 2015: installed capacity by fuel Source

Two things to note here are that the TERNA resource database only reports units 100 MW and larger, and that it uses an even smaller set of Production Type groups.  Again, the total capacity reported by unit is very different, ~73k MW versus ~93k MW in the previous report, but this is explainable: renewable sources generally have smaller installed capacities and are therefore not included in this report.  Of note, no solar is reported here, only two wind units totaling 243 MW are included, and the reported hydro is approximately 60% of the total MW reported to ENTSO-E.  However, the thermoelectric total matches the ENTSO-E data fairly well at ~60k MW.

So, what have we seen in reviewing these three sets of data from two sources?  ENTSO-E and TERNA have come a long way in providing transparency with their data, but as the details here show, there is still a long way to go before the data can be easily adopted without a lot of scrubbing.


Using AURORAxmp to Meet Renewable Portfolio Standards (RPS)

According to the National Renewable Energy Laboratory (NREL), a renewable portfolio standard (RPS) is a “regulatory mandate to increase production of energy from renewable sources such as wind, solar, biomass and other alternatives to fossil and nuclear electric generation.” In 1983, Iowa became the first US state to adopt a renewable portfolio standard. In the last two decades, over half of the states in the country have adopted some form of RPS. Below is a chart displaying renewable capacity additions by state between 2000 and 2015:


Source: Lawrence Berkeley National Laboratory

Though RPS can be enforced in several ways, the mandates typically require utilities to supply some portion of their electricity from renewable energy. The federal government, and sometimes state and local governments, provide financial incentives, often in the form of tax credits or rebates, to encourage investment in renewable energy. According to the Lawrence Berkeley National Laboratory, 60% of renewable electricity generation and 57% of renewable capacity builds since 2000 are tied to RPS mandates. The ultimate goal of these policies is to migrate away from fossil fuel generation in an attempt to reduce carbon and other emissions.

AURORAxmp provides the flexibility to model various state RPS mandates and can be used to measure the impact of RPS policies on system cost, zonal prices, and emission reductions. The built-in constraint logic makes it easy to specify the minimum amount of electric generation required from any specified fleet of resources. Multiple parameters apply the constraint geographically as well; for example, RPS constraints can be applied at the local, state, and national levels, and the model will solve all of them in the same run.
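To make the idea of a minimum-generation constraint concrete, here is a toy single-period dispatch in Python using the open-source PuLP library. This is not AURORAxmp’s constraint syntax, just an illustration of the underlying optimization construct; the fleet and cost numbers are invented:

```python
from pulp import LpMinimize, LpProblem, LpVariable, lpSum, value

DEMAND = 1_000.0     # MWh in a single illustrative period
RPS_SHARE = 0.30     # minimum renewable share of generation

# (cost $/MWh, max output MWh, renewable?) for a made-up three-unit fleet
units = {
    "coal": (25.0, 600.0, False),
    "gas":  (35.0, 500.0, False),
    "wind": (5.0,  400.0, True),
}

prob = LpProblem("rps_dispatch", LpMinimize)
gen = {name: LpVariable(name, lowBound=0, upBound=cap)
       for name, (_, cap, _) in units.items()}

prob += lpSum(units[n][0] * gen[n] for n in units)         # minimize cost
prob += lpSum(gen.values()) == DEMAND                      # meet demand
prob += lpSum(gen[n] for n in units if units[n][2]) >= RPS_SHARE * DEMAND

prob.solve()
print({name: value(var) for name, var in gen.items()})
```

In this toy case the solver dispatches the cheap wind unit to its 400 MWh limit, comfortably satisfying the 30% requirement, and fills the rest with coal; tightening RPS_SHARE forces more expensive trade-offs, which is exactly the system-cost effect described above.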

RPS constraints can also be defined in the form of renewable capacity rather than electric generation. For example, the Clean Power Plan (CPP), proposed by the EPA last year, contains several intricate details that specify how conventional fuel generators will be required to operate, both individually and as a group. Any capacity or generation constraints can be used in conjunction with a variety of other defined limits, such as emission rates, restrictions on fuel usage, and limitations on capacity factors. Additionally, these constraints are fluidly incorporated into various types of studies, such as long-term capacity expansion, risk, and scenario analysis. Below is a chart created using AURORAxmp to estimate the total system cost of different programs:

Figure: Total system cost of different programs

Generation attributed to RPS is expected to double by 2030 in the United States. As we look into the future, it is evident that the integration of renewable energy will continue to be a major point of interest in power markets. AURORAxmp offers an easy, reliable, and robust tool to analyze the impact of additional renewable generation on resources, the environment, and the electric grid.


Reserve Margins

Discussions of reserve margins are often convoluted because of the various definitions and intricacies involved.  The basic principle is that reserve capacity is used to ensure adequate power supply.  Different types of reserves are defined on different time scales.  In the short term, operating reserves provide adequate supply in the case of sudden plant or transmission outages.  In the long term, planning reserves ensure adequate power supply given forecasted load in the years ahead.  Both types of reserves are often expressed as a ratio of excess capacity (i.e., available capacity less demand) to demand.  In this blog post, we will discuss planning reserves: typical values, historical trends, market-to-market differences, and modeling within AURORAxmp.
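In formula terms, with made-up numbers for illustration:

```python
def planning_reserve_margin(available_capacity_mw, peak_demand_mw):
    """Excess capacity over peak demand, expressed as a percentage."""
    return 100.0 * (available_capacity_mw - peak_demand_mw) / peak_demand_mw

# Illustrative: 115 GW of available capacity against a 100 GW peak -> 15%
print(planning_reserve_margin(115_000.0, 100_000.0))
```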

Planning Reserves

Without adequate planning reserves, new generation may not be built in time, ultimately causing power disruptions.  But what is adequate?  In 2005, Congress passed the Energy Policy Act, which requires the North American Electric Reliability Corporation (NERC) to assess the reliability of the bulk power system in North America.  Part of NERC’s responsibility is to periodically publish Long-Term Reliability Assessments (LTRA) which include planning reserve targets, or reference margins.  Usually these are based on information provided by each governing body (e.g., ISO, RTO, etc.) in the assessment area.  If no such information is available, NERC sets the reference margin to 15% for thermal-dominated systems and 10% for hydro-dominated systems.  For the 2015 LTRA, the NERC reference margins range from 11% to 20% across the assessment areas, as shown in Figure 1.  The highest reference margin, 20% for NPCC Maritimes, is due to a disproportionate amount of load being served by large generating units.


Figure 1. 2016 Planning reserve margins by NERC assessment area from the 2015 LTRA.
The gold bars represent assessment areas with capacity markets.

In addition to providing reference margins, or published targets from other entities, NERC publishes yearly anticipated planning reserve margins, out 10 years, for 21 assessment areas in North America.  To do this, NERC collects data on peak demand and energy, capacity, transmission and demand response from NERC regional entities.  Data submission is usually due in the first quarter of the report year.  This strategy represents a bottom-up approach to understanding reliability.

Forecasting Anticipated Planning Reserve Margins

Forecasted anticipated planning reserve margins can vary substantially from assessment year to assessment year, from area to area, and as a function of market structure.  To illustrate this, one-, five-, and 10-year forecasted anticipated planning reserve margins for PJM and ERCOT are shown in Figure 2.  The variability in anticipated planning reserve margin is similar between the two assessment areas and increases with the length of the forecast, presumably due to increasing uncertainty in forecasts as a function of time.  Interestingly, the number of years with shortfalls (fewer reserves than the target) is much larger in ERCOT than in PJM.  PJM has a three-year forward capacity market while ERCOT is an energy-only market, so there is more incentive for long-term excess capacity in PJM.


Figure 2. Planning reserve margins targets (dashed line) and one-, five-, and 10-year anticipated planning reserve margin from the 2011 to 2015 NERC LTRAs.

As shown above, in both ERCOT and PJM the year-ahead anticipated planning reserve margins are adequate, suggesting long-term planning approaches are working in both markets. However, regional complexities can pose problems.  For example, MISO recently published the 2016 Organization of MISO States (OMS) Survey to assess planning reserve margins.  In 2017, shortfalls are predicted in three zones (IL, MO, and Lower MI); excess capacity from other zones will be transferred to make up for the shortfall in the short term.  As with the NERC forecasts, uncertainty in the regional forecasted load is key to this issue and may increase or decrease the shortfall.

In addition to regional issues, the rapidly changing generation mix also poses challenges for quantifying adequate planning reserves.  NERC has recognized this and has called for new approaches to assessing reliability in both the 2014 and 2015 LTRAs.  One specific issue is the disruption of traditional load shapes as solar resources are added.  A typical summer-peaking system may face reliability issues in the winter or other normally off-peak months when demand is still high but solar output is low.  Considering secondary demand peaks, and thus secondary planning reserve margins, may be prudent in these situations.

AURORAxmp and Planning Reserve Margins

In AURORAxmp, planning reserve margins are used in the long-term capacity expansion logic to guide new resource builds.  Our Market Research and Analysis team updates planning reserve margins annually based on the latest NERC LTRA.  Planning reserve margins can be specified at the pool or zone level, easily facilitating studies at varying spatial scales.  Risk studies can be conducted to quantify the impact of uncertainty in each aspect of planning reserve margins on long-term resource builds.  Together these features support cutting-edge analysis of the complexities of reserves.


19th Annual Electric Market Forecasting Conference to Focus on the Future of Energy Markets

The 2016 Electric Market Forecasting Conference (EMFC), a leading gathering of industry strategists and executives, will feature in-depth discussions on the driving forces of today’s energy markets. The 19th annual conference, organized by EPIS, LLC, will bring together a stellar lineup of speakers as well as senior executives in the industry.  The EMFC will be held at the Atlanta Evergreen Marriott Conference Resort in Atlanta, Georgia, September 14-16, 2016.


The EMFC features an optional one-day pre-conference training for both new and advanced power market modelers, as well as an AURORAxmp Users’ Group Meeting. Both clients and non-clients are welcome to attend. The two-day conference will include presentations and case studies from industry experts, as well as special events and networking opportunities. Speakers include: Larry Kellerman, managing partner of Twenty First Century Utilities, Morris Greenberg, managing director of gas and power modeling at PIRA Energy Group and Jeff Burleson, VP of system planning at Southern Company. A full list of speakers is available at http://epis.com/events/2016-emfc/speakers.html.

“Over the past 19 years, the Electric Market Forecasting Conference has become established as a valuable, strategic gathering for clients and non-clients alike,” said Ben Thompson, CEO of EPIS. “It is an event where executives and peers in the industry gather to share market intelligence and discuss the future of the industry.”

EMFC has developed a reputation for being an event that delivers real, actionable intelligence, not just abstract concepts. The organizers focus on an agenda filled with speakers who can share experience and takeaways that can be used to have a positive impact on attendees’ organizations. The conference’s intimate environment allows participants to create lasting relationships with peers and luminaries alike.

Now in its 19th year, EMFC is an essential conference for power industry professionals to come together to share best practices and market intelligence. The one-day pre-conference allows AURORAxmp users to learn techniques to master the AURORAxmp application and maximize ROI. More information can be found at: http://epis.com/events/2016-emfc/index.html.
