In-Home Storage: The Virtual Power Plant

Rapid Growth

Solar and wind are the most popular renewable resources across the world, but due to their intermittent and unpredictable nature, utilities still rely on natural gas and coal. When renewable technologies are combined with energy storage, however, they smooth out load fluctuations and have the potential to significantly change the generation mix.
Total energy storage deployment has increased dramatically in the past few years because of low-carbon, clean energy policies, and is anticipated to grow even more in the near-term. By 2022, GTM Research expects the U.S. energy storage market to reach 2.5 GW annually, with residential opportunities contributing around 800 MW.


Source: GTM Research

How Does It Work?

Energy storage is a three-step process: power is drawn from the grid, solar panels, or wind turbines; stored (the charging phase) during off-peak periods when power prices are lower; and returned (the discharging phase) later, during on-peak periods when prices are much higher.
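The economics of that charge/discharge cycle can be illustrated with a toy calculation; the battery size, prices, and efficiency below are invented, not market figures:

```python
# Toy arbitrage value of one daily charge/discharge cycle.
# All numbers are illustrative assumptions, not market data.
def daily_arbitrage_value(capacity_kwh, off_peak_price, on_peak_price,
                          round_trip_efficiency):
    """Revenue minus cost for charging off-peak and discharging on-peak."""
    cost_to_charge = capacity_kwh * off_peak_price          # buy low
    energy_returned = capacity_kwh * round_trip_efficiency  # storage losses
    revenue = energy_returned * on_peak_price               # sell high
    return revenue - cost_to_charge

# Example: a 13.5 kWh home battery, $0.08/kWh off-peak, $0.30/kWh on-peak,
# 90% round-trip efficiency.
value = daily_arbitrage_value(13.5, 0.08, 0.30, 0.90)
print(f"Daily arbitrage value: ${value:.2f}")
```

Even this toy version shows why the spread between off-peak and on-peak prices, net of round-trip losses, is what drives storage value.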


For electric vehicles (EVs), most charging happens at night and on weekends, when prices are comparatively low and vehicles sit idle. As EVs continue to enter the mainstream market, they will raise off-peak demand and prices and contribute to load shifting.
Energy storage devices and EVs can complement each other or compete, but energy storage is the key enabler for EV charging during on-peak hours.

Different Market Players

Residential energy storage has been a holy grail for companies like Tesla, Panasonic, LG, Sunverge Energy, and Orison, with lithium-ion (Li-ion) batteries as the leading technology type. Now, with plug-in electric and hybrid vehicles on the rise, automakers Tesla, Nissan, Mercedes-Benz, BMW, Renault and Audi have also joined the residential market to integrate EV charging stations, battery storage and rooftop solar, in essence having a residence operate as a virtual power plant.
Beginning in December of last year, Arizona Public Service Company deployed Sunverge Energy’s energy storage hardware coupled with advanced, intelligent energy management systems that predict future load requirements and solar generation. Additionally, Tesla is enjoying significant market share, shown recently by Vermont-based Green Mountain Power’s launch of a comprehensive solution to reduce customer electricity bills using Tesla’s cutting edge Powerwall 2 and GridLogic software.
A few other utility companies, especially in Florida and California, are also exploring residential energy storage programs, as shown in the figure below.


Source: Hawaii PUC; General Assembly of Maryland

So, what are some other current thoughts about the pros and cons of in-home energy storage?

  • Energy storage reduces load fluctuations by providing localized ramping services for PV and ensuring constant, combined output (PV plus storage).
  • Improves demand response and reduces peak demand.
  • Provides extra savings for customers through net metering and end-user bill management.
  • Reduces reliance on the grid; the customer can also generate and store energy during severe outages.
  • Disposal of Li-ion batteries is not easy, and they are difficult to recycle.
  • Automakers, like Nissan and BMW, are repurposing second-life batteries, which reduces the durability and reliability of the product.


Concluding Thoughts

Clearly, a wider acceptance of energy storage resources would be a game changer in the U.S. power sector. Utilities, consumers, and automakers are profiting from this exponential growth of energy storage. With an increasing number of companies using artificial intelligence and machine learning algorithms for energy management systems, the synergy with energy storage creates a perfect, smart, personal power plant which has tremendous potential to change the landscape of the energy industry.

Filed under: Clean Power Plan, Hydro Power, Power Grid, Power Market Insights, Power Storage, Renewable Portfolio Standards, Renewable Power, Solar Power, Uncategorized

EMFC Addresses Head-on the Tectonic Industry Changes

With record attendance in one of the most iconic tourist destinations in the world, the 20th Annual Electric Market Forecasting Conference (EMFC) took place September 6-8 in Las Vegas, NV. This industry-leading conference assembled top-notch speakers and gave an exclusive networking experience to attendees from start to finish.

The pre-conference day featured in-depth sessions designed to maximize the value of the Aurora software for its users. Advanced sessions included discussions on resource modeling and improving model productivity, recent database enhancements including the disaggregation of U.S. resources, an update on the nodal capability and data, and other model enhancements.


Michael Soni, Economist, Support | EPIS

Before the afternoon Users’ Group meeting started, EPIS announced that it was dropping “xmp” from the name of its flagship product, now simply Aurora, and unveiled a fresh logo. Ben Thompson, CEO of EPIS, said, “The new logo reflects our core principles of being solid and dependable, of continuously improving speed and performance, and of our commitment to helping our customers be successful well into this more complex future.”

That evening, attendees kicked off the main conference with a night under the stars at Eldorado Canyon for drinks, a BBQ dinner and a tour of the Techatticup Mine, the oldest, richest and most famous gold mine in Southern Nevada.


Eldorado Canyon, Techatticup Mine

On Thursday, thought leaders from across the industry presented various perspectives on the complex implications that recent industry changes will have on grid operations, future planning and investments. The forum session opened with Arne Olson, a partner with E3 Consulting in San Francisco, discussing California’s proposed legislation SB-100, which aimed to mandate that 100% of California’s energy be met by renewable sources by 2045, along with the bill’s implications for Western power markets and systems. He pointed out that SB-32, last year’s expansion of earlier legislation, which mandates a 40% reduction in GHG emissions below 1990 levels by 2030, is actually more binding than SB-100. He explained the economics of negative prices and why solar output will be increasingly curtailed, and posited that CAISO’s famous “duck curve” is becoming more an economic issue than the reliability issue it was originally intended to illustrate.

Other Thursday morning presentations included “The Rise of Utility-Scale Storage: past, present, and future” by Cody Hill, energy storage manager for IPP LS Power, who outlined the advances in utility-scale lithium-ion batteries and their expected contributions to reserves as well as energy; Masood Parvania, Ph.D., professor of electrical and computer engineering at the University of Utah, who described recent advances in continuous-time operation and pricing models that more accurately capture and compensate for the fast-ramping capability of demand response (DR) and energy storage devices; and Mahesh Morjaria, Ph.D., vice president of PV systems for First Solar, who discussed innovations in PV solar module technology, plant capabilities and integration with storage.


Masood Parvania, Ph.D., Director – Utah Smart Energy Lab | The University of Utah

The afternoon proceeded with Mark Cook, general manager of Hoover Dam, who gave a fascinating glimpse into the operations and improvements of one of the most iconic sources of hydro power in the country; and concluded with Lee Alter, senior resource planning analyst and policy expert for Tucson Electric Power, who shared some of the challenges and lessons learned in integrating renewables at a mid-sized utility.

Networking continued Thursday afternoon with a few of the unique opportunities Las Vegas offers. In smaller groups attendees were able to better connect with each other while enjoying one of three options which included a delicious foodie tour, swinging clubs at TopGolf, or solving a mystery at the Mob Museum.

The final day of the conference was devoted to giving Aurora clients the opportunity to see how their peers are using the software to solve complex power market issues. It featured practical discussions on how to model battery storage, ancillary services, the integration of renewables and an analysis of the impact of clean energy policies all while using Aurora.

The conference adjourned and attendees headed out for a special tour of the Hoover Dam which included a comprehensive view of the massive dam and its operations, and highlighted many of the unique features around the site.


Hoover Dam, Power Plant Tour

The EMFC is a once-a-year opportunity for industry professionals. The 20th Annual EMFC addressed head-on the tectonic industry changes (occurring and expected) from deep renewable penetration, advances in storage technologies, and greater uncertainty. Join EPIS next year for the 21st Annual EMFC!

For more information on the 2017 speakers, please visit
To obtain a copy of any or all of the presentations from this year’s EMFC, Aurora clients can go to EPIS’s Knowledge Base website using their login credentials here. If you do not have login credentials, please email to request copies.

Filed under: Events, Uncategorized

How Good is the EIA at Predicting Henry Hub?

Natural gas power plants are a key component of bulk electrical systems in North America. In the U.S., natural gas power plants made up the largest portion of installed capacity, 42%, as of December 2016 and contributed more to generation than any other source. In Mexico, natural gas power plants supplied 54% of the required electricity in 2015 and are a key component of the capacity additions in development of the national electrical system. Natural gas is also likely to be the primary energy source in the U.S. due to increased regulation on coal units, uncertainty around the future of nuclear generation, and low natural gas prices.

Natural gas prices are a critical driver of electricity prices and a key input variable in electric power models. Due to the large amount of natural gas power plants in North America, and because fuel costs are the largest cost component of a thermal power plant, wholesale electricity prices are tightly coupled with natural gas prices. There is also an important feedback loop, in that natural gas demand, and price, is tightly coupled to the operation of natural gas power plants. Understanding the interplay between gas and power markets, and uncertainties in forecasts, is critical for forecasting either.

The U.S. Energy Information Administration (EIA) provides short-term natural gas price forecasts through the Short-Term Energy Outlook (STEO) and long-term forecasts through the Annual Energy Outlook (AEO). For the purposes of this article, we will focus on the STEO. The STEO is a monthly report with, among other items, a natural gas consumption and price forecast for 13 to 24 months into the future, depending on the month published. The model predicts consumption and prices for three sectors (commercial, industrial, and residential) in the nine U.S. census districts. To do this, the model calculates natural gas consumption and supply levels to build an inventory. Prices are derived from a regression equation using the inventory and heating and cooling degree days, and analysts then make adjustments for final prices. Detailed information on each equation and method is provided in the EIA’s Natural Gas Consumption and Prices documentation.
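The published specification is more detailed, but the general shape of such a regression, price as a function of inventory and degree days, can be sketched as follows (all data and the fitted relationship here are invented for illustration):

```python
import numpy as np

# Hypothetical monthly observations (invented for illustration):
# working gas inventory (bcf), heating degree days, cooling degree days.
inventory = np.array([3200.0, 2800.0, 2100.0, 1900.0, 2500.0, 3100.0])
hdd       = np.array([ 150.0,  600.0,  900.0,  700.0,  300.0,   50.0])
cdd       = np.array([ 300.0,   50.0,    0.0,    0.0,  100.0,  350.0])
price     = np.array([  2.90,   3.40,   4.10,   3.90,   3.20,   2.95])  # $/mmbtu

# Fit price = b0 + b1*inventory + b2*hdd + b3*cdd by least squares.
X = np.column_stack([np.ones_like(price), inventory, hdd, cdd])
coeffs, *_ = np.linalg.lstsq(X, price, rcond=None)

# Predict a price for new conditions (after which, as the EIA does,
# an analyst would adjust the raw regression output).
new_x = np.array([1.0, 2300.0, 800.0, 0.0])
predicted = new_x @ coeffs
print(f"Predicted Henry Hub price: ${predicted:.2f}/mmbtu")
```

The point of the sketch is the structure: inventory and weather drive the regression, and human judgment is layered on top.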

How good is the EIA at forecasting natural gas prices from a month to a year out?

To evaluate the STEO forecasts of natural gas prices, we downloaded each monthly STEO report from January 2012 to December 2016 to allow for at least a full year of analysis with historical prices. This period was selected because it is representative of the current trend of low natural gas prices (relative to historical). The mean absolute error (MAE) and mean absolute percent error (MAPE) were calculated for each forecasted value. Prices were then evaluated for the first forecast in each year and a subset of forecasts from consecutive months during a price spike. The mean absolute percent error was also evaluated for each report year and across all reports.
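Both metrics are standard; a minimal implementation (with made-up forecast and actual prices) looks like:

```python
def mae(forecast, actual):
    """Mean absolute error, in the same units as the prices ($/mmbtu)."""
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

def mape(forecast, actual):
    """Mean absolute percent error, relative to the actual price."""
    return 100.0 * sum(abs(f - a) / abs(a)
                       for f, a in zip(forecast, actual)) / len(actual)

# Made-up example: a 4-month forecast vs. the prices that materialized.
forecast = [3.50, 3.60, 3.70, 3.80]
actual   = [3.40, 3.90, 4.20, 3.30]

print(f"MAE:  ${mae(forecast, actual):.2f}/mmbtu")
print(f"MAPE: {mape(forecast, actual):.1f}%")
```

MAE keeps errors in price units; MAPE normalizes them, which matters when comparing forecast years with very different price levels.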

For the period analyzed (2012 to 2016, shown in orange below), the wholesale Henry Hub gas price averaged $3.30/mmbtu, with a high of $6.19/mmbtu in early 2014 due to extreme Northeast weather (i.e., the polar vortex) and a low of $1.78/mmbtu in late 2016 due to warm weather and high storage levels. These prices are relatively low compared to the previous five-year period, when prices exceeded $10/mmbtu on the back of high oil prices and averaged $5.63/mmbtu despite the sharp decline during the 2008-2009 financial crisis.


Figure 1. Historical Henry Hub natural gas prices. The orange period denotes the study period used for this analysis. Source: EIA.

We started by looking at the longest-term forecasts (24 months) that are delivered in January of each year, and saw an inability to capture rapid fluctuations in prices in the study period:


Figure 2. Historical Henry Hub gas prices with 24-month forecasts from the January STEO of each year starting in 2012 and ending in 2015 using the base case data. Source: EIA STEO.

The January 2012 forecast missed the sharp reduction in prices from winter to summer that was driven by high storage volumes. Less volatility occurred over the first part of the January 2013 forecast; however, this forecast missed the large increase in prices to over $6/mmbtu driven by extreme weather conditions. The January 2014 forecast also missed the weather-driven high prices for this period and then was biased high in the later months of the forecast. The January 2015 forecast was biased high for the entire forecast period and missed the lower prices, which were driven by a combination of mild weather and high storage volumes.
The STEO forecast is very sensitive to the initial conditions or starting month’s price. For example, plotting each month’s forecast during the increase from $3.74/mmbtu in November 2013 to $6.19/mmbtu in February of 2014 shows the impact of the rapid change in initial condition (last known price) on the first month forecasted value:


Figure 3. Historical Henry Hub gas prices with forecasted values from the months leading up to the rapid price spike in February 2014.

Presumably the long-term fundamental drivers of the STEO do not change as much as the initial conditions, and thus the longer-term forecast is much less sensitive to initial conditions.
Despite missing the fluctuation events, on average across the years analyzed the STEO is within 8% of the price in the first month of the forecast, 25% of the price out to eight months and 33% of the price out to 13 months:


Figure 4. Mean absolute percent error calculated for each forecasted month of STEO reports. Data are averaged over a report year, as well as over all of the report years. Maximum and minimum percent error is calculated over all STEO reports.

On average, the trend has increasing error with forecast length, however, this does not occur in the 13-month 2012 or 2013 STEOs. The expected error growth with time does appear in the 2014 and 2015 STEOs, reaching nearly 60% in the 2014 STEO. The maximum percent error in any given forecast grows rapidly from 26% in the first forecasted month to 75% in the fourth forecasted month, and reaches a high of over 100% 12 and 13 months out.
In absolute terms, the error ranges on average from $0.25/mmbtu in the first forecasted month to $0.88/mmbtu 13 months out. Maximum and minimum errors range from less than a penny up to $2.45/mmbtu.


Figure 5. Mean absolute error calculated for each forecasted month of STEO reports. Data are averaged over a report year, as well as over all of the report years. Maximum and minimum absolute error is calculated over all STEO reports.

Is the STEO forecast good enough? Unfortunately, as with many answers, it depends. More important is understanding the limitations and uncertainties in its gas forecasts. If relying on EIA forecasts, you must account for the sensitivity to initial conditions and the typical error growth over the first months to a year of the forecast. With this information, sensitivity studies can be formulated to capture possible fluctuations in gas prices. Taken together with other uncertainties such as demand, transmission outages, and plant outages, you can begin to form an ensemble of forecasts.
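One simple way to act on these findings is to widen a gas-price sensitivity band with forecast horizon before feeding scenarios into a power model. The sketch below uses rough error-growth figures read off this analysis (about 8% at one month, about 33% at 13 months) and is illustrative only:

```python
# Build high/low gas-price sensitivity bands that widen with forecast
# horizon. The 8%-to-33% error growth is a rough reading of this
# analysis, not a precise statistic.
def sensitivity_band(base_forecast):
    """Return (low, high) price paths around a monthly base forecast."""
    low, high = [], []
    for month, price in enumerate(base_forecast, start=1):
        # Linearly interpolate error from 8% (month 1) to 33% (month 13).
        frac = 0.08 + (0.33 - 0.08) * (month - 1) / 12.0
        low.append(price * (1 - frac))
        high.append(price * (1 + frac))
    return low, high

base = [3.00] * 13  # flat $3.00/mmbtu forecast for illustration
low, high = sensitivity_band(base)
print(f"Month 1 band:  ${low[0]:.2f} - ${high[0]:.2f}")
print(f"Month 13 band: ${low[-1]:.2f} - ${high[-1]:.2f}")
```

Each low/base/high path can then be run through a dispatch model to bound the resulting power prices.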

Filed under: Natural Gas, Uncategorized

Integrated Gas-Power Modeling

Quantifying the Impacts of the EPA’s Clean Power Plan

Notwithstanding the recent legal stay from the U.S. Supreme Court, it is still important to understand the U.S. EPA’s Clean Power Plan (CPP) and its impact in the larger context of natural gas markets and its role in electric power generation. Because these two markets are becoming even more highly interrelated, integrated gas-power modeling is the most realistic approach for such analyses. EPIS has tested interfacing AURORAxmp® with GPCM®, a calibrated NG model developed by RBAC, Inc. The following is a brief discussion of our experimental setup as well as some of our findings.

Integration Approach

Monthly prices for 39 major natural gas hubs for the next 20 years are represented in AURORAxmp (as an input). They were developed utilizing GPCM’s market model (as an output) in pipeline capacity expansion mode. AURORAxmp then simulates a long-term capacity expansion that utilizes the GPCM-generated gas prices, and produces many results: power prices, transmission flows, generation by each resource/resource type including gas-consumption data. This gas-consumption (output from AURORAxmp) is fed back into GPCM as gas demand by the electricity sector (input to GPCM) for a subsequent market balancing and pipeline capacity expansion simulation which generates a new set of monthly gas hub prices. The iterative process begins at some arbitrary, but plausible, starting point and continues until the solution has converged. Convergence is measured in terms of changes in the gas-burn figures and monthly gas-hub prices between subsequent iterations.
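The loop is, in effect, a fixed-point iteration between the two models. A skeletal sketch (with simple stand-in functions in place of GPCM and AURORAxmp, whose behavior here is entirely invented) might look like:

```python
# Skeleton of the gas-power feedback loop described above.
# run_gas_model and run_power_model are invented stand-ins for GPCM and
# AURORAxmp, which in practice are driven through their own interfaces.

def run_gas_model(power_sector_gas_demand):
    """Stand-in: map power-sector gas demand (bcf/day) to a hub price."""
    return 2.0 + 0.05 * power_sector_gas_demand

def run_power_model(gas_price):
    """Stand-in: map a gas price ($/mmbtu) to power-sector gas burn."""
    return max(0.0, 40.0 - 4.0 * gas_price)

def iterate_to_convergence(initial_price, tolerance=1e-6, max_iters=100):
    price = initial_price
    for _ in range(max_iters):
        gas_burn = run_power_model(price)       # power-model step
        new_price = run_gas_model(gas_burn)     # gas-model step
        if abs(new_price - price) < tolerance:  # convergence check
            return new_price, gas_burn
        price = new_price
    raise RuntimeError("did not converge")

price, burn = iterate_to_convergence(initial_price=3.0)
print(f"Converged: ${price:.2f}/mmbtu, {burn:.1f} bcf/day")
```

Because the stand-in loop contracts toward a fixed point, it converges to the same price from any plausible starting guess, which mirrors the starting-point independence of the real integrated runs.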

This two-model feedback loop can be utilized as a tool to evaluate energy policies and regulations. To quantify the impact of an energy policy, we need two sets of integrated gas-power runs which are identical in all respects except the specific policy being evaluated. For example, to understand the likely impacts of emission regulation such as CPP, we need two integrated gas-power models with the identical setup, except the implementation of CPP.

Before presenting our findings on the impact of “CPP vs No CPP”, we first provide some further details on the setup of the GPCM and AURORAxmp models.

GPCM Setup Details

• Footprint: All of North America (Alaska, Canada, contiguous USA, and Mexico), including liquefied natural gas terminals for imports, and exports to rest-of-world.
• Time Period: 2016-2036 (monthly)
• CPP Program: All the effects of CPP on the gas market derived from changes to gas demand in the power generation sector.
• Economics: Competitive market produces economically efficient levels of gas production, transmission, storage and consumption, as well as pipeline capacity expansion where needed.

AURORAxmp Setup Details

  • Footprint: All three major interconnections in North America (WECC, ERCOT, and the East Interconnect; which includes the contiguous U.S., most Canadian provinces and Baja California).
  • Time Period: 2016 – 2036 (CPP regulatory period + 6 years to account for economic evaluation)
  • CPP Program: mass-based with new source complement for all U.S. states
    • Mass limits for the CPP were applied using the Constraint table
    • Mass limits were set to arbitrarily high values in the Constraint table for the “No CPP” case.
  • RPS targets were not explicitly enforced in this particular experiment. Future studies will account for these.
  • LT Logic: MIP Maximize Value objective function


Three cases are referenced in the results that follow:
  1. “CPP” – Convergent result from the integrated gas-power model with CPP mass limits.
  2. “No CPP” – Convergent result from the integrated gas-power model with arbitrarily high mass limits.
  3. “Starting Point” – Gas prices used in the first iteration of the integrated gas-power modeling.
    • This is the same for both the “CPP” and “No CPP” cases.

Quantifying the CPP vs. No CPP

Impact on Gas and Electricity Prices

  1. Both No CPP and CPP cases have generally lower prices than the Starting Point case in our experiment. However, post-2030, CPP prices are higher than the Starting Point.
    • This happens due to capacity expansion in both markets.
    • We stress that the final convergent solutions are independent of the Starting Point case. The lower prices in CPP and No CPP cases compared to the Starting Point case are a feature of our particular setup. If we had selected any other starting price trajectories, the integrated NG-power feedback model would have converged on the same CPP and No CPP price trajectories.
  2. CPP prices are always higher than the No CPP case.
    • This is likely driven by increased NG consumption in CPP over No CPP case.

This behavior was observed in all major gas hubs. Figure 1 shows the average monthly Henry Hub price (in $/mmBTU) for the three cases.

Figure 1: Monthly gas prices at Henry Hub for all three cases.

Figure 2 presents the monthly average power prices in a representative AURORAxmp zone.

Figure 2: Average monthly price in AURORAxmp zone PJM_Dominion_VP with and without CPP.

Figure 3 shows the impact of CPP as a ratio of average monthly prices in AURORAxmp’s zones for the CPP case over No CPP case. As expected, power prices with the additional CPP constraints are at the same level or higher than those in the No CPP case. However, it is interesting to note that the increase in power prices happens largely in the second half of CPP regulatory period (2026 onwards). It appears that while gas prices go up as soon as the CPP regulation is effective, there is latency in the increase in power prices.

Figure 3: Impact of CPP on electricity prices expressed as a ratio of CPP prices over No CPP prices.

Figure 4 presents a comparison of total annual production cost (in $billions) for each of the three regions.

Figure 4: Total annual production costs by region for the CPP and No CPP cases.

Figure 5 presents the same comparison as a percentage increase in production cost for the CPP case. The results show that while the CPP drives up the cost of production in all regions, the most dramatic increase is likely to occur in the Eastern Interconnect.

Figure 5: Percent increase in production cost for the CPP case.

Electricity Capacity Expansions

Comparing the power capacity expansions in Figure 6 and Figure 7, we see that AURORAxmp projected building more SCCTs in the CPP case than in the No CPP case in the Eastern Interconnect. We believe this is primarily driven by the higher gas prices in the CPP case. SCCTs typically face slightly higher fuel prices than CCCTs, which for the most part buy fuel directly at the gas hub. In this long-term analysis, AURORAxmp seeks the mix of new resources that is most profitable while adhering to all of the constraints. The higher gas prices in the CPP case are just high enough to make the SCCTs’ return on investment whole.
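The fuel-cost argument can be made concrete with a simple spark-spread screen; the heat rates, prices, and SCCT fuel-price premium below are illustrative assumptions, not model results:

```python
# Illustrative spark-spread screen for SCCT vs. CCCT units.
# Heat rates, prices, and the SCCT fuel premium are assumed values.
def spark_spread(power_price, gas_price, heat_rate):
    """Gross margin in $/MWh: power price minus fuel cost.

    heat_rate is in mmbtu/MWh (7 mmbtu/MWh = 7,000 btu/kWh).
    """
    return power_price - gas_price * heat_rate

power_price = 45.0   # $/MWh, assumed
hub_price = 3.50     # $/mmbtu at the hub (CCCT assumed to buy here)
scct_premium = 0.25  # $/mmbtu extra delivered fuel cost assumed for SCCTs

ccct = spark_spread(power_price, hub_price, heat_rate=7.0)
scct = spark_spread(power_price, hub_price + scct_premium, heat_rate=10.0)
print(f"CCCT margin: ${ccct:.2f}/MWh")
print(f"SCCT margin: ${scct:.2f}/MWh")
```

The screen shows why SCCT economics are more sensitive to the gas price: the higher heat rate multiplies any fuel-price movement.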

Figure 6: Capacity expansion for Eastern Interconnect – No CPP case.

Figure 7: Capacity expansion for Eastern Interconnect – CPP case.

Table 1: Capacity expansion by fuel type in total MW.

Table 1 shows the details of power capacity expansion in the three regions with and without CPP emission constraints. In addition to increasing the expansion of SCCTs, we can see that CPP implementation incentivizes growth of wind generation, as well as accelerates retirements. Coal and Peaking Fuel Oil units form the majority of economic retirements in the CPP case.

Fuel Share Displacement

Figure 8 shows the percent share of the three dominant fuels used for power generation: coal, gas, and nuclear. Figure 9 shows the same data as the change in the fuel percentage share between the CPP and No CPP cases. Looking at North America as a whole, we see that coal-fired generation is essentially being replaced by gas-fired generation. Our regional data shows that this is most prominent in the Eastern Interconnect and ERCOT regions.

Figure 8: Percentage share of dominant fuel type.

Figure 9: Change in fuel share for power generation (CPP – No CPP).

Natural Gas Pipeline Expansions

The following chart presents a measure of needed additional capacity for the two cases. The needed capacity is highly seasonal, so the real expansion need would follow the upper boundary for both cases.


Figure 10: Pipeline capacity needed for the CPP and No CPP cases.

Our analysis shows that the CPP will drive an increase in natural gas consumption for electricity generation. The following chart quantifies the additional capacity required to meet CPP demand for NG.
Additional NG capacity required, CPP vs. No CPP (bcf/day).

While the analysis presented here assumes a very specific CPP scenario, we stress that the integrated gas-power modeling is an apt tool for obtaining key insights into the potential impacts of CPP on both electricity and gas markets. We are continuously refining the AURORAxmp®-GPCM® integration process as well as performing impact studies for different CPP scenarios. We plan to publish additional findings as they become available.

Filed under: Clean Power Plan, Natural Gas, Power Market Insights, Uncategorized

The New Electric Market in Mexico

The Role of Zonal Resource Planning Analyses

On January 26, 2016 a once-in-a-lifetime event occurred that may have been overlooked by the casual observer: Mexico launched the first phase of its reformed, now competitive, electric market. The day-ahead market began for the Baja Mexico interconnection and is the first component of a comprehensive change to the nation’s electric system.

Over the last few years, sweeping market reforms and designs were drafted, approved by the government, and are now beginning to be implemented in a fundamental shift for electricity in Mexico. The expectation is that incorporating a market structure will modernize a constrained and aging system, improve reliability, increase development of renewable generation and drive new investment.

A market shift like this underscores the critical need to produce meaningful and accurate analyses for long-term resource planning, in addition to participating in the day-ahead nodal market.

The importance of data availability to market participants cannot be overstated. As a result of the market reforms in Mexico, the sole utility, Comisión Federal de Electricidad (CFE), is being split into multiple entities, and government organizations are being restructured to address the change from a state-run system to a competitive marketplace. Yet the detailed data required for trading activities, such as those begun in January, and to support the proposed nodal market is difficult to obtain. Sources for much of this data are still being determined, and in some cases the data is simply not yet available.

However, for typical generator development and economics, investment, and lifecycle forecasting – studies that require 30-40 year planning horizons – data is available. Resource planning analytics have become imperative to the development of new generation and transmission, informing investment in the energy sector, producing integrated resource plans for utilities, as well as numerous different studies for other stakeholders. Planning tools like AURORAxmp play a key role in these analyses, but so does the need for accurate market data.

Dispatch simulation models used for these studies typically define market topographies at the zonal (or control area) level. Mexico is currently divided into nine such zones, or “control regions”.

New Electric Market Control Regions in Mexico

Each of these zones contains generator information, load/demand information, and aggregated transmission capacities to/from adjoining zones. This data can be used by the dispatch simulation to forecast prices, value, risk, etc. for the study period. In the case of resource planning, it can produce detailed capacity expansion analyses to:
-Understand the value and operation of existing units.
-Determine whether to retire uneconomic or obsolete generators.
-Consider the value and performance of new generation that may have been added by the simulation.

Analysts can specify additional information such as new generation technologies (e.g. renewable generator options), capital costs, return components and other financial information to produce results that will inform build/buy decisions.
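That build/buy screening boils down to discounting a candidate unit's expected margins against its capital cost. A stripped-down sketch (all figures invented for illustration):

```python
# Minimal NPV screen for a candidate generator. All numbers invented.
def npv(capital_cost, annual_margins, discount_rate):
    """Net present value: discounted annual margins minus upfront
    capital cost. annual_margins[t] is the margin earned in year t+1."""
    discounted = sum(m / (1 + discount_rate) ** (t + 1)
                     for t, m in enumerate(annual_margins))
    return discounted - capital_cost

# Invented example: $90M plant, $12M/yr margin, 30-year horizon, 8% rate.
margins = [12.0] * 30  # $M per year
value = npv(capital_cost=90.0, annual_margins=margins, discount_rate=0.08)
print(f"NPV: ${value:.1f}M -> {'build' if value > 0 else 'do not build'}")
```

In practice, the annual margins come from the dispatch simulation's price and generation forecasts rather than a flat assumption.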

AURORAxmp has been used in a variety of studies in Mexico since 2002. Consultants and IPPs have utilized the software to produce meaningful results used in long-term resource planning decisions, and the zonal topography has provided the advantage of demonstrating value in the current market.

Developing a solid fundamental outlook that allows the assessment of potential long-term risks and opportunities is imperative for decision making and sound financial planning, whether you are assessing the development of a new power plant or acquiring an existing asset in Mexico. The wholesale power market in Mexico is expected to evolve from a day-ahead and real-time nodal market to include traded pricing hubs with a futures market. A zonal model using AURORAxmp can provide an invaluable tool for long-term price forecasting, scenario analysis and asset valuation for the new Mexican reality.
– Marcelo Saenz, Pace Global, A Siemens Business

Although the proposed market will eventually operate at the nodal level, long-term studies at the zonal level remove the effects of temporary events at the nodal level, thus providing a more stable result for financial decisions.

AURORAxmp has robust capabilities for simulating both zonal and nodal markets. However, its leading capabilities in long-term resource planning analysis will continue to be especially important for markets, like Mexico, that will go through enormous changes and growth over the next few years.

Filed under: Power Market Insights, Uncategorized