New Developments in Computing Speed


Moore’s Law Explained

Since the early days of modern computing, processing speed and power have grown at an astonishing rate.  Intel co-founder Gordon Moore predicted that the number of transistors in computer processors would double roughly every two years.  Over the last half-century this prediction, known as Moore’s Law, has proven remarkably accurate.  Thanks to continuous innovation in the industry, clock speeds in computer chips have improved dramatically since the early 1970s; if airplane travel times had improved at the same rate over the same period, we would be able to get anywhere in the world in a matter of seconds (and it would cost pennies).

One of the great advantages of this improvement in CPU performance over the years is that every piece of software benefits automatically from a higher processor clock speed.  A computer with a faster clock can run the exact same program more quickly, with no changes to the software required.

Maxed Out Processors Shift Focus to More Cores

But the story of computing power has started to change.  Over the last decade, the clock speed of computer processors has begun to top out.  Starting in 2003, processor makers such as Intel and AMD moved away from pushing clock speeds ever higher and shifted their efforts toward increasing the number of processor cores in their chips.

The following graph shows the relationship between transistor density (in red), processor clock speed (or frequency, in green), and the number of processor cores (in black) over time.  The slowdown in clock speed increases is clearly visible, as is the shift toward adding more cores to the processors produced over the last ten years.

Figure 1: Computing speed developments.

Software Architecture’s Free Ride Ending

These additional cores allow modern processors to perform more tasks simultaneously.  Today’s consumer PCs generally have processors with between two and eight cores, while some server processors have as many as 22 cores on a single chip.  However, unlike a clock speed boost, the performance improvements that come with multiple processor cores don’t come for free.  Software has to be significantly re-architected to take advantage of all those cores.  To run on more than one core at a time, a program must be broken into tasks that can execute simultaneously on the available cores, which is only possible when a task doesn’t require the result of a previous task as input.  Additionally, software must be designed so that shared resources, such as databases and hardware devices, can be safely accessed by multiple tasks running at the same time.
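To make that last point concrete, here is a minimal Python sketch (not tied to any particular modeling package) of coordinating access to a shared store: each worker computes its piece of work independently, but a lock serializes the writes to the shared results list.

```python
import threading

# Shared store that several tasks will write to; access must be coordinated.
results = []
results_lock = threading.Lock()

def worker(task_id):
    value = task_id * task_id          # independent computation per task
    with results_lock:                 # coordinated access to the shared store
        results.append((task_id, value))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))
```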

This has specific application to power market modeling software, such as AURORAxmp, that simulates the commitment and dispatch of power plants.  Suppose, for example, that we want to model one full year of 8,760 dispatch hours using multiple processors, and assume that we know the hourly load, generator availability, fuel prices, transmission capability, etc. for every hour.  If we had at least 12 cores to work with, we might break the run into 12 simultaneous simulations that each cover one month of the year.  We could even write all output results to a single database that supports concurrent access, such as SQL Server, and the total time to run the 12 months would approach 1/12th the time required to run the full year on one core (though in reality it would not be quite that good because of the overhead of managing all the cores).
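As a rough illustration, here is a minimal Python sketch of that month-splitting idea.  It is not how AURORAxmp is actually implemented; run_month is a hypothetical stand-in for solving one month’s commitment and dispatch, and the worker pool hands each month to its own core.

```python
from concurrent.futures import ProcessPoolExecutor

def run_month(month):
    # Hypothetical stand-in for dispatching one month of hours on its own core.
    # A real model would load that month's loads, fuel prices, outages, etc.,
    # solve the commitment/dispatch, and write results to a database that
    # supports concurrent access (e.g., SQL Server).
    hours = 730  # rough average hours per month; a real run uses the calendar
    return month, f"solved {hours} hours"

if __name__ == "__main__":
    # With 12 or more cores, the 12 months run at the same time, so wall-clock
    # time approaches 1/12th of a single-core run, less the overhead of
    # managing the worker processes.
    with ProcessPoolExecutor(max_workers=12) as pool:
        for month, status in pool.map(run_month, range(1, 13)):
            print(f"Month {month:2d}: {status}")
```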

So what’s the problem?  The hourly dispatch and commitment decisions in the different months are not independent.  Because of constraints that tie one hour’s solution to the next—such as generator minimum up and minimum down times, ramp rates, storage contents, hydro reservoir levels, annual emission limits, etc.—the simulation needs to know what happened in the previous hours to properly model the current hour.  The simplifying assumption that the operations of the power plants in each month are independent might be acceptable in some types of scenarios, but for a precise solution we simply can’t solve one hour until we know the solution from the previous hours.
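To see why, consider a toy sketch (with made-up numbers) of a storage unit: the state of charge left at the end of one hour is an input to the next hour, so the hours have to be solved in order rather than in parallel.

```python
# Toy illustration (hypothetical numbers) of an hour-to-hour dependency:
# the storage level at the end of hour t is an input to hour t + 1, so
# hour t + 1 cannot be solved until hour t is finished.
capacity_mwh = 100.0
storage_mwh = 50.0  # assumed starting state of charge

def dispatch_hour(hour, storage):
    # Purely illustrative rule: charge in even hours, discharge in odd hours.
    change = 10.0 if hour % 2 == 0 else -10.0
    return min(max(storage + change, 0.0), capacity_mwh)

for hour in range(24):
    storage_mwh = dispatch_hour(hour, storage_mwh)  # depends on the prior hour

print(f"End-of-day storage: {storage_mwh} MWh")
```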

Utilizing Multicore Advancement

But that doesn’t mean there aren’t still great gains to be had in power market modeling software with multicore processors.  Certainly much of the input and output data processing in this type of model can, if built properly, take advantage of multiple processors.

For example, the standard DC power flow approximation using shift factors (multipliers used to calculate and constrain power flows) can require an enormous amount of computation.  A large system such as the Eastern Interconnect may well have over one billion non-zero factors that must be used in each hour’s simulation to calculate power flow between buses.  Intelligently using multiple processors to calculate those flows can drastically reduce the run time of these types of simulations.
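As a simplified sketch of that kind of parallelism, the example below builds a small dense shift-factor matrix with made-up data and computes each block of line flows on its own core.  A real system would use sparse factors and far larger dimensions.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def chunk_flows(args):
    # Compute the flows for one block of lines: flows = S_block @ injections.
    shift_factor_block, injections = args
    return shift_factor_block @ injections

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_lines, n_buses = 2_000, 1_000          # toy sizes, not a real interconnect
    shift_factors = rng.standard_normal((n_lines, n_buses))
    injections = rng.standard_normal(n_buses)

    # Split the lines into blocks and compute each block's flows on its own core.
    blocks = np.array_split(shift_factors, 4)
    with ProcessPoolExecutor(max_workers=4) as pool:
        flows = np.concatenate(
            list(pool.map(chunk_flows, [(b, injections) for b in blocks]))
        )

    # Sanity check against the single-core calculation.
    assert np.allclose(flows, shift_factors @ injections)
```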

Another place where utilizing multiple cores will help in this kind of software is in the mathematical solvers that perform the core optimizations.  Those solvers (such as Gurobi, CPLEX, and MOSEK) continue to improve their internal use of threading in their LP (linear programming) and MIP (mixed-integer programming) algorithms.  As they continue to get better at exploiting multiple processors, the power market models that use them will be significant beneficiaries.
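For instance, most of these solvers expose a thread-count setting.  The snippet below is a minimal sketch using the Gurobi Python API (gurobipy, which must be installed and licensed) on a made-up two-generator dispatch problem; the Threads parameter caps how many cores the LP/MIP algorithms may use.

```python
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("tiny_dispatch")
m.setParam("Threads", 4)                  # let the solver use up to four cores

# Toy two-generator dispatch: meet 150 MW of load at minimum cost.
g1 = m.addVar(lb=0, ub=100, name="gen1")  # cheap unit, 100 MW max, $20/MWh
g2 = m.addVar(lb=0, ub=100, name="gen2")  # expensive unit, 100 MW max, $35/MWh
m.addConstr(g1 + g2 == 150, name="load_balance")
m.setObjective(20 * g1 + 35 * g2, GRB.MINIMIZE)

m.optimize()
print(g1.X, g2.X)  # expect 100 MW from gen1, 50 MW from gen2
```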

We don’t know for sure what the next decade of computer processor improvements will bring.  We can undoubtedly expect some single processor speed improvements, but to keep the 2x trend of Moore’s Law going, it will almost certainly take a major effort on the part of software developers to utilize the new threading paradigm.  The capability of power market models to continue to tackle the most complex optimization problems with reasonable solution times may very well depend on their ability to embrace our new environment of multiprocessor architectures.
