
Thursday, November 19, 2015

OpenPOWER's Order-of-Magnitude Performance Improvements

By Rich Ptak and Bill Moran

Performance improvements come in different sizes. Often vendors announce a 20% or 30% performance improvement along with an increase in the price/performance of their product or technology. Much more rarely, a vendor delivers an order-of-magnitude improvement. An order-of-magnitude improvement equates to a performance increase of a factor of 10. Improvements on this scale underlie recent[1] technology acceleration announcements[2] by IBM and other OpenPOWER Foundation members.

Why are tenfold performance improvements especially important? Consider this transportation example of what an order-of-magnitude change means. Let's say a runner can sustain a pace of 10 miles per hour. An order-of-magnitude change raises that to 100 miles per hour. Many cars can achieve and maintain that speed. (We aren't recommending that!) Another order-of-magnitude improvement in speed moves us to a jet airplane at 1,000 miles per hour. Yet another increase of this magnitude moves us to a rocket reaching 10,000 mph.

Notice that each magnitude change does more than increase speed; it dramatically transforms the whole landscape. Moving from the jet to the rocket allows escape from Earth's atmosphere to go to the moon. This demonstrates the potential importance of order-of-magnitude improvements. The OpenPOWER announcements detail multiple such improvements; let's examine a few.

One example comes from Baylor College of Medicine and Rice University, which announced breakthrough research in DNA structuring[3]. The discoveries were made possible by an order-of-magnitude improvement in processor performance. As reported by Erez Lieberman Aiden, senior author of the research paper, "the discoveries were possible, in part, because of Rice's new PowerOmics supercomputer, which allowed his team to analyze more 3-D folding data than was previously possible." A high-performance computer, an IBM POWER8 system customized with a cluster of NVIDIA graphical processing units, "allowed Aiden's group to run analyses in a few hours that would previously have taken several days or even weeks."

Another example involves IBM's Watson and NVIDIA's Tesla K80 GPU system[4]. Watson[5], of course, is IBM's leading cognitive computing offering, which runs on IBM OpenPOWER servers. NVIDIA's new system allows Watson's Retrieve and Rank API to work at 1.7x its normal speed. Wait a minute, you might say. Where is the order-of-magnitude change here? 1.7x is impressive, but it's no order-of-magnitude change.

Almost as an afterthought, IBM mentions that the GPU acceleration also increases Watson’s processing power to 10x its former maximum. So there we have another tenfold improvement in performance arrived at by marrying other technologies to Power.

Finally, Louisiana State University published a white paper[6] stating that Delta, its OpenPOWER-based supercomputer, accelerates genomics analysis, increasing performance over its previous Intel-based servers by 7.5x to 9x. Not quite an order of magnitude, but close.

The announcement includes more examples demonstrating the potential of the OpenPOWER philosophy, the OpenPOWER Foundation, and Power Systems to achieve dramatic results across multiple industries. The fundamentals of the POWER architecture lead us to anticipate continued improvements in Big Data processing. Such developments will accelerate the growth of the Internet of Things. They will also drive fundamental changes in the types of processing that are possible, like those happening with Cognitive Computing.

Tuesday, November 17, 2015

Do Oracle's latest SPARC comparisons reveal more than intended?

by Bill Moran and Rich Ptak

At Open World 2015, Oracle announced its latest version of the SPARC microprocessor. In this blog, we focus on Oracle's performance claims versus those of others. We know that all vendors like to highlight their system's performance advantages over competitors. Oracle is no different. Typically, claims are based on benchmarks either tailored to a specific workload or standardized. Standardized benchmarks have more or less rigidly enforced guidelines. The Oracle announcement claims advantages based on standardized benchmarks. Oracle (like any vendor) makes every effort to make its systems look as good as possible. That is to be expected. We found no evidence of cheating. We do think their results call for commentary.

A few words about benchmark testing. Some years ago, there was a benchmark expert named Jack; he held a PhD in mathematics. He wanted to bet $100 that he could write a benchmark proving any system better than any other system. It didn't matter which system was faster or how different they were; he could 'fix' the winner. We didn't doubt he could do that and didn't take the bet. The point is that if one completely controls the benchmark, one controls the result. That is why industry-standard benchmarks, e.g. SPEC[1] and TPC[2], exist. However, some have more restrictions than others; TPC, for example, requires audited results and dictates how price/performance is calculated. That makes TPC benchmarks very expensive and less likely to be run. In between TPC's rigor and Jack's creation, SPEC's less onerous rules make a good compromise. Care still needs to be taken when interpreting results.

Benchmark 1: SPECjEnterprise
Oracle's first performance point is based on results from the SPECjEnterprise2010[3] test. Table 1 is what the Oracle press release[4] presents. We added the last column.
| System Tested | Result (EjOPS) | Benchmark | Status[5] | Date of Test |
|---------------|----------------|-----------|-----------|--------------|
| SPARC T7-1 | 25,818.85 | SPECjEnterprise2010 | Unsecure | 10/23/2015 |
| SPARC T7-1 | 25,093.06 | SPECjEnterprise2010 | Secure | 10/23/2015 |
| IBM Power S824 | 22,543.34 | SPECjEnterprise2010 | Unsecure | 04/22/2014 |
| IBM x3650 M5 | 19,282.14 | SPECjEnterprise2010 | Unsecure | 02/18/2015 |

Table 1: SPECjEnterprise2010 results
Oracle did include the two best "IBM" results. However, the test date shows that the IBM Power result is 16 months old. Does this make any difference? We don't know. But it is quite conceivable that if the test were rerun on a newer system, the results would be better. The IBM x3650 result is newer, but that system was sold to Lenovo, making the comparison irrelevant.

Other points to consider when evaluating the data include:
  1. SPEC benchmarks have no rules controlling the calculation of price/performance, nor are system prices provided. Therefore, it is impossible to calculate system price/performance. Comparing a $100K system with a $500K one makes no sense without knowing the relative costs.
  2. For a generic benchmark like SPEC, it isn't known how closely it reproduces or reflects real workload performance. There is no guarantee that the advantages hold in production environments. A benchmark with system A running faster than system B does not assure that A outperforms B on a real workload.
  3. The "Status" column reflects brand-new Oracle security features announced at Open World and described in the press release (Footnote 5). Ellison also discussed them in his Open World kickoff talk[6]. Oracle claims these new security features are low cost. The results include runs with the features turned on (secure) and turned off (unsecure). Somewhat arbitrarily, the IBM/Lenovo systems are labeled "unsecure". It isn't surprising that IBM hasn't implemented security features just announced by Oracle, but that is no indication the systems are unsecure. We disagree with labeling them as such.

One final observation: browsing SPECjEnterprise benchmark results, one could conclude that Oracle's performance has degraded over the past several years. Why? The most recent SPECjEnterprise2010 result in Table 1 is 25,818 EjOPS, but data from March 26, 2013 has Oracle reporting 57,422 EjOPS! Conclusion: performance degraded by some 50%! It doesn't make sense to us, but that's what happens when context is ignored and benchmark results are taken literally. We'll leave it to Oracle to explain this one.
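To make the point concrete, here is a minimal sketch (in Python) of the arithmetic behind that "conclusion." The two EjOPS figures are the ones cited above; the 2013 submission almost certainly ran on a different, larger configuration, which is exactly the context a literal reading throws away.

```python
# Illustrative arithmetic only: comparing two SPECjEnterprise2010 submissions
# without knowing their configurations proves nothing about either system.
ejops_2015 = 25_818.85   # SPARC T7-1 result from Table 1 (10/23/2015)
ejops_2013 = 57_422.00   # Oracle result reported 3/26/2013 (configuration not shown here)

apparent_drop = (ejops_2013 - ejops_2015) / ejops_2013
print(f"Apparent 'degradation': {apparent_drop:.0%}")   # prints roughly 55%
```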

Another Benchmark: Hadoop Performance

Table 2 shows another Oracle benchmark in the press release.
| System | Processors/Cores | Result | Status |
|--------|------------------|--------|--------|
| Oracle SPARC T7-4 | 4 processors | 32.5 GB/min per chip | Unsecure |
| Oracle SPARC T7-4 | 4 processors | 29.1 GB/min per chip | Secure |
| IBM Power8 S822L | 8-node cluster, 3.5 GHz, 6-core | 7.5 GB/min per chip | Unsecure |

Table 2: Hadoop Terasort benchmark results
The Hadoop Terasort benchmark accompanies the Apache Hadoop distribution[7]. An examination of the results reveals both good news and bad news for Oracle. The good news is that the result seems to show Oracle outperforming IBM by a factor of about 4. But there is no date given for this result. Were both tests run at the same time? Or is the IBM result, once again, older? As discussed, it makes a difference. Other context data is missing as well. Without system costs, there is no way to judge how realistic the comparison is. The results have a "gee whiz" factor but lack substance.

The bad news is a bit more subtle. Elsewhere, Oracle claims that implementing its security features is very low cost. This result raises some questions, as it appears performance degrades by about 10% with security turned on. Finally, the critique about labeling the IBM system unsecure still holds.
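For readers who want the arithmetic spelled out, here is a minimal sketch using the per-chip figures from Table 2. The ratios are ours, not Oracle's, and they say nothing about price/performance since no system costs were published.

```python
# Illustrative arithmetic based on the per-chip throughput figures in Table 2.
sparc_t7_unsecure = 32.5   # GB/min per chip, SPARC T7-4 with security features off
sparc_t7_secure   = 29.1   # GB/min per chip, SPARC T7-4 with security features on
power8_s822l      = 7.5    # GB/min per chip, IBM Power8 S822L cluster

print(f"SPARC vs. POWER8 per-chip ratio: {sparc_t7_unsecure / power8_s822l:.1f}x")  # ~4.3x
security_overhead = (sparc_t7_unsecure - sparc_t7_secure) / sparc_t7_unsecure
print(f"Throughput cost of enabling security: {security_overhead:.1%}")             # ~10.5%
```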

Another Benchmark: SAP performance

Perhaps the most useful commercial benchmark is the SAP benchmark. Oracle submitted a result for this benchmark as recently as last month (October). Table 3[8] shows results for the latest Oracle and IBM submissions.
| Vendor | SAPS | System | OS | Date |
|--------|------|--------|----|------|
| Oracle | 168,600 | SPARC T7-2 | Solaris 11 | 10/23/15 |
| IBM | 436,100 | E870 | AIX 7.1 | 10/3/14 |

Table 3: SAP benchmark results
SAPS is the key performance metric; that it is closely related to a real SAP workload[9] adds further credibility. We can't claim it proves that IBM does a better job than Oracle running all SAP workloads. However, it is an additional data point. More data, as described earlier, would provide better context for a decision.
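As a rough illustration of what Table 3 implies, and no more than that given the different submission dates and unknown configurations and prices, the SAPS figures work out as follows.

```python
# Illustrative arithmetic based on the SAPS figures in Table 3.
ibm_e870_saps  = 436_100   # IBM E870, AIX 7.1, submitted 10/3/14
sparc_t72_saps = 168_600   # Oracle SPARC T7-2, Solaris 11, submitted 10/23/15

print(f"IBM/Oracle SAPS ratio: {ibm_e870_saps / sparc_t72_saps:.1f}x")   # roughly 2.6x
```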

One more comment: during his Open World keynote talk[10], Larry Ellison strongly emphasized that Oracle never sees SAP or IBM in competition for business in the cloud. He repeated this multiple times. The Oracle PR department needs to know about this. The Wall Street Journal of 11/5/2015 carried a front-page ad[11] by Oracle detailing performance advantages versus SAP (in the cloud). The claim is that the Oracle database runs twice as fast as HANA in the SAP Cloud. (Note: the ad appears only in the print version of the WSJ.) If Larry is accurate about never seeing SAP in competitive cloud situations, the ad wastes money.

However, Oracle has written a white paper[12] to document this benchmark. Note the legal disclaimer at the top of the white paper. Oracle claims in the document that SAP has tried to conceal HANA performance, so Oracle is running the benchmark to clear up the issue. We think this situation is a minor version of the "benchmark wars" of the past. Frankly, we have neither the time nor the space to sort the whole issue out here. However, it does reinforce our point about the care needed to interpret benchmark results.

The Final Word

We've pointed out some concerns with Oracle's claims, including highlighting some contradictory claims regarding their competitors and competition. In fairness, Oracle usually does just present a benchmark result and let the reader draw their own conclusions. (Okay, they would nudge the reader toward a conclusion.)

We've tried to present a bit more context around Oracle's benchmark results. We've also pointed out that benchmark data must be treated with care. Clearly, benchmarks using real production workloads (or a subset) running on multiple systems, with configuration and cost details included, are the most credible. Other comparisons can be significantly cheaper to run but should be trusted less. Be wary of unsubstantiated, poorly documented claims from whatever source. Better decisions will result.

One final word: we recently received (November 16, 2015) a press release that included product information and performance claims. It discusses OpenPOWER Foundation member activities with IBM's Power Systems. It has great information. It also provides great examples germane to this paper. For instance, the last sub-paragraph describes an OpenPOWER server providing Louisiana State University a 7.5x to 9x performance increase over a competitor's server doing genomics analysis. It is footnoted with server details and a link to an LSU white paper with additional details about the systems and benchmarking. We think you'll appreciate the difference.



[5] Oracle rates just-announced 'security' features available only on its systems. Obviously, a 6-month-old IBM or Lenovo system wouldn't include these features. See comments later in the text.
[10] You will find Ellison's talk at https://www.oracle.com/openworld/on-demand/index.html
[11] Oracle provides a URL in the ad – corrected to the following: https://www.oracle.com/corporate/features/oracle-powers-sap.html. The copy of the WSJ ad appears on the right side of the page.

Monday, November 2, 2015

Compuware’s Topaz™ Runtime Visualizer – on the way to DevOps Nirvana!

By Rich Ptak

Compuware recently announced new features for its Topaz for Program Analysis product. Agile processes, hard work, and commitment allowed a key component, Topaz Runtime Visualizer, to go from conception to product reality in just 84 days! It demonstrates mainframe agility, relevance, and the ability to move at the speed of a digitized market.

Read what we have to say about this latest release from Compuware on our webpage at: http://www.ptakassociates.com/content/