Adam Gartenberg's Blog

Business Analytics and Optimization, IBM and Social Marketing

The need for speed

Speed - processing speed, that is - is in the news this week, with IBM breaking benchmarks and advancing performance on three fronts.

Shattering Two-Tier SAP SD Benchmark
First up, earlier this week IBM announced that a 256-core Power 795 system with DB2 nearly doubled last month's record, achieving the highest result ever published on the two-tier SAP Sales and Distribution standard application benchmark - a result of 126,063 SAP SD benchmark users. (Press Release:  IBM Again Shatters World Record on Two-Tier SAP Sales and Distribution (SD) Standard Application Benchmark.)  (Emphasis mine; see press release for supporting data)

The IBM system easily eclipsed Oracle/Sun results on the two-tier SAP SD standard application benchmark, handling more than three times as many SAP SD users as a 256-core Sun SPARC Enterprise M9000 (Oracle's largest system) (2), and also surpassing the 128-core Oracle result on the two-tier SAP SD-Parallel standard application benchmark published in September, which ran four clustered 32-core Sun Fire X4470 servers with Intel's Xeon X7560 chip (3). The IBM result is also more than four times higher than HP's 128-core Integrity SD64B result (HP's largest) on the two-tier SAP SD standard application benchmark (4).

Highest TPC-C Benchmark Performance Ever Achieved
Then, yesterday IBM reported that it had scored the highest TPC-C benchmark performance result ever achieved by an x86-64 processor-based server. The IBM System x3850 X5 server handled 2,308,099 tpmC (transactions per minute C) at a price/performance of $0.64 USD per tpmC - a result that represents a 27% performance boost over HP's results.  (Press Release:  IBM Sets Benchmark Record for x86-64 Transaction Processing, Bests HP by 27%)

The TPC-C benchmark simulates the order-entry environment of a wholesale supplier -- entering and delivering orders, recording payments, checking the status of orders, and monitoring stock levels at the warehouses. In practical terms, the results show that deploying IBM technology can mean more orders entered and faster monitoring, distribution, and delivery.
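To make the workload concrete, here is a toy sketch of the kinds of order-entry operations TPC-C exercises. This is purely illustrative - the real benchmark is defined by the TPC-C specification and run with an audited kit - and all names, item counts, and stock levels below are hypothetical.

```python
# Hypothetical, simplified order-entry operations in the spirit of TPC-C:
# entering orders, delivering them, checking status, and monitoring stock.
stock = {item: 100 for item in range(1, 11)}   # toy warehouse stock levels
orders = {}                                     # order_id -> order record
next_order_id = 1

def new_order(items):
    """Enter an order and decrement warehouse stock for each line item."""
    global next_order_id
    for item in items:
        stock[item] -= 1
    orders[next_order_id] = {"items": items, "status": "entered"}
    next_order_id += 1
    return next_order_id - 1

def deliver(order_id):
    """Mark a previously entered order as delivered."""
    orders[order_id]["status"] = "delivered"

def order_status(order_id):
    """Check the current status of an order."""
    return orders[order_id]["status"]

def stock_level(threshold):
    """Count items whose stock has fallen below a threshold."""
    return sum(1 for qty in stock.values() if qty < threshold)

oid = new_order([1, 2, 3])
deliver(oid)
print(order_status(oid))   # delivered
print(stock_level(100))    # 3 items dipped below 100
```

A real TPC-C run mixes these transaction types at fixed ratios against a partitioned, multi-warehouse database under strict response-time constraints; the sketch only shows the shape of the operations being measured.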

As always, DB2 Program Director Conor O'Mahony dives deeper into these results over on his blog (SAP Benchmark, TPC-C Benchmark).  You can find the official TPC-C benchmark result on the TPC website.

Doubling Processing Speed for Data-Intensive Workloads
Finally, today at the Supercomputing 2010 conference in New Orleans, IBM unveiled a new storage architecture design that can double the processing speed of data-intensive workloads (such as digital media, data mining and financial analytics) and "shave hours off of complex computations without requiring heavy infrastructure investment." (Press Release:  Made in IBM Labs: New Architecture Can Double Analytics Processing Speed)

Created at IBM Research – Almaden, the new General Parallel File System-Shared Nothing Cluster (GPFS-SNC) architecture is designed to provide higher availability through advanced clustering technologies, dynamic file system management and advanced data replication techniques. By "sharing nothing," new levels of availability, performance and scaling are achievable. GPFS-SNC is a distributed computing architecture in which each node is self-sufficient; tasks are divided among these independent nodes, and no node waits on another.
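The shared-nothing idea can be sketched in a few lines: data is partitioned so that each node is the sole owner of its slice and processes it without coordinating with the others. The node count and the word-count task below are hypothetical, chosen only to illustrate the pattern, not how GPFS-SNC is actually implemented.

```python
# Minimal sketch of a shared-nothing layout: each "node" owns one shard
# of the data and works on it independently; results are merged at the end.
from collections import Counter

NUM_NODES = 4  # hypothetical cluster size

def partition(records, num_nodes):
    """Hash-partition records so each node is sole owner of its shard."""
    shards = [[] for _ in range(num_nodes)]
    for rec in records:
        shards[hash(rec) % num_nodes].append(rec)
    return shards

def node_task(shard):
    """Each node computes on local data only -- no shared state, no waiting."""
    return Counter(shard)

def run(records):
    shards = partition(records, NUM_NODES)
    partials = [node_task(s) for s in shards]  # independent; parallelizable
    return sum(partials, Counter())            # merge partial results

counts = run(["a", "b", "a", "c", "b", "a"])
print(counts["a"])  # 3
```

Because no node touches another node's shard, the per-shard work can run fully in parallel with no locks or cross-node traffic until the final merge - which is what lets shared-nothing designs scale by simply adding nodes.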

At the conference, IBM won the Storage Challenge competition for presenting the most innovative and effective design in high performance computing with the best measurements of performance, scalability and storage subsystem utilization.