Wednesday, October 12, 2016

BMC Engage 2016 – guiding Enterprises in digital transformation

By Rich Ptak

BMC’s annual Engage event held at the Aria Resort and Casino in Las Vegas attracted over 2500 customers, executives, staff and analysts. In 300+ technical sessions, 90+ customer presentations and multiple keynotes, BMC, clients and 170+ ecosystem partners discussed and demonstrated solutions targeting enterprise digital transformation. Here’s what we took away from the event. 

Multiple speakers detailed the emerging challenges facing enterprises, as well as society, as they undertake the transition to digitized operations. Many commentators label this “The Fourth Industrial Revolution.” We (and others) think that label shortchanges the depth and breadth of the changes taking place; it merely hints at the extent of the impact.

Setting the scene

BMC Chairman and CEO Bob Beauchamp began the conference with a concise summary of the growth and performance of their digital business. Privately owned, BMC doesn’t reveal specific numbers. However, trends in a number of performance metrics point to strong customer acceptance. For FY16 (April ’15 to March ’16), these include: 

·         900 net-new customers
·         30% year-over-year growth in new bookings with each quarter exceeding the previous one
·         24% sales pipeline growth
·         Selection by Forbes as one of America’s Best Employers  

All of this provides convincing evidence that privatization has been good for BMC customers, partners and employees. Let’s see what’s behind all this. 

First is the extraordinary rapidity of results in app-driven, digitized markets. One example is the disruptive speed of new business models: banking is having its “Uber moment,” while Uber itself is confronted as autonomous vehicles enter the market. Second is the extraordinarily rapid revenue impact of a successful product. It took only 10 days after the introduction of Pokémon GO for Nintendo’s market cap to leap from $21B to $42B. That is a phenomenal increase for any product, let alone a video game. In addition to market impact, transformation is driven by an extraordinary number of technologies entering the market. We’ll talk about them in the next section. 

These few examples of digitization-driven impact dramatically illustrate why BMC believes their customers must “Go Digital! Or DIE!”  Okay, BMC states it a little less dramatically as, “Go Digital or Go Extinct!” – either way, disruptive, existential threats that require action do exist. Enterprises, of all sizes, are realizing they need help to define, plan and execute to move forward. 

The technology drivers

Executives acknowledge that undertaking the journey to become a digital enterprise is inevitable. Successfully navigating the way to digital requires significant new ways of thinking, as well as quick adoption and use of disruptive technologies. These include technologies recognized and in use today (e.g. mobile Internet, cloud technology, Internet of Things, virtual reality, Big Data and analytics), along with rapid advancements in base technologies such as artificial intelligence and the exploitation of natural language, in combination with newly commercially viable solutions in areas such as advanced robotics, bots, Blockchain, autonomous vehicles, etc. The sheer volume creates an unprecedented number of disruptive changes occurring at remarkable speed across every market segment.  

BMC Digital Enterprise Management (DEM) for the transformation

With last year’s introduction of its DEM initiatives, BMC positioned itself as a capable, willing partner to help enterprises undertake the transformation. Robin Purohit, BMC’s Group President of Enterprise Solutions Organization stated it this way: “Our mission is to equip our worldwide customers with innovations and solutions they need to start the digital transformation journey, stay on course, and be successful in digital business.”   

A large ambition. One that will be welcome news to numerous C-level executives and IT staff who realize: “The digital imperative is clear: go digital or go extinct.” We’ve heard repeatedly from these teams that they are looking for a partner to help them advance down a path to digitization. The question is: “Can BMC deliver what they need?”  

BMC’s overview indicates they can, as seen in initiatives designed to aid customers in seven strategic areas. Three are integrated solutions targeting the following:
1.    Digital Workplace – BMW provides a faster, better dealer support experience
2.    Secure Operations (SecOps) – Aegon/Transamerica benefits with better security
3.    Service Management Excellence – Wegmans improves services with data analytics 

Then, customers documented successes achieved with innovative BMC solutions for:
4.    Agile Application Delivery – Target described their experience in speeding app improvements and development
5.    Big Data – Malwarebytes detailed improving customer services with faster analysis of greater volumes and kinds of customer data
6.    IT Optimization – Swiss Re talked about optimizing IT operations
7.    Multi-Sourced cloud operations – a Ministry of Defense representative described how they simplified operations involving multiple, different cloud environments 

BMC DEM Solutions and Products for customer success

Customers ranging from the largest Fortune 100 firms to mid-size companies and entrepreneurs provided further evidence of how BMC services and products help fuel successful transformations. After sampling the over 80 customer and partner presentations and demos available, we’ve concluded that BMC definitely delivers results.

They do so through operational integration efforts involving their own products and applications to facilitate communication and cooperation between developers and LOB staff. The overall goal is to enable “service management excellence.” One example integrates BMC BladeLogic and BMC Remedy for simplified and improved automated change management. Integration details are provided on the BMC website[1] along with customer stories. As we’ve mentioned before, customer results will vary. However, it is always worth investigating the successes (as well as the mistakes) of others.  

Proven in BMC’s own transformation

According to McKinsey & Company research[2], “less than a quarter of organizational-redesign efforts succeed. Forty-four percent run out of steam after getting under way, while a third fail to meet objectives or improve performance after implementation.” That’s one reason we were excited when BMC presented results of the five-step process they followed in their own internal Digital Transformation:

1.    Organizing for Digital – organizational and operational changes support digital transformation.

2.    Delivering with speed and agility – increase work environment use of technology and automation (Data Center consolidation, Global Command Center, Unified Communications) for cost savings.

3.    Optimizing workloads – give people meaningful work, automate the rest.

4.    Communicating value through Technology Business Management (TBM) – measure and report Digital Service Management (DSM) progress in easily understood terms.

5.    Managing software assets and risks – optimize costs through (pro-)active management. 

As a result, BMC went from 62,000 sq. ft. across 36 Data Centers/Labs using 1.6 MW of power with a $6.8M operating expense, to 7,500 sq. ft. across 4 Data Centers using 640 KW of power with operating expenses of $2.4M. BMC is sharing both its expertise in applying products and its experience in implementation services to help customers determine what they can achieve.  
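To put those reported figures in relative terms, here is a minimal back-of-the-envelope sketch (Python, using only the numbers BMC cited above; the rounding is ours):

    # Back-of-the-envelope view of BMC's reported data-center consolidation,
    # using only the before/after figures cited in the text above.
    before = {"sq_ft": 62_000, "data_centers": 36, "power_kw": 1_600, "opex_usd": 6_800_000}
    after  = {"sq_ft": 7_500, "data_centers": 4, "power_kw": 640, "opex_usd": 2_400_000}

    for metric in before:
        reduction = 1 - after[metric] / before[metric]
        print(f"{metric}: {reduction:.0%} reduction")

    # Roughly: ~88% less floor space, ~89% fewer data centers,
    # 60% less power, and ~65% lower operating expense.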

One Last Thing

Engage 2016 included much more of interest, including the announcement of an Innovation Suite to address escalating app development interest among management and business analysts. Due in November, it uses an array of the latest development tools (Slack, JIRA, Bamboo, Docker, GitHub, Jenkins, Chef, etc.) linked to existing BMC products to allow what is essentially ‘drag ‘n drop’ app creation. Intriguing when you consider the potential to accelerate the move from conception to product delivery! And we haven’t even touched on the announcements around mainframe products and solutions. We’ll cover all that in a separate piece, with more added in future pieces.  

BMC as a private company is proving its ability to act aggressively and effectively to address the most pressing challenges facing its clients, including Digital Transformation. BMC distinguishes itself with its comprehensive, understandable vision of a digital future. They developed and offered DEM as a blueprint for implementation. Uniquely, BMC also raised the issue of the wider societal implications of Digital Transformation and how these will impact the future of the Enterprise and IT in the Enterprise. We will be writing more about that topic in the future.

[2] Steven Aronowitz, Aaron De Smet, Deirdre McGinty, “Getting organizational redesign right,” McKinsey Quarterly, n.d. (accessed October 10, 2016).

Tuesday, September 27, 2016

The newest IBM Power Systems – more of everything for the hottest environments!

By Bill Moran and Rich Ptak

IBM recently introduced three new Linux-based (LC) Power Systems targeting the hottest workload environments. These POWER8-based, enhanced models are configured to satisfy system cost, performance and processing demands of Big Data, cognitive, GPU, dense computing and memory intensive, high throughput processing. When compared to a Dell system, the newly announced IBM S822LC for HPC achieved 2.5 times the performance with costs of hardware and maintenance 52% lower! Let’s review the details.

IBM’s LC family servers are designed and cost optimized for “scale-out” multi-server cloud and cluster configured environments to satisfy customer preferences for clouds over expanding on-premise data centers.

IBM’s new lineup of LC models includes:

  1. The S822LC for Big Data
  2. The S822LC for Commercial Computing
  3. The S822LC for High Performance, with a new version of the POWER8 chip and a very high-speed link between the CPU and onboard GPUs.
Other family members include:

  1.  An “entry level” S812LC targeting customers with new memory intensive, Big Data workloads.
  2.  An S821LC with 2 POWER8 sockets (processors) in a 1U form factor for dense computing in database, virtualization and container environments.
We created this table to highlight key features of the different models:

    Model                              Sockets   Cores per socket   Max cores   Max threads (SMT8)
    S822LC for Big Data                   2         8 or 10          16 or 20      128 or 160
    S822LC for Commercial Computing       2         8 or 10          16 or 20      128 or 160
    S822LC for High Performance           2         8 or 10          16 or 20      128 or 160

(The High Performance model also carries NVLink-attached NVIDIA GPUs; see the discussion below.)

Ten-core systems have a 2.92 GHz version of POWER8, while the 8-core systems have a 3.32 GHz chip. All include what IBM calls a 9x5, 3-year warranty with next-day service.

IBM’s website[1] has additional details on other system characteristics that may be important to existing or planned applications.

Some Key Considerations

Complementing the scale-out systems are scale-up systems, IBM E870 and IBM E880. These may be more appropriate for some applications. We do not discuss those here.

The S822LC for High Performance system has characteristics worth mentioning. A water-cooling option allows its turbo (high-speed) mode to be used extensively. It also uses a new version of the POWER8 chip with a special link to the system’s GPUs, significantly speeding up the connection between GPU and CPU. IBM reports the old GPU-to-CPU connection speed via a PCIe link was 32 GB/sec, while the new NVLink runs at 80 GB/sec. The quick comparison below puts that in perspective, and leads us to a discussion of system performance.
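A minimal sketch (ours, not IBM’s) of the link-speed ratio, using only the two figures IBM reported:

    # Ratio of NVLink to the prior PCIe GPU-to-CPU link, per the speeds IBM cites above.
    pcie_gb_per_s = 32    # reported speed of the previous PCIe link
    nvlink_gb_per_s = 80  # reported NVLink speed in the S822LC for High Performance

    print(f"NVLink is {nvlink_gb_per_s / pcie_gb_per_s:.1f}x the previous link speed")
    # -> NVLink is 2.5x the previous link speed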

Performance Background

IBM is very clear it believes x86 has hit a barrier regarding Moore’s Law predictions of future performance improvements. Moore’s Law relates to technological performance enhancements over time; see this note.[2] As long as the law applied, price/performance improvements were possible. However, physics is invalidating the law for some existing technology. IBM (and others) believe system architecture changes, not raw hardware speeds, are the more likely source of necessary future performance improvements[3].

Building on this philosophy, IBM is making changes and adding interfaces to Power Systems to drive greater performance. A recent example is CAPI (which we have written about elsewhere); it drives large improvements in applications using in-memory databases, supports many more threads per core than comparable x86 systems, and allows more to be done, faster. Adding NVLink-connected Graphical Processing Units (GPUs) to the S822LC for High Performance is another example of improvement.

Of course, such improvements can only benefit applications able to take advantage of them. IBM has identified those applications (emerging and existing) and aims to gain market advantage by providing systems optimized for them in cost, price and performance. The strategy is to design and optimize systems for significant market segments.  

Performance Data

IBM has released performance and price performance data matching the latest Power Systems to comparable Intel Systems. Details appear at this URL.[4]

Summarizing IBM’s results, the best performing POWER8 system, the IBM S822LC for HPC, achieved 2.5 times the performance of a comparable Dell system with 52% lower hardware and maintenance costs. The S822LC for Big Data managed 40% better performance than a comparable HP system with 31% lower hardware and maintenance costs. It appears that with comparable hardware and number of cores, POWER8 systems will outperform Intel-based systems and also hold a price/performance advantage.
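Those two ratios compound. As a rough illustration (our own simplification, not an IBM-published figure), relative price/performance can be sketched as the performance gain divided by the relative cost:

    # Rough relative price/performance derived from the ratios IBM published above.
    # Our simplification: advantage = (performance ratio) / (relative cost after the reduction).
    def relative_price_performance(perf_ratio: float, cost_reduction: float) -> float:
        """perf_ratio: performance vs. the x86 comparison system.
        cost_reduction: fraction by which hardware + maintenance cost is lower."""
        return perf_ratio / (1 - cost_reduction)

    # S822LC for HPC vs. the Dell system: 2.5x performance, 52% lower cost
    print(f"{relative_price_performance(2.5, 0.52):.1f}x")   # ~5.2x
    # S822LC for Big Data vs. the HP system: 1.4x performance, 31% lower cost
    print(f"{relative_price_performance(1.4, 0.31):.1f}x")   # ~2.0x

Actual price/performance will, of course, depend on the full configuration and workload.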

There are caveats about these results. The benchmarks are not industry standard; they are not sanctioned by the TPC or SPEC. IBM has made the effort to be transparent by documenting what it did. In the past, when we investigated IBM benchmarks of this type, we found them to be honest and accurate. We believe someone could repeat the benchmarks and get the same results. Having said that, any vendor-run benchmark will remain suspect in the minds of some.

The IBM results are very useful for making potential purchasers aware of significant advantages of Power Systems. We recommend prospective customers examine Power Systems to determine the benefits possible in their own environments.

Other Considerations

Intel holds the dominant position in the generic server market. We believe customers benefit from competition in an open market. We therefore support other options, whether ARM-based or from AMD.

POWER8 provides a realistic alternative. We hope it flourishes. We find the growth of the OpenPOWER Foundation to over 260 companies encouraging. Note, we are not saying to blindly choose a non-Intel alternative. We do believe sensible customers should carefully evaluate all options to determine the best architecture for their business. 

IBM Power Systems possess significant advantages for specific application types and for leveraging the new technologies (e.g. Big Data, analytics, AI/cognitive computing with Watson) where customers are now investing. Take a look, and decide for yourselves. 

[2] An interesting article about Moore’s law (actually more an observation than a law) and its current state is in Wikipedia. Our opinion is that it supports IBM’s position.

Thursday, September 22, 2016

Red Hat leverages Linux strengths to address Digital Transformation

By Bill Moran and Rich Ptak

Let’s start with an admission that in recent years we have not been following Red Hat in any detail. We had considered them a niche Linux player and paid little attention thereafter. That, we now realize, was a mistake.

We heard the first inklings at a 2015 Red Hat event, but circumstances prevented any significant follow-up. When Red Hat scheduled an Analyst Event in New York City, we followed up to get the details of its much larger vision and ambitions for the data center. These extend far beyond new Linux versions. 

Red Hat’s executive speakers at the conference clearly demonstrated that the company has transformed itself to claim a position of real strength in addressing the challenges of digital transformation facing most enterprises today. To us, digital transformation is about the full range of new and emerging technologies that enterprises must adopt to succeed. In particular, it is about the digital technologies impacting the data center. These include the implementation and integration of mobile, a mix of cloud architectures, agile development, cognitive computing, etc. 

Red Hat plans to leverage a solid base of solutions built on its product portfolio, Red Hat Enterprise Linux (RHEL), Red Hat Virtualization, Red Hat OpenStack platform, Red Hat Satellite, etc., along with other emerging technologies. They believe (with good reason – see Figure 1 below) that existing Red Hat customers will want their private cloud or a public cloud to be built on RHEL. They also believe no single cloud version or architecture is likely to satisfy all customer requirements.  

Figure 1 Red Hat's suite of Cloud offerings

Amazon has carved a solid, growing niche in the development and test world. Common sense indicates that customers running production RHEL will resist wholesale conversions to Amazon Linux. At the same time, they may well use Amazon Linux for testing or development while in Amazon’s cloud. Red Hat’s open strategy deals with this situation: its tools, services and management offerings all allow a customer to work in multiple clouds and modes of operation. 

In fact, Red Hat expects that future enterprise customers will have to operate in several modes. These can be characterized as bare metal, virtualized, private cloud, public cloud, and hybrid cloud. Customers will have applications operating in each mode. Red Hat will provide customers the tools and management capability to handle this increasingly complex situation with all based on Red Hat Enterprise Linux.

As a fundamental strategy, this makes excellent sense. Red Hat is building on their strength. There is another critical point Red Hat can make: they have many years’ experience as a leading open source player and understand the workings of the collaborative development process. When they acquire companies, which they have done quite strategically, they convert any proprietary products to open source. Their experience helps these conversions succeed. They have made the major investment to rewrite products to make them compatible with their philosophy and existing offerings. 

Red Hat knows that typical open source products go through many releases. Enterprise customers must have production-ready products, including open source ones, and they are willing to pay to assure the products are reliable. Red Hat has grown into a multi-billion-dollar company based on their ability to supply the support and maintenance that assure open source products meet enterprise requirements. 

Here is how the Red Hat methodology works. There is broad interest today in OpenStack technology, so Red Hat sees a business opportunity and offers support for it. OpenStack developers typically produce a release every six months. However, enterprise customers cannot replace production software every six months. They want production-ready software they can install and use for a significant period, possibly multiple years. Red Hat analyzes the open source code and educates their staff so they can provide maintenance and other support on the release for an extended period. That educated staff provides the support and reliability that enterprises want.

We did some research into Red Hat’s key product, RHEL. We spoke with a very experienced Linux app developer who uses RHEL all the time. His opinion is that RHEL is an excellent solution for production systems; he would not use anything else. In his view, most Linux experts would find the Red Hat offering is exactly what they would want to use. In some special situations, they (and he) might use another version of Linux. We have already mentioned Amazon Linux as a good example of this situation.  

Based on all that we’ve heard and found out, Red Hat is in a strong position to succeed in the efforts to broaden their market. Even recognizing the limitations of an anecdotal sample, our sense is that they understand the market they are targeting.

Finally, we found Red Hat’s event to be well-organized and informative. Their executives quite ably communicated exactly what the company is doing to succeed in the market, and why. The Red Hat staff was knowledgeable. We expect to be writing more about Red Hat’s strategy and their portfolio of products and services in support of that strategy. 

We suggest that anyone considering a Digital Transformation should make the effort to investigate and understand Red Hat and its offerings. It seems to us that they would make an excellent partner for this work. Their established record of success with Linux products and services, as well as their history in the Open Source world, gives them greater credibility than many others in what is becoming a very crowded vendor space. 

Friday, August 12, 2016

Compuware ISPW speeds and simplifies source code management for this Telecom Service Provider!

Source Code Management (SCM) has been a challenge for mainframe development and operations staff for a long time. For too many years the limitations of the VSAM file structure dictated how and what could be achieved with process-based code management.

Check out our Case Study (on our Content page) describing how a major US Telecom Service Provider improved accuracy, increased productivity and sped up its source code management by simplifying management with Compuware's ISPW.

Thursday, June 16, 2016

Cognitive Computing – fitting the platform to the application

By Rich Ptak

Recently, it has been popular to assert that IT professionals and business staff need no longer concern themselves with IT infrastructure. Among other reasons, the claim is that with the growth of Cloud computing and commoditization, infrastructure no longer matters. The assertion is that a general purpose architecture provides all the processing flexibility and power needed to deliver a range of services.

We are convinced that this view is short-sighted and wrong, as it ignores the changing dynamics of computing as Moore’s Law runs down and technology evolves. It focuses on traditional performance metrics while ignoring the realities disrupting IT and how IT is designed, implemented and realized in operations and applications. It trivializes the difficulties of evolving and delivering systems able to meet the requirements of the high-speed, data-intensive, highly scalable, adaptable workloads associated with evolving technologies such as genomics and nanotechnology. It denies, or wishes to ignore, the need for and interest in open, standards-based, systems-oriented infrastructure able to intelligently adapt and optimize for evolving workloads. 

Identifying the Future of Infrastructure

There are enterprise and IT professionals who recognize and understand the implications of the extraordinary demands placed on infrastructure as a result of the combination of today’s competitive market and evolving technologies.

With the IBM IT Infrastructure Point of View[1] (POV) website, IBM offers these professionals “thought leadership” opinions and insights that promote and support the role of IT leaders in planning the use of technology to achieve organizational success. The target audience is those who not only understand the demand, but also seek to add to their knowledge in order to better prepare themselves and their organizations to meet future challenges.

Today’s digital enterprises are being challenged to meet escalating expectations of clients and customers for extraordinary performance, dynamic scalability, robust adaptability and rapid innovation in the delivery of services and products. Such demands cannot be met with infrastructure and systems compromised to serve lowest-common-denominator needs. Nor can they be met with the ‘static’ configurations and fixed architectures of the very recent past.

Meeting the evolving demands of the data- and compute-intensive digital enterprise requires a systems infrastructure for server and storage operations that can be intelligently optimized. The infrastructure must be optimized to deal with emerging, evolving styles of computing. It must be flexible enough to integrate and interact with emerging technologies while still interoperating with existing operating environments. It must also be intelligently and cognitively adaptable to meet the emerging demands of whatever workload it takes on.  

IBM’s Point of View on IT Infrastructure for Cognitive Workloads

With the explosion of cognitive computing, and with its hybrid cloud platforms, Power and z Systems, storage solutions, and experience in leading-edge technologies and solutions, IBM is uniquely positioned to work with clients to help shape the future of their computing operations. Enterprise IT must not only get the most from existing infrastructure, but must also act to leverage new cognitive capabilities and take advantage of emerging technologies.

Today’s systems-oriented solutions depend upon server and storage technologies that can be combined with software-driven cognitive abilities, such as IBM’s Watson. Cognitive computing has the potential to understand, reason, learn and adapt to changes in its operational environment and workloads. IBM is working with clients, customers and partners to push the boundaries of what is possible with an IT infrastructure optimized for cognitive workloads. 

In recognition of all this, IBM is publishing a Point of View (PoV) about IT infrastructure and cognitive workloads. It describes in significant detail how IBM, in conjunction with its partners, will advise and work with customers to aid them in their efforts to organize and plan in order to gain the most advantage from their IT systems and storage infrastructure.

As would be expected, key to this approach are the IBM solutions portfolio of z Systems, Power Systems, IBM Storage, hybrid cloud services and software-driven cognitive computing (ala Watson).  
This strategy is built around three principles:   
  1.    Design for Cognitive Business – to allow action at the speed of thought,
  2.    Build with Collaborative Innovation – to accelerate technology breakthroughs,
  3.    Deliver through a Cloud Platform – to extend the value of systems and data. 
IBM has multiple projects (both under way and completed) where cognitive computing has provided the key factor in achieving competitive advantage, financial performance and enterprise success. They involve enterprises and organizations in a wide variety of markets. The projects have accelerated time to insight with infrastructure deliberately designed and architected for unstructured data at companies in banking, oil and gas exploration, and academia. They have sped up development of new solutions while cutting time-to-market. They have provided infrastructure optimized to run specific workloads with unique business requirements for customers ranging from government agencies to healthcare services.

We won’t steal any more of IBM’s thunder. We suggest that you visit the IBM IT Infrastructure site to review the details. In our opinion, IBM appears to be well ahead of its competition with its comprehensive, customer-centric view and vision. They are also uniquely positioned to speak on this topic. They have both the cutting-edge technology and significant real-life implementation experience with products to demonstrate their ability to deliver on their visions for the future of cognitive computing supported by intelligent infrastructure.

Tuesday, June 14, 2016

Acceleration, Collaboration, Innovation - IBM's roadmap for its POWER architecture

By Bill Moran and Rich Ptak

Vendors know the wisdom of publishing a product roadmap. Users want to know the planned future of the products that they might invest in. They also want insight into how the vendor sees the product evolving.

So, IBM has reason to present the POWER architecture’s future to potential customers and partners. Having successfully persuaded many companies to sign up for OpenPOWER systems, IBM must address questions concerning the future of the product’s architecture. IBM laid out its architectural strategy along with some specifics on its future. We discuss key takeaways from that presentation.

NVLink will be available in systems later this year and be carried forward in future systems. Notice that NVLink and CAPI are both specialized technologies for boosting certain kinds of performance. Combined with various architectural changes, they compensate for the rundown of Moore’s Law. Expect to see more such technologies in the future.

The current POWER8 system architecture is based on a 22 nm chip with 12 cores. Announced in 2014, that base will continue until mid-2017. The major enhancement to the current version is the addition of NVIDIA NVLink. This acts as an extremely high-speed interconnect between the chip and an NVIDIA GPU. The link delivers 80 GB per second in each direction, 5 to 12 times faster than today’s fastest available link. The NVIDIA GPU accelerates floating point operations and other numerically intensive operations that are common in cognitive computing, analytics, and high performance computing.

Featuring partner-developed microprocessors in a roadmap is unique to IBM. It dramatically underscores the vitality of OpenPOWER activities. To our knowledge, no other hardware vendor has achieved anything like this!

IBM and its partners will build systems to take maximum advantage of this link, which allows parallel processing using NVIDIA GPUs. IBM identified two additional partners, Zoom Netcom and Inventec Corporation. Zoom is a China-based system board developer. Inventec is a Taiwan-based server and laptop company. We expect both of these companies to be working on systems for their focus areas.

In mid-2017, IBM will begin rolling out POWER9, a 14 nm chip versus today’s 22 nm POWER8. IBM will first introduce a 24-core scale-out system, followed sometime later by scale-up versions. There was no statement on the number of cores in the scale-up systems. The POWER9 systems will feature a new micro-architecture built around 24 newly redesigned cores, along with a number of high-speed cache and memory interconnects, including DDR4 direct-attach memory channels, PCIe Gen4, and custom accelerators from IBM and its partners.

In the 2018 to 2019 time period, IBM expects its partners to announce chip offerings built on IP from both POWER8 and POWER9, using 10 to 7 nm technology. Partners will be targeting offerings to their own specialized market segments.

While IBM avoided any claims, we expect these systems’ shrinking chip technology will have some dramatic effects (upward) on demand. It’s also clear the partners expect to gain significant competitive and business advantages from their efforts.

POWER10, expected sometime after 2020, is the next large step into the future. IBM provided no details on features or performance, which is typical for a product at least four years away.

IBM will offer two Power9 families. Initially the focus will be scale-out systems with a maximum of 24 cores. Later, scale-up systems will be added, presumably with a larger number of cores. They will share a common architecture.

This roadmap shows that IBM, along with other members of the OpenPOWER Foundation, is developing POWER at an increasing rate. Remember, IBM’s POWER group faces a number of unique and new challenges. No other major vendor has ever attempted to develop new hardware in collaboration with numerous partners ala the OpenPOWER Foundation.

Also, since selling its chip production facilities to GlobalFoundries, IBM’s POWER people must negotiate with an outside company. We believe that IBM’s POWER team is doing an exceptional job in coping with these difficulties. If they deliver on the items in this roadmap, the architecture should remain competitive. Chips developed by other companies both provide a roadmap highlight and effectively demonstrate the OpenPOWER Foundation’s strength.

Here is our simplified version of the road-map (with acknowledgement to IBM):

    Period       Chip                               Cores             Process
    Today        POWER8                             12                22 nm
    2H 2016      POWER8 with NVLink (scale-out)     12                22 nm
    2017         POWER9 (scale-out, new arch.)      24                14 nm
    TBD          POWER9 (scale-up)                  ?                 14 nm
    2018-2019    Partner-developed (POWER8/9 IP)    TBD               10-7 nm
    2020+        POWER10                            TBD               TBD