
Thursday, September 22, 2016

Red Hat leverages Linux strengths to address Digital Transformation

By Bill Moran and Rich Ptak

Let’s start with an admission: in recent years we have not been following Red Hat in any detail. We had considered them a niche Linux player and paid little attention thereafter. That, we now realize, was a mistake.

We heard the first inklings of a change at a 2015 Red Hat event, but circumstances prevented any significant follow-up. When Red Hat scheduled an Analyst Event in New York City, we followed up to get the details of its much larger vision and ambitions for the data center. These extend far beyond new Linux versions.

Red Hat’s executive speakers at the conference clearly demonstrated that the company has transformed itself to claim a position of real strength in addressing the challenges of digital transformation facing most enterprises today. To us, digital transformation is about the full range of new and emerging technologies that enterprises must adopt to succeed, in particular the digital technologies impacting the data center. These include the implementation and integration of mobile, a mix of cloud architectures, agile development, cognitive computing, etc.

Red Hat plans to leverage a solid base of solutions built on its product portfolio – Red Hat Enterprise Linux (RHEL), Red Hat Virtualization, Red Hat OpenStack Platform, Red Hat Satellite, etc. – along with other emerging technologies. They believe (with good reason – see Figure 1 below) that existing Red Hat customers will want their private cloud or a public cloud to be built on RHEL. They also believe no single cloud version or architecture is likely to satisfy all customer requirements.


Figure 1 Red Hat's suite of Cloud offerings

Amazon has carved out a solid, growing niche in the development and test world. Common sense indicates that customers running RHEL in production will resist wholesale conversions to Amazon Linux, though they may well use Amazon Linux for testing or development while in Amazon’s cloud. Red Hat’s open strategy deals with this situation: its tools, services and management offerings all allow a customer to work in multiple clouds and modes of operation.

In fact, Red Hat expects that future enterprise customers will have to operate in several modes. These can be characterized as bare metal, virtualized, private cloud, public cloud, and hybrid cloud. Customers will have applications operating in each mode. Red Hat will provide customers the tools and management capability to handle this increasingly complex situation, all based on Red Hat Enterprise Linux.

As a fundamental strategy, this makes excellent sense. Red Hat is building on its strength. Another critical point works in Red Hat’s favor: the company has many years’ experience as a leading open source player and understands the workings of the collaborative development process. When they acquire companies, which they have done quite strategically, they convert any proprietary products to open source. Their experience helps these conversions succeed, and they have made the major investment needed to rewrite products to make them compatible with their philosophy and existing offerings.

Red Hat knows that typical open source products go through many releases. Enterprise customers must have production-ready products, including open source ones, and they are willing to pay to assure those products are reliable. Red Hat has grown into a multi-billion-dollar company based on its ability to supply the support and maintenance that assure open source products meet enterprise requirements.

Here is how the Red Hat methodology works. There is broad interest today in OpenStack technology, so Red Hat sees a business opportunity and offers support for it. OpenStack developers typically produce a release every six months. However, enterprise customers cannot replace production software every six months; they want production-ready software they can install and use for a significant period, possibly multiple years. Red Hat analyzes the open source code and educates its staff so that they can provide maintenance and other support on the release for an extended period. Red Hat’s educated staff provides the support and reliability that enterprises want.

We did some research into Red Hat’s key product, RHEL. We spoke with a very experienced Linux application developer who uses RHEL all the time. His opinion is that RHEL is an excellent solution for production systems; he would not use anything else. In his view, most Linux experts would find the Red Hat offering exactly what they want to use, although in some special situations they (and he) might use another version of Linux. We have already mentioned Amazon Linux as a good example of this situation.

Based on all that we’ve heard and learned, Red Hat is in a strong position to succeed in its efforts to broaden its market. Even recognizing the limitations of an anecdotal sample, our sense is that they understand the market they are targeting.

Finally, we found Red Hat’s event to be well-organized and informative. Their executives quite ably communicated exactly what the company is doing to succeed in the market, and why. The Red Hat staff was knowledgeable. We expect to be writing more about Red Hat’s strategy and their portfolio of products and services in support of that strategy.

We suggest that anyone considering a Digital Transformation should make the effort to investigate and understand Red Hat and its offerings. It seems to us that they would make an excellent partner for this work. Their established record of success with Linux products and services, as well as their history in the Open Source world, gives them greater credibility than many others in what is becoming a very crowded vendor space.

Friday, August 12, 2016


Compuware ISPW speeds and simplifies source code management for this Telecom Service Provider!


Source Code Management (SCM) has been a challenge for mainframe development and operations staff for a long time. For too many years the limitations of the VSAM file structure dictated how and what could be achieved with process-based code management.

Check out our Case Study (on our Content page http://www.ptakassociates.com/content/ ) describing how a major US Telecom Service Provider improved accuracy, increased productivity and sped up its source code management by simplifying management with Compuware's ISPW.

Thursday, June 16, 2016

Cognitive Computing – fitting the platform to the application

By Rich Ptak


Recently, it has been popular to assert that IT professionals and business staff need no longer concern themselves with IT infrastructure. Among other reasons, the claim is that with the growth of Cloud computing and commoditization, infrastructure no longer matters. The assertion is that a general purpose architecture provides all the processing flexibility and power needed to deliver a range of services.

We are convinced that this view is short-sighted and wrong, as it ignores the changing dynamics of computing as Moore’s Law runs down and technology evolves. It focuses on traditional performance metrics while ignoring the realities disrupting IT and how IT is designed, implemented and realized in operations and applications.
It trivializes the difficulty of evolving and delivering systems able to meet the requirements of the high-speed, data-intensive, highly scalable, adaptable workloads associated with evolving technologies such as genomics and nanotechnology. It denies, or wishes away, the need for and interest in open, standards-based, systems-oriented infrastructure able to intelligently adapt and optimize for evolving workloads.

Identifying the Future of Infrastructure

There are enterprise and IT professionals who recognize and understand the implications of the extraordinary demands placed on infrastructure as a result of the combination of today’s competitive market and evolving technologies.

With the IBM IT Infrastructure Point of View[1] (POV) website, IBM offers these professionals “thought leadership” opinions and insights that promote and support the role of IT leaders in planning the use of technology to achieve organizational success. The target audience is those who not only understand the demand, but also seek to add to their knowledge in order to better prepare themselves and their organizations to meet future challenges.

Today’s digital enterprises are being challenged to meet escalating expectations of clients and customers for extraordinary performance, dynamic scalability, robust adaptability and rapid innovation in the delivery of services and products. Such demands cannot be met with infrastructure and systems compromised to serve lowest-common-denominator needs. Nor can they be met with the ‘static’ configurations and fixed architectures of the very recent past.

Meeting the evolving demands of the data- and compute-intensive digital enterprise requires a systems infrastructure for server and storage operations that can be intelligently optimized. The infrastructure must be optimized to deal with emerging, evolving styles of computing. It must be flexible enough to integrate and interact with emerging technologies while still able to interoperate with existing operating environments. It must also be intelligently and cognitively adaptable to meet the emerging demands of whatever workload it takes on.

IBM’s Point of View on IT Infrastructure for Cognitive Workloads

With the explosion of cognitive computing, its hybrid cloud platforms, Power and z Systems, storage solutions and experience in leading-edge technologies and solutions, IBM is uniquely positioned to work with clients to help shape the future of their computing operations. Enterprise IT must not only get the most from its existing infrastructure but must also act to leverage new cognitive capabilities and take advantage of emerging technologies.

Today’s systems-oriented solutions depend upon server and storage technologies that can be combined with software-driven cognitive abilities, such as IBM’s Watson. Cognitive computing has the potential to understand, reason, learn and adapt to changes in its operational environment and workloads. IBM is working with clients, customers and partners to push the boundaries of what is possible with an IT infrastructure optimized for cognitive workloads.

In recognition of all this, IBM is publishing a Point of View (PoV) about IT infrastructure and cognitive workloads. It describes in significant detail how IBM, in conjunction with its partners, will advise and work with customers to aid them in their efforts to organize and plan in order to gain the most advantage from their IT systems and storage infrastructure.

As would be expected, key to this approach is the IBM solutions portfolio of z Systems, Power Systems, IBM Storage, hybrid cloud services and software-driven cognitive computing (a la Watson).
This strategy is built around three principles:   
  1.  Design for Cognitive Business – to allow action at the speed of thought,
  2.  Build with Collaborative Innovation – to accelerate technology breakthroughs,
  3.  Deliver through a Cloud Platform – to extend the value of systems and data.
IBM has multiple projects (both under way and completed) where cognitive computing has provided the key factor in achieving competitive advantage, financial performance and enterprise success. They involve enterprises and organizations in a wide variety of markets. The projects have accelerated time to insight with infrastructure deliberately designed and architected for unstructured data at companies in banking, oil and gas exploration, and academia. They have sped up development of new solutions while cutting time-to-market. They have provided infrastructure optimized to run specific workloads with unique business requirements for customers ranging from government agencies to healthcare services.

We won’t steal any more of IBM’s thunder. We suggest that you visit the IBM IT infrastructure site to review the details. In our opinion, IBM appears to be well ahead of its competition with its comprehensive, customer-centric view and vision. They are also uniquely positioned to speak on this topic. They have both the cutting edge technology and significant real-life implementation experience with products to demonstrate their ability to deliver on their visions for the future of cognitive computing supported by intelligent infrastructure.

Tuesday, June 14, 2016

Acceleration, Collaboration, Innovation - IBM's roadmap for its POWER architecture

By Bill Moran and Rich Ptak


Vendors know the wisdom of publishing a product roadmap. Users want to know the planned future
of the products that they might invest in. They also want insight into how the vendor sees the product evolving.

So, IBM has reason to present the POWER architecture’s future to potential customers and partners. Having successfully persuaded many companies to sign up for OpenPOWER systems, IBM must address questions concerning the future of the product’s architecture. IBM laid out its architectural strategy along with some specifics on its future. We discuss key takeaways from that presentation.



NVLink will be available in systems later this year and be carried forward in future systems. Notice that NVLink and CAPI are both specialized technologies for boosting certain kinds of performance. Combined with various architectural changes, they compensate for the rundown of Moore’s Law. Expect to see more such technologies in the future.

The current POWER8 system architecture is based on a 22 nm chip with 12 cores. The chip was announced in 2014, and IBM plans to continue with that base until mid-2017. The major enhancement to the current version is the addition of NVIDIA NVLink. This acts as an extremely high-speed interconnect between the chip and an NVIDIA GPU. The link delivers 80 GB per second in each direction, which IBM says is 5 to 12 times faster than today’s fastest available link. The NVIDIA GPU accelerates floating point operations and other numerically intensive operations that are common in cognitive computing, analytics, and high performance computing.
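For a rough sense of where the low end of that range comes from, here is a back-of-envelope check. It assumes, for comparison, a PCIe 3.0 x16 link at roughly 16 GB per second per direction; that baseline is our assumption, not a number IBM presented.

    // Back-of-envelope NVLink speedup check. The PCIe 3.0 x16 baseline (~16 GB/s per
    // direction) is our assumption for comparison, not taken from IBM's presentation.
    public class NvlinkSpeedup {
        public static void main(String[] args) {
            double nvlinkGBps = 80.0;    // NVLink per-direction bandwidth cited above
            double pcie3x16GBps = 16.0;  // assumed PCIe 3.0 x16 per-direction bandwidth
            System.out.printf("NVLink vs PCIe 3.0 x16: ~%.0fx%n", nvlinkGBps / pcie3x16GBps);
        }
    }

That works out to roughly 5x; the higher multiples in IBM’s range presumably apply against slower links.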



Featuring partner-developed microprocessors in a roadmap is unique to IBM. It dramatically underscores the vitality of OpenPOWER activities. To our knowledge, no other hardware vendor has achieved anything like this!

IBM and its partners will build systems to take maximum advantage of this link, which allows parallel processing using NVIDIA GPUs. IBM identified two additional partners, Zoom Netcom and Inventec Corporation. Zoom is a China-based system board developer. Inventec is a Taiwan-based server and laptop company. We expect both of these companies to be working on systems for their focus areas.

In mid-2017, IBM will begin rolling out POWER9, a 14 nm chip versus today’s 22 nm POWER8. IBM will first introduce a 24-core scale-out version, followed sometime later by scale-up versions; there was no statement on the number of cores in the scale-up systems. The POWER9 systems will feature a new micro-architecture built around 24 redesigned cores and a number of high-speed cache and memory interconnects, including DDR4 direct-attach memory channels, PCIe Gen4 and custom accelerators from IBM and its partners.

In the 2018 to 2019 time period, IBM expects its partners to announce chip offerings based on IP from both Power8 and Power9, built on 10 to 7 nm technology. Partners will be targeting offerings to their own specialized market segments.

While IBM avoided any claims, we expect the shrinking chip technology in these systems to have a dramatic (upward) effect on demand. It’s also clear the partners expect to gain significant competitive and business advantages from their efforts.

Power10, expected sometime after 2020, is the next large step into the future. IBM provided no details on features or performance, which is typical for a product at least four years away.


IBM will offer two Power9 families. Initially the focus will be scale-out systems with a maximum of 24 cores. Later, scale-up systems will be added, presumably with a larger number of cores. They will share a common architecture.

This road map shows that IBM, along with other members of the OpenPOWER Foundation, is developing the POWER architecture at an increasing pace. Remember, IBM’s POWER group faces a number of unique and new challenges. No other major vendor has ever attempted to develop new hardware in collaboration with numerous partners in the manner of the OpenPOWER Foundation.

Also, since selling its chip production facilities to Global Foundries, IBM’s POWER team must negotiate with an outside company for manufacturing. We believe that IBM’s POWER team is doing an exceptional job in coping with these difficulties. If they deliver on the items in this road map, the architecture should remain competitive. Chips developed by other companies provide both a roadmap highlight and an effective demonstration of the OpenPOWER Foundation’s strength.

Here is our simplified version of the road-map (with acknowledgement to IBM):

              Today       2H 2016          2017                   TBD        2018-2019            2020+
Chip          Power8      Power8           Power9                 Power9     Power8/9             Power10
Cores         12 cores    12 cores         24 cores               ? cores    -                    -
Features      CAPI        NVLINK + CAPI    Scale out, new arch    Scale up   Partner developed    -
Process       22 nm       22 nm            14 nm                  14 nm      10-7 nm              -

Tuesday, May 24, 2016

Java on the Mainframe, big problems ahead? Not if BMC can help it!

By Rich Ptak

Today’s dynamic, mobile-obsessed, service-driven market is proving both beneficial and problematic for data center operations. Conventional “wisdom” has it that distributed systems and mobile devices have been the big winners. In truth, the benefits increasingly apply to mainframe environments as well. And Java on the mainframe is playing a significant role, maybe larger than is generally known.

The mainframe remains an active, effective and in-demand player in today’s DevOps, agile and mobile-oriented world. Why? Because much of the critical data, information and assets that support the most-used applications in banking, financial services, retail, travel and research resides and is analyzed there. Mobility-obsessed operations remain linked to and dependent upon mainframe operations.

That’s not to say problems don’t exist. Transaction volumes (often non-revenue producing) have exploded. Unpredictable traffic loads and patterns, the complexity of multi-platform integrations, demands for instant response time, etc. have made the environment more difficult to manage, disrupting maintenance and operations. Yet mainframe teams are expected to deliver modernized, mobile applications faster and at lower cost.

One response was to put Java on the mainframe. Its features make it highly attractive. It is tailored for rapid development cycles. Designed for mobile/web application development, it is platform independent. It integrates easily with a variety of applications, operating environments and databases. The Java Native Interface (JNI) on z allows easy interaction with embedded program logic coded in multiple languages (COBOL, PL/1, ASM) and environments (CICS, WAS, IMS, DB2, USS, Batch, MQ, TCP/IP). In agile computing and DevOps, Java dominates among programmers and developers as the preferred environment.
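For readers unfamiliar with JNI, here is a minimal sketch of what the Java side of such a call can look like. The library name and method are hypothetical, invented purely for illustration; the native implementation (which on z/OS could wrap logic written in C, COBOL or PL/1, as described above) must be built and deployed separately.

    // Minimal JNI sketch. The library name "acctnative" and the method fetchBalance()
    // are hypothetical examples; the native implementation is supplied outside this class.
    public class AccountLookup {
        static {
            System.loadLibrary("acctnative"); // loads the platform-native library at class load
        }

        // Declared in Java, implemented in native code built and linked separately.
        private native double fetchBalance(String accountId);

        public static void main(String[] args) {
            System.out.println("Balance: " + new AccountLookup().fetchBalance("12345"));
        }
    }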

Java’s Hidden Threat

Unfortunately, few mainframe-experienced staff have extensive Java familiarity or expertise. This means the potential for major problems lurks in the background. Java was not designed to operate in the mainframe’s shared environment. Java does have built-in code to monitor AND manage resources. For instance, it manages memory space with a process of ‘garbage collection’. It identifies memory actively being used, gathers and compacts it, then frees the rest. It does not check for the impact on other programs. In a mainframe environment, these actions have the potential to seriously disrupt operations, freezing some jobs and delaying the completion of others.

However, during this activity Java pauses ALL in-flight transactions, not just those of a single app. Nor does it check for the impact of its actions on other applications or technologies. Compounding the problem, there has been no integrated view across the system technologies for monitoring and management. In fact, some Java can be running in the data center without all staff being aware of it.
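As a simple illustration of the kind of data involved, the JVM itself exposes cumulative garbage-collection counts and pause times through its standard management beans. The sketch below reads them from inside a running application; it is not BMC’s tooling, just the raw JVM-level view an administrator could start from.

    // Minimal sketch: read the JVM's own garbage-collection statistics via standard JMX beans.
    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcStats {
        public static void main(String[] args) {
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                // Accumulated collection count and approximate elapsed time (ms) spent collecting
                System.out.printf("%s: %d collections, %d ms total%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
        }
    }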

Java’s acceptance on the mainframe is growing: BMC’s recent mainframe survey reveals Java in use or planned for use in 61% of DB2 apps, 57% of CICS apps and 49% of IMS apps[i]. This is a serious situation. Tools to manage Java itself exist, but there is no fully integrated tool to monitor and manage the impact of what it is doing. BMC addresses this lack with MainView for Java Environments (MVJE). Let’s look at what it offers.

BMC’s MainView for Java Environments

MVJE provides much needed functionality. It does not monitor Java code per se; it monitors the infrastructure to track the effects of Java activity. Specific functionality includes the following (a small illustration of the raw JVM-side metrics follows the list):

  • Automatic discovery of z/OS JVMs (early users were often surprised at the amount of Java actually in use),
  • Real-time metrics for the z/OS Java runtime environment to detect the impact of Java activities, e.g., CPU usage, garbage collection metrics, memory usage data, etc.,
  • Analysis to detect the workload impact of Java-initiated management activities (combined with Unix System Services (USS), it can initiate activities to address potential problems, e.g., thread use problems),
  • Optimized operations through integration with MainView monitoring for cross-technology analysis,
  • Customizable dashboard views of Java runtime metrics.
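To make the metrics above concrete, the sketch below shows the sort of raw heap figures a JVM reports about itself through the standard management API. A product such as MVJE correlates this kind of data across JVMs, subsystems and the rest of the MainView picture; the snippet only illustrates the underlying JVM-side metrics, not BMC’s implementation.

    // Minimal sketch: snapshot the JVM's heap usage via the standard MemoryMXBean.
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    public class HeapSnapshot {
        public static void main(String[] args) {
            MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
            MemoryUsage heap = mem.getHeapMemoryUsage();
            System.out.printf("heap used=%d MB, committed=%d MB, max=%d MB%n",
                    heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
        }
    }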


Much Java code is zIIP-eligible; automated discovery along with monitoring of zIIP offload assures no zIIP-eligible code is missed. The additional data on infrastructure impact, resource usage, performance, etc. helps to avoid problems even as it speeds diagnosis and eventual resolution. This reduces the need for cross-functional “War Room” meetings used to identify, diagnose and resolve Java-caused problems that impact application availability and performance.

MVJE quickly discovers and monitors JVMs. It pinpoints Java’s resource usage, so application performance and availability continue to meet service levels. IT teams can quickly identify the root cause of problems, reducing MTTR and improving productivity. MVJE monitoring of zIIP offloading helps to lower MLC (Monthly License Charge) costs.

MVJE delivers all the normal benefits associated with BMC’s MainView product in terms of a single, integrated view of system activities. The risks associated with unmonitored, unmanaged technology are eliminated. More efficient monitoring and assured effective use of zIIPs help to reduce MLC and capital costs. Intelligent automation allows proactive resolution of problems, again saving costs and improving overall system performance. Complexity is reduced as users can customize dashboards and reports to meet their specific information needs.

 The Final Word

We’ve discussed how Java’s built-in resource and memory management, operating in the background, unmonitored and unmanaged, can increase costs, slow processing and waste resources. BMC’s MVJE is the first full system monitoring solution to address these risks.

System admins can now gain actionable insight into Java’s impact on infrastructure, resource usage and operations. Java becomes another well-monitored and managed technology.

Beta customers appear to be very satisfied with the product. A number revealed that they had experienced significant savings and improved performance as a result of using MVJE.  We’re not surprised.

BMC is providing existing MainView customers a free Java Discovery. We look forward to interviewing some customers after they have some experience with the product. We expect some will be surprised at the result, as they believed themselves to be ‘Java-free’. We also believe that it will result in a significant number of sales. BMC has once again demonstrated its connection with its customers and its commitment to being a leader in mainframe solutions.




[i] Source: 2015 BMC Mainframe Research Results

Friday, May 20, 2016

Got something to say about the Mainframe? Check this out: BMC launches 11th Annual Mainframe Research Survey

Got something on your chest about the mainframe? Familiar with your organization’s mainframe environment? Why you use it? Its benefits? Where it is going? Its future? Have you ever wanted to tell a major vendor (and the world) about the mainframe in your enterprise? Here’s your chance.

From May 24th until June 6th, BMC is collecting data for its 11th survey of the trends in mainframe usage. Already one of the largest industry surveys, with over 1,200 mainframe professionals and executives participating, BMC is seeking to attract an even larger number of participants.

The research results will be used by vendors, technical and executive users, industry analysts, media, etc. to make significant decisions and draw conclusions about just about everything mainframe. The report will influence investments, products (new and enhanced), hiring, functionality, etc.

The mainframe is a critical backbone with impact across industries and markets from mobility to analytics to complex modeling and the ongoing transformation of digital business.


So, if you’re technical IT staff involved in mainframe management or operations, or part of a mainframe IT team as an executive, manager or technical architect recommending general management or operational practices, this is your chance to take 20 minutes to contribute to the conversation and influence the future of the mainframe.


Starting May 24th, you can take the survey here!

Tuesday, May 10, 2016

Datto Drive: SMB desktop data protection at a hard to refuse price!

By Rich Ptak


Datto provides enterprise-grade backup, restore and recovery services in its privately-owned 200+ petabyte cloud. Founded in 2007, Datto has 600+ employees who build products and support customers from nine data centers and seven offices located around the world. It performs over one million backups every day, protecting millions of endpoints. In addition to running its own private cloud, Datto builds all the devices it uses and provides.

Datto is all about Business Continuity and Data Recovery (BCDR) for SMBs. Their success to date has been built on their use of a private hybrid cloud, the Datto Cloud, for backup/restore, advanced storage, instant virtualization (local and remote), screenshot verification (to remotely verify backup data integrity) and on-prem file sync-and-share (FSS). All of this is delivered through an international network of thousands of Managed Service Providers (MSPs).

They expect their next big step forward, Datto Drive, to carry them deeper into the SMB market with in-cloud FSS and BCDR. Before we provide more details on the product, why should you want to know those details?

In its introductory year, Datto is making available:
·         One million Datto Drive accounts for free to:
o   SMBs (business accounts only, no personal users)
o   For one year
o   With one terabyte of data storage (all managed by Datto in the Datto Cloud).
·         After one year, the offering changes to:
o   $10 per month per domain (NOT per user, the price holds no matter how many users)
o   Service delivered through a Datto MSP partner (which they’ll help you find, if necessary)
o   Premium versions with larger storage volumes and additional services are available; premium services are available (for a fee) during the first year as well.

Given that competitors’ pricing is higher on a per-user basis, let’s see what Datto’s functionality includes.
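To put the per-domain pricing in perspective, here is a back-of-envelope comparison for a hypothetical 25-user shop. It uses the $15 per user per month competitor figure cited later in this piece; the 25-user headcount is our own illustrative assumption.

    // Back-of-envelope monthly cost comparison; the 25-user headcount is a hypothetical example.
    public class FssCostCompare {
        public static void main(String[] args) {
            int users = 25;
            double dattoPerDomainMonthly = 10.0;     // Datto Drive: $10/month for the whole domain
            double competitorPerUserMonthly = 15.0;  // per-user competitor price cited in this article
            System.out.printf("Datto: $%.0f/month vs per-user pricing: $%.0f/month%n",
                    dattoPerDomainMonthly, users * competitorPerUserMonthly);
        }
    }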

Datto Drive

Datto Drive brings highly affordable sync-and-share plus full backup/restore and disaster recovery for desktop and mobile devices to SMBs. They are targeting the less-than-one-third of the SMB market not currently working with MSPs, who are sorely in need of comprehensive, enterprise-grade FSS and BCDR services.

For the price, Datto Drive offers enterprise-grade FSS built on ownCloud open-source technology. It provides the superior security of Datto’s hybrid cloud with advanced capabilities and functionality in permission management, tracking and tracing. There has been no proven data loss since Datto’s founding in 2007.

Datto Drive supports virtually every type of file (video, image, audio, text, etc.). File sharing, control and management can be done from any supported device (desktop, deskside, mobile). It permits real-time collaboration for sharing, exchange, editing, etc. across domain users. Sync-and-share capabilities are already available for most existing operating systems and devices, e.g., iPhone, Android, iPad, Windows, Linux and Mac. The ownCloud technology means that thousands of value-add apps are available. Finally, it also includes backup for Microsoft 365, OneDrive and SharePoint files.

Final Word


There’s a lot more functionality and plenty of things to like about Datto Drive and the rest of the product portfolio. We suggest a visit to Datto.com to see all that is available. When competitors such as Dropbox and Box are charging $15/user/month for similar services, this offer appears to us to be hard to resist. SMB owners should also move quickly to snag one of the 1 million domain accounts. It’s our opinion that they’ll disappear quickly.