Lower TCO Helps Justify SOA Modernization Strategy

March 15, 2010

Commenting on a recent Attachmate survey, Gregg Willhoit explains how lowering the total cost of ownership on the mainframe has helped IT pros justify mainframe modernization with a service-oriented architecture (SOA) approach. This podcast runs 4:34.


Gregg Willhoit:

There have been many advances in mainframe TCO with regard to modernizing applications, or building SOA-based architectures around legacy applications. The development tools are significantly better than they were five or six years ago. Today it typically doesn't require any programming to expose existing legacy applications as Web services. This includes top-down, bottom-up, and meet-in-the-middle approaches.

It was interesting to note in the Attachmate article that bottom-up is still the most often used method for developing a Web service from legacy assets. I'm almost positive this is going to change over the next five years. Bottom-up scenarios are very useful, but they're an early-adopter type of implementation: they require less of a commitment to an SOA-based implementation, and they're somewhat simpler to build.

And what I mean by bottom-up is taking an existing CICS program, which typically has a COMMAREA mapped by a COBOL copybook, and generating your contract, or your WSDL, based on that copybook, as opposed to beginning with a WSDL and then generating the appropriate data structures for accessing the existing programs, which would be WSDL first, or top-down.
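To make the contrast concrete, here is a minimal sketch of the bottom-up idea expressed in Java terms with JAX-WS, rather than the CICS/COBOL tooling the survey is about; the service name and operation are hypothetical, but the pattern is the same: the contract is derived from existing code rather than written first.

```java
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// Hypothetical bottom-up service: the Java class, standing in for an
// existing program, comes first, and the WSDL contract is generated
// from it, e.g. with the JDK's wsgen tool or at publish time.
@WebService
public class AccountInquiryService {

    @WebMethod
    public String getAccountStatus(String accountNumber) {
        // A real bottom-up wrapper would delegate to the existing
        // legacy routine; stubbed here for illustration.
        return "OPEN";
    }

    public static void main(String[] args) {
        // Publishing the endpoint also makes the generated WSDL
        // available, at http://localhost:8080/inquiry?wsdl
        Endpoint.publish("http://localhost:8080/inquiry",
                new AccountInquiryService());
    }
}
```

The top-down, or WSDL-first, direction reverses this: you start from a hand-written WSDL and generate the service interface and data classes from it, for example with the JDK's wsimport tool.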

So I was somewhat surprised by the Attachmate document's finding that most people were still using bottom-up. This tells me that we're still in the nascent phase of SOA implementations for legacy applications, because bottom-up is clearly an early-adopter type of approach.

Shifting to what else drives advances in TCO when SOA-enabling legacy applications: IBM's efforts to reduce the hardware and software costs associated with SOA have obviously had a significant impact. Primarily, of course, this has to do with the implementation of the specialty engine concept, which includes the zAAP, the zIIP, and the IFL.

Each of these processors is a specialty engine. This allows vendors, as well as IBM, to do more computationally intensive processing on engines that don't count toward charge-back and are almost free from a software licensing standpoint. So the advances IBM has made with specialty engines, and with exposing more and more system services so they can use specialty engines, have been significant.

IBM in the past few years has come out with things like XML System Services, which allows the parsing of XML to run on a specialty engine. Of course, Java can run on specialty engines. And then there are vendors like us that run all of their code on specialty engines.
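As a rough illustration of the kind of work being offloaded, here is a small Java sketch of XML parsing using the standard StAX API; the payload and class name are made up. The point is simply that Java work like this is eligible to run on specialty engines on z/OS, while XML System Services provides comparable parsing for callers outside the JVM.

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

// Illustrative only: a StAX pull-parse of a small payload, the sort
// of XML processing that SOA-enabling a legacy application generates
// on every request and that specialty engines can absorb.
public class PayloadParse {
    public static void main(String[] args) throws Exception {
        String xml = "<account><number>12345</number>"
                   + "<status>OPEN</status></account>";
        XMLStreamReader reader = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        while (reader.hasNext()) {
            if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                System.out.println("element: " + reader.getLocalName());
            }
        }
        reader.close();
    }
}
```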

So when customers are considering the TCO of SOA-enabling legacy applications, vendors like us offload all of our code that has to do with SOA processing to specialty engines, making the decision to move to an SOA architecture much easier. Not only can you build these SOA-based applications with GUI-based development tools that ease the process, but the actual run-time cost is lower as well. So it's kind of a win/win scenario these days.

Gregg Willhoit