David Hodgson on the role of mainframes in Web 2.0 computing
- By Rutrell Yasin
- Aug 28, 2009
High-end mainframe computers have long been integral to most government agencies’ information technology infrastructures. As agencies move toward more Web-based services, companies such as CA have increased their efforts to make mainframes more manageable and secure in Web 2.0 computing environments. David Hodgson, a senior development manager at CA’s Mainframe Business unit, has worked with large systems for more than 30 years and has seen the pendulum swing from mainframes to distributed computing and now back to the concept of centralized computing. GCN spoke with Hodgson recently about the role of the mainframe in emerging environments.
GCN: How long have you worked with mainframes?
David Hodgson: I started out in England as an assembler programmer with a financial institution in 1978. I’ve been associated with mainframes since then — most of the time with software vendors. In thinking back, change was very slow for a long time in the mainframe world, mainly because of the past successes. The IBM infrastructure and the IBM networking protocol Systems Network Architecture (SNA) were so successful at what they did in terms of providing incredibly fast response times and secure environments that I think the platform very quickly matured and didn’t have the need to change.
I think it was really the network aspect of things that drove so many changes when the Internet exploded during the 1990s and TCP/IP really became the ubiquitous network protocol. Before then, SNA was probably the biggest network protocol used in big industries around the mainframe. X.25 was a packet protocol used in government that matched up to SNA. But when TCP/IP became the victor and the underlying structure of the Internet — predating that, people were swapping out green screen [terminals] for personal computers — that was the first step [toward change].
They kept the SNA architecture. They kept the controllers. But PCs were attached instead of the green screens. So people were used to a PC environment. Then came along the Internet and TCP/IP and the whole way of using the PC outside of the mainframe, and that was the thing that unleashed the big change. During the ’90s, because of the attractiveness of PC connectivity and the ability of IT departments to very rapidly deliver new applications off the mainframe, it appeared at first it was cheaper to do that. The mainframe sort of seemed threatened and in decline.
There was a lot of talk that the mainframe was dead.
IBM has done a wonderful job of trying to fight that. It was predicted that the last mainframe would be turned off before the end of the century. Clearly that didn’t happen. But a lot of things did happen. SNA is almost completely gone now. IBM Network Control Program, the front-end controllers under the big nodes in the network, is completely gone. The introduction of the TCP/IP stack on the mainframe was one of the first things IBM did to provide an alternative to SNA and keep the mainframe connected to the distributed world, one of many changes the company made to the architecture to keep the platform modernized.
There was an inertia built into the whole mainframe community not to change and adopt new technologies. Coupled with the idea that the mainframe was too expensive and the distributed world was more affordable, that set up the trend we saw through the ’90s and the start of this century. That squashed a lot of the development and the role the mainframe could have and should have played in virtualization, cloud computing and Web 2.0.
What is interesting now is that we are seeing the reversal of many of these things and the pendulum swinging back the other way. There’s a lot more interest in, “Oh, the mainframe is not going away, and it is a critical part of our IT infrastructure,” for people who have a mainframe. So now how do we incorporate it into our Web 2.0 strategy, into a service-oriented architecture? We can see now how it has been incorporated since 2000. And in the last few years, with other drivers such as energy costs, there is a great deal of interest in making it more deliberately a part of an IT strategy.
What do you think about the role of Linux on the mainframe?
The trend that is interesting is that people are seeing that the mainframe platform has evolved. Truly there isn’t anything you can’t do on the mainframe, whether it is Java or Web application hosting — whatever it is you might want to do on a distributed platform, you can do now on the IBM z/OS mainframe. There is a growing trend toward using the mainframe platform and z/Linux on the mainframe. That’s interesting, too, and indicative of the slowness of change where you might not have anticipated it — z/Linux came out in 2001, I think. CA rapidly embraced z/Linux. We ported maybe 50 of our distributed products to the z/Linux environment. We went arm-in-arm with IBM on this, and really, we made zero dollars off of that. And a lot of people for a long time within CA felt burned. It’s just been very slow.
But we have seen in the last couple of years some very serious early adopters of z/Linux, and more people now seriously considering it. If they have a virtualization strategy, then thought No. 2 is, “Wouldn’t it be best to have that virtualization strategy on the mainframe?” And they usually mean the z/Linux side of the host rather than z/OS. As a trend, we love it. From a mainframe business unit point of view, we want to embrace the new platform, and we have some creative ideas — we’re interested in managing the environment.
How do you manage and help people reduce operational and labor costs around that new environment? The mainframe environment will never totally replace the distributed environment. From now on, you’re always going to have a mix.
Are folks that haven’t had mainframes before buying big iron?
We are seeing new mainframe customers. IBM reported in 2008 something like 52 new mainframe customers, and 30 of them were Linux-only customers. There are two interesting parts to those numbers. One, you’re getting the z/OS customers in emerging economies like China, Korea and India that want to set up their banking institutions in a similar way to the U.S. and the West. But the other part is that people are now buying mainframes as an entry point into a large-scale computing strategy around z/Linux and serious virtualization. That’s very interesting. IBM has the entry-level, business-class box that is around $100,000. So it is very affordable to go out and buy your first mainframe and run z/Linux and make it a serious part of a medium-size business.
Has service-oriented architecture given the mainframe a new breath of life?
It definitely has in terms of the ability now for applications to be rehosted to the mainframe, whether it is z/Linux or applications running on z/OS. From CA’s perspective, we don’t provide applications, we provide management software. You’ve had some exposure to our mainframe 2.0 strategy and the mainframe software management piece. We’re taking that further, building up to an announcement at Share [the IBM mainframe users’ conference, held Aug. 23-28 in Denver], and we hope to deliver the first pieces of it next May.
We’re working with customers to come up with a SOA-based [solution]. We’ve been referring to it as a user interface, but it’s a new workplace for how you would manage the mainframe. It involves taking our existing management products and exposing the functions that they provide as Web services and then consuming those Web services in this new management user interface in such a way that we can leap the mainframe forward.
A lot of these products still have green screens, or some of them have Web interfaces, but they are different from what I’m describing. The downside to the mainframe that we see is that the workforce is older, and we need to make it viable for the emerging 20-something workforce.
How do you do that? We think this management workplace that we are going to build on a SOA architecture is the answer. It will entirely run on the mainframe. It will be written in Java and C, and all you’ll need is a browser, more like an [Apple] Mac interface. We’re going to use Web 2.0 types of concepts and collaboration software. So, for instance, while you are working as a database administrator, you might need to talk to the performance guy. You might want to instant message the performance person to interact dynamically from your workplace. You might be able to blog about things or changes that are going on. We’re trying to get this look and feel that people are used to in terms of using Web tools and bring that to the mainframe. It is a natural thing for the emerging workforce.
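The approach Hodgson describes (wrapping a function from an existing management product in a Web service that a browser-based workplace can then consume) can be sketched in a few lines of Java, the language he says the new workplace will use. Everything below, from the class name to the /api/status endpoint and the JSON fields, is a hypothetical illustration rather than CA’s actual product code; it uses only the JDK’s built-in com.sun.net.httpserver package.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: expose one mainframe-management metric as a Web service.
public class MainframeStatusService {

    // Hypothetical management function; a real product would query the
    // underlying z/OS system here instead of echoing its arguments.
    static String statusJson(String lpar, int activeJobs) {
        return String.format("{\"lpar\":\"%s\",\"activeJobs\":%d}", lpar, activeJobs);
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/api/status", exchange -> {
            // Serve the management function's result as JSON over HTTP.
            byte[] body = statusJson("PROD1", 42).getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start(); // a browser-based workplace could now GET /api/status
    }
}
```

Any browser or Web 2.0 front end can then consume the endpoint with an ordinary HTTP GET, which is what lets a modern interface sit on top of existing management products.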
GCN: How do you see the mainframe and the management of the mainframe playing in this world of cloud computing?
Hodgson: On-demand and cloud computing are very much dependent on the concept of virtualization underneath them. CA as a company has been thinking on the distributed side about software-as-a-service offerings. It is a little harder for our management products to adapt to a software-as-a-service model. But the mainframe itself [can be used] as a vehicle to allow things like efficient, secure and large-scale hosting of applications and business processes for other people.
Then if you think about a cloud, you don’t care where [services] come from, you just want the service. Then the mainframe has to stand out there as one of the most viable ways of doing that. You’ve seen the pendulum going from centralized computing to distributed computing, and you have lots of those inside your enterprise. Now you want to virtualize, so there are not so many physical moving pieces, and you say, “I don’t mind where I get the function from, I only care about the function.” Then you say, “What is the cheapest, most secure and reliable way of getting that service?” That has to set the scene for a move back to centralized computing.
The mainframe plays very strongly, particularly for people who have a mainframe already, but increasingly for people who might be a purveyor of these services or thinking of organizing their own IT strategies entirely this way.
So I think we’ll see the mainframe as the underlying physical manifestation of the cloud.
Rutrell Yasin is senior editor for GCN covering cloud computing.