The advance of virtualization puts the focus on infrastructure needs
While the realized benefits of virtualization are still debated, the technology itself is gaining ground in cash-strapped agencies as a way to gain efficiencies and extend IT services to more users. The focus now is on managing the complexities of the infrastructure that supports it.
Although virtualization has been around as a concept for a long time, it has only begun to proliferate in government as a result of mandates such as the Office of Management and Budget’s Federal Data Center Consolidation Initiative (FDCCI), launched early in the Obama administration’s first term. Budget pressures and other requirements, such as the need for agencies to boost teleworking, are now drawing attention to other forms of the technology, such as desktop and application virtualization.
It’s early days yet, and the reported benefits of virtualization are still a matter for argument. In a recent article in FCW, for example, Mark Forman, former administrator for e-government and IT at OMB, called the projected benefits from server virtualization in agency data centers “smoke and mirrors.” Savings of anywhere from $2.4 billion to $5 billion from data center closings will only happen if agencies also tackle the associated operational and management complexities, he said.
Nevertheless, the raw numbers are starting to stack up. By mid-November 2012, federal agencies had closed 382 data centers as a result of FDCCI and expected to close or consolidate about 1,200 of the 2,900 identified data centers by 2015. In a survey published early last year, MeriTalk found that 82 percent of federal agencies said they had implemented server virtualization, and IT professionals expected virtualized workloads to nearly double by the end of 2015. Most agencies also reported plans to implement at least some level of desktop virtualization.
Some agencies are being more aggressive than others. The Census Bureau, for example, instituted a “virtualization first” policy in 2011 that puts the onus on IT users to justify why their needs cannot be met through virtualization.
“It stipulates that all requests for servers be satisfied via a virtual guest as opposed to a discrete bare-metal solution,” said Brian McGrath, the bureau’s CIO. “That doesn’t mean we virtualize everything, but we virtualize first unless there’s a sound security or technical reason to justify us going with a traditional, discrete bare-metal server.”
Even so, he said, since the policy was put in place, Census has virtualized nearly 80 percent of all new Windows and Linux-based server builds.
Data sensitivity is what drives the decision to virtualize at NASA’s Goddard Space Flight Center, according to CIO Adrian Gardner. If data is deemed too sensitive, then it’s isolated on a physical server. But “in the grand scheme of things, virtualization has to be part of our core moving forward because we do want to reduce the footprint of our data centers but also increase capabilities while reducing the impact on our environment,” he said.
In the one data center he directly controls as CIO, Gardner said he has already virtualized about 80 percent of the servers and applications. Overall, the goal is to reduce Goddard’s 13 data centers to two “hardened” centers, one for the Earth science distributed data archive and one for the supercomputer facility, plus a virtualized center that will be moved into a containerized compute pod.
Ongoing budget pressures will only enhance the attraction of virtualization, Gardner added.
“I don’t have a choice,” he said. “If I stay on the same course and make no changes, then I know my operations and maintenance tail is going to eat me alive, and virtualization is one of the tools we’ll be using to handle the tight fiscal climate.”
The move to virtualization, however, brings infrastructure issues that also need attention. Security, for example, matters as much in the virtualized environment as in the physical one, and perhaps more, because the opportunity for mischief is greater. In the physical world, an attacker who compromises a server can disrupt access to one application; in the virtualized world, many applications reside on a single machine, so one compromise puts them all within reach.
“From a risk standpoint, in the virtual environment, I’ve got one machine that can be compromised that provides the key to the entire kingdom,” said Jim Smid, chief technology officer at Iron Bow Technologies. “I can do a lot of things with the entire enterprise through that one machine, so security becomes very important for the virtual infrastructure.”
Contention is also a major issue with virtualization. It arises on physical networks as well, of course, but those networks are designed to handle multiple, simultaneous communications. Virtual machines, by contrast, can be created almost on the fly, so many more of them may be sending requests and instructions across the network than a fixed set of physical servers would. Those contention issues also tend to accumulate at the input/output of shared storage systems.
“I would say that the interaction of virtualized servers with storage is not well understood,” said Leena Joshi, senior director of solutions marketing at Splunk, a data management company. “When you virtualize servers, you actually mask the storage behind it. You could be attached to network-attached storage, iSCSI or Fibre Channel on the back end, and your virtual machines would not be able to tell what they are attached to.”
Those problems can be even greater with desktop virtualization because there are likely to be many more virtual desktop images to handle at any one time, each making demands on the network and the storage systems. But unlike server virtualization, desktop virtualization is still new to government agencies, so the learning curve remains steep.
“Server virtualization is much better known, and there are many more ways we know will work to add capacity for that,” said Jose Padin, systems engineer manager at Citrix Systems. “It’s a known factor because the requirements of the servers are known. Once we learn what the requirements are for virtualized desktops, then matching the capacity to the needs will be just as simple.”
The virtual world itself, however, might also be getting more complex, which will ratchet up the level of understanding needed to manage it. The current approach to virtualization is to create a single virtual platform that can provide different services to multiple users, but some see the need for a more diverse set of offerings.
“Right now you’ve got general-purpose virtualization for a general-purpose community of products, and that’s good for consolidation of low-utilization platforms,” said Peter Doolan, group vice president and chief technologist at Oracle Public Sector. “But then there are classes of applications as you move up the food chain that are more important, and that’s when you have to start looking at different types of virtualization.”
When you talk about virtualization now, you mostly think about hypervisors and a software layer that mimics a generic hardware substrate, he said. But in the future, as people begin to understand that there are various ways to solve their problems, you will start to see things such as virtualization of SQL itself, where SQL text can come from any machine.
“I think you are seeing that people realize that there’s not one ring that will rule them all and that there are different classes of virtualization starting to emerge,” Doolan said. “Virtualization will increasingly be seen as a portfolio of capabilities, and that will need people who understand how all the pieces come together.”