3 ways to make virtualization work
Server virtualization is in full swing at many federal agencies, some of which are gearing up to implement desktop and client virtualization pilot projects.
But are agencies getting the most out of their virtualization investment?
Seventy-three percent of the 377 federal information technology managers who participated in a CDW Government survey conducted in April 2009 said virtualization is an integral part of improving IT efficiency and reducing costs.
Virtualization allows a single physical server to run multiple guest operating systems as a way of making more efficient use of the hardware, which frees data center space and achieves greater IT operational and energy efficiencies.
Although 79 percent of the respondents said they are deploying virtualization, only 50 percent said they are succeeding.
Efforts to implement virtualization have run into familiar barriers, including limited budgets, security concerns and the need for appropriately trained staff to manage the migration to virtualized environments, the survey states.
Still, some agencies are seeing improvements in IT resource utilization, cost savings and energy efficiency.
On average, federal IT managers think it will take three years to implement client, server and storage virtualization. Many have been engaged in server virtualization projects for a number of years and are considering client and desktop virtualization projects this year.
Part of the formula for success is evaluating capacity planning and other infrastructure assessment tools that can give IT managers a sense of their resource utilization and help them decide which applications to virtualize.
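The kind of screening a capacity planning tool performs can be sketched in a few lines. This is an illustrative example, not from the article: server names, metrics and thresholds are all hypothetical.

```python
# Hypothetical utilization data, the sort a capacity planning tool collects.
servers = [
    {"name": "dc-file-01", "avg_cpu_pct": 6,  "peak_mem_pct": 35},
    {"name": "dc-sql-01",  "avg_cpu_pct": 55, "peak_mem_pct": 88},
    {"name": "dc-web-02",  "avg_cpu_pct": 9,  "peak_mem_pct": 42},
]

def virtualization_candidates(servers, cpu_limit=20, mem_limit=60):
    """Servers whose sustained CPU and peak memory stay below the limits
    are good candidates to consolidate onto a shared virtual host."""
    return [s["name"] for s in servers
            if s["avg_cpu_pct"] < cpu_limit and s["peak_mem_pct"] < mem_limit]

print(virtualization_candidates(servers))  # → ['dc-file-01', 'dc-web-02']
```

Lightly loaded machines are flagged for consolidation, while the heavily loaded database server is left on dedicated hardware.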
But challenges remain.
USDA agency consolidates far-flung servers via virtualization
Agency: Agricultural Marketing Service's (AMS) Information Technology Service at the Agriculture Department.
Challenge: Reduce 255 servers to 22 in two centralized data centers. Many of the servers are spread across various program areas.
Solution: Capacity planning, disaster recovery, network and storage assessment tools.
The Agriculture Department’s AMS has been on a mission for a little more than two years to bring down the number of servers under its jurisdiction from 255 to 22.
USDA senior management issued a memo in 2007 stating that all servers must be placed in a centralized data center, said Jaime Canales, senior network infrastructure architect on the Enterprise Infrastructure Team at AMS' Information Technology Service.
AMS handles work from USDA divisions for dairy, fruit and vegetable, livestock and seed, poultry, and cotton and tobacco. Each division has its own servers.
AMS must centralize all of those servers into the ITS data center in Washington, D.C., and replicate them at AMS’ data center in Denver.
In 2007, Canales’ team used a capacity planning tool to evaluate how many servers could be virtualized and found that almost every machine was a candidate.
AMS’ IT team divided those 255 servers into three phases. The department deployed that strategy using VMware’s ESX Server virtualization technology.
The first phase involved the virtualization of about 35 servers in security or demilitarized zones. Those servers were virtualized onto four HP ProLiant DL580 servers. The DL580 hardware and HP Insight software make it easy to deploy, manage and migrate virtualized servers with VMware, officials said.
Phase 1 was finished in early 2009. Next, AMS' IT team moved to replicate only the critical servers in that zone to the Denver data center. AMS used HP’s Continuous Access to replicate its storage-area network from Washington to Denver and vice versa.
The second phase of the virtualization program involves infrastructure servers, such as Microsoft Exchange and SQL Server databases, in Washington, about 120 servers total. Phase 3 will focus on everything else throughout AMS.
Coming up with the money has been a challenge. AMS had a budget for Phase 1. Although the project grew in scope, the IT team managed to complete the phase, Canales said.
The biggest challenge that AMS faces will be migrating servers from the various regional program offices to the data centers in Washington and Denver. People are anxious about losing servers at their sites, so Canales must assure them that they will have access to their applications via remote connections, such as terminal service sessions.
Another challenge will be backup operations. “I don’t want to be running backup on files for a server in Fresno,” he said. Although the servers will reside in data centers in Washington and Denver, each division's systems administrator will back up the division's files.
Canales’ team is considering Symantec’s NetBackup recovery tool as a technical solution to the backup conundrum, he said.
Chargeback will be another problem, as each USDA division migrates to the virtual environment. Both HP and VMware offer chargeback products that Canales is evaluating.
To lead an effective virtualization move, IT managers need a good capacity planning tool to gauge overall server use. They also need to figure out what applications and servers can be virtualized. Next, they need a storage assessment tool to determine backup needs. And they also must evaluate network and disaster recovery assessment tools, he said.
AMS has completed a network assessment for Phase 1. But the IT team will need to do another round of assessments when moving servers from the program areas to the central data center.
For instance, if a server in Manassas, Va., has a 100 megabits/sec connection, it might need a 1 gigabit/sec connection to move from a physical to virtual server to facilitate faster replication of data to VMware on the virtual side, Canales said.
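A back-of-the-envelope calculation shows why the link upgrade matters for replication. The image size and link efficiency below are hypothetical, not figures from AMS.

```python
def transfer_hours(gigabytes, link_mbps, efficiency=0.7):
    """Hours to move `gigabytes` of data over a `link_mbps` link, assuming
    the link sustains `efficiency` of its rated throughput in practice."""
    bits = gigabytes * 8e9
    seconds = bits / (link_mbps * 1e6 * efficiency)
    return seconds / 3600

image_gb = 200  # hypothetical server image size
print(f"100 Mbit/s: {transfer_hours(image_gb, 100):.1f} h")   # ~6.3 h
print(f"  1 Gbit/s: {transfer_hours(image_gb, 1000):.1f} h")  # ~0.6 h
```

A tenfold faster link cuts a multi-hour physical-to-virtual migration window to well under an hour, which is the difference between an overnight outage and a lunchtime one.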
Canales plugged away on Phase 1 with only one other team member. They will receive backup on the project's next phases with the addition of two more team members.
Los Alamos lab takes the next virtual step, toward infrastructure as a service
Agency: Energy Department’s Los Alamos National Laboratory.
Challenge: Rein in server sprawl and resolve the need for better utilization of IT resources while creating energy savings.
Solution: Capacity management tools, infrastructure as a service, virtualization.
One of the selling points of virtualization is energy savings, which the Energy Department can attest to.
Officials at the department’s Los Alamos National Laboratory have gathered metrics indicating the lab is saving 873,000 kilowatt-hours of energy per year since they decommissioned three data centers by implementing virtualization and reducing the lab's data center footprint by almost 50 percent in some areas.
Four years ago, the laboratory created a virtual environment in which officials decommissioned 100 physical servers and deployed 300 virtual machines on 13 physical host servers. Los Alamos has achieved $1.4 million in cost savings through virtualization, said Anil Karmel, a solutions architect at the lab.
Los Alamos officials said they hope to achieve greater operational and energy efficiency starting this summer when they complete development of an infrastructure-as-a-service computing model. That platform will allow IT administrators to dynamically provision virtual servers for the lab’s scientists on demand.
To achieve virtualization's benefits, including operational and energy efficiency, organizations need capacity planning tools, a way to make virtualization easy for the users, and a place to measure and publish savings within an infrastructure-as-a-service tool, Karmel said.
Los Alamos has completed the first phase of its virtualization effort by deploying a capacity management strategy.
“You need to implement capacity planning tools in your organization,” Karmel said. “It is very difficult to know what you need if you don’t know what you have. LANL implemented Novell Recon to gather metrics on its systems and size the resultant virtualization platform.”
Capacity planning tools help IT managers measure resource utilization of systems at their data centers. Administrators can determine how much energy their facilities use and how much energy they would save by implementing virtualization technology.
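The energy arithmetic behind such an estimate is simple to sketch. All figures below are hypothetical, chosen only to illustrate the before-and-after comparison; they are not Los Alamos' measured numbers.

```python
def annual_kwh(watts, count, pue=1.8):
    """Yearly energy for `count` machines drawing `watts` each, scaled by
    the data center's power usage effectiveness (cooling and power
    distribution overhead on top of the IT load)."""
    return watts * count * pue * 24 * 365 / 1000

before = annual_kwh(watts=400, count=100)  # many lightly used physical servers
after = annual_kwh(watts=600, count=13)    # fewer, larger virtualization hosts
print(f"Estimated savings: {before - after:,.0f} kWh/year")
```

Even with each consolidated host drawing more power than the machines it replaces, retiring dozens of boxes dominates the total.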
“At Los Alamos, we have done the capacity planning piece,” Karmel said. “We have made the technology investment in virtualization.” Los Alamos has implemented VMware technology as its virtualization platform. Now “we are moving toward developing our infrastructure-as-a-service offering, wherein we can measure and publish the energy savings.”
There are a lot of elements to infrastructure as a service, Karmel said. An agency must create a self-service Web portal to allow users to request systems, he said. Through the Web portal, users and IT staff can determine how much the systems will cost, how much savings the platform can generate, and how that platform integrates with other infrastructure components.
“We’re working toward ensuring that all those moving parts work in harmony to deliver a seamless user experience to our customers in the laboratory,” Karmel said. “This will enable them to be more effective at what they do, as opposed to having to wait for a physical server to be provisioned.”
Infrastructure as a service is still in its infancy. Developers have not created tools using a plug-and-play approach that would let organizations automatically generate a Web portal. As a result, Los Alamos is internally applying a mix of tools and technology, Karmel said.
IT developers need industry-standard collaboration tools along with virtualization technology to develop an infrastructure-as-a-service model. For example, Los Alamos is using Microsoft SharePoint Server to develop a portal that will access a virtualization back end, such as VMware. That lets Los Alamos use SharePoint's workflow capabilities to automate the creation, deployment and management of its infrastructure-as-a-service model.
“Today, there isn’t a single toolset you can point to, so to develop those systems in-house you have to look at what is available,” Karmel said. SharePoint is a collaboration tool that can help achieve an overall view.
Additionally, life cycle management and chargeback are key components of a successful infrastructure-as-a-service model. With virtualization, it’s easy to turn a new system on but not as easy to turn it off. Newly provisioned systems should have an expiration date that a system owner renews periodically. Reducing the number of active systems on the network by implementing life cycle management and chargeback has the added benefit of enhancing not only an agency’s energy efficiency but also its security posture. Coupling resource delivery with user needs helps deliver an agile, cost-effective and secure solution.
Although most of the focus of green IT is on data centers, the next logical target is desktops, Karmel said. By moving computing power from the desktop to the data center, organizations can further reduce their power consumption. IT managers can achieve the kind of green IT savings they get in the data center by using the same virtualization platform but employing different technologies to deliver virtual desktops to users, he said.
Pa. attorney general's office makes a case for virtual desktops
Agency: Pennsylvania Office of Attorney General.
Challenge: Upgrade and expand its four-year-old virtualization effort, extending virtualization to desktop PCs.
Solution: Taking a hybrid approach, depending on which applications can run in a virtual environment.
Pennsylvania’s Office of Attorney General, having seen the benefits of an initial plunge into virtualization, is looking to upgrade that effort and, at the same time, move into desktop virtualization.
The AG office’s information technology team and contractors overhauled the department’s servers and storage systems about four years ago, using VMware for virtualization and NetApp storage solutions, said George White, chief information officer for the office.
The team also has deployed Dell blade servers and Cisco Nexus switches, which are the glue that holds everything together. The department was the first in Pennsylvania to move to virtualization, White said.
The outside firm that performed the return-on-investment analysis of that first virtualization project determined that the department “realized a 72 percent ROI on an original investment of $1.6 million and we achieved our payback in 30 months,” White said.
By coupling server and storage virtualization, the department reduced data storage and telecommunication costs and implemented more consistent backup, recovery and restore processes, he said.
The department eliminated all of its tape storage; tape backups could take two to three hours of staff time and, in some cases, required limited shutdowns of systems. Additionally, the department reaped about a 40 percent reduction in energy usage.
“So from our perspective it was the right thing to do and worked out quite well for us,” he noted.
The new overhaul is about 60 percent complete and will likely be finished by the end of March.
“One of the things we are taking on is desktop virtualization,” White said. “We are starting a pilot.”
The department will probably end up with a hybrid approach that will include some traditional desktops for its roughly 850 employees. “We can probably get a third of the devices moved over to virtual desktops, if not 50 percent,” he said. “We believe a large portion of our population will be satisfied using a virtual desktop infrastructure.”
The department’s IT team is still evaluating some of its applications to make sure they work in a virtual environment. Some of its case management applications, for example, were not meant to work in a virtual environment.
“When it comes to streaming traditional applications like office products, those are more straightforward,” White said. “Because we didn’t build some of these applications, we just don’t know until we start to test if they will work correctly.”
Printing is another issue that could pose a challenge. “We’re finding you might get the applications fine, but once you go to print you run into technical glitches,” he said. But these are issues that can be worked out.
“Primarily, it is just taking these business applications and thoroughly testing them to make sure they will work in a virtual environment,” White said.