If virtual servers can't stand the heat, stay out of the kitchen
A measured approach to consolidation can keep machines from boiling over
- By Shawn McCarthy
- Feb 24, 2010
Most government agencies have at least some level of system virtualization in place. Some run well and some are experiencing problems. So here’s an important question: When it comes to using virtual servers, is your agency more like a home workshop or more like a home kitchen?
Your answer could indicate whether virtualization is working well for your operations, or if you are already overloading your servers. Here’s how the workshop vs. kitchen metaphor can help paint a clearer picture.
Let’s say your kitchen and your basement workshop each have just one electrical circuit.
In your workshop, you may have several saws, drills, lathes, grinders and who knows what else, but it’s likely that only one or two people visit the workshop at the same time. You can work there all day, but since you can’t run all of your power tools at the same time, your circuit is seldom overloaded no matter how many things you have plugged in.
Your kitchen, on the other hand, could sit unused most of the day, but when the time comes to prepare a meal, several appliances suddenly need power. If you fire up your toaster, coffee maker, mixer and electric frying pan at the same time, you could very well shut down your circuit. The problem here isn’t the kitchen’s total power demand for the day, it’s the sudden power demand that occurs at a specific time.
Now suppose that, instead of electrical circuits, we’re talking about total server capacity. Some companies that sell virtual server systems to government offices boast that a single server can host many virtual systems. Depending on design and capacity, that could be as many as 100 virtual machines on one physical host. But such ratios don’t mean much if you end up overloading the total capacity of that one machine.
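The kitchen problem can be stated in plain arithmetic: what overloads a shared host is not total daily demand but peak concurrent demand. Here is a minimal sketch of that check, with entirely hypothetical virtual machine names and peak figures:

```python
# Peak CPU demand (percent of one physical host) for each candidate VM,
# measured at the same busy hour of the day. All numbers are invented.
peak_demand = {"web-1": 40, "web-2": 35, "db-1": 30, "print-1": 10}

host_capacity = 100  # one physical host, 100 percent CPU

# Sum the simultaneous peaks -- the "everyone cooks at dinnertime" case.
total_peak = sum(peak_demand.values())

if total_peak > host_capacity:
    print(f"Overloaded at peak: {total_peak}% demanded vs {host_capacity}% available")
else:
    print(f"Fits at peak: {total_peak}% of {host_capacity}%")
```

Each machine alone is modest, and averaged across a day the host looks nearly idle, yet the simultaneous peaks add up to more than the host can deliver.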
If you are in charge of IT capacity planning for your organization, it’s important to know what will be hosted and how that might affect the claims made by virtualization vendors. Your ultimate consolidation level will affect the return on investment that you calculate as you move to new virtualized servers. So it’s important to estimate correctly. Is your data center more like a sporadically utilized home workshop model? Or more like an occasionally very busy kitchen model?
If you are hosting servers and associated applications that are only used a few times each day, with minimal processing needed, then you may very well be able to launch all of the virtual servers that your physical machine is capable of supporting. Many organizations start their march toward virtualization with things like test or development servers, logging systems or print servers. These tend to use about 5 percent of total server capacity and they are good initial candidates for virtualization.
The key to properly planning for virtualization is detailed capacity planning: start with a review of current and anticipated CPU usage, then move on to storage, input/output and memory requirements. This can mean a lot of time poring over server logs and meeting with end users, but it’s worth the effort. Companies such as VMware, Hewlett-Packard and Microsoft offer monitoring tools to help kick off such analysis.
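The first step of that review, summarizing CPU samples from a server log, can be sketched in a few lines. The sample data below is invented, and the three-times-average rule used to separate a steady "workshop" server from a spiky "kitchen" server is an illustrative assumption, not a vendor formula:

```python
import statistics

# Hypothetical CPU readings (percent) sampled across one day on one server.
cpu_samples = [4, 5, 3, 6, 5, 4, 90, 85, 5, 4, 6, 5]

avg = statistics.mean(cpu_samples)
peak = max(cpu_samples)

# A low average with a high peak is the kitchen pattern: quiet most of the
# day, but capable of saturating a shared host at mealtime.
profile = "kitchen" if peak > 3 * avg else "workshop"
print(f"avg={avg:.1f}%, peak={peak}%, profile={profile}")
```

A server like this one averages under 20 percent utilization, which looks like an easy consolidation win until the peaks are taken into account.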
Your lowest capacity servers are, of course, prime candidates for virtualization. So start with them.
But before moving up the food chain to busier servers, take the time to scrutinize operations on your first batch of virtualized systems. Do applications remain stable? Can you test them under virtual workloads, allowing you to ramp up traffic to see how the system performs? Did you plan correctly? Or are things such as memory and input/output overtaxed?
Determine your most important measurements and build a control panel capable of monitoring and recording a few weeks’ worth of performance. Look for spikes and problem areas. Talk with other government data center managers about how they are using virtualization and evaluating their systems.
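The spike hunt itself can be automated once the measurements are recorded. The sketch below flags any reading more than two standard deviations above the mean; both the data and that threshold are illustrative assumptions, not a standard from any monitoring product:

```python
import statistics

# Hypothetical utilization samples (percent CPU) recorded over several weeks.
samples = [12, 14, 11, 13, 15, 12, 60, 13, 14, 11, 58, 12]

mean = statistics.mean(samples)
stdev = statistics.pstdev(samples)
threshold = mean + 2 * stdev  # "well above normal" cutoff, by assumption

# Flag each (index, value) pair that exceeds the threshold.
spikes = [(i, s) for i, s in enumerate(samples) if s > threshold]
print(f"mean={mean:.1f}%, threshold={threshold:.1f}%, spikes={spikes}")
```

Flagged samples point you at the time windows worth investigating before you promote busier workloads onto the same host.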
If you’re fortunate, you’ve planned well. If not, adjust this set of servers and record your lessons learned before moving into the hot kitchen of virtualizing higher-capacity systems.
If you are hosting moderately resource-intensive applications, such as transaction processing systems or busy Web servers, you might want to fill only about 30 percent of the virtual machine slots your system could otherwise support. Otherwise, these systems will demand too much in the way of CPU usage, memory, storage and bandwidth.
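The arithmetic behind that guideline is simple, but worth making explicit. In this sketch the vendor rating is a hypothetical number, and the 30 percent factor is the rule of thumb described above:

```python
import math

rated_vm_slots = 100          # hypothetical vendor claim for light workloads
heavy_workload_factor = 0.30  # the roughly 30 percent guideline

# A host rated for many lightweight VMs should carry far fewer busy ones.
heavy_vm_limit = math.floor(rated_vm_slots * heavy_workload_factor)
print(f"Plan for about {heavy_vm_limit} busy VMs, not {rated_vm_slots}")
```

In other words, a host marketed as a 100-VM machine becomes roughly a 30-VM machine once the guests are transaction systems rather than print servers.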
If you are running a very resource-intensive application – say, data visualization or video feeds – hosting your application on a traditional dedicated (not virtualized) server might be your best option. You’ll be able to make this decision after first moving low-capacity servers, then moderate-capacity servers, into virtual environments and studying their performance. By the time you get to your busiest servers, you will have more experience with planning for virtualization.
For most data centers, moving to virtualized systems holds great promise for performance and cost savings. But this transition must be tackled in a logical manner, using real-world measurements and realistic expectations. Understanding capacity planning and slowly managing your transition and growth will keep you from jumping out of the workshop and into the fire of a kitchen that just might be running a little too hot.
Shawn McCarthy, a former writer for GCN, is senior analyst and program manager for government IT opportunities at IDC.