
How flash memory answers data centers' need for speed

Flash memory, as the name implies, is quick. With no moving parts, it's closer to system memory than disk storage when it comes to saving and reading data. But that speed comes at a price that has kept flash out of many government data centers, at least until recently.

Today, storage technology firms are offering hybrid systems that allow agencies to adjust the mix of data access speed and storage capacity they need without having to trade down on either resource or break the budget.

Most users working on a standalone PC probably wouldn't notice the speed of flash compared to a traditional disk drive. It might take seconds to open a file using a disk drive, or half a second with flash memory, but for the most part the experience is going to be the same.

Where speed starts to matter is in large storage array systems requiring hundreds of operations per second or in data centers handling hundreds of thousands of calls to memory at the same time.

Such environments are where flash memory is built to shine. Yet outside of special applications such as massive credit card processing engines or cybersecurity analysis, it is seldom used in the conventional data center. The reason? Even with declining costs, flash memory is still too expensive for most government data centers to deploy en masse. And while it delivers on speed, it underperforms as a bulk storage medium.

In the world of high-end data centers, this trade-off is measured in input/output operations per second (IOPS): how many times a single drive can access information per second before user queries back up waiting for those in front to clear.

Data center trade-offs

"When building out a data center or a SAN, there are two main performance factors that need to be considered," said Rob Commins, marketing vice president for Tegile Systems, a maker of flash-driven enterprise storage systems.

"There are IOPS for performance and then capacity for storing data. What was starting to happen is that people were having to add more and more drives to simply get the IOPS they needed, and once they got there, they found a whole lot of wasted storage capacity," Commins said.

That's because even the fastest spinning disk drives can't approach the IOPS performance of flash memory. As a general rule of thumb, traditional drives spinning at 15,000 RPM -- the current maximum before drives begin to break apart -- are good for only 180 to 200 IOPS each. Slower drives, such as those spinning at 7,200 RPM, deliver just 90 to 140 IOPS each.

In contrast, a typical flash drive leaves a standard drive in the dust, offering 3,000 to 3,500 IOPS.
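The wasted-capacity problem falls out of simple arithmetic. The sketch below uses the IOPS figures cited above; the per-drive capacities are illustrative assumptions, not any vendor's actual sizing calculator:

```python
import math

def drives_needed(target_iops, iops_per_drive, capacity_per_drive_gb):
    """Drives required to reach a target IOPS, plus the capacity that comes along with them."""
    n = math.ceil(target_iops / iops_per_drive)
    return n, n * capacity_per_drive_gb

# Reaching 3,500 IOPS with 15,000 RPM disks (~200 IOPS each; assume 600 GB apiece)...
disks, disk_gb = drives_needed(3500, 200, 600)

# ...versus a single flash drive (~3,500 IOPS; assume 400 GB)
ssds, ssd_gb = drives_needed(3500, 3500, 400)

print(disks, disk_gb)  # 18 spindles and 10,800 GB -- capacity bought only for the IOPS
print(ssds, ssd_gb)    # 1 flash drive and 400 GB
```

This is Commins' point in miniature: the spindle count is driven entirely by the IOPS target, and the terabytes arrive whether they are needed or not.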

In the past, options for expanding capacity in government data centers were limited by traditional drives. "The big thing in government now is consolidation," said Christian Shrauder, the federal CTO for Fusion-io, which develops solid state, high performance I/O systems. "But to build up the infrastructure required adding [disk drives] for performance, not capacity."

Going hybrid

Even with flash technology coming down in price, it will still be a while before data centers move to all flash. Chris McCall, senior director of ioControl product marketing for Fusion-io, thinks that going all flash would be a mistake, because not every application needs it.

"The thing to remember is that flash is deployed in lots of different ways," McCall said. “I would say that hybrids are the way many people will go now, with spinning disks relegated to cold storage."

A hybrid array combines flash memory with traditional disk storage, with flash mostly used for applications that demand a high number of IOPS and traditional spinning disks used for storage.
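The placement logic behind that split can be sketched as a simple hot/cold rule. This is a toy model only; the threshold and workload names are assumptions for illustration, not how any particular vendor's array routes I/O:

```python
# Toy model of hybrid-array placement: frequently accessed ("hot") data
# goes to flash for IOPS; everything else stays on spinning disk for capacity.

HOT_THRESHOLD = 100  # accesses per hour; an illustrative cutoff, not a product setting

def place_block(accesses_per_hour):
    """Return the tier a workload's data would land on under this toy rule."""
    return "flash" if accesses_per_hour >= HOT_THRESHOLD else "disk"

# Hypothetical workloads with made-up access rates
workload = {"exchange_db": 450, "vdi_images": 900, "mail_archive": 3}
placement = {name: place_block(rate) for name, rate in workload.items()}
print(placement)  # IOPS-hungry workloads land on flash; cold storage stays on disk
```

Real arrays make this decision continuously and at block granularity, but the economics are the same: flash absorbs the IOPS, disk absorbs the terabytes.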

Jason Smith, the IT specialist for the city of Oviedo, Fla., knows firsthand what happens when data centers that rely on traditional spinning disks fail, and how a hybrid system can save the day.

"We have a fairly small data center and were in the process of upgrading our users to a virtual desktop system to improve efficiency," Smith said.

"We had 75 clients targeted for the VDI rollout, but we only got 20 deployed before we hit the wall. We started to get flooded with odd user complaints about frozen systems, black screens and slow performance. We had to scale back the program, put the desktops back on desks and figure out what was wrong."

The problem for Oviedo was that it had used a vendor's calculations for how many IOPS each virtual desktop would need. That number was a little low, at just 50 IOPS per client, and it might have worked in a normal situation.

However, as Smith discovered, it did not take into account the "boot storms" at the beginning of each day, when many users log in, install software or apply patches at the same time on a schedule. Performance needs peaked at more than 3,000 IOPS, more than enough to overload the city's traditional disk-based storage area network, even though plenty of actual storage capacity remained.
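A back-of-the-envelope check shows how far off the estimate was. The figures come from the story itself; the implied per-client peak is simple division, not a number Smith reported:

```python
clients_deployed = 20
planned_iops_per_client = 50  # the vendor's estimate, per the story

# Steady-state demand the plan was sized for
steady_state = clients_deployed * planned_iops_per_client
print(steady_state)  # 1,000 IOPS -- comfortably within a small SAN's reach

# What the boot storms actually demanded
observed_peak = 3000
per_client_at_peak = observed_peak / clients_deployed
print(per_client_at_peak)  # 150 IOPS per client, triple the planning number
```

At 150 IOPS per client during a boot storm, even the fastest 15,000 RPM spindle covers barely one desktop, which is why the rollout stalled at 20 of 75 clients.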

"When we looked at our options, we were faced with having to put more spindles in place just to handle the IOPS needs of the VDI," Smith said. "Or we had to buy a much bigger SAN which we didn't need for capacity, just performance."

Smith estimated that a SAN big enough to handle the city's VDI needs would have cost between $60,000 and $75,000 before cooling and power costs were even factored in.

Instead, the city installed an ioControl hybrid system from Fusion-io. For about $50,000, Oviedo had well over the 3,000 IOPS required, thanks to flash drives providing extra operations per second.
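In cost-per-IOPS terms, the choice is stark. This rough comparison uses the article's figures, takes the midpoint of the SAN estimate, and ignores the power and cooling costs the story notes would have come on top:

```python
disk_san_cost = 67500   # midpoint of the $60,000-$75,000 disk-only SAN estimate
hybrid_cost = 50000     # the ioControl hybrid, per the story
required_iops = 3000

print(disk_san_cost / required_iops)  # $22.50 per IOPS for the disk-only SAN
print(hybrid_cost / required_iops)    # ~$16.67 per IOPS for the hybrid
```

The hybrid delivered the required IOPS for roughly a quarter less up front, before counting the avoided power and cooling.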

The ioControl system uses traditional drives for storage, so the city doesn't have a capacity problem either. "We've added a mail archive to the system, virtualized [Microsoft] Exchange, put our GIS on there and will soon be adding external Web services," Smith said. "And we no longer get complaining phone calls from users."

Reader Comments

Mon, Mar 10, 2014 Perfman3 CA

Hybrids will be viable options for many government applications, but all flash arrays (AFAs) will also be needed for the most performance sensitive application workloads. The key is understanding the I/O profiles of the workloads themselves and then using these I/O profiles to generate workload models that can be used to evaluate hybrid offerings versus AFA vendor offerings and to determine the optimal configuration mix of HDDs and SSDs. For larger data centers, IT architects should look into tools like Load Dynamix or SwiftTest who provide storage workload modeling and load generators that can determine the performance characteristics/limits for any storage system specific to your workload profiles. No one wants to spend 2-5X per GB for storage when they don't have to.
