Future Data Centers Will Also Need A Storage Rethink

STORAGE IS A CONSTANT in any discussion about data center design, but it often gets pushed to the periphery and comes up only after other decisions have been settled. Next-generation data centers, however, will have to handle far higher data volumes than today’s facilities do, which makes storage a central issue.

In the past, it’s been relatively easy to throw extra storage capacity into a data center as needed by adding new disk drives. Current demands on data centers, however, such as delivering IT as a service through the cloud, are rapidly making traditional storage systems obsolete.

Newer storage infrastructures must scale almost without limit, said Eric Slack, an analyst at the consulting firm Storage Switzerland. On top of that, they need to be extremely flexible, efficient, economical and consistent.

“Combining the workloads generated by different users or applications from different departments [or different organizations] on the same infrastructure is particularly demanding of any shared storage environment,” he wrote in a recent blog post.

Storage supporting the next-generation data center needs to expand on the fly, be configured for each user and then be reconfigured as often as necessary, he said. It also needs to be managed efficiently and deliver consistent performance day after day, and “that’s not something current generation storage systems can do.”

Data centers will need storage architectures that place data close to the compute resources when it’s needed and move it to low-cost storage when it’s not, all without human intervention or complex management settings, said Daniel Bizo, an analyst at 451 Research. That sounds simple enough, he said, “but doing that efficiently at scale is a nontrivial engineering challenge.”
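To make that idea concrete, here is a minimal sketch of the kind of policy-driven placement Bizo describes, assuming a simple access-recency rule and two hypothetical tiers (“hot” flash near the compute nodes and “cold” low-cost bulk storage). A real system would act on I/O telemetry and move data asynchronously rather than just flipping a label.

```python
import time
from dataclasses import dataclass, field

# Data untouched for an hour (an arbitrary threshold for this sketch) is demoted.
HOT_THRESHOLD_SECONDS = 3600


@dataclass
class DataObject:
    name: str
    tier: str = "hot"  # "hot" = flash near compute, "cold" = low-cost bulk storage
    last_access: float = field(default_factory=time.time)


def rebalance(objects: list[DataObject]) -> None:
    """Promote recently used objects to the hot tier and demote idle ones."""
    now = time.time()
    for obj in objects:
        idle = now - obj.last_access
        if obj.tier == "hot" and idle > HOT_THRESHOLD_SECONDS:
            obj.tier = "cold"  # in practice: copy to archive storage, then free the flash
        elif obj.tier == "cold" and idle <= HOT_THRESHOLD_SECONDS:
            obj.tier = "hot"   # in practice: stage the data back onto flash near compute


if __name__ == "__main__":
    catalog = [
        DataObject("daily-report.parquet", tier="hot", last_access=time.time() - 7200),
        DataObject("live-session-cache", tier="cold", last_access=time.time() - 60),
    ]
    rebalance(catalog)
    for obj in catalog:
        print(f"{obj.name}: {obj.tier}")
```

The point of the sketch is the policy loop itself: placement decisions are made continuously by software, not by an administrator moving volumes by hand.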

One emerging technology trend that could address this is the convergence of server and storage systems, both physically and logically, Bizo said. One answer is to expand the shared storage complex into the server layer through managed shared caching or tiering. Another is “full convergence,” in which compute resources and shared networked storage are collapsed into, and controlled on, a single physical system.

In theory, these hyper-converged systems should provide a good modular approach to the next-generation data center, with the added advantage of simpler management. Instead of needing separate management regimes for servers, storage and virtualization, you manage one system. And you get extra capacity and performance simply by adding more hyper-converged appliances.

They are not there yet, Bizo said, because they are somewhat limited in the flexibility of their compute-to-storage ratios. But he thinks advances in interconnect technology, such as Intel’s recently announced silicon photonics, which could provide compute-to-storage data transfer speeds of 25 to 50 gigabits per second, will break down the current physical limitations of hyper-converged systems. They could then scale to capacities as high as 1 petabyte per node, he said.
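As a rough, back-of-the-envelope illustration of what those link speeds mean at petabyte scale (only the 25-to-50-gigabit figures come from Bizo; the arithmetic is illustrative):

```python
# Time to stream 1 PB over a single 25 or 50 Gb/s link,
# ignoring protocol overhead and parallelism.
PETABYTE_BITS = 1e15 * 8  # 1 PB = 10^15 bytes = 8 x 10^15 bits

for gbps in (25, 50):
    seconds = PETABYTE_BITS / (gbps * 1e9)
    print(f"{gbps} Gb/s: about {seconds / 3600:.0f} hours to move 1 PB")
# Output: roughly 89 hours at 25 Gb/s and 44 hours at 50 Gb/s.
```

Real nodes would aggregate many such links, but the arithmetic shows why interconnect bandwidth, and not just raw capacity, gates how far a single converged node can scale.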

Slack believes that spinning disk drives, even ones as fast as the current generation, won’t be fast enough for next-generation data centers, particularly for the storage that will be needed close to the compute resources. Drives made with solid-state flash storage, which has improved enormously in both price and reliability, will provide the answer.

But even that won’t be enough, he said, because next-generation data centers won’t be able to rely on storage administrators working the systems to keep users happy.

Next-generation data centers “also need automation, such as automatic load balancing, self-healing flash drives and [application programming interface]-driven management routines that can be configured for each user,” he wrote in a blog post.
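As a hedged illustration of what an API-driven, per-user management routine might look like, the sketch below posts a per-tenant storage policy to a management endpoint. The URL, field names and token are placeholders invented for the example, not any real product’s API.

```python
import json
from urllib import request

# Hypothetical per-tenant storage policy: QoS floor, capacity quota and placement hint.
policy = {
    "tenant": "analytics-team",
    "min_iops": 20000,
    "capacity_gb": 4096,
    "placement": "flash-near-compute",
    "rebalance": "automatic",
}


def apply_policy(endpoint: str, token: str, body: dict) -> int:
    """POST a storage policy to a (hypothetical) management API and return the HTTP status."""
    req = request.Request(
        endpoint,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json", "Authorization": f"Bearer {token}"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return resp.status


if __name__ == "__main__":
    # Placeholder endpoint and token; a real deployment would point at its own management API.
    print(apply_policy("https://storage.example.internal/api/v1/policies", "demo-token", policy))
```

The idea is that each user or tenant gets its own declarative policy, applied and re-applied through the API as needs change, rather than through hands-on tuning by a storage administrator.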