A data center without hard drives: That's Stanford's RAM plan.

Researchers at Stanford University have announced that they would like to eliminate hard drive storage from their computer systems and store everything in memory. They are calling it the “RAMCloud.” It just goes to show that you can spice up anything with the word “cloud,” whether it has anything to do with cloud computing or not.

Their argument, essentially, is that although the capacity of hard drives has increased dramatically over the past four decades, their performance hasn’t kept pace and has fallen behind the needs of large-scale Web applications. Many have proposed solutions, including solid-state drives, but the Stanford researchers propose a whole new class of storage that uses dynamic random-access memory (DRAM) exclusively.

DRAM offers much lower latency than any other type of storage. It would have other benefits as well, such as quick crash recovery: in another paper, the Stanford researchers claim that an average-sized server using RAMCloud could recover from a crash in 1.6 seconds.
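The recovery claim makes sense as back-of-envelope arithmetic if a crashed server’s data is scattered across many machines’ disks and read back in parallel. The figures below are illustrative assumptions, not numbers from the Stanford papers:

```python
# Back-of-envelope sketch of why scattered, parallel recovery can be fast.
# All figures are illustrative assumptions, not from the Stanford papers.

DATA_GB = 64        # assumed DRAM contents of the crashed server
DISK_MBPS = 100     # assumed sequential read speed of one disk
NUM_BACKUPS = 1000  # assumed number of disks holding scattered copies

# One disk alone: read all 64 GB sequentially.
serial_seconds = DATA_GB * 1024 / DISK_MBPS

# Scattered recovery: each backup disk reads only its small share.
parallel_seconds = (DATA_GB * 1024 / NUM_BACKUPS) / DISK_MBPS

print(f"one disk:   {serial_seconds:.0f} s")    # minutes
print(f"1000 disks: {parallel_seconds:.2f} s")  # under a second
```

The parallelism, not any single fast device, is what makes second-scale recovery plausible.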

Of course, there are two major potential problems with using DRAM. One is cost: DRAM has a much higher cost per bit than pretty much any other type of storage available. However, since this storage class would primarily serve huge Web-based retail applications and the like, the companies running them could probably afford to splurge a bit.

The second issue is really the clincher. DRAM is volatile: whatever is stored in it quickly degrades once power is lost. So, to use it in this capacity, a data center would need a huge amount of uninterruptible power. And if the worst actually comes to pass and the data center is left powerless for an extended period of time, its operators will need a copy of the data on some more conventional media so that they can restore it to the DRAM systems. But data centers have gotten pretty good at maintaining constant power for a while now, so this wouldn’t be anything new.
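One way to square volatility with durability is to copy every write to backup machines before acknowledging it, and let those backups flush to conventional media off the critical path. This is a minimal sketch of that idea; the class and method names are hypothetical, not RAMCloud’s actual API:

```python
# Hypothetical sketch: an all-DRAM primary made durable by replicating
# writes to backups, which flush to conventional media asynchronously.

class Backup:
    def __init__(self):
        self.buffer = {}  # power-protected DRAM buffer
        self.disk = {}    # durable copy on conventional media

    def replicate(self, key, value):
        self.buffer[key] = value

    def flush(self):
        # Runs asynchronously, off the write's critical path.
        self.disk.update(self.buffer)
        self.buffer.clear()

class Primary:
    def __init__(self, backups):
        self.memory = {}  # all live data in DRAM
        self.backups = backups

    def write(self, key, value):
        self.memory[key] = value
        # Don't acknowledge until every backup holds a copy.
        for b in self.backups:
            b.replicate(key, value)

    def recover(self):
        # After a crash, rebuild DRAM contents from the backups.
        self.memory = {}
        for b in self.backups:
            self.memory.update(b.disk)
            self.memory.update(b.buffer)

backups = [Backup(), Backup()]
store = Primary(backups)
store.write("order:42", "shipped")
backups[0].flush()

restarted = Primary(backups)
restarted.recover()
print(restarted.memory["order:42"])  # shipped
```

The design choice is that the disk copy exists only for recovery; normal reads never touch it, which is what preserves DRAM-level latency.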

All in all, it’s a pretty neat idea that could dramatically improve the performance of scalable systems.

About the Author

Greg Crowe is a former GCN staff writer who covered mobile technology.

Reader Comments

Mon, Oct 24, 2011 Southeast US

Geographically separated automatic failover systems, connected by high-speed private networks for synchronization, would help maintain 24/7 availability as long as power outages were localized enough that at least one or two backup failover sites stayed powered at all times. With UPS and emergency gensets, that should be entirely possible for the likes of Google and Amazon.
