A massive pluggable NAS

At yesterday's meeting of the High Performance Computing Technology Forum (formerly the Baltimore-Washington Beowulf Users Group), Lincoln Tidwell, Federal Solutions Architect for Hewlett Packard's Enterprise NAS Division, talked about his company's recently released HP StorageWorks 9100 Extreme Data Storage System.

Think of the 9100 as a very, very large network attached storage (NAS) device, Tidwell urged the audience.

This storage system scales up to a rather hefty 820 terabytes of capacity, in increments of 1TB Serial Attached SCSI (SAS) drives. The system can carve out individual file systems as large as 125TB each. It offers throughput of up to 3 gigabits/second, and the average cost works out to less than $2 per gigabyte.

Best of all, when you roll the unit in from the loading dock you'll have as few as six cables to plug in: two power cords and four 10 Gigabit Ethernet interconnects.

"So standing this up in you're lab will not be a tedious effort," he said.

Tidwell pitched this system to scientific and high-productivity environments with large amounts of data to manage. (There's a bit of a backstory to Tidwell here. No mere sales manager, he actually helped develop the software that runs this system while he was at PolyServe, a company that HP purchased in 2007.)

Micheal Fitzmaurice, a technical specialist at World Wide Technology who heads up the Tech Forum, noted that it would be very difficult to acquire so much NAS storage elsewhere in a single deliverable unit. Most high-performance computing environments tend to cobble their storage together from a number of low-cost servers and storage arrays. Because of this arrangement, administrators and their minions lose untold time trying to get all the ragtag components to work together. Buying an integrated solution may actually prove to be cheaper in the long run.

The use of many independent disk drives also brings a reliability advantage. Sure, individual disks are prone to failure, but because the system arranges the disks in a RAID 6 configuration, it can keep shuffling data around so that no downtime or data loss actually occurs. Administrators can then swap out the failed disks during normal business hours.

Tidwell estimated that, with all 820 disks running, the system would spend about 200 hours a year rebuilding RAID groups due to failed drives. So say you had 10 file systems. At any given time that data from a bad disk is being copied to a good one, nine file systems would be running at full speed and the tenth would be running at 50 percent of its normal performance.
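As a rough back-of-the-envelope check, the short Python sketch below works out how small that impact would be. The figures simply restate the estimates from Tidwell's talk and the hypothetical 10-file-system layout; they are not HP specifications.

# Back-of-the-envelope check of the rebuild estimate (assumed figures
# taken from Tidwell's talk, not from an HP data sheet).
HOURS_PER_YEAR = 24 * 365        # 8,760 hours
rebuild_hours = 200              # estimated hours per year spent rebuilding RAID groups
file_systems = 10                # hypothetical layout from the example
degraded_speed = 0.5             # the affected file system runs at half speed

fraction_rebuilding = rebuild_hours / HOURS_PER_YEAR                    # about 2.3% of the year
avg_throughput_loss = fraction_rebuilding * (1 - degraded_speed) / file_systems
print(f"Rebuilding during {fraction_rebuilding:.1%} of the year")
print(f"Average system-wide throughput loss: {avg_throughput_loss:.2%}")  # roughly 0.11%

In other words, even during a rebuild only one file system in ten slows down, and rebuilds occupy only a small fraction of the year, so the average performance penalty is a tiny fraction of a percent.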

"So over the years, this system will have a 100 percent uptime,' he said. 'On the worst day of the year, 1/10th of file systems would be running a little bit slower. That's pretty good.'


About the Author

Joab Jackson is the senior technology editor for Government Computer News.

