
User-friendly high-performance computing, thanks to MIT Lincoln Lab

High-performance computing systems might seem like the perfect tool for big data jobs, but they're not readily available and are notoriously difficult to use – especially for researchers who haven't learned to write the batch processing scripts that HPC engineers are traditionally trained to use. As a result, most researchers have depended on high-end desktop systems, but today's data-intensive research means those machines can no longer deliver adequate performance.

To make it easier for researchers to access HPC resources, the Lincoln Laboratory Supercomputing Center at MIT has developed tools and training to bring HPC capabilities to desktops and laptops. The effort draws on lessons learned from the development of the MIT SuperCloud, a unified platform of four large computing ecosystems: supercomputing, enterprise computing, big data and traditional databases.

In "Lessons Learned from a Decade of Providing Interactive, On-Demand High Performance Computing to Scientists and Engineers," the authors explain how the lab addressed its goal of accommodating more and varied users who needed both performance and usability.

To meet its goal, the lab broadened the definition and architecture of an "interactive" HPC system so that users had a desktop experience in which they could essentially hit a key to run their jobs, rather than having to work with batch processing workloads.
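
For readers unfamiliar with the distinction, here is a rough Python sketch of the two workflows. The helper name is invented and Slurm's `srun` stands in as a generic launcher; the paper does not specify the lab's actual tooling.

```python
# A hypothetical sketch of the batch-vs-interactive contrast, not the
# lab's actual tooling. Traditional batch use means writing a
# submission script, queueing it and polling for results; interactive
# on-demand HPC collapses that into a single blocking call.
import subprocess

def run_interactive(command: list[str], cores: int = 16) -> int:
    """Launch `command` across `cores` cores and block until it
    finishes, as a desktop user would expect. Slurm's `srun` is used
    here purely as a stand-in launcher."""
    return subprocess.run(["srun", "-n", str(cores), *command]).returncode

# One keystroke-equivalent call replaces the write/submit/poll cycle:
# run_interactive(["python", "analysis.py"], cores=32)
```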

The HPC environment itself included a uniquely configured scheduler that enabled on-demand computing services. Users were limited in the number of cores their jobs could request, which ensured compute resources were always available for incoming work. The scheduler and central file system were set up so that parallel-computing jobs launched in less than 20 seconds on hundreds of cores, "thereby providing the interactivity with job launches that users were used to on their desktop," the authors wrote.
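
A minimal sketch of that admission policy, with invented names rather than the lab's actual scheduler code:

```python
# A minimal sketch (invented names, not the lab's code) of the
# admission policy described above: cap each user's concurrent cores
# so the cluster always keeps headroom for new interactive jobs.
from dataclasses import dataclass, field

@dataclass
class OnDemandScheduler:
    total_cores: int      # cores in the whole pool
    per_user_cap: int     # most cores any one user may hold at once
    in_use: dict[str, int] = field(default_factory=dict)

    def try_launch(self, user: str, cores: int) -> bool:
        """Admit the job immediately if it fits under both the
        per-user cap and the cluster's remaining free cores."""
        held = self.in_use.get(user, 0)
        free = self.total_cores - sum(self.in_use.values())
        if held + cores > self.per_user_cap or cores > free:
            return False  # request must shrink; no long queue wait
        self.in_use[user] = held + cores
        return True       # job launches in seconds, not hours

    def finish(self, user: str, cores: int) -> None:
        """Return a finished job's cores to the pool."""
        self.in_use[user] = max(0, self.in_use.get(user, 0) - cores)
```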

The lab also created a middleware layer with user-friendly tools for launching jobs, built to support new data analysis and machine learning applications. New users were onboarded with individualized tutorials that included detailed explanations of parallelization strategies for their application.
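
A hypothetical example of what such a launch tool might look like to a user: an ordinary function mapped over a list of inputs, with the parallel fan-out handled behind the scenes. The helper below is illustrative only, not the lab's API.

```python
# `parallel_map` is an invented name; a real HPC middleware tool would
# distribute work across cluster nodes rather than local processes.
from concurrent.futures import ProcessPoolExecutor

def parallel_map(func, inputs, workers: int = 8):
    """Apply `func` to every item of `inputs` across `workers`
    processes, returning results in input order. `func` must be a
    top-level (picklable) function."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(func, inputs))

# e.g. results = parallel_map(analyze_file, file_list, workers=32)
```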

Although "the challenges to deploying interactive on-demand HPC environments are both technical and institutional," the authors wrote, "the strategies presented here are general and easily adapted for any center where productivity and the rapid prototyping and testing of algorithms and analytics are key concerns."

Read the full paper here.

About the Author

Susan Miller is executive editor at GCN.

Over a career spent in tech media, Miller has worked in editorial, print production and online, starting on the copy desk at IDG’s ComputerWorld, moving to print production for Federal Computer Week and later helping launch websites and email newsletter delivery for FCW. After a turn at Virginia’s Center for Innovative Technology, where she worked to promote technology-based economic development, she rejoined what was to become 1105 Media in 2004, eventually managing content and production for all the company's government-focused websites. Miller shifted back to editorial in 2012, when she began working with GCN.

Miller has a BA and MA from West Chester University and did Ph.D. work in English at the University of Delaware.

Connect with Susan at smiller@gcn.com or @sjaymiller.
