Monitoring a supercomputer (Sandia National Labs/YouTube)

Diagnosing performance problems in supercomputers

Researchers are working on a framework to automatically monitor and diagnose performance issues with supercomputers.

A team of eight from Sandia National Laboratories and Boston University has been working for more than a year to replace traditional, manual methods of finding and solving problems. The result -- the Lightweight Distributed Metric Service (LDMS) -- is a low-overhead, low-latency framework for collecting, transferring and storing metric data on a large distributed computer system. It recently won the Gauss Centre for Supercomputing’s Gauss Award for best technical paper at the International Supercomputing Conference.

Supercomputers have many components, and the more they have, the likelier it is that the failure of one will trigger a domino effect that disables the whole system, said Vitus Leung, a Sandia computer scientist and paper co-author.

“For a regular computer like you might have in your office or laptop at home, you have far fewer components, so the likelihood of failure of any one component, which would disable the entire machine, is much smaller,” Leung said.

But for high-performance computers, the complexity of applications and hardware configurations makes it difficult to know how efficiently the machine is operating. Plus, the lack of high-fidelity, synchronized data hides problems like network congestion, contention for resources or memory imbalance.

To study potential problems behind application performance variations -- among the most significant challenges in high-performance computing systems, according to the report -- the research team used supervised machine learning, writing programs to recreate known anomalies that would likely affect a Cray XC30m supercomputer at Sandia and the Mass Open Cloud system at BU.

They also did “healthy runs,” Leung said, without anomalies. “That provided us with data -- actually a lot of data,” he said.
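The pairing of anomaly-injected runs with healthy ones is what yields labeled training data for supervised learning. A minimal sketch of the idea in Python -- the "CPU hog" anomaly, trace lengths and units here are hypothetical, not the team's actual anomaly programs:

```python
import random

def healthy_run(length=60, seed=None):
    """Generate a synthetic 'healthy' metric trace (hypothetical units)."""
    r = random.Random(seed)
    return [r.gauss(15, 2) for _ in range(length)]

def inject_cpu_hog(trace, start=20, duration=15, load=75):
    """Overlay extra CPU load on part of a trace, mimicking the idea of
    re-creating a known anomaly so its signature can be labeled."""
    out = list(trace)
    for i in range(start, min(start + duration, len(out))):
        out[i] += load
    return out

# Labeled examples: (trace, label) with 0 = healthy, 1 = anomalous.
dataset = [(healthy_run(seed=s), 0) for s in range(5)]
dataset += [(inject_cpu_hog(healthy_run(seed=s)), 1) for s in range(5, 10)]
```

Because each injected fault is known in advance, every trace comes with a ground-truth label, which is exactly what supervised algorithms need.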

Using LDMS, the supercomputer collected much more data -- more than 700 metrics per second per computer, while the cloud collected about 50 metrics at two- or three-second granularity. The difference, Leung said, results from the “noisiness of the data on the BU cloud, because it’s not nearly as dedicated. You don’t know who else is running on your machine at any given time…. The conditions are much more diverse and not so stable.”

Collecting the data is one part. Using it is the other. “We can actually learn from known causes of performance variations using machine learning and create some models," said Ayse Coskun, an assistant professor at BU and lead principal investigator on the project. "Later on when I see the same problems, perhaps I can recognize that this was the problem that was causing the variation, and in a future step, following the outcome of this framework, I can do better organization of my machine, do better system management, maybe also change the application or change my scheduling policy,” she said.

The team extracted statistical characteristics from the data, filtering it down to about 10 percent of the raw volume, and fed that to various machine-learning algorithms. One algorithm, known as Random Forest -- a collection of decision trees -- proved “near-perfect in detecting these anomalous situations,” Leung said.
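That summarization step -- collapsing each raw metric stream into a handful of statistics before learning -- might look like the following sketch, where the metric names and window values are illustrative rather than the team's actual data:

```python
import statistics

def summarize(series):
    """Collapse a raw metric time series into a few statistical features,
    so the learner sees compact summaries rather than every raw sample."""
    return {
        "mean": statistics.mean(series),
        "stdev": statistics.pstdev(series),
        "min": min(series),
        "max": max(series),
    }

# A CPU-utilization window from a healthy run vs. one with an injected spike.
healthy = [12, 15, 14, 13, 16, 15]
anomalous = [12, 15, 88, 91, 90, 14]

print(summarize(healthy)["max"])    # low, steady peak
print(summarize(anomalous)["max"])  # the spike survives the summarization
```

Summaries like these are how hundreds of metrics per second per node can be reduced to a fraction of the raw volume without losing an anomaly's signature.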

“It prioritizes your metrics for decisions,” he said. “Some of the previous state-of-the-art techniques that we compared to didn’t do this prioritizing. They tried to develop an independent set of metrics that would cover the entire space that the metrics were covering. On the one hand, it’s good; it covers the entire space. But on the other hand, that coverage doesn’t lead to good diagnoses.”
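The prioritizing Leung describes corresponds to a random forest's per-feature importance scores. A hedged sketch using scikit-learn's RandomForestClassifier on synthetic data -- the metric names, the "memory leak" fault and all the numbers are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical training set: each row is one run, each column a summary
# feature of a monitored metric.
n = 200
cpu_max = rng.normal(15, 2, n)           # uninformative for this fault
mem_used = rng.normal(40, 5, n)
labels = rng.integers(0, 2, n)           # 0 = healthy, 1 = anomalous
mem_used[labels == 1] += 40              # the fault shows up in memory use

X = np.column_stack([cpu_max, mem_used])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)

# The forest ranks which metrics drove the diagnosis -- the
# "prioritizing" of metrics described above.
for name, score in zip(["cpu_max", "mem_used"], clf.feature_importances_):
    print(f"{name}: {score:.2f}")
```

Because the forest concentrates its importance on the few metrics that actually separate healthy from anomalous runs, a small subset of the collected data can carry the whole diagnosis.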

He likened it to a physician diagnosing a patient using everything known in medicine vs. the expertise to pinpoint the problem.

The team found that only 1 percent of the data collected was useful for making a decision about a problem.

Using LDMS to diagnose problems in high-performance computing platforms will help systems administrators allocate resources and schedule jobs to maximize performance. Developers will gain insight into improving runtime performance, and system designers will have information to help them build more efficient machines.

Eventually, the team will look at unsupervised machine learning. “At some point, we’re going to have to move into the unknown, so to speak, deal with the anomalies we don’t know, we don’t have a history,” Leung added. “That will provide new challenges as we continue in this line of work.”

Supercomputers’ performance variations have become more prominent and problematic in recent years, Coskun said, which could affect the scientific community that relies on the systems. An IEEE Spectrum article last year cited cosmic rays and radioactive solder as culprits behind problems with supercomputers at other national labs recently.

“Broadly, these large-scale, high-performance computing systems are very critical for scientific progress,” she said. “The Department of Energy, for example, has a number of supercomputers where all the scientific computing applications workloads are run, and these include lots of different topics from physics to chemistry to geography to other computational sciences. People write simulators or applications to do scientific discovery or to evaluate something they are doing, and these are typically complex applications that run on many servers, many nodes, many computers, essentially, for a long time.”

LDMS is freely available as open-source software, so it can run on IBM, Cray or Linux clusters, Sandia officials said.


About the Author

Stephanie Kanowitz is a freelance writer based in northern Virginia.
