Pete Ungaro | Cray gets its groove back
GCN Interview with Pete Ungaro, Cray chief executive officer
- By Rutrell Yasin
- Sep 08, 2007
'Cray is focused on getting past 1,000 processors, where innovation really matters.' - Pete Ungaro
WPN Photo by Ingrid Barrentine
Peter Ungaro, chief executive officer at Cray, has been on a mission to help the company regain its focus since he took the helm of the supercomputer maker two years ago. Cray led the supercomputer revolution back in the 1970s and 1980s but faltered in the mid-1990s as it ran into financial trouble and faced stiff competition from server providers such as IBM and Hewlett-Packard. IBM built six of the 10 fastest computers on the Top 500 Supercomputers list released in June; two were Cray machines. Ungaro spoke recently about Cray's strategy in the high-performance computing arena and the coming wave of adaptive supercomputing.

GCN: Why did Cray lose its luster, and how did it get it back?

Ungaro:
Cray, all through the '80s and into the early '90s, was the name in supercomputing. The company went through a number of financial challenges and actually ended up getting acquired by Silicon Graphics, then spun back out of Silicon Graphics around 2000. The main thing the company struggled with was focus. From time to time, the company spread out very broadly in the marketplace and lost its focus on high-end supercomputing. What we have worked on a lot is refocusing the company on the high end and building a cost structure and a business model for the overall company that let it lead at the high end of the supercomputer world.

GCN: How is that effort going?

Ungaro:
It is going pretty good. Over the next year, we're going to roll out three brand-new products into the marketplace, all focused at the high end of the market. We really focus on systems that are about $1 million and up in price. I think the vendors that do [symmetric multiprocessing] systems did a great job of scaling the systems up [from] 10 to 100 processors. And there are a number of companies building commodity-based clusters. I think that was great to get to the next order of magnitude in scaling, maybe from about 100 processors to 1,000 processors.
Cray is really focused on getting past 1,000 processors, where innovation really matters. And Cray has a lot of technology and expertise in this market to really innovate to solve the problems of scaling up these very large systems. So we've been doing that not just with our mainline system, which we call the XT4 today, but we also plan to bring some other processing technology to market to expand that at the very high end.

GCN: What are the first things we are likely to see from this initiative?

Ungaro:
The key thing for us in this first phase is to make life easier for the user and administrator of very large systems. So we will provide a common place to log in to the system, with all of the storage and data kept in one common place across the machine. And you'll be able to manage the system as one large, hybrid system.
Later, we'll bring in tighter integration from the hardware and software side so the system will actually start to understand how best to run and adapt the application. Today, the user has to code their application differently depending on which type of supercomputer they're using. Whether they're using a traditional commodity [Advanced Micro Devices] processor or a vector processor, they have to write their codes and applications very differently to take advantage of those different processing types. What adaptive supercomputing is going to do over time is adapt the system for the user.

GCN: What does it mean to adapt the system for the user?

Ungaro:
It means that, through compiler technologies and other software technologies, the system will understand the application and run it in the fastest way possible within the resources of the system. If parts of your application would run better on vector processors, the system would run those on a vector processor. If other parts would run better on a traditional scalar processor, it would run them there.
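As a rough illustration of the routing decision Ungaro describes, a scheduler might classify each piece of work by how well it vectorizes and send it to the matching processor type. This is a minimal sketch in Python with hypothetical names and a crude heuristic, not a depiction of Cray's actual compiler technology:

```python
# Toy sketch (hypothetical, not Cray's implementation): route each compute
# kernel to the processor type where it should run fastest.

from dataclasses import dataclass


@dataclass
class Kernel:
    name: str
    # Fraction of the kernel's work that is regular, data-parallel loops,
    # i.e., the kind of work a vector processor excels at.
    vectorizable_fraction: float


def choose_processor(kernel: Kernel, threshold: float = 0.5) -> str:
    """Pick 'vector' for kernels dominated by regular loops, else 'scalar'."""
    return "vector" if kernel.vectorizable_fraction >= threshold else "scalar"


# A stencil sweep is highly regular; a pointer-chasing graph walk is not.
print(choose_processor(Kernel("stencil_sweep", 0.9)))    # vector
print(choose_processor(Kernel("graph_traversal", 0.1)))  # scalar
```

In the adaptive vision Ungaro outlines, this classification would happen inside the compiler and runtime rather than by hand, so the same source code could exploit whichever processor types the hybrid machine contains.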
In fact, this was the cornerstone of our proposal to [the Defense Advanced Research Projects Agency] in their high-productivity computing systems procurement. Cray was one of two companies awarded with the final phase of that, which is building a prototype of this adaptive supercomputer we call Cascade.
We were awarded that at the end of last year. It's a $250 million [research and development] contract, the largest supercomputer contract, at least that we are aware of, that's ever been awarded.

GCN: What's your strategy for Cray to have more of a presence in government agencies?

Ungaro:
The U.S. government is our largest single customer. We enjoy a partnership with the federal government, both selling products and working together to understand the requirements at different agencies. We [work] jointly on R&D programs to build our next generation of supercomputers that attack these high-end problems. A great example of that is Sandia National Laboratories, where a few years ago we signed a contract to build a supercomputer called Red Storm that Cray eventually brought to market as the XT3. Now we're on the next generation of that product, the XT4, which is the second-largest supercomputer in the world. So we not only think of the U.S. government as an important customer of ours, we form a very tight partnership with many of the agencies and laboratories at both the Energy and Defense departments, and other departments across the government, in building our next-generation products.

GCN: Do you think the government is allotting enough R&D money for supercomputing?

Ungaro:
Clearly there is room to do more. High-performance computing is very important to increased industrial and defense-oriented competitiveness for the U.S. I always think there is potential there. But I do think the government has shown that it is very committed to this.
In the Department of Energy, they have the Leadership Computing Initiative going on [that works in conjunction with] the Oak Ridge National Laboratory leadership computing center. We have a very large contract with Oak Ridge to develop a supercomputer that is scalable to a petaflop in performance, which is about four times the performance of any supercomputer available today.
Other agencies like the National Science Foundation and NASA and the National Oceanic and Atmospheric Administration all have commitments at one level or another to supercomputing. We need to continue to do next-generation R&D because there is a lot of basic research that needs to go on many years prior to fielding these large supercomputers.
Rutrell Yasin is a freelance technology writer for GCN.