NIST scientists model grid computing
- By William Jackson
- Oct 08, 2004
Grid computing promises to redefine IT infrastructures by optimizing existing resources and providing access to more processors as needed.
But it's too early to say whether grids will revolutionize computing.
'Right now we're in a transition phase, when the technology is coming out of the lab and being adopted commercially,' said Chris Dabrowski, a computer scientist at the National Institute of Standards and Technology.
Kevin Mills, a senior NIST research scientist, added that some people 'think grid will never work.'
Large-scale computing grids must monitor task progress, detect node or communication failures, and reallocate resources on the fly.
'The pieces have to be monitored on a distributed basis, and there has to be an ability to recover from failures,' Mills said. 'Can the protocols defining a grid really handle that?'
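The monitor-and-recover behavior Mills describes can be sketched in a few lines. This is a hypothetical illustration, not NIST's model or any real grid protocol: node names, the heartbeat timeout, and the least-loaded reallocation policy are all assumptions for the example.

```python
import time

HEARTBEAT_TIMEOUT = 5.0  # seconds of silence before a node is presumed failed

class GridMonitor:
    """Toy sketch of distributed monitoring with failure recovery."""

    def __init__(self, nodes):
        # last_seen maps node name -> timestamp of its most recent heartbeat
        self.last_seen = {node: time.time() for node in nodes}
        # assignments maps node name -> task ids currently placed there
        self.assignments = {node: [] for node in nodes}

    def heartbeat(self, node):
        self.last_seen[node] = time.time()

    def failed_nodes(self, now=None):
        now = time.time() if now is None else now
        return [n for n, t in self.last_seen.items()
                if now - t > HEARTBEAT_TIMEOUT]

    def reallocate(self, now=None):
        """Move tasks off failed nodes onto the least-loaded live node."""
        failed = set(self.failed_nodes(now))
        live = [n for n in self.assignments if n not in failed]
        if not live:
            raise RuntimeError("no live nodes left to recover onto")
        moved = []
        for node in failed:
            for task in self.assignments.pop(node):
                target = min(live, key=lambda n: len(self.assignments[n]))
                self.assignments[target].append(task)
                moved.append((task, target))
            self.last_seen.pop(node)
        return moved
```

Real grid middleware must do this across administrative domains and unreliable networks, which is exactly the behavior the NIST researchers want to simulate before trusting the protocols at scale.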
No one knows yet. So Dabrowski and Mills are embarking on a long-term project to simulate the complex interaction of computers linked in a grid.
'We're just beginning to develop the models,' Dabrowski said. They hope to complete preliminary work next year.

A step beyond
Wide adoption of grid computing is still five to 10 years down the road, 'which makes it a good time for us to get involved,' Mills said.
Grid computing would share computing resources, much as parallel processing harnesses multiple computers at a site into a single machine, or distributed computing uses computers remote from each other.
But grid computing goes a step beyond distributed computing by tying together heterogeneous, geographically dispersed IT resources. Standards-based grid protocols would create a single pool of resources, although each node would have its own resource manager and the users would not see a single system view.
'Users can submit jobs that are distributed throughout the network,' Dabrowski said. A user with a large job could break it into component pieces, or an intermediate computer could do that. In either case, an intermediate computer would schedule the job on the grid's individual nodes to make best use of available resources.
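The split-and-schedule step Dabrowski describes might look like the following. This is a minimal sketch under assumed conditions: the node capacities, the even split, and the greedy most-free-capacity placement are illustrative choices, not part of any grid standard.

```python
def split_job(total_work, n_pieces):
    """Break a job of total_work units into roughly equal pieces."""
    base, extra = divmod(total_work, n_pieces)
    return [base + (1 if i < extra else 0) for i in range(n_pieces)]

def schedule(pieces, capacities):
    """Greedily place each piece (largest first) on the node with the
    most remaining capacity, standing in for the 'intermediate computer'
    that makes best use of available resources."""
    remaining = dict(capacities)   # node name -> free capacity
    placement = []                 # (piece index, node) pairs
    for idx, work in sorted(enumerate(pieces), key=lambda p: -p[1]):
        node = max(remaining, key=remaining.get)
        if remaining[node] < work:
            raise RuntimeError(f"grid lacks capacity for piece {idx}")
        remaining[node] -= work
        placement.append((idx, node))
    return sorted(placement)
```

A production scheduler would also weigh data locality, queue depth, and node heterogeneity; the point here is only the division of labor between the user's job and the grid's nodes.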
The Global Grid Forum is still working on the Open Grid Services Infrastructure, a set of technical specifications.
'Most of the grids today are within single enterprises' and consist of hundreds or thousands of nodes, Dabrowski said.
As standards firm up, the scope of the grids will increase. Mills said development probably will follow two models.
The first model would supply computing resources as a utility service, so that users need not maintain an infrastructure to supply all the resources needed. The second model would be service-oriented, coordinating all the services within an enterprise to boost performance and speed discovery and recovery.
Mills and Dabrowski said it makes sense to find potential problems with large-scale implementations before the standards are set.
'It's more cost-effective to find these things early,' Mills said.
Grids are complex, and complex things can behave in unexpected ways. Diversity, grid computing's great strength, also makes it vulnerable to failures, virus attacks and sudden imbalances, the NIST researchers said. Diverse systems can be resilient only if they are well understood.
Dabrowski and Mills are trying to define clearly the relationships between grid size, application complexity and reliability. 'There is no simple formula,' Mills said.
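A back-of-the-envelope calculation shows why size and reliability interact so sharply, and why 'there is no simple formula' once recovery is added. Assuming (hypothetically) that a job's pieces run on independent nodes, each failing with probability p_fail, and that there is no recovery at all:

```python
def job_success_prob(n_nodes, p_fail):
    """Probability every piece finishes when each of n_nodes fails
    independently with probability p_fail and nothing is retried."""
    return (1 - p_fail) ** n_nodes
```

With a seemingly tiny per-node failure rate of 0.1 percent, a 1,000-node job completes only about 37 percent of the time, which is why large grids cannot work without the monitoring and reallocation machinery the standards must get right.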
The models will begin with relatively modest grids of thousands of nodes and tens of thousands of processors. Eventually they will scale up to tens of thousands of nodes and millions of processors.
'At that point, we might have to move to a grid' even to build the model, Mills said.
William Jackson is a freelance writer and the author of the CyberEye blog.