GCN Tech Blog

By GCN Staff


The prescient Amdahl

This week GCN has a feature story on how programmers can write their programs to get the most out of multicore processors.

The problem is only partly about dividing the work so that it uses all the processor cores you have available. That's complicated enough, but there is an additional trick: you must divide the work in such a way that any gains in efficiency aren't eaten up by the overhead it takes to manage execution across numerous cores.

This trade-off was perhaps first articulated by IBM computer architect Gene Amdahl. Amdahl observed that the performance gains expected from breaking a task into multiple simultaneously executed parts are limited by the portion of the task that cannot be parallelized, as well as by the overhead required to manage this new, more complex way of executing the problem. Engineers now refer to this balancing act as Amdahl's Law.
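In its textbook form, if a fraction P of a program can be spread across N processors while the rest must run serially, the overall speedup is 1 / ((1 - P) + P/N). As a quick illustration: parallelizing 95 percent of a program perfectly across eight cores yields a speedup of only about 5.9x, not 8x, and no number of additional cores can push it past 20x.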

"Amdahl's Law expresses the law of diminishing returns: The incremental improvement in speedup gained by an improvement of just a portion of the computation diminishes as improvements are added," stated John Hennessy and David Patterson on their 2006 textbook 'Computer Architecture."

While most developers may just now be starting to think about concurrency, it is old news for those writing for supercomputer systems.

One of the sidebars to the main story is about how high-performance and clustered computer systems tackle this problem through the Message Passing Interface (MPI), a library of calls for Fortran, C and C++ applications.

For this article we sat in on a session about MPI given by Matthias Gobbert, a mathematics professor at the University of Maryland, Baltimore County and an administrator for UMBC's Center for Interdisciplinary Research and Consulting.

As Gobbert noted, a nice aspect of MPI is that it doesn't greatly alter a programmer's environment. The library is primarily a set of bindings, available for C, C++ and Fortran, among other languages. The program code remains a single file, even if different processes are carved off for different processors to tackle.

The first step in making a program MPI-capable is simply to include a header in the program code. For C, the header is "#include <mpi.h>", which pulls in the MPI library's declarations. Then the programmer looks for those pieces of the program that can work as stand-alone units of work. The data such pieces produce is passed around with MPI sending and receiving commands, MPI_Send and MPI_Recv, respectively. The send command, for instance, instructs one process to send an output buffer to another process, which captures it with a matching receive command.
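To make the pattern concrete, here is a minimal sketch in C (our own illustration, not code from Gobbert's session) in which rank 0 sends a short message and rank 1 receives it. The MPI calls are standard; the message text, buffer size and tag are illustrative.

    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char *argv[])
    {
        int rank, size;
        char msg[64];

        MPI_Init(&argc, &argv);                  /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* which process am I? */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* how many processes total? */

        if (rank == 0 && size > 1) {
            strcpy(msg, "hello from rank 0");
            /* send the string to rank 1; tag 0 must match the receiver's tag */
            MPI_Send(msg, strlen(msg) + 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* receive the string from rank 0 with the matching tag */
            MPI_Recv(msg, sizeof(msg), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 got: %s\n", msg);
        }

        MPI_Finalize();                          /* shut down the MPI runtime */
        return 0;
    }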

"It's your job to make they match up correctly," he said of the MPI_Send and MPI_Recv's.

Once the programmer is finished, the application is compiled with an MPI wrapper that runs in conjunction with the compiler for the program's native language (many Linux distributions offer mpicc and mpiCC, wrapper scripts for C and C++ compilers, respectively). The wrapper supplies the MPI headers and libraries while the native compiler handles the rest of the code.
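With a typical MPI installation, the sketch above might be compiled and launched on two processes like so (the wrapper and launcher names, here mpicc and mpirun, and the file name hello_mpi.c are examples; they vary by distribution):

    mpicc hello_mpi.c -o hello_mpi
    mpirun -np 2 ./hello_mpi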

Posted by Joab Jackson on Jan 09, 2008 at 9:39 AM

