How to write apps for multiple cores: Divide and conquer

 


The move toward multicore processors — and, in the supercomputer world, multiple processors — does not immediately guarantee great performance increases. The IT industry is just beginning to figure out how to rewrite computer programs to get the most from this new architecture.

How do you find a bug in a program when that program runs on more than 200,000 processors?

As incredible as that scenario might sound, it is becoming a routine problem for the Energy Department's Lawrence Livermore National Laboratory (LLNL), home of the 212,992-core BlueGene/L supercomputer. To help spot bugs at that scale, laboratory researchers, along with researchers from the University of Wisconsin, have developed a new debugging program, the Stack Trace Analysis Tool.

"What we are finding is that today's architectures require novel [debugging] techniques," said Livermore researcher Gregory Lee, who presented a paper about the new software at the SC 08 conference, held in Austin, Texas, last fall.

Although LLNL is an extreme example, most agencies, with their fleets of servers and desktop machines, are facing the same challenge: how to make the most of multiple processing cores within a single chip.

Most computers, except for economy-minded netbooks, now come with multicore processors: CPUs that place two or more processing cores side by side on a single chip. Earlier this month, for instance, Advanced Micro Devices unveiled its latest desktop PC processors, the dual-core Athlon II X2 250 and a new dual-core Phenom II.

This move toward multicore processors — and, in the supercomputer world, multiple processors — does not immediately guarantee great performance increases, though. And the IT industry is just beginning to figure out how to rewrite computer programs to get the most from this new architecture.

"I sense there will be a shift in the computer industry as the core count keeps increasing," said Eng Lim Goh, chief technology officer at SGI. "There will be more emphasis on the software side: The onus will be on trying to scale your code."

From one to many

Since the Ballistic Research Laboratory-funded Electronic Numerical Integrator and Computer (ENIAC) went live in 1946, computers have essentially executed one instruction at a time.

Of course, since ENIAC's vacuum tubes gave way to transistors on microchips, the number of transistors on a chip, and with it the speed at which chips carry out their instructions, has doubled approximately every 18 months to two years, a trend dubbed Moore's Law after Intel co-founder Gordon Moore, who first observed it.

But until recently, computers have only presented the illusion of juggling multiple tasks, such as letting users play Solitaire while checking their e-mail, thanks to the sheer speed of the processors: A processor running at 2 GHz runs through 2 billion clock cycles every second.

And so, until now, a software programmer's task was relatively easy: Write a series of sequential instructions for processors to execute one after another. If a developer wrote a program that runs too slowly, the joke goes, he or she would merely need to go on vacation for several months, and upon return, the newest chip would run the program with sufficient speed.

Those days have ended, however. In the past few years, Intel and other chip makers found that the more cycles a chip executed per second, the hotter it would get. After chips hit about 3.7 GHz earlier this decade, the heat became unworkable, at least without advanced cooling techniques. So chip makers have switched their strategy: They are advancing their products not by making them faster but by putting more cores onto each chip.

"You are getting more and more cores, but the clock speed has stagnated," Goh said. As a result, "the [emphasis] has shifted from the software developers waiting on the CPU to become faster to one where it is the software developers' work to keep trying to scale their programs from one core to more cores."

For programmers, this multiplicity of cores complicates how they should write programs. Think of the difference between executing a job on a single core and multiple cores as the difference between one person painting a room and a group of people painting a room, said James Reinders, who works at Intel's software products division and wrote a book on parallel programming, "Intel Threading Building Blocks."

One person can do the job sequentially, starting at one side of the room and working through to the other side. Coordinating the efforts of multiple people, however, requires some planning, especially if the job needs to get done in less time.

"The challenge is that we have not, in general, designed our applications to express parallelism," Reinders said. "We haven't designed programming languages to make that easy."

Parallel programming requires attention to two areas, Reinders said. One is decomposing the problem in such a way that it can be run in multiple chunks, side by side. "The art of figuring out how to divide things up so that they are independent is a big challenge," he said. One operation can't be dependent on another operation that hasn't been completed yet. The second area requiring attention is scalability. The programmer does not know how many processors his or her creation will run on, just that it should run on as many processors as are available for the task.

At the JavaOne conference in San Francisco earlier this month, eBay engineer Sangjin Lee sketched out a simple example of how things can go awry when multiple cores work side by side. He showed the audience the code for a simple task: counting the frequency of each word in a file.

For a single processor, the job is easy: Go word by word through the file. On the first occurrence of a word, the program adds that word to a list of all the words seen so far, alongside the number 1 to indicate how often the word has appeared in the file. On each subsequent appearance of the word, the program increments that count by 1.
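In Java terms, the sequential version is only a few lines. (This is a hypothetical sketch of the approach described, not the code Lee showed; the class and method names are invented for illustration.)

```java
import java.util.*;

public class WordCount {
    // Sequential word count: one pass through the words, one core.
    static Map<String, Integer> count(String[] words) {
        Map<String, Integer> freq = new HashMap<>();
        for (String w : words) {
            // First occurrence stores 1; later occurrences add 1.
            freq.merge(w, 1, Integer::sum);
        }
        return freq;
    }

    public static void main(String[] args) {
        Map<String, Integer> freq = count("to be or not to be".split(" "));
        System.out.println(freq.get("to")); // 2
    }
}
```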

But when several cores work on the list at once, this approach can cause confusion if the program is not written with sufficient care. Say the job is split into two identical threads running simultaneously on two different cores. Both threads might come across the same new word at the same time, and both will create an entry for it, which could produce duplicate entries or even crash the program. Likewise, if two threads update a word's count at the same time, one update can overwrite the other, yielding an incorrect total.
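A sketch of one common repair (a hypothetical illustration, not necessarily the fix Lee presented) uses Java's ConcurrentHashMap, whose merge() method makes the check-then-update on each key atomic. Two threads hitting the same word at the same moment can no longer lose an increment or create duplicate entries, as they could with a plain HashMap:

```java
import java.util.*;
import java.util.concurrent.*;

public class ParallelWordCount {
    // Thread-safe parallel word count: each thread handles a chunk of
    // the word list, and ConcurrentHashMap.merge() performs the
    // check-then-update on each key atomically.
    static Map<String, Integer> count(String[] words, int threads)
            throws InterruptedException {
        ConcurrentHashMap<String, Integer> freq = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        int chunk = (words.length + threads - 1) / threads;
        for (int t = 0; t < threads; t++) {
            final int lo = t * chunk, hi = Math.min(lo + chunk, words.length);
            pool.execute(() -> {
                for (int i = lo; i < hi; i++)
                    freq.merge(words[i], 1, Integer::sum); // atomic per key
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return freq;
    }

    public static void main(String[] args) throws InterruptedException {
        String[] words = "a b a c b a".split(" ");
        System.out.println(count(words, 2).get("a")); // 3
    }
}
```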

Breaking up the problem so that you still get correct results is complicated enough, but there is an additional trick: You must break up the problem in such a way that any gains in efficiency aren't eaten up by the overhead it takes to manage the execution across numerous cores. In other words, you must manage the painters in the room efficiently enough that the time taken to plan their activities doesn't eat up all the time saved by having multiple workers on the job.

This trade-off was perhaps first articulated by IBM computer architect Gene Amdahl, who observed that the gains from breaking a task into multiple, simultaneously executed parts are limited by the portion of the task that must still run sequentially, along with the overhead of managing this more complex way of executing the problem. Engineers now refer to this balancing act as Amdahl's Law.

"Amdahl's Law expresses the law of diminishing returns: The incremental improvement in speedup gained by an improvement of just a portion of the computation diminishes as improvements are added," wrote John Hennessy and David Patterson in their 2006 textbook, "Computer Architecture: A Quantitative Approach."
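In formula form, Amdahl's Law says that if a fraction p of a program parallelizes perfectly across n cores while the rest stays serial, the overall speedup is 1 / ((1 - p) + p / n). A minimal sketch of what that means in practice:

```java
public class Amdahl {
    // Amdahl's Law: overall speedup when a fraction p of the work
    // parallelizes across n cores and the remaining (1 - p) stays serial.
    static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    public static void main(String[] args) {
        // Even with 90 percent of the work parallelized, four cores
        // yield only about a 3.1x speedup, and the serial 10 percent
        // caps the gain at 10x no matter how many cores are added.
        System.out.println(speedup(0.9, 4));    // ~3.08
        System.out.println(speedup(0.9, 1000)); // ~9.91
    }
}
```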

Brainstorming

Fortunately, help is on the way from both the public and private sectors, and two approaches are taking shape. One is to create entirely new languages, designed from scratch for a multicore environment. The other is to write additions to existing languages that can review code and find places where a program can be broken into sections that run simultaneously.

The Defense Advanced Research Projects Agency has been working on the issue through its High Productivity Computing Systems program, at least for what is called coarse-grained parallelism, or programs that run across many processors. It has funded the development of a number of new languages that developers could use to write such programs.

One DARPA language created under this model is Chapel, which Cray is developing. Chapel was designed to "reduce the gap between parallel languages and the mainstream" languages, Cray engineer Brad Chamberlain said at the JavaOne conference.

IBM is creating another DARPA-funded language, called X10, that can run on a Java Virtual Machine, making it usable across multiple platforms. Again the focus is on familiarity. The plan was to "start with a sequential programming model that works" and add more elements of concurrency and distribution, said IBM researcher Vijay Saraswat.

Perhaps the most radical approach being tried by the development community is the use of functional languages. Although functional languages, such as Haskell, have been around for decades, they have not been widely used, probably because they are harder to master. In functional languages, programs are broken into a composite of functions, said Vlad Vinogradsky, a senior architect evangelist for Microsoft's public-sector group. Functions "calculate a return value, and that is all that they do," he said. That discipline ensures that computing one value can't inadvertently change state used by another part of the program.
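The property Vinogradsky describes can be illustrated even in Java (a hypothetical sketch; the parallel-streams API shown here arrived in a later Java version than the one discussed at the conference): a pure function reads nothing but its input and modifies nothing outside itself, so applying it to many elements can safely be spread across cores.

```java
import java.util.Arrays;

public class PureExample {
    // Pure function: the result depends only on the argument, and
    // nothing outside the function is modified.
    static int square(int x) {
        return x * x;
    }

    public static void main(String[] args) {
        // Each element's result is independent of the others, so the
        // mapping can be split across any number of cores. This is
        // safe only because square() has no side effects.
        int[] out = Arrays.stream(new int[]{1, 2, 3, 4})
                          .parallel()
                          .map(PureExample::square)
                          .toArray();
        System.out.println(Arrays.toString(out)); // [1, 4, 9, 16]
    }
}
```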

Microsoft is working on a functional language called F#. Sun Microsystems, with DARPA's help, also is working on a largely functional language, called Fortress, that follows those same rules.

While DARPA and others create entirely new languages, other developers are working on modifying existing ones.

At the SC 08 conference, researchers showed how they are extending popular languages for parallel duty. For instance, George Washington University researcher Bill Carlson is developing Unified Parallel C (UPC), an extension of the C programming language for parallel environments. University of California, Berkeley researchers are working on Titanium, a parallel dialect of Java. Elsewhere, John Mellor-Crummey presented his work on Co-Array Fortran, an extension of Fortran 95 that is also being folded into the next version of the Fortran standard.

Private companies have also taken the initiative to tackle multicore processors. One is, not surprisingly, Intel, which has developed an extension to C++, called Threading Building Blocks. To build TBB, Intel developers rewrote the aspects of C++ that might lead to unpredictable results when used in a multiprocessor or multicore environment, such as memory management.

In use, developers include the TBB headers in their code and link against the TBB library, and the TBB functions are compiled into the program. Intel offers an extension to Visual Studio, called Intel Parallel Studio, that supports TBB. With TBB, programmers don't need to manage individual threads by hand or know in advance how many cores their program will run on.

Reinders offered an example of how a TBB-enhanced C++ app could work. Say a program is running across all four cores of a quad-core processor. But when another program, such as a virus-checking program, is loaded onto one of the cores, performance of the part of the program running on that one core slows, which in turn slows the entire program. TBB would see that slowdown and move that portion of the program off that core and onto the other three.
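TBB accomplishes that balancing through a work-stealing scheduler. TBB itself is C++, but Java's ForkJoinPool, used here purely as an analogous illustration, is built on the same idea: a program describes many small subtasks, and idle workers steal queued subtasks from busy ones, so work drifts away from a slowed core without the programmer's involvement. A minimal sketch (all names hypothetical):

```java
import java.util.concurrent.*;

public class StealingSum extends RecursiveTask<Long> {
    // Divide-and-conquer sum on a work-stealing pool: the scheduler,
    // not the programmer, decides which core runs which subtask.
    static final int CUTOFF = 1000;
    final long[] data; final int lo, hi;

    StealingSum(long[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override protected Long compute() {
        if (hi - lo <= CUTOFF) {        // small enough: sum directly
            long s = 0;
            for (int i = lo; i < hi; i++) s += data[i];
            return s;
        }
        int mid = (lo + hi) >>> 1;
        StealingSum left = new StealingSum(data, lo, mid);
        StealingSum right = new StealingSum(data, mid, hi);
        left.fork();                          // queue the left half; any
        return right.compute() + left.join(); // idle worker may steal it
    }

    public static void main(String[] args) {
        long[] data = new long[10_000];
        java.util.Arrays.fill(data, 1);
        long total = ForkJoinPool.commonPool()
                                 .invoke(new StealingSum(data, 0, data.length));
        System.out.println(total); // 10000
    }
}
```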

The Java community is also trying to make it easier to write Java programs for multicore processors. At the JavaOne conference, Jonas Bonér of the Swedish consulting firm Crisp AB described a number of different approaches.

One is called software transactional memory (STM), which acts like a transactional database. In a database, if a deadlock occurs, with two transactions each holding a resource the other needs, the database rolls back to an earlier state, and both parties try to execute their actions again, perhaps on a staggered schedule. Each operation is atomic: If the entire sequence isn't completed, the program returns to the state it was in before the transaction started.

Although Java does not offer STM, another language that runs on the Java Virtual Machine, a Lisp dialect called Clojure, does. "Using Clojure, unless the entire set of transactions are completed, [the program] throws an exception," Bonér said. Atomic operations can be constructed within Java, but doing so demands more expertise from the programmer.
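A sketch of what that hand-built atomicity looks like, using the standard java.util.concurrent.atomic library (this is a generic illustration, not Bonér's code): the same optimistic read-validate-retry pattern an STM applies to whole transactions, applied here to a single counter.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticCounter {
    static final AtomicInteger counter = new AtomicInteger(0);

    // Optimistic update: read the value, compute the new one, then
    // commit only if nobody else changed it in between; otherwise
    // retry. An STM applies this pattern to whole groups of variables.
    static int increment() {
        while (true) {
            int seen = counter.get();
            int next = seen + 1;
            if (counter.compareAndSet(seen, next)) return next;
            // Commit failed: another thread got there first; retry.
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int t = 0; t < 4; t++) {
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 1000; i++) increment();
            });
            threads[t].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(counter.get()); // 4000: no lost updates
    }
}
```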

Another method is message-passing concurrency, which seemed to be the most-discussed alternative at this year's JavaOne conference. In that approach, programs are broken down into a series of individual lightweight processes, which communicate with one another through messages. The components, which can also be called actors, have no concurrent operations and instead perform all their operations sequentially.

That messaging approach "raises the abstraction level," Bonér said. "It is easier to reason about. It is easier to see where the problem might be."

One of the biggest problems with spreading a Java program across multiple cores is that threads that are supposed to execute in a certain order might run in a different order because they are running side by side, which can lead to incorrect results.

Message passing would eliminate that problem. As an example, Bonér presented a program written in the Scala programming language that emulates a ping-pong game. The game consists of two actors, or lightweight processes. The sole function of each actor is to direct the ball back to the other actor. Breaking down each process into individual components maintains the orderly progress of the game.
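Bonér's example was written in Scala; an analogous sketch in plain Java (hypothetical, with a BlockingQueue serving as each actor's mailbox) shows the shape of the idea. Each actor handles one message at a time, and the only things the actors share are the mailboxes themselves, so no locking is needed:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class PingPong {
    static final AtomicInteger volleys = new AtomicInteger();

    // Minimal "actor": a private mailbox plus a loop that handles one
    // message at a time. Actors share no mutable state; they only
    // pass messages (here, the volley count remaining).
    static class Actor extends Thread {
        final BlockingQueue<Integer> mailbox = new LinkedBlockingQueue<>();
        Actor partner;

        public void run() {
            try {
                while (true) {
                    int ball = mailbox.take();      // receive the ball
                    if (ball == 0) {                // rally over:
                        partner.mailbox.put(0);     // stop the partner too
                        return;
                    }
                    volleys.incrementAndGet();      // hit the ball ...
                    partner.mailbox.put(ball - 1);  // ... back to partner
                }
            } catch (InterruptedException e) { /* exit */ }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Actor ping = new Actor(), pong = new Actor();
        ping.partner = pong; pong.partner = ping;
        ping.start(); pong.start();
        ping.mailbox.put(6);               // serve: six volleys in all
        ping.join(); pong.join();
        System.out.println(volleys.get()); // 6
    }
}
```

Because each actor works through its mailbox strictly in order, the game progresses deterministically even though the two actors run on different cores.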

Whatever the approach, the goal of writing programs that can run concurrently on several processors or processor cores remains elusive. "Concurrency is complicated. As an industry we are still coming to grips with the programming model," said Sun Microsystems engineer Brian Goetz at the JavaOne conference. "I think there is an incremental path to get there, but I do think we need to change the way we think."
