Run time vs. static analysis: They're both good
- By William Jackson
- Aug 06, 2007
A bake-off between engineers using different methods of code analysis at last week's Black Hat Briefings demonstrated the relative strengths of static and run time analysis, said Brian Chess, chief scientist and founder of Fortify Software Inc. of Palo Alto, Calif.
'You can't say one technique is superior to another,' Chess said. 'They have strengths in different places.'
Two engineers from Fortify used tools they had developed for the two types of analysis in the Iron Chef Black Hat competition at the computer security conference held in Las Vegas. Imitating the popular competitive cable cooking show on the Food Network, each 'chef' was given a piece of mystery code to analyze for vulnerabilities before an audience of 700 hackers, researchers and security professionals.
Static analysis, which has been around since the 1980s, is an efficient tool for finding known classes of flaws by examining source code without executing it. Run time analysis is a quality assurance method that examines the behavior of code as it is running to determine whether it violates security rules. The demonstration emphasized the need for secure code development rather than trying to add security to programs after they are created.
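The contrast between the two techniques can be sketched in miniature. The toy example below is illustrative only and has no connection to the code used at the competition: the sample function, the regex pattern, and the crafted input are all invented for this sketch. A static check scans the source for a known-dangerous pattern and reports the offending line; a run time check executes the code with a crafted input and observes whether attacker-controlled data actually reaches the shell, demonstrating an exploit.

```python
import re

# Toy code under review: builds a shell command by concatenating user input,
# a classic command-injection pattern. Purely illustrative.
SAMPLE_SOURCE = '''
import os
def lookup(user_input):
    os.system("ping " + user_input)
'''

def static_scan(source):
    """Static analysis: flag lines matching a known-bad pattern, without running."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if re.search(r'os\.system\(.*\+', line):
            findings.append((lineno, "possible command injection"))
    return findings

def runtime_probe():
    """Run time analysis: execute with a crafted input and watch the behavior."""
    import os
    observed = []
    real_system = os.system
    # Intercept os.system so we can observe the command instead of running it.
    os.system = lambda cmd: observed.append(cmd) or 0
    try:
        namespace = {}
        exec(SAMPLE_SOURCE, namespace)
        namespace["lookup"]("8.8.8.8; cat /etc/passwd")  # crafted exploit input
    finally:
        os.system = real_system
    # A ';' surviving into the shell command proves the flaw is exploitable.
    return [cmd for cmd in observed if ";" in cmd]

print(static_scan(SAMPLE_SOURCE))
print(runtime_probe())
```

The static scan can point at the exact line containing the problem, while the run time probe produces a concrete exploit, mirroring the trade-off the judges weighed at the event.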
Chess, speaking at the Black Hat event, said the increasing complexity and functionality of software, especially interactive Web applications in what is being referred to as Web 2.0, make the current practice of adding on security unsustainable and advocated secure software development practices.
Each competing team found about 20 vulnerabilities in the code being analyzed, but only one of those vulnerabilities was found by both teams. A panel of judges gave the laurels to the run time team because it came up with exploits during its analysis that left no doubt about the nature of the vulnerabilities it had discovered. But the results were not conclusive, Chess said. He noted that the static analysis team could point to the exact line of code where each problem it found resided. Although the judges at Black Hat, which leans heavily toward hackers and researchers, gave the nod to run time analysis, a group of software developers looking at the same results might have preferred static analysis, he said.
However, neither team could find more than half of the vulnerabilities, indicating that neither method by itself is likely to be adequate for code review.
The audience was given the opportunity to examine the same piece of code at the same time as the competitors, and a handful took up the challenge. One audience team that reported its results at the close found almost as many vulnerabilities as the engineers, Chess said.
'It was a pretty impressive showing by three people working together,' he said.
William Jackson is a Maryland-based freelance writer.