Reality Check

[Image: watchmaker closely examining a watch]

Is unit testing of software a waste of time?

I’m currently working on a project that uses the JUnit framework for unit testing our software. Unit testing is a method of testing that focuses on exercising small units of code in isolation, such as a single method or a single class (the template for an object in object-oriented programming).
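To make this concrete, here is a minimal sketch of what such a test looks like; the `Calculator` class is a hypothetical example, not code from the project, and the check is written in plain Java (with JUnit it would be an `@Test` method using `Assert.assertEquals`) so the sketch is self-contained.

```java
// Hypothetical unit under test: one small class with one method.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

public class CalculatorTest {
    // A unit test exercises a single method in isolation and
    // compares the actual result against the expected one.
    static void testAddReturnsSum() {
        Calculator calc = new Calculator();
        int result = calc.add(2, 3);
        if (result != 5) {
            throw new AssertionError("expected 5 but got " + result);
        }
    }

    public static void main(String[] args) {
        testAddReturnsSum();
        System.out.println("add test passed");
    }
}
```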

Many modern integrated development environments will generate a stubbed-out unit test case for any class selected in a project. Instead of throwing code over the wall to the testers at the end of development, the agile software development process treats testing, especially unit testing, as integral to the development process.

The agile concept of writing unit tests first, or taking a “test-first” approach, was expanded and refined with the advent of the test-driven development process. TDD uses a very short development cycle: write a failing test first, write just enough production code to make it pass, then refactor and clean the code while keeping the tests passing.
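The cycle above can be sketched in code; the `StringUtil.isPalindrome` method is a hypothetical illustration, with each TDD step marked in the comments.

```java
public class TddCycleSketch {
    // Step 1 (red): this test is written FIRST and fails until the
    // production code below exists and behaves correctly.
    static void testPalindrome() {
        if (!StringUtil.isPalindrome("level")) {
            throw new AssertionError("level should be a palindrome");
        }
        if (StringUtil.isPalindrome("junit")) {
            throw new AssertionError("junit is not a palindrome");
        }
    }

    public static void main(String[] args) {
        testPalindrome();
        System.out.println("TDD cycle: tests green");
    }
}

// Step 2 (green): write just enough production code to pass the test.
// Step 3 (refactor): clean up, re-running the test after each change.
class StringUtil {
    static boolean isPalindrome(String s) {
        return new StringBuilder(s).reverse().toString().equals(s);
    }
}
```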

TDD and automated test frameworks (like JUnit) reinforced these new processes while riding the agile wave.  For years, no one dared question the new religion that myriad consultants pitched with zeal – until now.  Recently, David Heinemeier Hansson (the creator of Ruby on Rails) stated that TDD is dead.

This “admission” followed several years of debate caused by a seminal paper by software guru James Coplien entitled, “Why Most Unit Testing is Waste.” At first blush, this appears to be yet another example of extreme programming (the progenitor of agile) overshooting its mark and needing to be pulled back from the edge by cooler heads. But Coplien’s paper makes several important points:

1. Beware of testing for testing’s sake. One problem with unit testing is that it cannot cover all the code to be tested, and therefore the tendency is to test what is easy to test.  Instead, tests should be designed at the right level, which may mean system tests instead of unit tests.  It is easy to write a lot of useless tests, but good tests must be based on business requirements.  As Coplien says, “If this test fails, what business requirement is compromised?”
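Coplien’s question can be made concrete by naming the test after the requirement it guards; the `ShippingCalculator` class and its pricing rule below are hypothetical examples, not from the paper.

```java
// Hypothetical business rule: orders of $100 or more ship free;
// otherwise the flat rate is $9.99.
class ShippingCalculator {
    double shippingCost(double orderTotal) {
        return orderTotal >= 100.0 ? 0.0 : 9.99;
    }
}

public class ShippingTest {
    // If this test fails, a stated business requirement is compromised:
    // "orders of $100 or more ship free." Compare that to a test of some
    // easy-to-reach getter, whose failure compromises nothing.
    static void testOrdersOf100OrMoreShipFree() {
        ShippingCalculator calc = new ShippingCalculator();
        if (calc.shippingCost(150.0) != 0.0) {
            throw new AssertionError("free-shipping rule broken");
        }
        if (calc.shippingCost(99.0) != 9.99) {
            throw new AssertionError("standard rate broken");
        }
    }

    public static void main(String[] args) {
        testOrdersOf100OrMoreShipFree();
        System.out.println("requirement-driven test passed");
    }
}
```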

2. Useless tests increase maintenance costs. The larger the code base (including both test and production code), the larger the maintenance costs.

3. Turn unit tests into assertions. Modern programming languages, like Java, allow assertions that enable developers to test assumptions in the code (like the assumption that “the value of X is 5”).  Assertions can also be switched off (at compile time or at run time, depending on the language) and kept out of production behavior, although Coplien recommends they be left on in production code to automatically file a bug report on behalf of the end-user.
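In Java this is the `assert` keyword, which is skipped by default and enabled with the JVM’s `-ea` switch (`java -ea InventorySketch`); the inventory example below is a hypothetical illustration.

```java
public class InventorySketch {
    static int reserve(int stock, int quantity) {
        int remaining = stock - quantity;
        // Tests the assumption that a reservation never drives
        // stock negative; throws AssertionError when run with -ea.
        assert remaining >= 0 : "stock went negative: " + remaining;
        return remaining;
    }

    public static void main(String[] args) {
        System.out.println(reserve(10, 3)); // prints 7
        // With -ea, reserve(2, 5) would throw an AssertionError here,
        // which could be caught and turned into an automatic bug report.
    }
}
```

Keeping assertions on in production, as Coplien suggests, amounts to shipping with `-ea` and routing the resulting `AssertionError` into a bug-reporting path.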

4. Good testing is hard and begs skepticism. Coplien concludes with the admonition to “be skeptical of yourself: measure, prove, retry.”  He bemoans the sloppy, fast-fail culture that exhibits overconfidence in the risk mitigation that unit-testing provides.  Being able to write unit tests fast and run them continuously does not improve risk mitigation. As Coplien says, “automated crap is still crap.”

Coplien’s well-reasoned paper caused a lot of soul searching in the agile community, and David Heinemeier Hansson’s opening keynote at RailsConf 2014 (and blog post) stating that TDD is dead has spun up the debate and controversy.  Coplien debated TDD with its proponent Bob Martin.  More recently, Martin Fowler hosted a series of recorded hangout conversations between David Heinemeier Hansson and Kent Beck.

Finally, Twitter exploded with the issue under the hashtag #tddisdead.

So, what are the ramifications for government IT managers of this new debate on the practice of unit testing and its agile incarnation, TDD?

First and foremost is that the integration of testing into the coding part of the development lifecycle is a good thing.  Over-the-wall, Hail Mary passes to the testers are a practice that deserved an opposite reaction. However, don’t swing too far in the opposite direction by fostering a brute-force, test-first and test-everything approach. 

Instead, focus testing on key algorithms, system-level regression tests and well-considered risk mitigation.  Focus testing on the failure and impact of key business requirements.  That is where the program will get the most bang for the buck in both the quality of the software and the maintenance budget.  So, while test-first may be on the ropes, automated testing is alive and well.  Long live testing!

Michael C. Daconta ( or @mdaconta) is the Vice President of Advanced Technology at InCadence Strategic Solutions and the former Metadata Program Manager for the Homeland Security Department. His new book is entitled, The Great Cloud Migration: Your Roadmap to Cloud Computing, Big Data and Linked Data.

Posted by Michael C. Daconta on Jun 18, 2014 at 7:27 AM

Reader Comments

Tue, Jul 14, 2015 James Coplien Helsingør, Danmark

Almost a year ago. Wow — has it been that long? Seems like just yesterday. But a few have taken the issue to heart, have actually done some thinking instead of green-bar-fever iterating, and I think we're a bit better for it all today. But, geez, there's a long way to go.

Wed, Jul 9, 2014 JOJO

IMHO unit testing is never a waste of time. Who wants software to be unreliable and unpredictable? What may be MORE of a waste of time is the over-use of modularization that has become popular, at least in the Java programming that I was involved in. My generation learned in school that modularization was best. But I’m sure we are finding out that too much may be as bad as too little.
Objects, functions and modules slow down code and have the added result that you may end up doing a TON more unit tests...because you have a ton more units. These each have their own, in some cases redundant behaviors and their own inputs and outputs. Some of these functions may not add anything useful to functionality, but are just part of a framework.
You should always test a module to make sure that it will not break in unexpected ways...but there has to be thought in how and when you make modules in the first place. You don't need a module for everything, or a module to make an idea look conceptually good to humans. That will not only waste time once the thing is compiled and is running...but it apparently will also waste time while testing and re-testing these supposed units.
Because of frameworks and out-of-the-box Java architectures, there are a lot of programmers who are not encouraged to think...but to follow rules or templates. IMO this was to save money by trying to get an army of programmers to more quickly produce products. Too many times we are not encouraged to think about what we are developing at a technical level.
The programmer has to test the possibly hundreds or thousands of modules that were in some cases auto-generated for their code. I’ve used frameworks and that is what happened for one in particular. Many of the functions were created as part of the framework by the IDE my management told me to use. Did this make the product work better or faster? Definitely not faster. There also ended up being a lot more code than I’d have written myself.
In my humble opinion, it is not the fault of testing; it may be a flaw in the design of some software systems. Testing of units is important, but there has to be a happy medium between monolithic code and a million tiny modules.


