Reality Check


Is unit testing of software a waste of time?

I’m currently working on a project that uses the JUnit framework for unit testing our software. Unit testing is a method of testing software that focuses on small units of code, such as a single method or a single class (a class being the template for an object in object-oriented programming).
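As a minimal sketch of what a unit test looks like (the `Calculator` class and its test below are hypothetical illustrations, not code from the project described), a unit test exercises one method in isolation and checks its result. With JUnit, each check would live in a method annotated `@Test` and use `Assertions.assertEquals`; this hand-rolled version keeps the example self-contained.

```java
// Hypothetical class under test -- for illustration only.
class Calculator {
    static int add(int a, int b) {
        return a + b;
    }
}

// A minimal hand-rolled unit test. With JUnit, each check() call would
// be an @Test method using Assertions.assertEquals instead.
public class CalculatorTest {
    public static void main(String[] args) {
        check(Calculator.add(2, 3) == 5, "add(2, 3) should be 5");
        check(Calculator.add(-1, 1) == 0, "add(-1, 1) should be 0");
        System.out.println("All tests passed");
    }

    static void check(boolean condition, String message) {
        if (!condition) {
            throw new AssertionError("FAILED: " + message);
        }
    }
}
```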

Many modern integrated development environments will generate a stubbed-out version of a unit test case for any class selected in a project. Instead of throwing code over the wall to the testers at the end of development, the agile software development process treats testing, especially unit testing, as integral to the development process.

The agile concept of writing unit tests first, or taking a “test-first” approach, was expanded and refined with the advent of the test-driven development process. TDD involves a very short development cycle: the test is written first, then just enough production code is written to make it pass, and finally the code is refactored and cleaned up while the test stays green.
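That cycle can be sketched in miniature (the requirement and the `PriceFormatter` class below are hypothetical, invented purely to illustrate the red-green-refactor rhythm): first write a failing test for code that does not yet exist, then write the simplest implementation that passes, then refactor.

```java
// Hypothetical TDD exercise: format a price in cents as a dollar
// string, e.g. 1234 cents -> "$12.34".
//
// Step 1 (red): the test below is written first, before PriceFormatter
//   exists, so it fails to compile -- the "failing test."
// Step 2 (green): write just enough code to make the test pass.
// Step 3 (refactor): clean up the implementation; the test stays green.
class PriceFormatter {
    static String format(int cents) {
        return String.format("$%d.%02d", cents / 100, cents % 100);
    }
}

public class PriceFormatterTest {
    public static void main(String[] args) {
        if (!PriceFormatter.format(1234).equals("$12.34")) {
            throw new AssertionError("expected $12.34");
        }
        if (!PriceFormatter.format(5).equals("$0.05")) {
            throw new AssertionError("expected $0.05");
        }
        System.out.println("green");
    }
}
```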

TDD and automated test frameworks (like JUnit) reinforced these new processes while riding the agile wave. For years, no one dared question the new religion that myriad consultants pitched with zeal – until now. Recently, David Heinemeier Hansson (the creator of Ruby on Rails) stated that TDD is dead.

This “admission” followed several years of debate caused by a seminal paper by software guru James Coplien entitled, “Why Most Unit Testing is Waste.” At first blush, this appears to be yet another example of extreme programming (the progenitor of agile) overshooting its mark and needing to be pulled back from the edge by cooler heads. But Coplien’s paper makes several important points:

1. Beware of testing for testing’s sake. One problem with unit testing is that it cannot cover all the code to be tested, so the tendency is to test what is easy to test. Instead, tests should be designed at the right level, which may mean system tests instead of unit tests. It is easy to write a lot of useless tests, but good tests must be based on business requirements. As Coplien says, “If this test fails, what business requirement is compromised?”

2. Useless tests increase maintenance costs. The larger the code base (including both test and production code), the larger the maintenance costs.

3. Turn unit tests into assertions. Modern programming languages, like Java, support assertions that enable developers to test assumptions in the code (like the assumption that “the value of X is 5”). Assertions can also be switched off and kept out of production behavior (in Java they are disabled by default and enabled with a runtime flag), although Coplien recommends they be kept in production code to automatically file a bug report on behalf of the end-user.
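A minimal sketch of such an assertion in Java (the `Order` class is hypothetical, invented for illustration): the `assert` statement documents and checks an assumed invariant, and it only executes when the JVM is started with assertions enabled (`java -ea`).

```java
// Hypothetical class illustrating a Java assertion on an invariant.
// Assertions are disabled by default; enable them with `java -ea Order`.
public class Order {
    private final int quantity;

    public Order(int quantity) {
        // Tests the assumption that callers never pass a non-positive
        // quantity. Fires (throws AssertionError) only when assertions
        // are enabled; otherwise the check is skipped entirely.
        assert quantity > 0 : "quantity must be positive, got " + quantity;
        this.quantity = quantity;
    }

    public int quantity() {
        return this.quantity;
    }

    public static void main(String[] args) {
        Order order = new Order(3);
        System.out.println(order.quantity());
    }
}
```

Run with `java -ea Order` and a call like `new Order(0)` throws an `AssertionError`; without `-ea`, the check costs nothing at runtime, which is the trade-off Coplien weighs in against by recommending assertions stay active in production.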

4. Good testing is hard and begs skepticism. Coplien concludes with the admonition to “be skeptical of yourself: measure, prove, retry.”  He bemoans the sloppy, fast-fail culture that exhibits overconfidence in the risk mitigation that unit-testing provides.  Being able to write unit tests fast and run them continuously does not improve risk mitigation. As Coplien says, “automated crap is still crap.”

Coplien’s well-reasoned paper caused a lot of soul searching in the agile community, and David Heinemeier Hansson’s opening keynote at RailsConf 2014 (and blog post) stating that TDD is dead has spun up the debate and controversy. Coplien debated TDD proponent Bob Martin on TDD. More recently, Martin Fowler hosted a series of recorded hangout conversations between David Hansson and Kent Beck.

Finally, Twitter exploded with the issue under the hashtag #tddisdead.

So, what are the ramifications for government IT managers of this new debate on the practice of unit testing and its agile incarnation, TDD?

First and foremost, the integration of testing into the coding part of the development lifecycle is a good thing. Over-the-wall Hail Mary passes to the testers were a practice that deserved a strong reaction. However, don’t swing too far in the opposite direction by fostering a brute-force, test-first, test-everything approach.

Instead, focus testing on key algorithms, system-level regression tests and well-considered risk mitigation. Focus testing on the failure modes and impact of key business requirements. That is where the program will get the most bang for the buck in both the quality of the software and the maintenance budget. So, while test-first may be on the ropes, automated testing is alive and well. Long live testing!

Michael C. Daconta (@mdaconta) is the Vice President of Advanced Technology at InCadence Strategic Solutions and the former Metadata Program Manager for the Homeland Security Department. His new book is entitled, The Great Cloud Migration: Your Roadmap to Cloud Computing, Big Data and Linked Data.

Posted by Michael C. Daconta on Jun 18, 2014 at 7:27 AM


