What agencies can learn about software testing from the auto industry
Software is becoming an increasingly integral part of almost all equipment. Given software's significant role in helping accomplish agency missions, how can decision makers more accurately assess their own software-testing capabilities?

While some agencies may not be convinced of the importance of testing, even those that see opportunities for improving speed and efficiency through software testing laboratories still struggle to accurately evaluate how effective their labs’ testing capabilities really are.

What’s often needed is a set of meaningful metrics that can shed light on labs’ actual performance to help agency managers make decisions about running a software lab. Valid metrics are central to making command-level decisions from a holistic, business perspective. Unfortunately, the engineering- and technology-based metrics in use today are better suited to testing, say, a fourth-generation unmanned vehicle than to measuring the cost, efficiency and performance of running a lab.

As it turns out, the most meaningful metrics in software testing labs come from operations with similar processes, such as automotive factories. Metrics on capacity, efficiency, effectiveness and capability allow decision makers not only to measure and improve each software lab’s cost and performance, but also to effectively manage all labs as they fast become more strategic throughout government.

Common in auto manufacturing, these measures easily transfer to other industries and can work well for measuring lab operations. Just as car factories input parts, assemble them and output completed vehicles, a software integration lab inputs software code, runs tests against the code and puts out a report on the code's validity. It makes sense that the leading metrics are similar, too.

These metrics give agency and program leaders valuable benefits, including:

  • Transparency. A clear, communicable set of metrics lets leaders quickly and accurately assess performance and capacity. And direct, fact-based comparisons allow them to compare each lab’s performance against other labs.
  • Cost savings. Equal, meaningful metrics highlight the cost-saving opportunities within the current environment.
  • Risk mitigation. Measuring current and future lab capacity allows for more accurate estimates of cost and potential schedule delays.
  • Negotiation support. The metrics help the program office more accurately size and negotiate requirements for contracting labs.

Let's take a closer look at these key metrics and how they link to improving software testing:


Capacity, measured in test points, is a lab’s throughput per hour, demonstrating its ability to execute raw work, including integration, verification and regression tests. It calculates how much work could get done, in total units, if a lab ran 24 hours a day, seven days a week.

Test points can easily be converted into derivative metrics, such as shift, daily and yearly capacity. As the best proxy for lab size, capacity shows whether a lab can handle additional work should planners want to shift more testing there.
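The conversion from hourly throughput to these derivative metrics is simple arithmetic. A minimal sketch, using an invented figure of 12 test points per hour purely for illustration:

```python
# Hypothetical illustration of the derivative capacity metrics described
# above. The hourly rate is invented for the example; a real analysis
# would use a lab's measured throughput.
TEST_POINTS_PER_HOUR = 12

shift_capacity = TEST_POINTS_PER_HOUR * 8     # one 8-hour shift
daily_capacity = TEST_POINTS_PER_HOUR * 24    # round-the-clock operation
yearly_capacity = daily_capacity * 365        # theoretical annual maximum

print(shift_capacity, daily_capacity, yearly_capacity)
# prints: 96 288 105120
```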

Because test points are the basic unit of lab production, comparing dollars per test point is the core indicator of cost in a lab. Using this comparison, decision makers can determine, for example, how much it costs to run a test or how much it costs to find a defect.
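A hedged sketch of those cost indicators, with all dollar figures and counts invented for illustration:

```python
# Hypothetical cost-per-test-point and cost-per-defect calculations.
# Every figure below is invented; substitute a lab's actual budget,
# test-point counts and defect counts.
annual_lab_cost = 2_400_000      # dollars per year
test_points_executed = 80_000
defects_found = 400

cost_per_test_point = annual_lab_cost / test_points_executed  # $30.00
cost_per_defect = annual_lab_cost / defects_found             # $6,000.00
```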


Efficiency explains how well a lab does the work. If a lab can do 100 tests in a day, but only 50 come out correctly, then its efficiency metric would be quite low.

Efficiency is measured using “on-condition,” which means that a test was successfully executed according to the checklist and setup procedures determined by the system engineers and that it does not need to be repeated. This metric calculates the percentage of tests executed correctly – not whether the software being tested passed or failed the test – and is determined by dividing test points on condition by total test points attempted.  
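The on-condition calculation described above reduces to a single ratio. A minimal sketch, with counts invented for illustration:

```python
# Sketch of the on-condition efficiency metric: the share of test points
# executed correctly per the checklist, regardless of whether the
# software under test passed. Counts below are invented.
test_points_attempted = 1_000
test_points_on_condition = 850  # executed per procedure, no repeat needed

efficiency = test_points_on_condition / test_points_attempted
print(f"{efficiency:.0%}")
# prints: 85%
```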

Lab capacity and efficiency are tightly linked and are often measured together to provide a clear understanding of their combined effect. Baselines derived from this combination give decision makers information such as how a given action would change a lab’s throughput, how a different action would affect a lab’s cost per hour or cost per defect, and how yet another action would affect a lab’s efficiency or capacity.


Effectiveness describes how well a lab can discover errors. If a software testing lab’s primary purpose is to find defects or certify code, a measure of effectiveness is the ratio of defects found to test points attempted. The inverse of this ratio shows how many tests must be run, on average, before the lab finds a defect in the code under test. The accuracy of this metric depends on several factors, including the quality of the code.
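Both forms of the effectiveness ratio can be sketched in a few lines, again with invented counts:

```python
# Hypothetical effectiveness calculation: defects found per test point
# attempted, and its inverse, test points run per defect found.
# Counts are invented for illustration.
defects_found = 40
test_points_attempted = 2_000

defects_per_test_point = defects_found / test_points_attempted   # 0.02
test_points_per_defect = test_points_attempted / defects_found   # 50 tests
```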


Capability factors in the skill set of a lab’s workforce and the functionality of its equipment. This metric is used to compare how well each lab can test specific areas of the software and is the function of three factors:

  • Knowledge considers the expertise in the product, function and technology areas.
  • Competency assesses work behaviors and skills required to perform the work.
  • Capacity measures the availability and readiness of a lab’s human and infrastructure resources.

Because capability is also directly affected by a lab’s equipment, the equipment mix must be factored into any lab-to-lab comparison.

Capability also plays a major role in overall management decisions because it has an implicit effect on the other three metrics. Therefore, its impact on each must be understood before making changes to the workforce's size, experience or skill set.

Together, these metrics give government leaders the information they need to develop a baseline of current operations and put context around the decisions they make. With this baseline, they can know the effect of making small changes, such as how adding capacity will affect a lab’s costs, how reducing costs will change the lab’s capability and efficiency, and how hiring employees with different skill sets will change the on-condition efficiency. They also will be able to answer questions about whether labs are effective, whether they have talent or skill deficiencies, and whether significant changes need to be made to improve overall software testing. And, perhaps most importantly, they will know whether their throughput and quality meet the demands of their individual agencies and programs.

Finally, these metrics will cut through the confusion leaders now feel and give them the concrete measures they need for making decisions, not just on technical performance and operations, but also on fiscal performance. As the government’s capabilities in developing software mature, these metrics will become even more vital to the agencies’ overall efforts to drive efficiencies and savings in their programs.

About the Author

Christian Hagen is a partner in A.T. Kearney’s Strategic Information Technology Practice and is based in Chicago.
