Reality Check

Tortoise and hare on HealthCare.gov

Media got it wrong: HealthCare.gov failed despite agile practices

This article has been updated to correct a reference to the typical duration of a sprint.

Many websites, including The New Yorker, The Washington Post and MedCity News, have proclaimed that HealthCare.gov would not have failed if it had just used “modern” software practices, known as agile development, just like commercial companies do. The convenient meme is that the poor, backward government simply is not as up to date as the commercial world.

Unfortunately, these sites must have been simply parroting the shtick of agile consultants, because the front-end GUI and the back-end data services hub were both developed using agile processes. It is a shame that harsh reality had to ruin such a nice and convenient silver bullet.


I have seen some of the developer documentation, and it clearly discusses sprints, user stories and incremental testing, all of which are hallmarks of an agile process. A sprint is a fixed period of time (typically two weeks) in which specific work is completed. A user story is a short description of a feature from the end user’s perspective that follows this template: “As a <type of user> I want <to perform some task> so that I can <achieve some goal/benefit/value>.” For example: “As an applicant, I want to compare insurance plans side by side so that I can choose affordable coverage.”

The bottom line is that those people who claimed that all would be well if HealthCare.gov had used an agile process are wrong. The reality is that the developers did use agile, and the project failed miserably. Before the agile practitioners, fans and consultants get in an uproar with the chant, “but they did it wrong,” let me examine some of the facts of what they did and compare them to some of my recent experiences in requirements analysis and the design of data integration hubs.

1. User stories vs. requirements decomposition.  I am currently involved in a Defense-related big data project where we are analyzing and decomposing customer requirements into system requirements. The process we are using begins with a detailed set of customer requirements that include both functional requirements (what the software should do) and performance requirements (metrics it should meet). This is painstaking work, as we analyze, debate, discuss and finally decompose hundreds of end-user requirements. Sometimes this is done with the customer, and sometimes we generate requests for information to clarify a vague requirement. The goal is to generate a systems requirement document and then to further decompose those system requirements into software and hardware requirements. This is a robust process with dedicated and talented engineers that is working well. This will take a few months, but when it is done we will know exactly what we have to build.
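The decomposition described above can be sketched as data with explicit traceability from customer requirements down to system requirements. This is a minimal illustration; every identifier and requirement text here is invented, not drawn from the actual project:

```python
# Illustrative sketch of requirements decomposition with traceability.
# IDs and requirement text are invented for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Requirement:
    req_id: str
    text: str
    kind: str                      # "functional" or "performance"
    parent: Optional[str] = None   # traceability link to the customer requirement

# One customer requirement...
customer = Requirement("CR-101", "The system shall ingest sensor data feeds.",
                       "functional")

# ...decomposed into system requirements that each trace back to it.
derived = [
    Requirement("SR-101.1", "The ingest service shall accept JSON and XML feeds.",
                "functional", parent="CR-101"),
    Requirement("SR-101.2", "Ingest latency shall not exceed 5 seconds.",
                "performance", parent="CR-101"),
]

# Traceability check: no orphan system requirements.
orphans = [r for r in derived if r.parent != customer.req_id]
```

The point of the structure is the `parent` link: when decomposition is done, every system requirement can be audited back to the customer requirement that motivated it.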

I have done this process many times, and if it is managed well, it works well. In contrast, the data services hub documentation refers to user stories for its requirements. Let’s think about that for a moment with our designer hats on. A complex back-end data services hub — a piece of software with zero actual, living, breathing end users — has to be described in terms of “user” stories. Does something sound off-key to you? It should, because user stories are great for user interfaces but poor, confusing and often misconstrued for non-observable behavior. Evidence of this is widely available in the many debates and questions on sites like Stack Overflow, CodeProject and developer blogs. Yes, I’m sure you can bend and twist user stories to address non-user-based functionality, but should you? In my opinion, a software requirements document with UML (Unified Modeling Language) use-case diagrams would have served the data services hub more effectively.
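To make the contrast concrete, here is a hypothetical sketch of the same hub capability expressed both ways. All of the names (`verify_income`, `IncomeRequest`) are illustrative inventions, not taken from the actual BSD documents:

```python
# Hypothetical sketch: one back-end capability expressed as a user story
# versus an explicit interface contract. Names are invented for illustration.
from dataclasses import dataclass
from typing import Optional

# As a user story, the "user" is the system itself -- an awkward fit:
STORY = ("As the data services hub, I want to verify an applicant's income "
         "so that I can determine subsidy eligibility.")

# As an interface contract, inputs, outputs and failure behavior are explicit:
@dataclass
class IncomeRequest:
    applicant_id: str
    tax_year: int

@dataclass
class IncomeResponse:
    verified: bool
    annual_income: Optional[float]   # None when verification fails

def verify_income(req: IncomeRequest) -> IncomeResponse:
    """Contract: respond within the SLA; on upstream failure, return
    verified=False rather than raising. (Stubbed for illustration.)"""
    return IncomeResponse(verified=True, annual_income=45000.0)
```

The story tells a developer roughly why the capability exists; the contract tells the developer exactly what to build and test, which is the information a non-observable service actually needs specified.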

2. Design documents. For the state of Florida, I led a team in designing and developing a data integration hub that recently passed its first milestone. The software design document was reviewed by the customer, Gartner, and the key software infrastructure vendor. Such due diligence has paid off in keeping us on time and within budget. In contrast, what are the agile design artifacts, and what due diligence was undertaken to ensure the developers followed best practices? Again, we see a site that failed a test of just 200 to 300 simultaneous users when it was supposedly designed for 50,000 (and even that number is a gross underestimate). Where are the design and architecture documents that show how the system was built to be scalable? Oh, agile processes don’t like design documents … hmmm, that could make due diligence difficult (or even impossible).

3. Asynchronous vs. synchronous. Recently, CGI reported that the “hub services are intermittently unavailable.”  In examining some of the Business Service Description (BSD) documents, we see that key interfaces (like verify income and verify citizenship) were designed as synchronous instead of asynchronous interfaces. This is strange because many frameworks and platforms, like Google Web Toolkit, Android, AWS Flow Framework, Play and many others, promote asynchronous calls as a best practice (and some mandate it). 

Additionally, modern cloud-native applications built for massive scalability and elasticity should be based on loose coupling, messaging and asynchronous calls. Frankly, for a site like this that requires high levels of reliability and scalability, a synchronous API design for the data hub is inexcusable. There was enough time, enough money and enough political muscle (as the president’s signature achievement) to get it right (even given intransigent partners).
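The performance difference is easy to demonstrate. Below is a minimal Python `asyncio` sketch of a hub fanning out to two slow back-end verifiers; the service names are invented stand-ins, and the `sleep` calls simulate upstream latency:

```python
# Minimal sketch (invented service names): why asynchronous fan-out matters
# for a hub that must call several slow back-end verification services.
import asyncio
import time

async def verify_income(applicant: str) -> bool:
    await asyncio.sleep(0.2)   # stand-in for a slow upstream income check
    return True

async def verify_citizenship(applicant: str) -> bool:
    await asyncio.sleep(0.2)   # stand-in for a slow upstream citizenship check
    return True

async def check_eligibility(applicant: str) -> bool:
    # Asynchronous: both verifications run concurrently, so the total wait
    # is ~0.2s. Synchronous, sequential calls would take ~0.4s, and real
    # upstream latencies compound the gap at scale.
    income_ok, citizen_ok = await asyncio.gather(
        verify_income(applicant), verify_citizenship(applicant))
    return income_ok and citizen_ok

start = time.perf_counter()
eligible = asyncio.run(check_eligibility("applicant-1"))
elapsed = time.perf_counter() - start
```

With synchronous interfaces, every slow or intermittently unavailable upstream service stalls the caller for its full duration; asynchronous calls let the hub overlap those waits instead of stacking them.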

Let me close by clarifying why I think agile is a good thing, even though I don’t agree with all its practices. Agile is part of the evolution of the software development process that gets some things right and some things wrong. I like many of the more moderate parts of agile, such as small iterations, test-driven development and refactoring. However, I advocate a more balanced, in-between approach, especially in relation to requirements and design. 

I look at agile as a Stage 2 technology in accordance with Robert Heinlein’s three stages of technology: “Every technology goes through three stages: first, a crudely simple and quite unsatisfactory gadget; second, an enormously complicated group of gadgets designed to overcome the shortcomings of the original and achieving thereby somewhat satisfactory performance through extremely complex compromise; third, a final stage of smooth simplicity and efficient performance based on correct understanding of natural laws and proper design therefrom.” 

The key point is that agile is a reaction to the waterfall method and, as with most reactions, the pendulum swung a bit too far. Thus we can expect a more moderated, third-stage technology to get the balance right. In the case of HealthCare.gov, an agile process was implemented and the software was a national failure. This does not mean agile was the primary cause of that failure, but it is not unreasonable to assume it played a part. My hope is that we can learn from this mess and through it forge a better software development process that strikes the right balance between the extremes.

Michael C. Daconta is the Vice President of Advanced Technology at InCadence Strategic Solutions and the former Metadata Program Manager for the Homeland Security Department. His new book is The Great Cloud Migration: Your Roadmap to Cloud Computing, Big Data and Linked Data.

Posted by Michael C. Daconta on Nov 01, 2013 at 9:45 AM


Reader Comments

Wed, Jun 7, 2017 Jason Foust

I just got done listening to Jeff Sutherland's book on Scrum and I am in the process of implementing his methodologies into an Air Force software project I am working on. (I wonder if this is the same Jeff Sutherland who commented on this thread. If it is, I would love to pick your brain on how we can better implement your methodologies in our processes.) One of the challenges we have had was to move the mindset from the way Mr. Daconta advocates to an Agile methodology. The idea of being able to deliver when the customer wants, while satisfying the demands of the government's acquisition process, is challenging, but thanks to some of the efforts of Mr. Daconta and his work with cloud computing, we are finding the way. From the government side, we are passing our audits for CMMI/AS9100 and meeting cost, schedule, and requirements. I would be very interested to hear what your EVM metrics were.


Mon, Apr 7, 2014 Eric Florida, USA

I would love to see a rewrite of this article now that you've received some feedback, in a sort of Agile way. Oh, and maybe some solid investigation and reporting of what really went on in the project.

Wed, Mar 5, 2014 Nerd United States

Agile is a license to suck. "Mistakes will happen so we won't bother planning." Yeah, good plan. If you want to build slipshod, ramshackle, Frankenstein code, then Agile is definitely the way to go. The new developers I've met in the past 5 years wouldn't know the difference anyway. Man, they are fundamentally terrible. I blame No Child Left Behind. With Agile, the story of a site failing to meet its deliverables is the norm, not the exception. Seriously, just ask around. If somebody tells you otherwise, they're selling you something, or they haven't been in the business long enough to know what "not sucking" looks like. This entire industry has taken a serious nosedive into incompetence since the fanboys got hold of the reins.

Tue, Feb 11, 2014 Jeff Sutherland Cambridge Innovation Center

The Agile Manifesto values working software at the end of every sprint and this project never had working software at the end of any sprint during development or after deployment. Nor did it have any shippable product at any time during or after development. The author shows he knows very little about agile development.
