Avoiding common hang-ups when implementing DevOps pipelines


While many federal agencies have begun adopting a DevOps approach to software development, some are still struggling to develop proper DevOps pipelines.

This is not surprising; getting a basic continuous integration (CI) and continuous delivery (CD) process to work correctly is difficult and takes time. Ideally, there is always some type of source control management (SCM) solution, build server and application platform for app deployment. Hooking these components together can be nontrivial.

So let’s take a look at a few simple strategies agency developers can employ to expedite their efforts by creating and maintaining sound DevOps pipelines.

Remember: not everything is automated

While the goal behind DevOps is to accelerate innovation and processes, not every aspect of a DevOps pipeline should be automated. In fact, there should be clear and defined manual steps in the pipeline, but they should be executed with minimal manual intervention to keep the process moving.

For example, moving artifacts between isolated environments should be a manual process. However, many developers get hung up on the belief that when artifacts are moved from one environment to the next, no additional testing needs to happen -- just deploy and be done with it.

That’s the wrong perspective. Drift can happen between environments, and unless each environment is created by code there is no guarantee that the tests run in one environment will generate the same results in another. The infrastructure team should incorporate manual processes into their pipelines and test as necessary. In disconnected environments, it will be necessary to run the same checks in each environment.
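The promote-then-reverify flow described above can be sketched as follows. This is a minimal illustration, not a real agency setup: the environment names, check functions and `promote` helper are all hypothetical placeholders for whatever health checks and deployment tooling a team actually uses.

```python
# Hypothetical sketch: after a manual promotion, re-run the same
# smoke checks in every environment rather than trusting results
# from the previous one. All names here are illustrative.

def check_app_responds(env: str) -> bool:
    # Placeholder for a real health check (e.g., an HTTP probe).
    return True

def check_config_present(env: str) -> bool:
    # Placeholder for verifying expected configuration exists in env.
    return True

SMOKE_CHECKS = [check_app_responds, check_config_present]

def verify_environment(env: str) -> dict:
    """Run every smoke check in the given environment, collect results."""
    return {check.__name__: check(env) for check in SMOKE_CHECKS}

def promote(artifact: str, envs: list) -> list:
    """Manually triggered promotion chain: verify each environment in turn.

    Stops at the first environment whose checks fail, so drift is
    caught before the artifact moves further down the chain.
    """
    promoted = []
    for env in envs:
        results = verify_environment(env)
        if not all(results.values()):
            break  # drift detected -- do not promote further
        promoted.append(env)
    return promoted
```

The key design point is that the same check suite runs everywhere; only the target environment changes, so a failure isolates drift to a specific environment.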

Avoid incomplete pipelines

In my consulting adventures on DevOps, I’ve often heard, “We’ve already got a solid solution.” Yet sometimes, that’s not exactly true. In fact, many “solid solution” pipelines have been incomplete and broken since the first phase of their lifecycles.

Ideally, when a developer pushes code up to the SCM, a build is kicked off that eventually leads to some artifact being tested and deployed to the development environment. This requires automated steps that include running the build, testing, administering notifications and coordinating the entire process.

Done incorrectly, this process can lead to undocumented or fragmented intellectual assets and an environment where repeatability becomes extremely difficult. More often than not, specialized teams need to be brought in to manage the process. The result is more money spent, longer build times and more man hours, completely undermining efficiency efforts.

To avoid this hang-up, development teams are advised to adopt an agreed-upon build and deploy process with scripted phases. For example, during the initial phases, the project lead and architect will create an expected build and deploy pipeline for their development servers. A developer then will commit code to a feature branch, triggering the CI tool to build the artifact, run unit tests, deploy the artifact, run functional tests and vulnerability scans, merge feature and master branches and push notifications to the team.
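The scripted phases above can be outlined in code. This is a hedged sketch of the sequence, not any particular CI tool's API: every function name (`build_artifact`, `run_unit_tests`, `run_vuln_scan` and so on) is an illustrative stand-in for a real step the team would script.

```python
# Hypothetical sketch of the agreed-upon phases: build, unit test,
# deploy, functional test + vulnerability scan, merge, notify.
# Each stage is a placeholder; a real pipeline would call CI tooling.

def build_artifact(commit):
    return f"artifact-{commit}"

def run_unit_tests(artifact):
    return True  # placeholder for a real test run

def deploy(artifact, env):
    return f"{artifact}@{env}"

def run_functional_tests(deployment):
    return True  # placeholder

def run_vuln_scan(deployment):
    return True  # placeholder

def ci_pipeline(commit: str, env: str = "dev") -> list:
    """Run the phases in order; stop and report at the first failure."""
    log = []
    artifact = build_artifact(commit)
    log.append(f"built {artifact}")
    if not run_unit_tests(artifact):
        log.append("unit tests FAILED")
        return log
    deployment = deploy(artifact, env)
    log.append(f"deployed {deployment}")
    if not (run_functional_tests(deployment) and run_vuln_scan(deployment)):
        log.append("post-deploy checks FAILED")
        return log
    log.append("merged feature into master")
    log.append("notified team")
    return log
```

Because every phase is a scripted function with a clear pass/fail result, the whole sequence is repeatable and a failure pinpoints exactly which phase broke.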

Still, integration can be difficult and cause timeline delays, leading teams to ultimately decide that developing is more important than having a complete and working pipeline. A non-working pipeline implies that there is no scripted way to execute the components of the pipeline successfully. At that point, teams will manually handle everything beyond the failing sections of the pipeline.

The problem is this tends to imply that other phases of the pipeline are also broken. In such cases, the team is not using a repeatable and stable approach. Instead, they are introducing more opportunities for problems, inhibiting their ability to find and resolve issues and slowing down the development lifecycle.

Start simple

To avoid these issues, developers should establish a simple procedure in which code pushed to the SCM triggers a build with the CI tool, pushes the result to an artifact repository and performs a deployment. That’s it! Suddenly, the first phase of the pipeline is complete and ready. Now, developers have the freedom to bootstrap anything additional they’d like as part of their pipeline, including code coverage, functional testing, notifications, agile tools and more.
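That minimal first phase is deliberately short enough to sketch in a few lines. Again, the helper names here are hypothetical placeholders, not real tooling commands; the point is that the whole phase is just three scripted steps chained together.

```python
# Minimal sketch of the first pipeline phase: build the artifact,
# push it to an artifact repository, deploy it. Nothing more.
# All three helpers are illustrative placeholders.

def build(commit):
    return f"app-{commit}.jar"

def push_to_repo(artifact):
    return f"repo/{artifact}"

def deploy(location):
    return f"deployed {location}"

def first_phase(commit: str) -> str:
    """Build -> push -> deploy: the entire first phase of the pipeline."""
    return deploy(push_to_repo(build(commit)))
```

Everything else (coverage, functional tests, notifications) bolts onto this skeleton later, which is what keeps the initial pipeline complete and working from day one.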

Additional pipelines can be replicated for other environments with the eventual goal of connecting them, manually or otherwise. This approach automatically and naturally includes all teams involved in a release and creates a feedback loop, reflecting the true collaborative spirit of DevOps. Once the team has the basics down, it can start thinking about other ways to improve, streamline, and optimize business development practices.

As they start down the DevOps road, federal development teams are bound to find themselves getting hung up on some hurdles. The good news is that many of these hurdles can be easily cleared. It just takes a bit of upfront work to establish stable, complete and simple DevOps pipelines.

About the Author

Jason D. Marley is a senior consultant at Red Hat.

