Containers and hybrid IT: Plenty of benefits, but one big challenge
- By Brandon Shopp
- Apr 25, 2019
Open source containers and hybrid cloud environments are making tremendous inroads into the public sector. According to a 2018 survey, 44% of respondents ranked containers as a top technology priority, a massive jump from 16% in 2017. Meanwhile, 97% of federal IT professionals listed hybrid IT/cloud among the top five most important technologies to their overall IT strategies.
This rise in popularity of both containers and hybrid IT is no coincidence. Containers break applications into discrete, self-contained services, each packaged with the dependencies it needs to run. This separation allows for the creation and deployment of smaller, simplified applications that can exist anywhere -- on-premises or otherwise -- making them ideal for the world of hybrid cloud.
Containers offer some enormous advantages for IT administrators working with different types of clouds, but they also introduce some unique challenges when it comes to cloud monitoring for agencies. Let’s look at how containers are changing the government IT landscape for the better and what administrators should be aware of as they seek to gain better insight into their evolving IT infrastructures.
Big benefits in a small package
Until recently, applications were monolithic constructions that had to be configured to run on different cloud deployments -- an onerous and time-consuming task. Worse, upgrading a service required shutting down the entire application and restarting it. The process could be highly disruptive, causing significant downtime and lost productivity.
Now, thanks to containers, agencies can run each piece of an application wherever it will be most effective. For example, an organization can keep the security-sensitive components of an application -- and its database -- on-premises while running other components in a public cloud to take advantage of cost efficiencies. Code updates are as simple as redeploying the specific service in question, rather than the entire application.
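In Kubernetes terms, for example, each service of a larger application is typically described by its own small manifest. The sketch below uses hypothetical names (a `tax-portal-auth` service, a made-up registry and image tag) to show how redeploying one service amounts to changing one line and re-applying one file:

```yaml
# Hypothetical Deployment for one service of a larger application.
# Updating this service means bumping only the image tag below and
# re-applying the manifest -- the rest of the application is untouched.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tax-portal-auth        # one service, not the whole application
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tax-portal-auth
  template:
    metadata:
      labels:
        app: tax-portal-auth
    spec:
      containers:
      - name: auth
        image: registry.example.gov/tax-portal/auth:1.4.2  # bump to redeploy
```

Applying the updated manifest (`kubectl apply -f auth.yaml`) triggers a rolling update of this one service while the rest of the application keeps running.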
Elasticity is another big benefit. For instance, during tax season the IRS expects a spike in traffic to its website, particularly as April 15 approaches. However, the agency doesn’t necessarily need the resources to manage that spike at other times of the year. IRS administrators can easily scale their containerized applications up or down depending on their requirements, allocating resources accordingly and potentially saving a significant amount of money.
Automation is essential
Gone are the days of deploying an armada of beefy hardware to manually adjust to the rising and ebbing tides of users. Today, automated orchestration engines are essential to container management and deployment. These engines allow IT professionals to set rules and dictate when a service should be initiated or taken down, handling everything behind the scenes with no manual intervention required.
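In Kubernetes, for instance, such a rule can be written as a HorizontalPodAutoscaler. The following is a minimal sketch with hypothetical names and thresholds, not a recommended configuration:

```yaml
# Hypothetical autoscaling rule: keep between 2 and 20 replicas of the
# service running, adding or removing copies to hold average CPU
# utilization near 70%. The orchestrator enforces this continuously,
# with no manual intervention required.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: tax-portal-web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: tax-portal-web
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

During a demand spike -- tax season, say -- the engine scales the service out toward the maximum; when traffic ebbs, it scales back down and releases the resources.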
There are many orchestration engines out there. Docker -- the company many people automatically associate with containers -- offers one in Docker Swarm. Microsoft's Azure Kubernetes Service, meanwhile, makes it easy to deploy Kubernetes-managed containers to its Azure cloud. These and other choices help IT administrators get the most out of their container deployments with minimal effort.
Monitoring in and out of the container
Despite their many benefits, containers introduce an abstraction layer that can pose monitoring challenges. Traditional government IT monitoring strategies may work to some extent, but true visibility into containers requires a degree of monitoring that may be unfamiliar to many agencies.
That’s because containers break applications down into sets of interdependent services, and administrators must understand the relationships among those services for proper monitoring to occur.
However, the abstraction layer makes this difficult, so administrators need specialized tools for insight into how everything works together. Tools that integrate directly with container runtimes and orchestration engines can give administrators complete visibility into the health of containers and the services they host.
But visibility doesn’t just begin and end at the container. What if a service fails? How do administrators know if that failure is occurring in a container, on the network or on the server? How do they know if it’s happening on-premises or in a hosted cloud environment? The only way to be sure is by using a unified platform that provides total visibility into all aspects of the IT environment -- one that monitors application health across physical hardware, virtual machines, hybrid cloud environments and containers.
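As one illustration of that layered visibility, a metrics collector such as Prometheus can be pointed at every layer from a single place. The sketch below assumes hypothetical hostnames and that standard exporters (node_exporter on hosts and VMs, cAdvisor for the container runtime) are already running:

```yaml
# Hypothetical Prometheus scrape configuration covering three layers:
# physical/VM hosts, the container runtime, and the application
# services running inside the containers themselves.
scrape_configs:
  - job_name: hosts          # physical servers and VMs (node_exporter)
    static_configs:
      - targets: ['onprem-db-01:9100', 'cloud-vm-01:9100']
  - job_name: containers     # per-container CPU, memory, network (cAdvisor)
    static_configs:
      - targets: ['cadvisor:8080']
  - job_name: services       # application-level health endpoints
    static_configs:
      - targets: ['tax-portal-auth:8000', 'tax-portal-web:8000']
```

With all three layers feeding one system, a failed service can be traced to its source -- container, host or network -- whether it lives on-premises or in a hosted cloud.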
Opportunities and costs
Applications evolve over time, and the code being written today doesn’t look like code from five years ago. Breaking apart application interdependencies makes it easier for agencies to update only what they need, when they need to. It’s a proposition that can smooth the path to modernization without forcing agencies to manage or upgrade huge applications.
That doesn’t mean containers come without costs, of course. Agencies must find new ways to manage and monitor these unique technologies and the hybrid IT environments in which they reside. If they can do this, they can ensure that everything will continue to run smoothly -- and their increasing investments in containers and hybrid IT will continue to pay off.
Brandon Shopp is vice president of product for network management, a role he has held since February 2018. He served as director of product management beginning in November 2011 and became senior director of product management in July 2013. Previously, Shopp was vice president of product management at AlienVault from August 2016 until February 2018 and senior director of products at Embarcadero Technologies from July 2015 until August 2016. He has a proven record of success in product delivery and revenue growth, with broad experience across software products, business models, M&A and go-to-market strategy. Shopp holds a B.B.A. from Texas A&M University.