Why now is the right time for an open-source serverless strategy
- By John Osborne
- May 18, 2021
Government agencies have been using open-source technologies such as Linux, Kubernetes, Ansible and, more recently, Linux containers to make application deployments faster and more efficient. Increasingly, these applications are being built in line with the National Institute of Standards and Technology’s essential cloud characteristics, such as on-demand self-service, resource pooling and rapid elasticity.
Simultaneously, many agencies continue to rely on traditional static infrastructure provisioning models to support these increasingly dynamic applications. However, that approach may not make much sense in today’s cloud-based, data-intensive, event-driven world.
While traditional server infrastructures are still suitable for some workloads, they can be at odds with agencies’ efforts to achieve greater efficiency and agility. Traditional servers tend to be always on and running at full capacity 24/7, even if they have to handle only occasional spikes in demand. That’s counterproductive to efficiency.
Traditional servers also require developers to understand their infrastructure requirements in advance, something most developers would rather not worry about. They want to focus on accelerating development, not on hardware components and configurations.
Serverless computing can help alleviate these challenges. While not a new concept, serverless remains somewhat underutilized and misunderstood in the government space. It’s well worth investigating, particularly for agencies using containers for application development.
Let’s take a closer look at serverless computing and how agencies can use it to maximize their resources (from both technological and human perspectives) and why open source serverless technologies offer an ideal complement to modern application development.
The first step is to define what serverless computing is and what it is not.
Serverless computing is a cloud-native concept that enables agencies to outsource the management of their servers, databases and even application logic to a cloud platform. In a serverless computing environment, developers can freely build and run applications without having to worry about their underlying technology infrastructures.
Ironically, serverless computing is no more serverless than “wireless” communication is wireless. Just because there’s no wire connecting a laptop to the internet does not mean there are not thousands of miles of fiber-optic cables delivering web requests to users’ computers. It’s the same idea with serverless: there are servers, but they are abstracted away from the developer by the platform or cloud provider. Developers simply need to package their apps in containers and deploy them.
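To make the "package and deploy" model concrete, here is a minimal sketch (not tied to any particular platform) of the kind of small HTTP service a serverless platform runs inside a container. The handler logic is the only part the developer really owns; the platform supplies the servers, routing and scaling. All names here are illustrative.

```python
# Minimal sketch of a containerizable serverless-style service.
# The developer writes handle(); the platform abstracts the servers.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def handle(payload: dict) -> dict:
    """Business logic: the only code the developer actually maintains."""
    name = payload.get("name", "world")
    return {"message": f"Hello, {name}"}


class Handler(BaseHTTPRequestHandler):
    """Thin HTTP wrapper the platform invokes per request."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(handle(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch quiet


# In a container image, an ENTRYPOINT would start the server, e.g.:
#   HTTPServer(("", 8080), Handler).serve_forever()
# Port 8080 is a common default for serverless platforms.
```

Once that image is built, the serverless platform can route requests to it and scale the number of running copies, including down to zero, without the developer ever provisioning a server.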
Being able to package applications within a container is especially important when dealing with monolithic or legacy systems, because the applications do not have to be rewritten. That’s a big advantage over proprietary solutions, which are better suited for applications that are being built from the ground up. Packaging legacy applications in containers allows those applications to be orchestrated by Kubernetes and spun up or down based on demand, in an event-driven manner, without the need for rewrites.
In fact, this event-driven model is another of serverless computing’s main benefits. Instead of an architecture that’s always on, operating at full throttle every minute of the day, resources are used only when they’re needed.
There are a number of different government use cases where serverless computing can be helpful. Think of Tax Day when millions of Americans flood the IRS’ servers with tax documentation or the open enrollment period for the Healthcare.gov marketplace. Neither of these services necessarily needs virtual infrastructures operating at full capacity all the time, but agencies understandably prepare for worst-case scenarios. Yet it’s more efficient to have a serverless infrastructure that can dial up and down as needed.
Agencies that rely on large-scale batch processing could make use of serverless computing far more than just a few times a year. For example, a government office that sends large files to another agency at the same time once a day doesn’t need an array of dedicated virtual machines running hot all day long. It can simply spin up resources on-demand before scaling back down to zero. Serverless computing enables users to scale up and down as needed in an event-driven fashion.
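The daily file-transfer scenario above can be sketched as an event-driven handler: it exists only for the duration of one "files ready" event, does its work and exits, letting the platform scale the worker back to zero. The event shape and names here are hypothetical, for illustration only.

```python
# Illustrative event-driven batch handler: no dedicated VM sits idle
# between runs, because the worker exists only while an event is
# being processed.
from dataclasses import dataclass


@dataclass
class FileEvent:
    """A hypothetical event emitted when a day's files are ready."""
    files: list  # names of the files to transfer


def handle_batch(event: FileEvent) -> dict:
    """Process one day's batch; the platform invokes this once per event."""
    transferred = []
    for name in event.files:
        # Real code would stream each file to the receiving agency here.
        transferred.append(name)
    # After this returns, the platform can scale the worker back to zero.
    return {"transferred": len(transferred), "files": transferred}
```

Between events, nothing runs and nothing is billed against capacity; the next day's event simply spins the handler up again.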
Serverless is also helpful for data science initiatives, including artificial intelligence and machine learning workloads. In such cases, raw data may need to be transformed or shaped at the point of ingestion. With serverless computing, these workloads do not need to be running all the time. The servers can operate as needed only as data is ingested.
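As a sketch of that ingestion-time pattern, each raw record can be reshaped by a small transform function that the platform invokes only while data is arriving. The field names and defaults below are made up for illustration; a real pipeline would match its own schema.

```python
# Illustrative ingestion-time transform for an AI/ML pipeline:
# normalize raw records into the shape downstream training code
# expects. Runs only when data arrives; no always-on server needed.
def transform(record: dict) -> dict:
    """Reshape one raw record at the point of ingestion."""
    return {
        "id": str(record["id"]),
        # Normalize a free-form text label.
        "label": record.get("label", "unknown").strip().lower(),
        # Coerce a loosely typed numeric field, defaulting to 0.0.
        "value": float(record.get("value") or 0.0),
    }


def ingest(raw_records):
    """Simulate an event-driven ingest: transform each arriving record."""
    return [transform(r) for r in raw_records]
```

In a serverless deployment, `ingest` would be triggered per batch or per record as data lands, then scale away once the stream goes quiet.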
Open-source serverless computing helps maximize human resources
Agencies with DevOps teams or a container-based approach to development are likely already invested in an open-source ecosystem. For them, it makes sense to complement current development approaches with an open-source serverless model that supports the work they’re already doing in a more productive and efficient manner.
Getting locked into a proprietary serverless solution, on the other hand, can undermine developers’ efforts by potentially forcing them to change the way they build and test their code to fit proprietary workflows and requirements. With open-source serverless, they can simply continue packaging existing workflows into containers without having to worry about the solution itself. Developers can continue their work without impediments.
Open-source technologies can also alleviate the need for specialized skill sets or additional training. While many proprietary serverless computing platforms require additional tools (a serverless database, for example) to work correctly, teams working with open-source solutions do not have to continually learn new workflow processes or technologies.
While open-source serverless computing may not be appropriate for every use case, it is, and will continue to be, valuable in certain situations, such as extracting data from multiple sources at different intervals. Serverless computing can prove invaluable in these instances and offers a key tool to help agencies balance efficiency and agility.
John Osborne is a Chief OpenShift® Architect with Red Hat® Public Sector. He has focused for more than three years on the role of Kubernetes in government IT modernization. Before his arrival at Red Hat, he worked at a startup and then spent seven years with the U.S. Navy developing high-performance applications and deploying them to several mission-critical areas across the globe. He is also co-author of OpenShift In Action.