Elevate your security posture and readiness for 2021
- By Ari Vidali
- Jan 21, 2021
For some agencies, the SolarWinds attack was simply a wake-up call. For untold thousands of others, it was a tangible threat to digital assets with the potential for real-world consequences. While only 50 such organizations are thought to be “genuinely impacted” by the breach -- and the ramifications may be years or decades from full discovery -- it is clear that agencies must strongly reconsider their security posture and organizational readiness in light of the attack.
What does that mean for government IT personnel and related stakeholders? As the people keeping vital information systems safe, the best thing agencies and staff can do is find ways to apply these lessons in day-to-day operations.
The software supply chain matters more than ever
The potential for supply chain attacks and breaches is “far from a new concept,” one ComplianceWeek piece noted, but recent examples remind us that attackers can leverage third-party code to directly compromise agency systems. Software supply chain attacks are up more than 400%, pointing to an increasingly attractive avenue of attack.
Also of concern is the practice of using free or open-source tools. While it is tempting to use free solutions, the risk of breach is quite high. By nature, open-source supply chain software is even more vulnerable to compromise by nefarious nation-state-sponsored hackers intent on breaching U.S. homeland defense and public safety organizations.
Organizations prioritizing security should evaluate open-source software carefully, and those using prepackaged programming interfaces and other third-party components must make a stronger commitment to testing, verifying and securing code integrated from outside sources. An initial breach in one system can allow attackers to gain increasing control over time, leapfrog to other systems and ultimately infect those outside the agency via a compromised update.
Agencies must likewise verify the safety of any third-party systems that integrate or use core agency computing or infrastructure systems -- such as a vendor’s scheduling program sending automated update emails over the network -- and confirm the security of the vendors used by their third-party partners as much as possible.
Even within local government, every agency’s digital topography will consist of dozens or even hundreds of third-party products, themselves comprised of hundreds more underlying third-party components.
Using guidance from the Federal Risk and Authorization Management Program and Federal Information Security Modernization Act, agencies can conduct a thorough audit of their third-party contractors by asking these questions:
- How do they nominally do their jobs?
- What would a possible security breach using their components look like?
- How do the people providing the service plan to negate the chance of a successful attack?
- What are their protocols for when malicious traffic does get through?
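One way to keep those answers actionable is to track them in a structured form. The sketch below is a hypothetical record for such an audit; the class and field names are my own illustration, not FedRAMP or FISMA terminology:

```python
from dataclasses import dataclass

# Hypothetical sketch: a minimal record for a third-party vendor audit,
# one field per question. Names here are illustrative assumptions.
@dataclass
class VendorAudit:
    vendor: str
    normal_operations: str = ""   # how they nominally do their jobs
    breach_scenario: str = ""     # what a breach via their components would look like
    prevention_plan: str = ""     # how they plan to negate a successful attack
    incident_protocol: str = ""   # protocols when malicious traffic gets through

    def open_questions(self) -> list:
        """Return the names of audit questions still unanswered."""
        return [name for name, value in vars(self).items()
                if name != "vendor" and not value.strip()]

audit = VendorAudit(vendor="Example Scheduling Co.",
                    normal_operations="Sends automated update emails over the network")
print(audit.open_questions())
# ['breach_scenario', 'prevention_plan', 'incident_protocol']
```

Flagging unanswered questions this way makes gaps in vendor knowledge visible before an incident, rather than during one.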
Knowing these answers can make life much easier both during normal operations and in the event of a breach. Strong organizational readiness requires deep knowledge of the systems, processes and organizations with which agencies work.
Move from blacklisting to a whitelisting strategy
Think of blacklisting -- banning malicious or untrustworthy activity -- as a reactive approach to security. In contrast, whitelisting is a proactive strategy that assigns trust to reliable sources instead of revoking trust when things go wrong.
How do things look when an agency approaches security from a trust-giving perspective instead of a trust-taking one? Agencies can model the idea over any number of digital activities, from web traffic to application data to inbound network requests from presumably trustworthy sources.
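The difference between the two postures is easy to see in code. This hedged sketch (addresses and sets are invented for illustration) contrasts how each strategy treats an inbound request from a source it has never seen:

```python
# Illustrative sketch only: example IPs and trust sets are assumptions.
TRUSTED_SOURCES = {"10.0.0.5", "10.0.0.9"}   # whitelist: trust explicitly assigned
KNOWN_BAD = {"203.0.113.7"}                  # blacklist: trust revoked after the fact

def blacklist_allows(source_ip: str) -> bool:
    # Reactive: everything passes unless it has already misbehaved.
    return source_ip not in KNOWN_BAD

def whitelist_allows(source_ip: str) -> bool:
    # Proactive: nothing passes unless trust was explicitly granted.
    return source_ip in TRUSTED_SOURCES

unknown = "198.51.100.23"  # a source never seen before
print(blacklist_allows(unknown))  # True  -- blacklisting waves it through
print(whitelist_allows(unknown))  # False -- whitelisting blocks by default
```

The never-before-seen source is exactly where the strategies diverge: the blacklist has no reason to stop it, while the whitelist has no reason to let it in.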
Embrace the zero-trust model
In a technology environment with so many moving parts, it can be difficult to monitor all suspicious activity. Instead of trying to identify all potentially nefarious actors, consider a zero-trust security model -- a system of governance aligned to the trust-giving perspective. Having caught the IT world by storm, the idea as described by one expert in a CSO piece is quite simple: “Cut off all access until the network knows who you are. Don’t allow access to IP addresses, machines, etc. until you know who that user is and whether they’re authorized.”
In a public-safety context, for example, the concept of inside vs. outside is key. While older “castle-and-moat” governance styles give a large degree of freedom to devices and users once they’ve been permitted past the initial moat, zero trust regards interior users with a consistent level of wariness.
With a castle-and-moat model, hackers can leverage the trust allocated to vendors to compromise agency systems more easily -- executing remote commands, sniffing passwords and more. A system that instead requires components to be identified, justified and authenticated at all points is one that can more easily catch compromises and prevent further access. This makes a zero-trust model a serious consideration for IT managers trying to keep operations secure with minimal manual intervention.
Check weak points before it’s too late
Knowing about potential (or even confirmed) breaches has obvious value for an agency’s overall security posture: understanding weaknesses and points of entry means they can be addressed.
For agencies developing their own code in some capacity, static code analysis -- considered a staple of secure development -- allows developers to test code between development and full deployment. In terms of breach prevention, developers can find and correct changes at the baseline before they become larger concerns in production. Agencies concerned about source code security in the wake of the SolarWinds breach, meanwhile, may find help in products that scan binary code, allowing a level of verification over the source code of third-party products, even when that source code is not available, which is usually the case.
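To make the principle concrete, here is a deliberately minimal static check using Python's standard-library `ast` module: it walks a source file's syntax tree and flags calls to `eval` and `exec`, which often signal injected or unsafe code. Real static analyzers (and the binary scanners mentioned above) go far deeper; this sketch only shows the shape of the technique:

```python
import ast

# Minimal illustrative static check: flag eval/exec calls by line number.
# The flagged-call set is an assumption for demonstration purposes.
FLAGGED_CALLS = {"eval", "exec"}

def flag_risky_calls(source: str) -> list:
    """Return line numbers where flagged calls appear in the source."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in FLAGGED_CALLS):
            hits.append(node.lineno)
    return hits

snippet = "x = 1\nresult = eval(user_input)\n"
print(flag_risky_calls(snippet))  # [2]
```

Because it runs on source text rather than a live system, a check like this slots naturally between development and deployment, exactly where the article places static analysis.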
As for active-production measures, an agency’s ability to detect malicious or unwanted network traffic is essential. A Dark Reading piece on the topic includes several helpful tips for keeping a closer eye:
- Paying special attention to packets that deviate from normal sizes.
- Monitoring unusual bandwidth usage to spot the source of the aberration.
- Watching for desktops and laptops that attempt to connect to one another.
- Looking for outbound connections from printers and internet-of-things devices.
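The first tip above -- watching for packets that deviate from normal sizes -- can be sketched as a simple baseline-and-threshold check. The packet sizes and three-standard-deviation cutoff below are illustrative assumptions, not recommended production values:

```python
import statistics

# Hypothetical sketch: flag packet sizes that deviate from a learned baseline
# by more than `sigmas` standard deviations. All numbers are illustrative.
def flag_unusual_packets(baseline_sizes, observed_sizes, sigmas=3.0):
    mean = statistics.mean(baseline_sizes)
    stdev = statistics.stdev(baseline_sizes)
    return [size for size in observed_sizes
            if abs(size - mean) > sigmas * stdev]

baseline = [512, 520, 498, 505, 515, 510, 502, 518]   # bytes, normal traffic
observed = [509, 514, 9000, 506]                      # one jumbo outlier
print(flag_unusual_packets(baseline, observed))       # [9000]
```

Production tooling would learn baselines per protocol and adapt over time, but the underlying idea -- model normal, alert on deviation -- is the same.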
Of course, the topic of network forensics goes as deep as one cares to explore. Agencies more deeply concerned about security posture could look further into trapping, inspecting and acting upon the discovery of suspect activity.
Refactor two-factor authentication
Two-factor authentication (2FA) is rightly hailed as a strong security measure for organizations wishing to add another layer of user verification -- a concern that should be front-of-mind for any agency considering its readiness.
That said, not all 2FA approaches are equally effective, and the most popular method is arguably the least effective: the phone-based calls and messages commonly used by many applications. As Tom’s Guide says, it’s key to note that phone-based systems are tied to numbers, not people, and the calls and texts themselves are subject to interception and other trickery that can defeat the whole purpose of a verification system.
Agencies that still use phone-based apps or SMS systems for 2FA should consider instead the advantages of a dedicated authentication hardware token. The popular YubiKey, for example, is a device registered with a supporting online service and physically plugged in (or tapped nearby, if the key uses near-field communication) to gain access to the system in question. These devices do create some day-to-day hassle in the average IT staffer’s life -- replacing and re-registering lost keys, for instance. However, their ability to boost an agency’s security posture and overall readiness is unquestionable -- making 2FA one of the fastest and easiest weak points to shore up, compared to the legacy systems agencies may be using now.
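For context on why phone-based codes are the weak variant, it helps to see what the stronger software alternative computes. Authenticator apps generate time-based one-time passwords (TOTP, RFC 6238), derived from a shared secret and the clock rather than delivered over an interceptable phone network -- a step up from SMS, though still short of a hardware token bound to a physical device. A minimal standard-library sketch:

```python
import hashlib
import hmac
import struct

# Minimal TOTP (RFC 6238) sketch using only the standard library.
# Defaults (30-second step, 8 digits, SHA-1) match the RFC's test vectors.
def totp(secret: bytes, timestamp: float, step: int = 30, digits: int = 8) -> str:
    counter = int(timestamp) // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", time 59 -> "94287082"
print(totp(b"12345678901234567890", 59))  # 94287082
```

Because the code is computed locally from a secret and the current time, there is nothing in transit for an attacker to intercept -- the property phone-based 2FA lacks.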
Staying secure and ready
SolarWinds was a multifaceted, highly sophisticated attack, and its full ramifications will likely not be known for some time. In the interim, all organizations can learn something from the breach in terms of their security posture and readiness when nominal operations go awry.
Both the attack and the potential reactions come down to a few overarching concepts, namely trust and verification. That applies to the third-party vendors providing critical services, naturally, but the examination shouldn’t stop there. Using the SolarWinds breach as a framework, it’s clear that internal processes (including those usually given a high degree of trust, such as software updates) also need a greater focus.
Whatever that discovery looks like, agencies should not miss this opportunity for positive change, because waiting for the attack to happen is never the best response.
This article was changed Jan. 26 to clarify the author's points about open source software.