4 layers of enterprise security

Although threats to government IT security continue unabated, agencies do have some mature solutions for protecting enterprise assets from malicious intruders.


No one advocates abandoning traditional protections such as firewalls, secure routers, network analytics and intrusion detection tools. However, new strategies are helping agencies verify traffic, isolate networks, prevent data loss and protect endpoints, all in service of locking down government data. The following four approaches, which sometimes rely on overlapping tools, have been developed in recent years to enhance data-centric security.

Zero trust

Agencies are increasingly abandoning the perimeter-centric model for security and adopting a zero-trust model.

The key to zero-trust is simple: No traffic on the network is presumed to be trustworthy. The model effectively eliminates the distinction between trusted inside-the-perimeter network activity and untrusted activity that crosses that perimeter.

Actual implementation, on the other hand, is more complicated -- though it boils down to applying three principles:

  1. All data must be secured regardless of location. In practice, that means all data must be encrypted even when residing and being accessed from within the network perimeter.
  2. User identities must be confirmed and access to data strictly enforced, with the default being minimal privileges.
  3. All network traffic should be logged and analyzed. As a recent report from Forrester Research put it, “Zero trust flips the mantra ‘trust but verify’ into ‘verify and never trust.’”
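
The three principles above can be sketched in code. This is a minimal, hypothetical illustration (the function names, token format and ACL structure are invented for this example, not drawn from any product): access is denied by default, identity is verified on every request, and every decision is logged.

```python
from datetime import datetime, timezone

# Hypothetical sketch of a zero-trust access check; names such as
# verify_identity and AUDIT_LOG are illustrative only.
AUDIT_LOG = []

def verify_identity(user, credential):
    # Stand-in for real authentication (e.g., PIV card plus MFA).
    return credential == f"valid-token-for-{user}"

def authorize(user, resource, acl):
    # Principle 2: default deny -- access requires an explicit grant.
    return resource in acl.get(user, set())

def request_access(user, credential, resource, acl):
    # Principle 3: log every request, whether allowed or denied.
    decision = verify_identity(user, credential) and authorize(user, resource, acl)
    AUDIT_LOG.append((datetime.now(timezone.utc).isoformat(), user, resource, decision))
    return decision

acl = {"alice": {"payroll-db"}}
print(request_access("alice", "valid-token-for-alice", "payroll-db", acl))  # True
print(request_access("alice", "valid-token-for-alice", "hr-db", acl))       # False
```

Note that no request is trusted because of where it originates; the check runs identically for traffic inside or outside the perimeter.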

“As we move our data outside of the firewall, we have to adopt a zero-trust type model,” said Chris Townsend, Symantec’s vice president for federal. “We are shifting our security enforcement out to the data itself, and you have to have a security policy that follows that user no matter where that user is or what device they are using to access the data.”


One increasingly popular technology that can help with implementing a zero-trust model is microsegmentation, which uses software-defined virtual networks to create myriad isolated networks. Whereas standard networks use firewalls and routers to segment traffic for an entire organization, microsegmentation might define a network that is accessible only to a single workgroup or even a single individual.
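
A hypothetical policy-evaluation sketch shows the idea (segment names and rules here are invented, not taken from any vendor's product): each workload belongs to a fine-grained segment, and east-west traffic is dropped unless an explicit rule connects the two segments.

```python
# Hypothetical microsegmentation policy: traffic is allowed only within
# a segment or across an explicitly permitted segment pair.
SEGMENTS = {
    "web-01": "dmz",
    "app-01": "app-tier",
    "hr-db": "hr-only",   # a segment scoped to a single workgroup
}

ALLOW_RULES = {("dmz", "app-tier")}  # explicit, directional exceptions

def traffic_allowed(src, dst):
    s, d = SEGMENTS[src], SEGMENTS[dst]
    return s == d or (s, d) in ALLOW_RULES

print(traffic_allowed("web-01", "app-01"))  # True: explicitly allowed
print(traffic_allowed("web-01", "hr-db"))   # False: no rule, dropped
```

Because the rules are directional and per-workload, a breach of the web segment does not automatically grant a path to the HR database, which is the submarine-compartment effect described below.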

Bill Rowan, vice president of federal sales at VMware, compared the concept to that of a submarine built with various compartments so that a breach in one compartment would not flood the others. “That’s where we are taking the approach of microsegmenting networks,” he said.

According to Rowan, a well-designed microsegmentation solution imposes no noticeable cost in terms of network performance and even simplifies some network chores, such as moving an application from one virtual network to another. “Because we are separating the physical from the logical, I can simply build that same network topology on the other side,” he said. “Heretofore, I had to go back in and change all my network settings to make sure the application could effectively communicate.”

Townsend, however, said it’s not clear whether microsegmentation will be viable. His main concern is scalability. “The idea is that for every virtual application and every virtual network segment, you have a security policy that follows that data or that portion of the network,” he said. That could overwhelm an agency’s IT team.

“Right now, our federal customers are struggling from an operations standpoint to manage their security environment as it stands today,” he added.

Data loss prevention

The Cybersecurity Framework published by the National Institute of Standards and Technology in 2014 designated data loss prevention (DLP) as a core cybersecurity strategy.

DLP solutions use various technologies to keep unauthorized people from accessing data. Although organizations can implement data loss prevention measures using separate tools for controlling user access to network and data center resources -- firewalls, intrusion detection, file permissions and user credentials, for example -- DLP generally refers to software packages that classify data and control access to it by comparing pieces of data to user authorizations.

DLP software might also prevent or allow users to copy, print or email data. Some advanced packages also monitor access to data and use artificial intelligence to detect unusual, even if authorized, access.
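
The core comparison a DLP package makes can be sketched in a few lines. This is a simplified, hypothetical model (the classification labels, ranks and action rules are invented for illustration): a document's label is checked against the user's authorization, and exfiltration-prone actions such as copying or emailing are blocked for sensitive data regardless of clearance.

```python
# Hypothetical DLP check: compare a document's classification label
# against the requesting user's authorization and the action requested.
CLASSIFICATION_RANK = {"public": 0, "internal": 1, "pii": 2, "secret": 3}

def dlp_allows(user_clearance, doc_label, action):
    # Block copy/print/email of anything above "internal," even for
    # users cleared to read it -- the well-meaning-insider case.
    if action in ("copy", "print", "email"):
        if CLASSIFICATION_RANK[doc_label] > CLASSIFICATION_RANK["internal"]:
            return False
    # Reads are allowed up to the user's clearance level.
    return CLASSIFICATION_RANK[user_clearance] >= CLASSIFICATION_RANK[doc_label]

print(dlp_allows("pii", "pii", "read"))   # True: cleared to view
print(dlp_allows("pii", "pii", "email"))  # False: exfiltration path blocked
```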

“One of the biggest challenges that we are seeing, especially in the federal space, is where you’ve got well-meaning internal users who are using Gmail or Dropbox to move potentially sensitive or personally identifiable information around,” Townsend said.

DLP tools monitor such actions. If an upload contains sensitive information, Symantec’s solution will dynamically encrypt the data and force the user to authenticate. “It forces them to acknowledge that and stores the keys on premises,” Townsend said. “No matter how many times it’s been forwarded on, you could always revoke the key and essentially wipe the data.”

A 2014 study found that only 18 percent of agencies had invested in DLP. Analysts say the percentage of agencies with DLP has increased since then, but in a 2017 survey, only one-third of agencies gave themselves a grade of A for their DLP efforts. According to the same survey of 150 federal IT managers, 50 percent said their agencies need to adopt multifactor user authentication, 49 percent said real-time activity monitoring needed to be expanded, and 45 percent said their agencies needed to classify data and adopt DLP.

Endpoint detection and response

No agency, of course, can ensure that no adversary will access its network. As a result, vendors that have often been called in to help agencies perform vulnerability assessments and respond to intrusions have begun to offer services for continuous proactive monitoring of clients’ networks to detect and respond to intruders. The services are called endpoint detection and response (EDR), which refers to the aim of determining the source of any suspected malicious activity on the network.

Symantec recently began offering an EDR service, and Townsend said the government has been slower to adopt EDR than the private sector. “We haven’t seen federal customers moving wholesale to EDR yet,” he said, though he is confident they will.

The basic principle behind EDR is for an organization to assume it has already been breached. The average time between an intruder accessing federal networks and being discovered was 49 days last year, said Brian Hussey, vice president for cyberthreat protection and response at Trustwave, a cybersecurity company that offers EDR services. Given the damage that can be done in 49 days, “assume you have an attacker right now in your system, and it is our job to go and find it,” he added.

EDR services monitor all traffic between the network and endpoints -- computers and mobile devices -- whether they are on premises or remotely accessing the network. “These tools give us the ability to monitor every single event that happens on a network,” Hussey said. “A thousand events may happen every single minute. You’re going to be able to monitor and capture every single one of those, bring them into our data center and correlate them not across just one computer but across entire networks.”
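
The cross-network correlation Hussey describes can be illustrated with a small, hypothetical sketch (the event format, indicator names and threshold are invented): events from many endpoints are pooled, and an indicator seen on multiple machines is surfaced as a network-wide signal rather than an isolated anomaly.

```python
from collections import defaultdict

# Hypothetical EDR correlation: flag any indicator (e.g., a file hash)
# observed on at least `threshold` distinct hosts across the network.
def correlate(events, threshold=3):
    hosts_by_indicator = defaultdict(set)
    for host, indicator in events:
        hosts_by_indicator[indicator].add(host)
    return {ind for ind, hosts in hosts_by_indicator.items()
            if len(hosts) >= threshold}

events = [("pc-1", "hash-abc"), ("pc-2", "hash-abc"),
          ("pc-3", "hash-abc"), ("pc-1", "hash-xyz")]
print(correlate(events))  # {'hash-abc'}
```

A single host seeing "hash-xyz" once stays below the threshold; the same hash appearing on three machines does not, which is the kind of pattern that only emerges when events are correlated across the whole network.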

Backed up by human analysts, EDR software scans the traffic constantly looking for suspicious behaviors. “We have network forensics and artificial intelligence capabilities that we can stop and absorb every single network callout, every single piece of traffic, and cull malicious activity from there,” Hussey said. “Once we have that, we can use our accessed endpoints as a direct pivot point so we can do a deep-dive investigation and continue the hunt.”

Federal clients’ slow adoption might be due to the fact that EDR requires continuous monitoring of the agency’s networks, something federal network administrators have been leery about.

“We know without a doubt that everybody is territorial,” said Bill Rucker, president of Trustwave Government Solutions, but he added that those instincts are increasingly being outweighed by concerns about intruders. And, of course, EDR service providers are required to have the same security clearances as in-house specialists.

Some agencies, however, are implementing EDR with their own employees manning the monitors. EDR has made incident response significantly more effective at NIST, said James Fowler, the agency’s acting deputy CIO.

Much of EDR’s legwork is automated, Fowler added, but NIST officials make the final decision when suspicious activity is detected. “We have set it up so that we get yellow alerts, orange alerts and red alerts based on the nature of the behavior that we are seeing,” he said. “If it’s an orange alert, that would typically mean a human [will] go in there and look at what is actually going on and make a decision as to whether or not additional attention is needed.”
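
The tiered-alert routing Fowler describes can be sketched as follows. The scoring scale, thresholds and response labels here are hypothetical (NIST has not published them); the sketch only shows the pattern of mapping a severity score to a color and a disposition, with mid-tier alerts routed to a human.

```python
# Hypothetical tiered alerting in the yellow/orange/red style described
# above; the numeric thresholds are invented for illustration.
def triage(score):
    if score >= 90:
        return ("red", "immediate response")
    if score >= 60:
        return ("orange", "escalate to analyst")  # a human investigates
    if score >= 30:
        return ("yellow", "log for review")
    return ("none", "ignore")

print(triage(75))  # ('orange', 'escalate to analyst')
```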

“Before we were using this kind of technology, we would have to pull a computer off the network that was suspected to be compromised, but we weren’t 100 percent sure and then we would do analysis,” he said. “Now we are able to actually go remotely into the box while it is still connected and make a determination as to whether or not it’s compromised. That has been a big improvement.”
