
5 mistakes to avoid after a cybersecurity breach

It's been more than two years since the Office of Personnel Management publicly disclosed data breaches that had exposed the highly sensitive data of over 20 million current, former and prospective federal employees.

The breach was one of the biggest involving a federal agency in terms of both records compromised and victims affected. A report released by the House Oversight and Government Reform Committee last September blamed the incident on a variety of errors and miscalculations that OPM officials made before and after they discovered the network intrusions that led to the data theft.

Although some have challenged the findings in the report, the breach remains a potent reminder of the enormous challenges organizations face in detecting and responding to incidents in an era of incessant security threats. Given the vast attack surfaces of today’s systems, intrusions are nearly inevitable, but they don’t have to be catastrophic.

Here, according to security analysts, are five of the most common missteps to avoid after a breach is discovered.

  1. Acting before fully understanding the problem

The immediate aftermath of a breach discovery can be terrifying, especially if there’s reason to believe a malicious actor might have compromised sensitive data or systems. But it is important not to overreact. Often, the first actions organizations take after discovering a breach determine how the rest of the incident plays out.

“One of the biggest mistakes that people make is not fully understanding the scope of the problem,” said Brian Calkin, vice president of operations at the Center for Internet Security’s Multi-State Information Sharing and Analysis Center (MS-ISAC).

Unless active data exfiltration or other malicious activity poses an immediate threat to the network, there’s little to be gained by shutting down and wiping systems clean at the first sign of a breach. “It’s better to take an extra day or two to assess the situation versus going with guns blazing taking systems down,” Calkin said.

U.S. Postal Service officials were urged to take that approach after the U.S. Computer Emergency Readiness Team (US-CERT) discovered four servers sending unauthorized communications outside the organization in September 2014.

Instead of immediately disconnecting the servers, USPS officials heeded the advice of US-CERT and its own Office of Inspector General and held off on taking any action until they could prepare a coordinated response plan.

USPS Vice President of Secure Digital Solutions Randy Miskanic told Congress at the time that “the guidance document instructed the [chief information security officer] to take no action -- including further investigative activity, scanning, re-imaging, resetting account passwords, taking systems off-line or searching IP addresses.”

The concern was that the threat actors would escalate their actions if they knew they had been discovered.

After nearly two months of scoping the problem, USPS investigators finally felt confident enough to initiate their remediation plan. As Miskanic described it, the plan was an exercise in caution that began with a network brownout period -- two days during which connections were limited between the USPS network and the internet.

During the same period, virtual private network and remote connections were blocked, and email between USPS accounts and outsiders was curbed. Care was taken to ensure that internal email service was available, as were all mail collection, processing and delivery systems, and front-office operations.
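The brownout Miskanic described amounts to a temporary allowlist: external paths are shut off while internal mail and operational systems stay up. As a purely illustrative sketch (the traffic categories and rules here are assumptions, not USPS's actual configuration), the decision logic might look like this:

```python
# Hypothetical sketch of a brownout-period connection policy:
# block external traffic while keeping internal services running.
# Categories and rules are illustrative, not USPS's actual configuration.

BROWNOUT_RULES = {
    "internal_email": True,    # internal mail service stays available
    "mail_processing": True,   # collection, processing and delivery systems
    "front_office": True,      # front-office operations stay up
    "external_email": False,   # mail to and from outside accounts is curbed
    "vpn": False,              # VPN and remote connections are blocked
    "internet": False,         # general outbound internet access is limited
}

def allowed_during_brownout(traffic_class: str) -> bool:
    """Return True if this class of connection is permitted.

    Unknown traffic classes are denied by default -- during an
    active incident, it is safer to fail closed than open.
    """
    return BROWNOUT_RULES.get(traffic_class, False)
```

The deny-by-default lookup matters: anything the responders have not explicitly reviewed stays blocked until the remediation plan says otherwise.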

USPS later faced criticism for not moving quickly enough to inform victims that their data might have been compromised, but analysts still say it is generally better to be cautious than hasty about responding to a cybersecurity incident.

“A common mistake is to jump right in to fixing without a plan,” said Jeff Schmidt, vice president and chief cybersecurity innovator at Columbus Collaboratory, an Ohio-based cybersecurity consortium of seven major organizations that include Nationwide, Cardinal Health and Battelle.

That approach can cause more problems, especially when it involves repairing servers and other complex environments in which a cascade of system restarts, version mismatches and database restorations can create havoc, Schmidt said.

“Reimaging simple desktops is one thing, but software requiring updates -- or worse, recompilation -- must be integration-tested in complex environments,” he added.

  2. Failing to maintain operational secrecy

Keeping quiet about an ongoing network intrusion is paramount until an organization is certain the threat has been contained. The last thing officials want to do is drive intruders deeper into the network by tipping them off.

Even weeks into the USPS breach, for instance, investigators kept a tight lid on their mitigation activities so the intruders wouldn’t know they had been spotted. The FBI’s cyber sleuths had determined that the hackers were sophisticated and that officials needed to proceed with extreme caution so short-term remediation efforts would not be compromised.

“As an incident responder, you always have to assume that the adversary is watching you,” said Gregory Touhill, former U.S. CISO.

Until recently, a wipe and a reload was the standard prescription for recovering from an incident. These days, the only way to be 100 percent sure that an intruder has been eradicated is to restructure the network and its devices, Touhill added.

That can be an expensive exercise, and it reinforces the need for organizations to implement best practices such as network segmentation, multifactor authentication, Active Directory whitelisting, and minimal access for remote and privileged users.

“Really good adversaries know how to evade rudimentary network administration techniques,” Touhill said. “Great ones have the ability to burrow in right away, erase their tracks and remain extremely persistent” at the first hint they have been spotted.

When that happens, officials must decide whether to build a new network with better defenses or try an expensive, often futile, attempt to detect and eject the adversary, he added.

  3. Being overly reliant on automation

The logs and event data generated by security tools and network devices can reveal a great deal about the security status of an organization’s network. Many agencies have implemented tools for collecting and correlating such data from multiple sources in order to better understand the nature of security events on their networks.

Karen Evans, former administrator of the Office of Management and Budget’s Office of E-Government and IT, acknowledged that such automation is critical to incident detection and response, but she said it is a mistake to rely on automated tools exclusively.

As one example of the danger of such over-reliance, Evans cited a situation in which an automated tool keeps fixing the same hole without triggering a follow-up investigation by a human analyst. If a vulnerability keeps needing to be fixed after it has supposedly been patched, the automated tools have missed something on the network that is reopening or exploiting the weakness, Evans said.

An intruder with legitimate credentials, for instance, could be moving around inside the network and exploiting the same vulnerability over and over again to exfiltrate data. When an organization trusts automated tools to do too much, it is easy to miss such red flags, she added.

“Don’t over-automate,” Evans said. “You want to push and do automation to maximize the way you use your analysts, but you don’t want to automate to where you don’t involve your analysts at all.”
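The failure mode Evans describes, a tool silently re-applying the same fix, can be caught with a simple escalation rule. A minimal sketch, with an assumed event format and threshold (neither comes from the article):

```python
from collections import Counter

def flag_repeat_fixes(remediation_log, threshold=2):
    """Return vulnerability IDs that were fixed `threshold` or more times.

    If an automated tool keeps re-fixing the same hole, something on
    the network is likely reopening or exploiting it, and a human
    analyst should investigate rather than let the tool silently
    repeat the patch.
    """
    counts = Counter(event["vuln_id"] for event in remediation_log)
    return sorted(v for v, n in counts.items() if n >= threshold)

# Example: one vulnerability keeps coming back across remediation runs.
log = [
    {"vuln_id": "CVE-2014-0001", "host": "web01"},
    {"vuln_id": "CVE-2014-0001", "host": "web01"},
    {"vuln_id": "CVE-2014-0002", "host": "db01"},
]
```

The point is not the counting itself but the handoff: anything this check flags goes to an analyst, keeping humans in the loop exactly where the automation runs out of context.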

  4. Neglecting to maintain evidence

Touhill said cybersecurity is a risk management issue, and although IT teams are typically under pressure to restore operations as soon as possible, they should resist that urge.

“In rushing to do so, many cyber incident victims -- both public- and private-sector -- take tactical actions to restore the assets through actions like wipes and reloading,” he said. Unfortunately, those moves can also clear out system logs and other forensic evidence that is essential for responders such as US-CERT and law enforcement officials.

Furthermore, “if you don’t have the forensic info to figure out how the bad guys got in, you may rebuild your network with the same flaws the bad actors exploited to penetrate your network in the first place,” Touhill said.

It is vital, therefore, for public and private entities to have clearly defined procedures for retaining forensic information when an incident occurs. It is equally important to regularly test those procedures through drills and exercises. And agencies should include incident responders such as US-CERT, the Industrial Control Systems Cyber Emergency Response Team and law enforcement in their incident response plans, Touhill said.

Gathering all the evidence needed to determine what happened can take time. Calkin said that in several situations in which MS-ISAC assisted a state or local government in a breach investigation, it took one to two weeks to complete the necessary analysis. Sometimes, organizations need to make a forensic image of infected systems before shutting them down to ensure they capture any memory-resident malware, he added.
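One concrete way to preserve the forensic record Touhill and Calkin describe is to hash evidence files before any remediation touches them, so responders can later prove logs and images were not altered. A simplified sketch (a real investigation would also capture full disk images and volatile memory first, as Calkin notes):

```python
import hashlib
import time

def build_evidence_manifest(paths):
    """Record a SHA-256 hash and collection timestamp for each
    evidence file *before* remediation begins.

    The manifest supports chain of custody: if a log file changes
    after collection, its hash will no longer match.
    """
    manifest = []
    for path in paths:
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            # Hash in chunks so large disk images don't exhaust memory.
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        manifest.append({
            "path": str(path),
            "sha256": digest.hexdigest(),
            "collected_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        })
    return manifest
```

Hashing is cheap relative to the cost of rebuilding a network blind; the expensive steps, imaging disks and capturing memory, come on top of this, not instead of it.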

  5. Declaring victory prematurely

After US-CERT informed OPM that its network had been breached in March 2014, agency officials spent over two months monitoring the hackers’ movements within its network. In May 2014, after determining what they thought was the full extent of the compromise, OPM officials initiated a coordinated remediation plan, internally dubbed Big Bang, to eradicate the intruders from its network and restore compromised systems.

Confident that the monitoring and subsequent remediation had worked, OPM officials missed the activities of a second hacker, who used an OPM contractor’s credentials to log into the agency’s network and then plant a backdoor around the time officials were monitoring the first hacker. Over the next several months, the second hacker or hackers systematically exfiltrated millions of records containing Social Security numbers and information on background investigations.

Such lapses are not uncommon. A big mistake that organizations often make is declaring victory after finding the most obvious machines that are involved and re-imaging them, said John Pescatore, director of emerging security trends at the SANS Institute.

That approach often causes serious business interruptions because of lost or corrupted data, he added. Moreover, most threats embed themselves in a network in ways designed to survive all but the most thorough eradication plans.

Schmidt agreed that once they have compromised a network, intruders will almost always install multiple backdoors and other mechanisms to stay hidden and defend their presence in the network against eradication efforts. Some will even close or patch the vulnerability they used to gain access to prevent other intruders from finding their way in.

“When cleaning up after an intrusion, assume there is an adversary acting against you and avoid a net reduction of controls or security posture during the process,” Pescatore said. He advised agencies to avoid installing or patching systems in place. Instead, systems should be patched or reinstalled from a known clean or isolated environment, especially when the network’s trustworthiness is unclear.

“Even brief windows of vulnerability can be used against you,” he added.

Schmidt said agencies must make sure they understand the nature of the initial infection so intruders cannot persist through the cleanup. “Look for backdoors, new accounts, new services, new open ports and other mechanisms intruders use to attempt to survive cleanup operations,” he added.
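Schmidt's checklist, new accounts, new services, new open ports, boils down to diffing current system state against a known-good baseline. A minimal, hypothetical sketch (the categories and example data are assumptions for illustration):

```python
def persistence_indicators(baseline: dict, current: dict) -> dict:
    """Compare a known-good baseline against post-cleanup system state.

    Returns anything present now that was not in the baseline --
    accounts, services, open ports -- which may be a mechanism an
    intruder planted to survive eradication. Values are sets.
    """
    return {
        category: current.get(category, set()) - baseline.get(category, set())
        for category in current
        if current.get(category, set()) - baseline.get(category, set())
    }

# Example: a new account and an unexpected listener appear after cleanup.
baseline = {"accounts": {"alice", "svc_backup"}, "ports": {22, 443}}
current = {"accounts": {"alice", "svc_backup", "updater2"},
           "ports": {22, 443, 4444}}
```

The check is only as good as the baseline, which is one more reason to capture and preserve a trusted snapshot of system state before, not after, an incident.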
