Cybersecurity’s not done until the paperwork is finished

The Veterans Affairs Department has been dinged once again by the Government Accountability Office for lack of follow-through in its cybersecurity operations. In a recent report, VA Needs to Address Identified Vulnerabilities, the GAO warned that unless VA’s security weaknesses are fully addressed, “its information is at heightened risk of unauthorized access, modification and disclosure, and its systems at risk of disruption.”

The problem cited in the report is not so much that VA is doing a bad job securing its networks and systems, but that it has not properly documented security activities and has not developed action plans and milestones for correcting problems.

Documentation and planning are more than busywork. Although it is true that checking boxes and creating reports will not by themselves improve IT security, without them it can be difficult if not impossible to verify what has been done, that it was done properly and that it can be repeated if necessary.

These processes can make the difference between constantly fighting brushfires and being able to effectively protect an agency enterprise and improve its security posture.

To quote a rule well-known to every government worker: The job’s not finished until the paperwork is done.

Because of its size and the amount of personal and other sensitive information it maintains, the VA is a high-value target. In January, a defect in VA’s web-based eBenefits system exposed personal data of thousands of veterans and their dependents. And in 2010, a nation-state-sponsored attack took advantage of weak technical controls to gain “unchallenged and unfettered access” to VA systems, the GAO said.

These were fairly recent hits, but the fact remains that development of an effective information security program has been a major management challenge for the department since the late 1990s.

This does not mean that VA has no information security. VA’s Network Security Operations Center in 2012 responded to an attack by outsiders, analyzing the scope of the incident and documenting its responses. Even so, “VA could not provide sufficient documentation to demonstrate that these actions were effective,” GAO said.

This problem is not limited to VA. A recent governmentwide review by GAO found that agencies were unable to document the effectiveness of their incident responses about 65 percent of the time.

In the case of the 2012 VA incident cited, forensics analysis data was not available because of a lack of storage space. The department’s incident response policies also did not provide the incident response team with access to systems logs needed to fully assess the extent of the breach, which raises questions about the effectiveness of the response.

The problems are part of a vicious circle in government cybersecurity. Incident response teams are stretched thin, and their top priority is responding to the problem at hand. Documentation and policy enforcement often take a back seat. But without effective documentation and policies, it can be hard to move beyond crisis management to effectively managing risk.

As I have said before, regulatory compliance does not equal security, but it can provide an essential baseline for achieving more effective security.

Posted by William Jackson on Dec 05, 2014 at 1:08 PM


Look for more attacks coming from privileged accounts

Abuse of privileged accounts has been understood for a long time to be a major security concern, since it opens up broad access to an organization’s data and IT resources. Up to now, however, the focus has mainly been on how this applies to the so-called insider threat.

Perhaps that has to change. A new report from security solutions vendor CyberArk, which surveyed many of the world’s top security forensics experts, makes the worrisome claim that most, if not all, of the more sophisticated, targeted attacks from the outside are due to exploitation of privileged accounts. And it’s something that many agencies are unaware of.

What’s more, attackers have become adept at using an organization’s business and trading partners to gain access to systems. Even if the organization itself has built its security to make direct attacks on its privileged accounts hard, small and medium-sized partners will probably not have the same experience and expertise. However, they may well have been given privileged access to the organization’s systems just because it’s the easier way to do business.

The oft-cited Target breach, for example, which resulted in the theft of millions of customers’ credit card account data, was due to attackers first getting Target network credentials through its air conditioning vendor.

In August this year, Department of Homeland Security contractor USIS, which does background checks on the agency’s employees, revealed a breach that had “all the markings of a state-sponsored attack” and exposed details on up to 25,000 DHS workers.

Even more startling is the report’s assertion that, on average, organizations have as many as three to four times the number of privileged accounts as they do employees. That makes for a very broad field from which attackers can gain access to those accounts.

This is often overlooked in organizations, according to Udi Mokady, CyberArk’s chief executive, since privileged accounts are rightly seen as the key to the IT kingdom and therefore are naturally assumed to be limited in number.

“In fact, there are many of them,” he said. “They are built into every piece of information technology so, at a minimum, there is one for every desktop system in the organization. But they are also built into every server and every application, operating system, network device, database and so on.”

Those all come with built-in accounts for a system administrator to monitor and control them. But in the modern environment of virtualized IT, every time a new system is spun up, yet another operating system and set of applications is created, along with the virtualization layer and its administrative functions. So the number of privileged accounts tends “to grow exponentially,” Mokady said.

If all of this isn’t enough to give security professionals nightmares, the report also points out that the Internet of Things (IoT) is coming – and quickly. That speaks to all the embedded systems organizations will have to account for outside of the regular IT infrastructure, “the pieces of technology with a brain,” as Mokady put it.

Critical infrastructure organizations are just one example of what this will entail: energy companies, for instance, also have privileged accounts to manage the industrial control systems that run power plants and electricity grids.

“These are computers, though not the typical idea of a computer, but they are often targeted by attackers even more than the regular pieces of the IT infrastructure,” Mokady said.

They are also probably more vulnerable than regular IT systems. The report points out that embedded devices require regular firmware updates and typically have more complex quality assurance cycles, which may in turn cause them to lag behind other products as far as security is concerned.

The basic problem for an organization’s security defense is that when attackers gain access to privileged accounts, they can penetrate systems undetected, without throwing up alarms and red flags, leaving intrusion detection tools all but useless. Agencies have to assume that breaches have occurred and that attackers are already inside their networks, and hunt for them.

Managing this situation will mean a turnaround in traditional thinking about security, though the idea of stopping attacks at the network perimeter is already losing ground to the idea that the focus has to be on the inside, which will help. Still, to guard against the privileged account threat, organizations have to cast a wide enough net to actively monitor and manage those accounts.

No question, that’s a major headache. But some current tools can be reworked to help, Mokady said, such as using firewalls to segment user access within an organization and using encryption to more closely guard data. There are also automated tools now on the market, like those from CyberArk itself, that can detect and count the privileged accounts within an organization, automatically change their credentials, and encrypt and segregate those credentials in secure vaults.

The report lays out general guidelines that, just by tightening the security practices organizations already follow, will greatly improve their ability to defend against attacks (a minimal account-inventory sketch follows the list):

  • Inventory privileged accounts.
  • Make it harder to get privileged access.
  • Proactively monitor privileged accounts.
  • Perform regular, recurrent housekeeping.
  • Monitor and limit the privilege of service accounts.
  • Apply patches as quickly as possible.
  • Practice classic defense in depth.
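
Following the first item on that list, here is a minimal sketch of what an initial privileged-account inventory might look like on a single Linux host, written in Python. The group names and the approach (walking local group membership) are assumptions for illustration; a real inventory would also have to cover directory services, applications, databases and network devices.

```python
# Minimal sketch: inventory accounts with administrative group membership
# on one Linux host. The group names below are assumptions; adjust them to
# the environment, and remember this covers only local OS accounts.
import grp
import pwd

PRIVILEGED_GROUPS = {"root", "wheel", "sudo", "adm"}  # assumed admin groups


def privileged_accounts():
    """Map each privileged group name to the accounts that belong to it."""
    inventory = {}
    users = pwd.getpwall()
    for group in grp.getgrall():
        if group.gr_name not in PRIVILEGED_GROUPS:
            continue
        members = set(group.gr_mem)
        # Accounts whose primary group is this group do not appear in gr_mem.
        members.update(u.pw_name for u in users if u.pw_gid == group.gr_gid)
        inventory[group.gr_name] = sorted(members)
    return inventory


if __name__ == "__main__":
    for name, members in sorted(privileged_accounts().items()):
        print(f"{name}: {', '.join(members) if members else '(no members)'}")
```

Even a narrow check like this tends to surface more privileged accounts than expected, which is the report’s point.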

Posted by Brian Robinson on Nov 21, 2014 at 9:50 AM


NIST marks top security requirements for government cloud

Cloud computing offers both unique advantages and challenges to government users. The advantages are well-advertised: Greater efficiency, economy and flexibility that can help agencies meet rapidly changing computing needs quickly and cheaply while being environmentally friendly.

Among the challenges, security is the most commonly cited concern in moving mission-critical services or sensitive information to the cloud.

To address this, a recently released roadmap from the National Institute of Standards and Technology recommends a plan to ensure cloud offerings meet government security needs while being flexible enough to adapt to the policies and requirements of multiple tenants, including foreign governments. The plan involves periodic assessments of security controls and development of international profiles and standards.

The recommendations are brief, making up only a small part of the 140-page document NIST released in October, but they are categorized as “high priority.”

The final version of the U.S. Government Cloud Computing Technology Roadmap has been several years in the making and reflects more than 200 comments on the initial draft, released in 2011.

Security is the first of three high-priority requirements addressed in volume one. Interoperability and portability (the ability of data to be moved from one cloud facility to another) are the others.

The government already has established the Federal Risk and Authorization Management Program (FedRAMP), which became operational in 2012 to ensure that cloud service providers meet a baseline set of federal security requirements, easing the task of certifying and authorizing the systems for government operations. But the NIST roadmap addresses security requirements that extend beyond federal users.

Security in the cloud is complicated by a number of factors. First, it upsets the traditional IT security model that relies on logical and physical system boundaries. “The inherent characteristics of cloud computing make these boundaries more complex and render traditional security mechanisms less effective,” the roadmap says.

Second, a cloud system has to meet not only U.S. government security needs, but also those of other customers sharing the environment, and so security policy must be de-coupled from U.S. government-specific policies. “Mechanisms must be developed to allow differing policies to co-exist and be implemented with a high degree of confidence, irrespective of geographical location and sovereignty.”

Moreover, a comprehensive set of security requirements has not yet been fully established, the roadmap says. “Security controls need to be reexamined in the context of cloud architecture, scale, reliance on networking, outsourcing and shared resources,” the authors write. “For example, multi-tenancy is an inherent cloud characteristic that intuitively raises concern that one consumer may impact the operations or access data of other tenants running on the same cloud.”

NIST’s recommended priority action plans for cloud security are:

  • Continue to identify cloud consumer priority security requirements, on at least a quarterly basis.
  • Periodically identify and assess the extent to which risk can be mitigated through existing and emerging security controls and guidance. Identify gaps and modify existing controls and monitoring capabilities.
  • Develop neutral cloud security profiles, technical security attributes and test criteria.
  • Define an international standards-based conformity assessment system approach.

Posted by William Jackson on Nov 14, 2014 at 8:52 AM


Attacks on open source call for better software design

Another day, another major vulnerability for government systems, it seems. This time it affects Drupal, a popular, open source content management system that’s been used for an increasing number of agency websites, including the White House’s.

An announcement from the organization that oversees Drupal warned several weeks ago of a vulnerability that would allow an attacker to use SQL injection, in which malicious database commands are inserted into a system through improperly handled input. Depending on the content of the attacker’s request, it said, the attack could lead to privilege escalation, arbitrary PHP execution or other scenarios that put data at risk.
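
The Drupal flaw itself was in the project’s PHP database abstraction layer, but the underlying pattern, untrusted input spliced directly into a query, is easy to illustrate. Below is a minimal, hypothetical sketch in Python with SQLite; the table and values are made up, and the point is only the contrast between string-built and parameterized queries.

```python
# Hypothetical illustration of the SQL injection pattern, not the Drupal code.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "user")])

user_input = "x' OR '1'='1"  # attacker-controlled value

# Vulnerable: the input is spliced into the SQL string, so the OR clause
# rewrites the query and every row comes back.
unsafe_rows = conn.execute(
    f"SELECT name, role FROM users WHERE name = '{user_input}'").fetchall()

# Safer: a parameterized query treats the input as data, not as SQL.
safe_rows = conn.execute(
    "SELECT name, role FROM users WHERE name = ?", (user_input,)).fetchall()

print("string-built query returned:", unsafe_rows)   # both rows leak
print("parameterized query returned:", safe_rows)    # no rows match
```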

However, the real danger of this vulnerability was revealed several weeks later, when the Drupal organization put out another announcement warning that, even if the patch issued at the time of the original announcement was applied, timing was critical. If sites weren’t patched “within hours” of the vulnerability announcement, the damage may have already been done.

Automated attacks began compromising sites shortly after the vulnerability was revealed, and those who waited to patch their systems then should assume their sites were compromised.

Even if the system appears to be patched, the Drupal organization warned, attackers may have “fixed” it themselves after they injected their malware, in order to keep other attackers out and to try to fool IT administrators into thinking it was safe. Attackers may also have created backdoors to get into affected systems later.

If timely patches weren’t applied, the Drupal security team outlined a lengthy process required to restore a website to health (a small file-audit sketch follows the list):

  • Take the website offline by replacing it with a static HTML page.
  • Notify the server’s administrator emphasizing that other sites or applications hosted on the same server might have been compromised via a backdoor installed by the initial attack.
  • Consider obtaining a new server, or otherwise remove all the website’s files and database from the server. (Keep a copy safe for later analysis.)
  • Restore the website (Drupal files, uploaded files and database) from backups from before 15 October 2014.
  • Update or patch the restored Drupal core code.
  • Put the restored and patched/updated website back online.
  • Manually redo any desired changes made to the website since the date of the restored backup.
  • Audit anything merged from the compromised website, such as custom code, configuration, files or other artifacts, to confirm they are correct and have not been tampered with.
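
For the audit step at the end of that list, a timestamp sweep is one simple starting point. The Python sketch below flags files in a restored tree modified after the Oct. 15, 2014 disclosure; the site path is an assumption, and attackers can forge modification times, so any hits are only leads for manual review, not proof either way.

```python
# Sketch: flag files in a restored Drupal tree whose modification time falls
# after the Oct. 15, 2014 disclosure. The path is an assumption, and mtimes
# can be forged, so treat any hit only as a lead for manual review.
import os
from datetime import datetime, timezone

SITE_ROOT = "/var/www/drupal"  # assumed location of the restored site
CUTOFF = datetime(2014, 10, 15, tzinfo=timezone.utc)


def files_modified_after(root, cutoff):
    """Yield (path, mtime) for every file under root changed after cutoff."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for filename in filenames:
            path = os.path.join(dirpath, filename)
            mtime = datetime.fromtimestamp(os.path.getmtime(path), timezone.utc)
            if mtime > cutoff:
                yield path, mtime


if __name__ == "__main__":
    for path, mtime in files_modified_after(SITE_ROOT, CUTOFF):
        print(f"{mtime.isoformat()}  {path}")
```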

This year has been an “annus horribilis” for open source software used in government. The Heartbleed OpenSSL bug revealed in April was considered “one of the scariest ever” in terms of its potential for attackers to get access to data. A steady stream of scares followed, and by October, when the Shellshock bug in Linux and Unix operating systems was announced, people seemed to be suffering from bug fatigue, even though it was deemed as potentially damaging as Heartbleed.

Consequently, warning bells started ringing, again, about the inherent security of open source software. As the theory goes, open source is, by nature, open to the widest range of bad guys who could compromise it. Various industry types have tried to downplay that, however, putting it down to human mistakes that could happen anywhere.

Others point out that most of the compromised software has one thing in common: it was built on pre-fabricated modules. That’s generally considered a benefit. Because developers don’t have to repeat what’s gone before, they can use a more Lego-like approach and only write code where it’s needed.

That leads to a much speedier time to market, but it also means that whatever errors are included in those modules get passed along. Some security vendors estimate that as much as 90 percent of the code used for in-house development is based on these components.

We need more and better tools that scan these components for potential vulnerabilities before they are tied into actual products. That’s something the National Institute of Standards and Technology, for example, has recognized with its recent effort to develop better guidelines for systems and software design.
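
The core check such scanning tools automate can be sketched simply: compare a project’s declared components against a list of versions with known flaws. In the hypothetical Python sketch below, the requirements file format and the vulnerability data are stand-ins; a real scanner would draw on a maintained vulnerability feed and handle many more packaging formats.

```python
# Hypothetical sketch of the check a component scanner automates: compare
# pinned dependencies against known-vulnerable versions. The file format and
# the KNOWN_VULNERABLE data are stand-ins, not a real vulnerability feed.
KNOWN_VULNERABLE = {
    ("examplelib", "1.0.1"): "EXAMPLE-2014-0001: remote code execution",
    ("otherlib", "2.3.0"): "EXAMPLE-2014-0002: SQL injection",
}


def scan_requirements(path="requirements.txt"):
    """Return advisories for any pinned 'name==version' lines that match."""
    findings = []
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue  # only pinned name==version entries are checked here
            name, _, version = line.partition("==")
            advisory = KNOWN_VULNERABLE.get((name.strip().lower(), version.strip()))
            if advisory:
                findings.append(f"{line} -> {advisory}")
    return findings


if __name__ == "__main__":
    for finding in scan_requirements():
        print(finding)
```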

On a related note, Google recently came out with its nogotofail tool that can be used to test networks for weak transport layer security and secure socket layer connections. That won’t address every bug out there – it doesn’t address the Drupal bug, for example – but it will go some way toward fixing the kinds of vulnerabilities that Heartbleed and similar bugs introduce.
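
nogotofail itself works by sitting on the network path and probing live traffic, but the kind of question it asks, whether a connection actually negotiates strong transport security, can be illustrated with Python’s standard ssl module. The host below is a placeholder, and this single-connection check is no substitute for the tool.

```python
# Single-connection illustration of checking negotiated transport security.
# This is not nogotofail; the host below is a placeholder.
import socket
import ssl

HOST = "www.example.com"  # assumed target host
PORT = 443

context = ssl.create_default_context()  # verifies the certificate chain by default

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print("protocol:", tls.version())     # e.g. 'TLSv1.2'
        print("cipher:  ", tls.cipher())      # (name, protocol, secret bits)
        cert = tls.getpeercert()
        print("subject: ", cert.get("subject"))
        print("expires: ", cert.get("notAfter"))
```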

Posted by Brian Robinson on Nov 07, 2014 at 10:14 AM