NIST marks top security requirements for government cloud

Cloud computing offers both unique advantages and challenges to government users. The advantages are well-advertised: greater efficiency, economy and flexibility that can help agencies meet rapidly changing computing needs quickly and cheaply while being environmentally friendly.

Among the challenges, security is the most commonly cited concern in moving mission-critical services or sensitive information to the cloud.

To address this, a recently released roadmap from the National Institute of Standards and Technology recommends a plan to ensure cloud offerings meet government security needs while being flexible enough to adapt to the policies and requirements of multiple tenants, including foreign governments. The plan involves periodic assessments of security controls and development of international profiles and standards.

The recommendations are brief, making up only a small part of the 140-page document NIST released in October, but they are categorized as “high priority.”

The final version of the U.S. Government Cloud Computing Technology Roadmap has been several years in the making and reflects more than 200 comments on the initial draft, released in 2011.

Security is the first of three high-priority requirements addressed in volume one. Interoperability and portability, the ability to move data from one cloud facility to another, are the others.

The government already has established the Federal Risk and Authorization Management Program (FedRAMP), which became operational in 2012 to ensure that cloud service providers meet a baseline set of federal security requirements, easing the task of certifying and authorizing the systems for government operations. But the NIST roadmap addresses security requirements that extend beyond federal users.

Security in the cloud is complicated by a number of factors. First, it upsets the traditional IT security model that relies on logical and physical system boundaries. “The inherent characteristics of cloud computing make these boundaries more complex and render traditional security mechanisms less effective,” the roadmap says.

Second, a cloud system has to meet not only U.S. government security needs, but also those of other customers sharing the environment, and so security policy must be de-coupled from U.S. government-specific policies. “Mechanisms must be developed to allow differing policies to co-exist and be implemented with a high degree of confidence, irrespective of geographical location and sovereignty.”

Moreover, a comprehensive set of security requirements has not yet been fully established, the roadmap says. “Security controls need to be reexamined in the context of cloud architecture, scale, reliance on networking, outsourcing and shared resources,” the authors write. “For example, multi-tenancy is an inherent cloud characteristic that intuitively raises concern that one consumer may impact the operations or access data of other tenants running on the same cloud.”

NIST’s recommended priority action plans for cloud security are:

  • Continue to identify cloud consumer priority security requirements, on at least a quarterly basis.
  • Periodically identify and assess the extent to which risk can be mitigated through existing and emerging security controls and guidance. Identify gaps and modify existing controls and monitoring capabilities.
  • Develop neutral cloud security profiles, technical security attributes and test criteria.
  • Define an international standards-based conformity assessment system approach.

Posted by William Jackson on Nov 14, 2014 at 8:52 AM


Attacks on open source call for better software design

Another day, another major vulnerability for government systems, it seems. This time it affects Drupal, a popular, open source content management system that’s been used for an increasing number of agency websites, including the White House’s.

An announcement from the organization that oversees Drupal warned several weeks ago of a vulnerability that exposed sites to SQL injection, an attack in which malicious SQL statements are slipped into the queries an application sends to its database. Depending on the content of the attacker’s request, it said, the attack could lead to privilege escalation, arbitrary PHP execution or other scenarios that put data at risk.
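The mechanics are easy to demonstrate. The sketch below is not Drupal’s actual code; it is a generic Python illustration of how attacker-supplied text spliced into an SQL string changes the query’s meaning, and how a parameterized query avoids the problem.

    import sqlite3

    # Generic illustration, not Drupal's code: string-built SQL is
    # injectable; parameterized SQL treats input strictly as data.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice','admin'), ('bob','user')")

    user_input = "bob' OR role = 'admin"  # attacker-controlled value

    # Vulnerable: the quote characters in the input rewrite the query.
    unsafe = f"SELECT name, role FROM users WHERE name = '{user_input}'"
    print(conn.execute(unsafe).fetchall())  # returns the admin row too

    # Safe: the placeholder keeps the input out of the SQL text entirely.
    safe = "SELECT name, role FROM users WHERE name = ?"
    print(conn.execute(safe, (user_input,)).fetchall())  # returns []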

However, the real danger of this vulnerability was revealed several weeks later, when the Drupal organization put out another announcement warning that, even if the patch issued at the time of the original announcement was applied, timing was critical. If sites weren’t patched “within hours” of the vulnerability announcement, the damage may have already been done.

Automated attacks began compromising sites shortly after the vulnerability was revealed, and those who waited to patch their systems should assume their sites were compromised.

Even if the system appears to be patched, the Drupal organization warned, attackers may have “fixed” it themselves after they injected their malware, in order to keep other attackers out and to try to fool IT administrators into thinking the site was safe. Attackers may also have created backdoors for later access to affected systems.

For sites that weren’t patched in time, the Drupal security team outlined a lengthy process for restoring a website to health:

  • Take the website offline by replacing it with a static HTML page.
  • Notify the server’s administrator, emphasizing that other sites or applications hosted on the same server might have been compromised via a backdoor installed by the initial attack.
  • Consider obtaining a new server, or otherwise remove all the website’s files and database from the server. (Keep a copy safe for later analysis.)
  • Restore the website (Drupal files, uploaded files and database) from backups made before Oct. 15, 2014.
  • Update or patch the restored Drupal core code.
  • Put the restored and patched/updated website back online.
  • Manually redo any desired changes made to the website since the date of the restored backup.
  • Audit anything merged from the compromised website, such as custom code, configuration, files or other artifacts, to confirm it is correct and has not been tampered with (one simple approach is sketched below).
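For that last audit step, a file-level comparison against a known-clean copy of the site is one simple starting point. The Python sketch below is illustrative only, not part of Drupal’s guidance, and the directory paths are hypothetical.

    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        """Hash a file in chunks so large files don't exhaust memory."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def audit(clean_root: Path, suspect_root: Path) -> None:
        """Flag files that differ from, are missing from or were added to
        a known-clean copy; each finding deserves a manual look."""
        clean = {p.relative_to(clean_root): sha256_of(p)
                 for p in clean_root.rglob("*") if p.is_file()}
        suspect = {p.relative_to(suspect_root): sha256_of(p)
                   for p in suspect_root.rglob("*") if p.is_file()}
        for rel in sorted(clean.keys() | suspect.keys(), key=str):
            if rel not in suspect:
                print("MISSING ", rel)
            elif rel not in clean:
                print("ADDED   ", rel)  # possible backdoor
            elif clean[rel] != suspect[rel]:
                print("MODIFIED", rel)

    # Hypothetical paths; substitute the real locations.
    audit(Path("/srv/drupal-clean"), Path("/srv/drupal-restored"))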

This year has been an “annus horribilis” for open source software used in government. The Heartbleed OpenSSL bug revealed in April was considered “one of the scariest ever” in terms of its potential for attackers to get access to data. A steady stream of scares followed, and by October, when the Shellshock bug in the Bash shell used by Linux and Unix operating systems was announced, people seemed to be suffering from bug fatigue, even though it was deemed potentially as damaging as Heartbleed.

Consequently, warning bells started ringing, again, about the inherent security of open source software. As the theory goes, open source is, by nature, open to the widest range of bad guys who could compromise it. Various industry types have tried to downplay that, however, putting it down to human mistakes that could happen anywhere.

Others point out that most of the compromised software has one thing in common: it was built on pre-fabricated modules. That’s generally considered a benefit. Because developers don’t have to repeat what’s gone before, they can use a more Lego-like approach and only write code where it’s needed.

That leads to a much speedier time to market, but it also means that whatever errors are included in those modules get passed along. Some security vendors estimate that as much as 90 percent of the code used for in-house development is based on these components.

We need more and better tools that scan these components for potential vulnerabilities before they are tied into actual products. That’s something the National Institute of Standards and Technology, for example, has recognized with its recent effort to develop better guidelines for systems and software design.
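The core of such a scan is simple in principle: compare the components a project pins against a list of versions with known flaws. The Python sketch below illustrates the idea only; the “known bad” entries are invented for this example, where a real tool would query a feed such as the National Vulnerability Database.

    # Sketch of the component-scanning idea. The KNOWN_BAD data is
    # invented for illustration; a real scanner would query a feed such
    # as the National Vulnerability Database.
    KNOWN_BAD = {
        ("openssl", "1.0.1f"): "CVE-2014-0160 (Heartbleed)",
        ("examplelib", "2.3.1"): "hypothetical advisory",
    }

    def parse_pin(line):
        """Parse a 'name==version' pin; ignore comments and loose specs."""
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            return None
        name, _, version = line.partition("==")
        return name.strip().lower(), version.strip()

    def scan(requirements_text):
        findings = []
        for line in requirements_text.splitlines():
            pin = parse_pin(line)
            if pin and pin in KNOWN_BAD:
                findings.append("%s==%s: %s" % (pin[0], pin[1], KNOWN_BAD[pin]))
        return findings

    for finding in scan("requests==2.4.0\nexamplelib==2.3.1\n"):
        print("VULNERABLE:", finding)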

On a related note, Google recently came out with its nogotofail tool, which can be used to test networks for weak Transport Layer Security (TLS) and Secure Sockets Layer (SSL) connections. That won’t address every bug out there – it doesn’t address the Drupal bug, for example – but it will go some way toward catching the kinds of vulnerabilities that Heartbleed and similar bugs introduced.
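nogotofail itself works by sitting on the network path and probing traffic, but a much smaller client-side check is possible with nothing beyond Python’s standard library. The sketch below simply reports the protocol version a server negotiates (the hostname is a placeholder) and flags anything older than TLS 1.2; note that a default context on a recent Python build may refuse the oldest protocols outright.

    import socket
    import ssl

    def negotiated_tls(host, port=443):
        """Connect and report the TLS version the server negotiates."""
        context = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version()  # e.g. 'TLSv1.2' or 'TLSv1.3'

    version = negotiated_tls("www.example.com")  # placeholder host
    print(version)
    if version in ("SSLv3", "TLSv1", "TLSv1.1"):
        print("weak protocol negotiated -- investigate")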

Posted by Brian Robinson on Nov 07, 2014 at 10:14 AM


Critics await ‘The Return of Open Enrollment’

Last year’s rollout of online health insurance exchanges under the Affordable Care Act was – to put it mildly – disappointing. It turned out that providing insurance to millions of people, many of whom had not been covered before, was a lot more complex than expected. What’s more, neither the technology nor the processes were up to the job.

The situation wasn’t helped by all of the states that refused to establish their own online portals, putting additional pressure on the central federal site at HealthCare.gov. Still, when the dust had settled, more than 7 million people had enrolled to buy insurance.

When the exchanges failed to perform as expected during the first ACA open enrollment period, call centers provided a vital backup. Maximus, the Reston, Va., company that supports many government health and human services programs, provided call center services, fielding 4.8 million calls for five state exchanges and the District of Columbia as well as the federal site.

With the second open enrollment period set to open Nov. 15, what does the company expect this time around? “The system is much more mature now,” said Jim Miller, senior vice president for strategic solutions at Maximus. “There has been an awful lot going on in the last year.”

But that doesn’t mean things will be easy. While Maximus and the exchange operators have the experience of the first open enrollment period (OE1) to draw on, the second (OE2) is expected to present a new set of challenges.

The federal and state sites that failed last year have undergone major overhauls, and they should be better able to perform. But the upcoming enrollment period will be shorter, from Nov. 15 through Feb. 15. New sets of questions and problems are expected as many of those already insured come back to renew their coverage. And after the low-hanging fruit was addressed last time, a new, harder-to-reach population is being pursued this time around.

All of which means the call centers are gearing up for another busy season. But anticipating failure is what call centers are all about, Miller said. “Our responsibility is the alternative to success. We have to be ready for any contingency. We have to ask ourselves, what are the likely problems?”  

Maximus uses interactive voice response to direct calls and access the proper resources, customer relationship management software to gather information on calls and callers, and knowledge management systems to generate scripts addressing common problems.

The company anticipated significant problems last year in providing a complex product such as health insurance to first-time buyers. But it didn’t expect the almost complete failure of technology on many sites that kept customers from connecting or finishing their enrollment. Because of that, the number of agents in place to handle calls had to be scaled up from an initial 2,500 to 4,000. “We were able to flex to meet that demand,” Miller said.

But it wasn’t just a failure of technology that drove people from their browsers to their phones. Although online exchanges are the preferred method for enrolling and selecting policies for both the states and the federal exchanges, many users are not comfortable with self-service websites and want to talk to a real person.

One of the problems with the initial rollout of ACA enrollment, in addition to underestimating the complexity of the process being automated, was overestimating what technology was capable of achieving. Technology alone does not serve all citizen needs, even when it works.

Still, with upgrades to the exchanges and an increased focus on the needs of citizens, “we think it will go better this year,” Miller said.

Posted by William Jackson on Oct 31, 2014 at 12:08 PM


Lollipop or lockdown? What a secure mobile OS means for BYOD

Mobile managers will soon be grappling with the advent of new and more secure mobile operating systems, as both Apple and Google have recently rewritten iOS and Android to take account of personal as well as enterprise security requirements.

These new OSs will eventually have an effect on the use of mobile devices in government, where administrators are working to balance the culture of security against the irresistible force of bring your own device.

Out of the box, both iOS 8 and Android Lollipop (Android L) have encryption turned on by default. The development has already caused a mild panic in intelligence circles, with the FBI saying it will make cyber investigations much more difficult.

On the other hand, encryption from the start will make it easier for enterprise managers to keep data secure on users’ phones, particularly when employees use their own phones for business purposes.

At the same time, it will put more of an onus on users to maintain their own settings. With Android L, for example, users will have to remember the device’s PIN, which unlocks encryption. Forget it and the device and its data will have to be wiped and reset, though apparently enterprises will be able to manage these PINs centrally. 
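The reason a lost PIN is unrecoverable is easier to see with a toy example. The generic sketch below is not Android’s actual key-derivation scheme; it simply shows that when the encryption key is derived from the PIN and a stored salt, there is no way to regenerate the key without the right PIN, and therefore no way back to the plaintext.

    import hashlib
    import os

    # Generic illustration, not Android's actual scheme: the disk key is
    # derived from the PIN plus a device-stored salt, so the key exists
    # only when the right PIN is supplied.
    salt = os.urandom(16)  # stored on the device
    pin = "4917"           # known only to the user

    key = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    print(key.hex())       # the only key that decrypts the data

    # A wrong PIN yields a different, useless key -- hence wipe-and-reset
    # is the only recovery from a forgotten PIN.
    wrong = hashlib.pbkdf2_hmac("sha256", b"0000", salt, 100_000)
    assert wrong != key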

Android L, whose launch is imminent, has a number of other security-based features that should appeal to agency enterprise managers.

Google’s Android Work, a subset of Android L features for mobile device management, will give IT and network administrators more control over how to provision apps for users or groups. Admins will also be able to define policies for how those apps are used and decide which users can access specific apps and data.
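In practice, that kind of policy boils down to a mapping from users or groups to the apps and data they may touch. The toy Python sketch below is not Android Work’s API; the names and structure are invented purely to show the shape of such a policy.

    # Invented example of a per-group app policy; not Android Work's API.
    POLICIES = {
        "finance":   {"allowed_apps": {"expenses", "email"}},
        "field-ops": {"allowed_apps": {"maps", "email"}},
    }

    def may_launch(group, app):
        policy = POLICIES.get(group)
        return policy is not None and app in policy["allowed_apps"]

    print(may_launch("finance", "expenses"))    # True
    print(may_launch("field-ops", "expenses"))  # False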

Android Work should make it easier for government agencies to safely accommodate BYOD which, even though the phrase itself has lost some cachet, is still a major concern. As an added incentive, new APIs will make it easier for enterprise mobility program developers to include Android Work in their own solutions.

One concern for some agency developers: Tougher security features in Android L are likely to make it harder to root the operating system in order to meet ad hoc requirements. Rooting – the ability to easily obtain “superuser” rights and permissions – has made it relatively easy for admins to change or modify any of the software code or load custom software on the devices.

However, workarounds have already been reported, with some developers offering device-specific solutions.

Much of the upgraded security in Android L benefits from the containerization technology that underpins Samsung Knox, a four-year development effort the company is using to try to capture the lion’s share of the Android mobile market.

The firm has already spent considerable time shopping its security vision to government, and the military in particular seems to be interested.

The latest signup is the National Security Agency, which recently put Samsung mobile devices and solutions that use Knox onto its Commercial Solutions for Classified program, making them the first consumer devices to be validated to handle classified information. Ironically, this is a what-goes-around-comes-around affair since Samsung Knox uses the Security Enhanced Android specification originally developed by the NSA.

Also, Samsung is notably absent from the list of device manufacturers that have said they would soon be updating their products to Android L.

However, the Korean company has not given over all of Knox’s features for Android L, opting to keep hardware-specific items to itself. That means new and updated Samsung devices will use an operating system that should be at least as secure as those that run the first vanilla versions of Android L.

In other developments on the cybersecurity front …

The National Institute of Standards and Technology recently published first draft recommendations for secure deployment of hypervisors (SP 800-125A). The public comment period runs from October 20 through November 10.

NIST said that although it might appear that securing hypervisors should be based on established practices for server-based software in general, the functionality that hypervisors deliver should be examined in light of two considerations:

  • Hypervisor platform architectural choices – in other words, the way various modules link with each other and the server
  • Hypervisor baseline functions – the core functions that provide the virtualization functionality

The draft contains 22 recommendations in all; it also describes some of the security threats specific to hypervisors and how errors in deployment can leave them open to attack.

Posted by Brian Robinson on Oct 24, 2014 at 11:28 AM