Lollipop or lockdown? What a secure mobile OS means for BYOD

Mobile managers will soon be grappling with new, more secure mobile operating systems, as both Apple and Google have recently rewritten iOS and Android to account for personal as well as enterprise security requirements.

These new OSs will eventually affect the use of mobile devices in government, where administrators are working to balance a culture of security against the irresistible force of bring your own device.

Out of the box, both iOS 8 and Android Lollipop (Android L) have encryption turned on by default. The development has already caused a mild panic in intelligence circles, with the FBI saying it will make cyber investigations much more difficult.

On the other hand, encryption from the start will make it easier for enterprise managers to keep data secure on users’ phones, particularly when employees use their own phones for business purposes.

At the same time, it will put more of an onus on users to maintain their own settings. With Android L, for example, users will have to remember the device PIN that unlocks encryption. Forget it, and the device and its data will have to be wiped and reset, though enterprises will reportedly be able to manage these PINs centrally.

Android L, whose launch is imminent, has a number of other security-based features that should appeal to agency enterprise managers.

Google’s Android Work, a subset of Android L features for mobile device management, will give IT and network administrators more control over how to provision apps for users or groups. Admins will also be able to define policies for how those apps are used and decide which users can access specific apps and data.

This should make it easier for government agencies to safely accommodate BYOD which, even though the phrase itself has lost some cachet, is still a major concern. As an added incentive, new APIs will make it easier for enterprise mobility developers to build Android Work support into their own solutions.
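To make the provisioning idea concrete, here is a minimal sketch. This is not the Android Work API; real policies are set through Google’s and EMM vendors’ management tools, and every name below is hypothetical. The point is simply that per-group app provisioning reduces to a policy lookup:

```python
# Hypothetical policy table mapping user groups to permitted app packages.
# Real Android Work deployments would manage this through EMM tooling.
POLICIES = {
    "finance": {"com.example.expenses", "com.example.mail"},
    "field":   {"com.example.maps", "com.example.mail"},
}

def can_provision(group: str, package: str) -> bool:
    """True if the group's policy permits provisioning the given app."""
    return package in POLICIES.get(group, set())

assert can_provision("finance", "com.example.expenses")
assert not can_provision("field", "com.example.expenses")
```

The value of doing this centrally is that admins, not users, decide which apps reach which devices.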

One concern for some agency developers: tougher security features in Android L are likely to make it harder to root the operating system to meet ad hoc requirements. Rooting – easily obtaining “superuser” rights and permissions – has made it relatively easy for admins to change or modify software code or load custom software on devices.

However, workarounds have already been reported, with some developers releasing device-specific rooting solutions.

Much of the upgraded security in Android L builds on the containerization technology that frames Samsung Knox, a four-year development effort the company is using to try to capture the lion’s share of the Android mobile market.

The firm has already spent considerable time shopping its security vision to government, and the military in particular seems to be interested.

The latest signup is the National Security Agency, which recently put Samsung mobile devices and solutions that use Knox onto its Commercial Solutions for Classified program, making them the first consumer devices validated to handle classified information. There is a what-goes-around-comes-around flavor to this, since Samsung Knox uses the Security Enhanced Android specification originally developed by the NSA.

Also, Samsung devices are notably absent from the list of device manufacturers that have said they would soon be updating their products to Android L.

However, the Korean company has not handed over all of Knox’s features for Android L, opting to keep hardware-specific items to itself. That means new and updated Samsung devices will use an operating system that should be at least as secure as the first vanilla versions of Android L.

In other developments on the cybersecurity front …

The National Institute of Standards and Technology recently published first-draft recommendations for the secure deployment of hypervisors (SP 800-125 A). The public comment period runs from October 20 through November 10.

NIST said that although it might appear that securing hypervisors should simply follow established practices for server-based software in general, the functionality hypervisors deliver should be examined from two angles:

  • Hypervisor platform architectural choices – in other words, the way various modules link with each other and the server
  • Hypervisor baseline functions – the core functions that provide virtualization itself

There are 22 recommendations in all in the draft, which also describes some of the security threats specific to hypervisors and how errors in deployment can leave them open to attack.

Posted by Brian Robinson on Oct 24, 2014 at 11:28 AM


Taking aim at stealthy attacks

By now you no doubt have heard about SandWorm, the cyberespionage campaign against NATO and other high-value targets, attributed by researchers at iSight Partners to Russian hackers.

The researchers have been monitoring the team’s activities since late 2013, but its origins date back as far as 2009. Using spearphishing emails with malicious attachments, the hackers have exploited a zero-day Windows vulnerability and other flaws to compromise military organizations and other Western European government bodies, as well as energy companies, the Ukrainian government and U.S. academic organizations.

It seems to be a textbook example of an advanced persistent threat: the attackers were motivated and well resourced, and the compromises were successful, stealthy and apparently long-lived.

“Though we have not observed details on what data was exfiltrated in this campaign, the use of this zero-day vulnerability virtually guarantees that all of those entities targeted fell victim to some degree,” wrote iSight’s Stephen Ward.

How do agencies defend against such a threat? When the vulnerability is unknown and the malicious code is well hidden, IT managers have to look for active footprints. They have to keep an eye on the traffic entering and leaving their systems and watch what is happening inside them. No matter how stealthy the exploit, it has to activate inside the system, and that is where to spot it and stop it.

That’s the idea behind the Cyber Kill Chain.

The Cyber Kill Chain is based on the military concept of establishing a systematic process to target, engage and defeat an adversary. It relies on the assumption that an adversary will have to carry out specific steps to attack in a given environment.

The Cyber Kill Chain, introduced by Lockheed Martin in 2011, upends the traditional wisdom that an IT defender has to be successful 100 percent of the time, while an attacker has to succeed only once. Under this concept, the attacker has to successfully complete the entire seven-step process, while the defender can defeat him at any point in the chain.

The seven links in the Cyber Kill Chain are:

  1. Reconnaissance: Gathering intelligence to identify a target.
  2. Weaponization: Packaging an exploit in a deliverable payload.
  3. Delivery: Delivering the weapon to the victim, through email, malicious websites, removable media, etc.
  4. Exploitation: Executing the exploit on the victim’s system.
  5. Installation: Installing malware on the target.
  6. Command and control: Opening a channel for remote manipulation of the target system.
  7. Action on objectives: Gathering, exfiltrating or altering data, manipulating systems or other activity against the target.

Breaking an attack into incremental steps rather than looking at it as a binary action – compromised or not compromised – gives the defender many points at which the attack can be identified, targeted, and eliminated or mitigated.
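To see how those chances stack up in the defender’s favor, consider a toy simulation. This is purely illustrative, not from Lockheed Martin’s paper, and the per-stage detection rate is invented; the attacker must evade detection at every one of the seven links:

```python
import random

# The seven kill-chain stages, in the order the attacker must complete them.
STAGES = [
    "reconnaissance", "weaponization", "delivery", "exploitation",
    "installation", "command and control", "action on objectives",
]

def attack_succeeds(per_stage_detection: float) -> bool:
    """The attack succeeds only if it slips past EVERY stage;
    the defender wins by breaking the chain at ANY single link."""
    return all(random.random() >= per_stage_detection for _ in STAGES)

# Even a modest 30 percent chance of detection at each stage stops most
# attacks, because the attacker must survive seven checks in a row.
trials = 100_000
wins = sum(attack_succeeds(0.30) for _ in range(trials))
print(f"Attacker success rate: {wins / trials:.1%}")  # about 0.7**7, or 8.2%
```

Each independent chance to break the chain compounds, which is why even imperfect per-stage defenses add up.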

But it also requires an intelligence-driven approach to defense. That means having visibility into the networks and systems being defended and the ability to analyze data so that anomalies or other patterns being displayed in the attack can be identified.

This is not necessarily easy to achieve, and defending systems against complex or sophisticated attacks will remain challenging.

But tools and services are available, and the government’s move toward continuous monitoring (or continuous diagnostics and mitigation) is a step toward enabling intelligence-driven defense. Attacks and breaches might be inevitable, but cyberdefense is not a game we have to lose.

Posted by William Jackson on Oct 17, 2014 at 10:27 AM


What gives? Shellshock fails to shock

What a difference a few months can make. Shortly after the Heartbleed bug caused a panic in security circles, along comes something that could be even more serious, and the reaction seems to be one big yawn.

The so-called Shellshock vulnerability is in the GNU Bourne-Again Shell (Bash), which is the command-line shell used in Linux and Unix operating systems as well as Apple’s Unix-based Mac OS X. It could allow an attacker to execute shell commands and insert malware into systems.
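The widely circulated check for the original bug (CVE-2014-6271) shows the mechanics: vulnerable versions of Bash would execute commands smuggled in after a function definition stored in an environment variable. Here is that test wrapped in Python for illustration; it assumes a bash binary is installed on the host:

```python
import os
import subprocess

# The canonical Shellshock probe: an environment variable whose value is a
# Bash function definition followed by an extra command. A patched Bash
# ignores the trailing command; an unpatched one runs it on startup.
env = dict(os.environ, x="() { :;}; echo vulnerable")

result = subprocess.run(
    ["bash", "-c", "echo this is a test"],
    env=env, capture_output=True, text=True,
)

if "vulnerable" in result.stdout:
    print("This Bash executes injected commands (CVE-2014-6271).")
else:
    print("No sign of the original Shellshock bug.")
```

The same probe has circulated as a shell one-liner (`env x='() { :;}; echo vulnerable' bash -c "echo this is a test"`); patched systems print only the test string.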

This is not a vulnerability in concept only. Trend Micro, which has been looking for threats based on Shellshock, has already identified a slew of them and says services reached through common protocols such as HTTP, SMTP, SSH and FTP can all serve as attack vectors.

Shellshock echoes the threat posed by the OpenSSL Heartbleed bug in several ways. Servers that host OpenVPN, a widely used application, are apparently vulnerable to Shellshock just as they were to Heartbleed, and other security researchers have reported exploits.

The two bugs also come from similar development stock. Both are faults in code written early in a program’s life that went unnoticed for a long time – apparently over 20 years in the case of Shellshock. Developers simply didn’t anticipate the kinds of attacks today’s threat environment can mount, which has brought the rigor of open source development into question.

Patches are being rushed out to cope with Shellshock, just as they were with Heartbleed, though security organizations have warned that the initial fixes don’t completely resolve the vulnerability. And, anyway, much depends on what people do with those fixes. Months after the Heartbleed bug was trumpeted in the headlines, critical systems around the world were still at risk.

Not all vulnerabilities are equal

Then again, perhaps organizations aren’t as vulnerable to Heartbleed, Shellshock and similar code-level bugs as people think. University and industry researchers argued in a recent paper that existing security metrics don’t capture the extent to which vulnerabilities are actually exploited.

The researchers developed several new metrics derived from field data, evaluating them against some 300 million intrusion reports from over 6 million hosts. They found that none of the products in their study had more than 35 percent of its disclosed vulnerabilities exploited in the wild, and that across all the products combined only 15 percent of vulnerabilities were exploited.

“Furthermore,” the authors wrote, “the exploitation ratio and the exercised attack surface tend to decrease with newer product releases [and that] hosts that quickly upgrade to newer product versions tend to have reduced exercised attack surfaces.”

In all, they propose four new metrics that they claim, when added to existing metrics, provide a necessary measure for systems already deployed and working in real-world environments (a toy sketch follows the list):

  • A count of vulnerabilities in the wild.
  • The exploitation ratio, or the share of a product’s disclosed vulnerabilities that are exploited over time.
  • A product’s attack volume, or how frequently it’s attacked.
  • The exercised attack surface, or the portion of a product’s vulnerabilities that are attacked in a given month.
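As a toy illustration of how three of these metrics might be computed – the data below is invented, and the definitions are paraphrased from the paper’s descriptions, so treat this as a sketch rather than the authors’ code:

```python
from collections import Counter

# Invented field data: disclosed CVEs for one product, plus observed
# exploitation events recorded as (cve, month) pairs.
disclosed = {"CVE-1", "CVE-2", "CVE-3", "CVE-4"}
attacks = [("CVE-1", "2014-09"), ("CVE-1", "2014-10"), ("CVE-3", "2014-10")]

# Exploitation ratio: share of disclosed vulnerabilities ever exploited.
exploited = {cve for cve, _ in attacks}
exploitation_ratio = len(exploited) / len(disclosed)

# Attack volume: how frequently the product is attacked, per month.
attack_volume = Counter(month for _, month in attacks)

# Exercised attack surface: portion of disclosed vulnerabilities
# attacked in a given month.
october = {cve for cve, month in attacks if month == "2014-10"}
exercised_surface = len(october) / len(disclosed)

print(f"Exploitation ratio: {exploitation_ratio:.0%}")      # 50%
print(f"Attack volume: {dict(attack_volume)}")              # {'2014-09': 1, '2014-10': 2}
print(f"Exercised surface (Oct): {exercised_surface:.0%}")  # 50%
```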

These metrics, they say, could be used as part of a quantitative assessment of cyber risks and can inform the design of future security technologies.

Don’t forget the hardware

Then again, what’s the use of vulnerability announcements and security metrics, all aimed at revealing software bugs and fixes, if the hardware that hosts the software is compromised?

In times past, when chips and the systems that use them were all manufactured in the United States or by trusted allies, that wasn’t such a concern. But globalization has diversified manufacturing to China and other countries, raising fears that adversaries could tamper with hardware components to make U.S. systems easier to attack.

That’s been the impetus behind several trusted computing initiatives in the past few years. Most recently, the National Institute of Standards and Technology developed its Systems Security Engineering initiative to help guide the building of trustworthy systems.

The National Science Foundation is now in the game through the government’s Secure, Trustworthy, Assured and Resilient Semiconductors and Systems (STARSS) program. One approach, pursued in concert with the Semiconductor Research Corporation (SRC), is to develop tools and techniques that assure components’ security from the design stage through manufacturing.

Nine initial research awards were recently made for this program, which is a part of the NSF’s $75 million Secure and Trustworthy Cyberspace “game changing” program.

While all of this is pretty broad-based, the ultimate result for government agencies could be that, in just a few years, they will be able to specify in their procurements exactly what assured hardware the computing systems they buy need to contain. 

Posted by Brian Robinson on Oct 10, 2014 at 11:52 AM


Hoping higher FISMA scores mean more than compliance

The news in government cybersecurity is not all bad.

Following a slip in compliance scores for IT security requirements in fiscal 2012, scores rebounded in FY 2013. And a new emphasis on continuous monitoring and authorization of IT systems – together with a program to provide the necessary tools for the job – could mean that things will get a little better when the results are in for the fiscal year just ended.

The overall state of government cybersecurity is judged by compliance with the Federal Information Security Management Act, and the scorecard is the Office of Management and Budget’s annual report to Congress on FISMA compliance. In the report for FY 2012, released in early 2013, overall compliance slipped from 75 percent in FY 2011 to 73 percent.

In the report for FY 2013 however, overall performance jumped to 81 percent, “with significant improvements in areas such as the adoption of automated configuration management, remote access authentication and email encryption.”

I am the first to admit that FISMA compliance – or compliance with any standards – does not equate to security. But the reports provide a useful baseline and indicate that agencies are paying attention to their security and the maturity of their programs.

Patrick Howard, former chief information security officer for the Nuclear Regulatory Commission and the Department of Housing and Urban Development (and now the program manager for continuous diagnostics and mitigation (CDM) at Kratos Defense), points out that the most recent results show that agencies still are struggling to develop long-term security plans, and he expects to see this again for FY 2014. “That’s nothing new,” he said. “We’ve been seeing that for years.”

But there are some reasons to believe – or at least hope – that there will be continued improvement. The latest report cited an improvement in meeting cross-agency performance goals, including trusted Internet connections, strong authentication and continuous monitoring. And there will be a stronger emphasis on continuous monitoring in the next evaluations.

In November 2013, OMB Memo M-14-03 set a timeline for agencies to move from static reauthorization of IT systems every three years to continuous monitoring and ongoing reauthorization. Agencies were to have a strategy for information security continuous monitoring (ISCM) in place by Feb. 28, 2014, begin cooperating with the Homeland Security Department to implement the plans and begin procuring products and services through the DHS CDM program. Agencies will be evaluated on their compliance with these requirements in their 2014 FISMA reviews.

Challenges to fully implementing these ISCM goals remain, of course. DHS has not yet established a governmentwide ISCM dashboard, as called for in the memo. And the CDM program, which provides a source for procuring tools and services through a blanket purchase agreement at the General Services Administration, still is a work in progress.

Two of the six task orders under Phase 1 of CDM have been released for industry quotes, and the remaining four are expected in fiscal 2015. Phase 2 of the CDM program still is being developed. Howard says there is a lack of awareness among many agencies about the continuous monitoring services available under CDM and that many are waiting to see what happens with the second task order before implementing these services.

I am hopeful that the increased resources and attention on continuous monitoring – both in formal programs and in the security community in general – will help continue the upward trend in FISMA scores, however. Higher scores might not mean that agency IT systems are more secure, but they couldn’t hurt.

Posted by William Jackson on Oct 03, 2014 at 12:33 PM