Happy birthday HSPD-12; there’s still a long way to go

This month marks the 10th anniversary of Homeland Security Presidential Directive 12 mandating the development and use of an interoperable smart ID card for civilian government employees and contractors. The results of the program so far range from the impressive to the disappointing.

“I would call the programmatic platform a huge success,” said Ken Ammon, chief strategy officer for Xceedium, an ID management software vendor.

As of the first quarter of this year, 5.4 million Personal Identity Verification (PIV) cards have been issued to civilian employees and contractors, accounting for 96 percent of those who need the cards. Given employee turnover and the need to periodically reissue the cards, the coverage is quite good.

The challenge now is getting the cards used as intended: as strong, two-factor authentication for both logical and physical access across agencies. That is a multifaceted challenge that is proving to be a much tougher nut to crack than designing and issuing the cards.

HSPD-12 was issued Aug. 27, 2004, by then-President George W. Bush. The heart of the mandate was simple. Inconsistencies in government IDs left the government vulnerable to terrorist attack. “Therefore, it is the policy of the United States to enhance security, increase government efficiency, reduce identity fraud and protect personal privacy by establishing a mandatory, governmentwide standard for secure and reliable forms of identification issued by the federal government to its employees and contractors (including contractor employees).”

The National Institute of Standards and Technology was given six months to produce the standards, which included identity vetting and secure, interoperable digital technology. Eight months after that, agencies would have to require use of the cards, “to the maximum extent practicable,” for access both to physical facilities and IT systems.

The first part of this effort, developing the standard and technical specifications and designing, producing and issuing the PIV cards, is the programmatic success Ammon cited. But the second part, the qualification “to the maximum extent practicable,” has proved to be a speed bump.

Seven years after the directive, the Government Accountability Office concluded in 2011 that although substantial progress had been made in issuing PIV cards and fair progress in using them for physical access to government facilities, only limited progress had been made in using them for access to government networks and minimal progress in cross-agency acceptance.

A year later, increasing the use of PIV and the military’s Common Access Card credentials was identified by the White House as a priority area for improvement. Agencies were given until March 31, 2012, to develop policies for the use of these credentials.

The reasons GAO cited for the lack of widespread use were not technical but administrative: logistics, agency priorities and, of course, budgets. “According to agency officials, a lack of funding has . . . slowed the use of PIV credentials,” the report stated.

But technology also is an issue, as the card is only one element in any authentication system. Use of the electronic credentials in the cards has to be incorporated into systems already in place, or those systems must be replaced. Under a 2011 White House directive, all new systems under development at agencies must be enabled for PIV credentials and existing systems were to be upgraded by fiscal 2012.
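
To make that concrete, here is a minimal sketch in Python of one small piece of what “PIV-enabling” a system means in practice: checking that the authentication certificate read from a user’s card is within its validity window and was signed by a trusted issuing CA. It uses the open-source cryptography library; the certificate contents and trust anchor are assumed for illustration, and a real deployment would also check revocation status and validate the full Federal PKI chain.

```python
# Sketch only: validate a PIV authentication certificate against its issuing CA.
# Assumes PEM-encoded inputs and an RSA issuer key; revocation checking omitted.
from datetime import datetime

from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import padding

def piv_cert_is_acceptable(cert_pem: bytes, issuer_pem: bytes) -> bool:
    """Accept the card's auth certificate only if it is currently valid and
    carries a valid signature from the expected issuing CA."""
    cert = x509.load_pem_x509_certificate(cert_pem)
    issuer = x509.load_pem_x509_certificate(issuer_pem)

    now = datetime.utcnow()
    if not (cert.not_valid_before <= now <= cert.not_valid_after):
        return False                      # expired or not yet valid

    try:
        # Verify the CA's signature over the to-be-signed portion of the cert.
        issuer.public_key().verify(
            cert.signature,
            cert.tbs_certificate_bytes,
            padding.PKCS1v15(),
            cert.signature_hash_algorithm,
        )
    except InvalidSignature:
        return False
    return True
```

Every agency login path (VPN, web application, workstation) has to perform roughly this kind of check, plus revocation and chain validation, before it can accept the card in place of a password, which is part of why retrofitting existing systems has been slow.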

Like many unfunded mandates, this has been a tough one to meet. And in the meantime, technology keeps changing. Mobile computing, for instance, means that many government workers are using tablets and cell phones for work. Technically these should require PIV authentication for government work, but many are not equipped to accommodate that.

HSPD-12 is not a failure, but it could be doing a lot better if strong, two-factor authentication were a higher priority within agencies. However, the rapid pace of technological change makes it unlikely that any government-mandated technology will ever be completely successful. Even so, much more could be done.

Posted by William Jackson on Aug 22, 2014 at 10:16 AM


Will Knox tip government buyers toward Android?

On a global basis, Android devices far outsell those that use other operating systems. But it’s been a much different story in government, where Apple has become a preferred mobile device supplier in many cases and where BlackBerry still has a strong presence.

The situation is caused mainly by perceptions that Android security is suspect. But that may finally be changing, based on work by Samsung, the leading smartphone supplier. Its Knox containerization technology, under development for four years, seems to be gaining traction across the federal, state and local markets.

Now government pilot projects are being launched and, according to Samsung, are attracting potential users eager to see how they can apply the technology.

“We have numerous examples of where agencies are willing to enter into those initial presentation pilots,” said Johnny Overcast, director of government sales for Samsung Mobile. “We’ve been working with major executive branch agencies in particular for some time, and there have already been significant purchases of Samsung Knox.”

The Defense Department was one of the first to get on board with Android, and Samsung in particular. In May 2013, the Defense Information Systems Agency announced Security Technical Implementation Guides (STIGs) for mobile devices aimed at getting the technology into the hands of military users as quickly as possible. The STIGs describe the security policy and configuration requirements for government-issued devices, including those that use Samsung Knox.

More recently, the Army announced it would use Samsung Galaxy Note II smartphones as the end user device in its Nett Warrior program, whose goal is to give front-line soldiers advanced situational awareness capabilities.

In March, the DOD granted the Samsung Knox Hypervisor virtualization technology an Authority to Operate on sensitive networks.

The departments of Justice and Homeland Security have also bought into the Knox hardened approach for Android, along with various three-letter intelligence agencies.

To some extent, Samsung Knox closes a circle, since it uses the Security Enhanced (SE) Android specification developed by the National Security Agency, which prevents users without proper permission from gaining access to the secure container. SE Android also extends the NSA’s SELinux mandatory access controls into the Android operating system. The circle will close even further when Google integrates elements of Samsung Knox into Android L, the next-generation version of the operating system that had its beta release in June.

Samsung Knox adds to the security capabilities that Android already has, Overcast pointed out. Vanilla Android, an install of Android without customization, already offers discretionary access controls, and the Knox platform adds such things as a trusted boot process.

That trusted boot uses the TrustZone-based Integrity Measurement Architecture (TIMA) to continually scan the device, applying a cryptographic check to make sure that what’s being loaded onto the device is authorized.
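
As a rough illustration of that kind of check (not Samsung’s actual implementation), an integrity-measurement step amounts to hashing the image about to be loaded and comparing the result with a known-good measurement kept in trusted storage. The file path and reference value below are hypothetical:

```python
# Sketch of a measured-boot style integrity check: hash the image and compare
# it with a reference measurement. The reference value here is a placeholder.
import hashlib

KNOWN_GOOD_SHA256 = "replace-with-provisioned-reference-digest"

def image_is_authorized(image_path: str) -> bool:
    h = hashlib.sha256()
    with open(image_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):   # hash in 1 MB chunks
            h.update(chunk)
    return h.hexdigest() == KNOWN_GOOD_SHA256
```

The point of running such checks from the TrustZone secure world is that malware in the normal operating system cannot easily tamper with the measurement or the reference value.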

With the technical basis for Samsung Knox increasingly accepted by users, Overcast said the company is focusing on broadening the choices those users will have. It now provides for multiple user domains on a device, for example, and the ability for users to choose what kind of container technology they have. With Knox’s new multiuser framework, administrators can also select what permissions and applications can be used with specific containers.

This is all in preparation for what Overcast said he sees as a tipping point in the government mobile markets, when agencies get beyond fundamental questions about security and instead look to the kinds of devices that will help them best execute their missions and provide better services to citizens.

And that, he said, is not too far into the future.

Posted by Brian Robinson on Aug 15, 2014 at 11:07 AM


GSA makes room at the table for the CISO

The General Services Administration has spelled out a new policy for agency IT projects to ensure that basic principles promoting economy, efficiency and transparency are integrated into technology solutions developed for or operated by GSA.

Included in the IT Integration policy issued July 24 are requirements that cybersecurity be incorporated into IT projects from the beginning and that the appropriate security team has a place at the table during planning.

“One of the largest challenges for GSA IT is early and consistent engagement with the IT security team throughout the project to understand what security requirements apply, who needs to be engaged to assist in implementation and how this impacts the project schedule,” agency CIO Sonny Hashmi wrote in the instruction letter.

With the cyber threat landscape growing in intensity and sophistication, security no longer can be layered on in IT projects as an afterthought, Hashmi explained in a blog post. “This principle will require that the GSA Office of the Chief Information Security Officer acts as a consultant and partner throughout the project life cycle, rather than being viewed as a compliance step towards the end of the project,” he wrote.

Hashmi also spelled out another principle that could help significantly improve cybersecurity: platform reuse first. That is, GSA will give priority to leveraging existing platforms for new services over building new systems.

Cybersecurity is just one part of the new GSA directive. It also includes compliance with the federal cloud-first policy and requirements for a GSA open-source-first policy as well as for single sign-on, online delivery of services, records management and better stewardship of procurements.

But I am focusing on the security requirements. IT security has been designated by the Government Accountability Office as a high-risk area for all executive branch agencies since 1997. This is not so much because there has been no improvement in security, but because government dependence on IT continues to increase as systems become more complex, making it difficult for administrators to keep up with security requirements.

Ensuring that security is included from the earliest stages of planning and development could help change this. Reusing existing platforms to reduce the number of new projects being developed also could improve security by allowing administrators to concentrate on a smaller number of legacy systems with a known security profile. Expanding the use of existing platforms does not guarantee their security, of course. Expansion and repurposing will require new evaluations and new controls to make sure they meet risk management requirements. But if done well, this could be more efficient than constantly bringing new systems online.

The new policies apply to all new GSA projects, regardless of size, to all enhancements of existing systems that are over the $150,000 threshold for simplified acquisition and to any cloud acquisition or blanket purchase agreement regardless of value. Failure to follow policy could result in project termination.

Like any policy, GSA’s IT Integration policy could devolve into a morass of paperwork and checkboxes that achieves little or nothing. But if the new policy cuts through existing bureaucracy rather than adding new layers, it could be a step toward improving the agency’s cybersecurity.

Posted by William Jackson on Aug 08, 2014 at 10:06 AM


Next-gen cybersecurity means anticipating threats

The recent announcement of a forward-looking cyberthreat tool from the Georgia Tech Research Institute (GTRI) is an example of a developing trend in security: using broad-based data that the bad guys themselves put out in order to get ahead of threats. It’s also a tacit admission that security based solely on reacting to threats does not, and will not, work.

The GTRI tool, called BlackForest, collects information from the public Internet, such as hacker forums and other places where those bad guys gather to swap information and details about the malware they write and sell. It then relates that information to past activities and uses all of that collated intelligence to warn organizations of potential threats against them – and, once attacks have happened, to show how to make their security better.

Ryan Spanier, the head of GTRI’s Threat Intelligence Branch, said the intention is to give organizations some kind of predictive ability so that, if they see certain things happening, they’ll know they may need to take action to protect their networks.
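
In broad terms, the correlation step such a tool performs can be sketched as matching indicators harvested from open sources against an organization’s own assets and its records of past incidents, then flagging any overlap as an early warning. The sketch below is a simplified illustration, not BlackForest’s design; the data structures and sample values are made up.

```python
# Simplified illustration of open-source threat correlation (not BlackForest).
from collections import namedtuple

Indicator = namedtuple("Indicator", "value source")   # e.g., a domain seen in a forum post

def early_warnings(harvested, org_assets, past_incident_indicators):
    """Return harvested indicators that mention the organization or echo past attacks."""
    warnings = []
    for ind in harvested:
        if any(asset in ind.value for asset in org_assets):
            warnings.append((ind, "mentions an organizational asset"))
        elif ind.value in past_incident_indicators:
            warnings.append((ind, "matches an indicator from a prior incident"))
    return warnings

# Example with made-up data:
harvested = [Indicator("vpn.example-agency.gov", "forum post"),
             Indicator("evil-dropper.xyz", "malware listing")]
print(early_warnings(harvested, {"example-agency.gov"}, {"evil-dropper.xyz"}))
```

The hard part, of course, is the collection and curation that feeds such a matcher, which is where tools like BlackForest aim to add value.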

These and similar tools are badly needed. The CyberEdge Group, in its 2014 Cyberthreat Defense Report, found that more than a quarter of the organizations it surveyed had no effective foundation for threat defense. Overall, investment in those next-generation tools that could be most effective against advanced threats is still “fairly low.”

In addition, it said, because of the speed at which threats are deployed these days, the relative security and confidence of today can be gone tomorrow, and IT security teams can only make educated guesses at what attackers will try next, and where they will try it. The bottom line, it said, is that maintaining effective cyberthreat defenses not only requires constant vigilance, “but also an eye on the road ahead.”

It’s something both government and industry organizations are starting to push with more urgency. Greg Garcia, the former head of cybersecurity and communications at the Department of Homeland Security, recently said he expects to see more investment in tools that will help banks and financial institutions anticipate emerging risks. As the new executive director at the Financial Services Sector Coordinating Council for Critical Infrastructure Protection and Homeland Security, he knows how important that will be for an industry that is a primary target for cyberattacks.

The National Institute of Standards and Technology is also trying to push government agencies in that direction. In the first iteration of a cybersecurity framework it published in February this year, NIST listed four levels at which the framework could be implemented and which would “provide context on how an organization views cybersecurity risk and the processes in place to manage that risk.”

The highest level, Tier 4, is labeled Adaptive and describes an organization that “actively adapts to a changing cybersecurity landscape and responds to evolving and sophisticated threats in a timely manner” and has “continuous awareness of activities on their systems and networks.” Though NIST takes pains to say that the tiers don’t represent actual maturity of cybersecurity defenses, it also says agencies should be “encouraged” to move to higher levels.

The methodology GTRI uses for BlackForest is not that new to the security field, at least in broad terms. Security companies have for years trawled global networks to identify threats and develop defenses against them, and that’s the basis for the regular updates of antivirus signatures they send to their customers. As CyberEye recently pointed out, however, those techniques are becoming less effective and are all but useless against the most sophisticated, and most damaging, kinds of malware.

Success for organizations in the future will not be based on how many attackers they can keep out of their networks and systems, but on how fast and how effectively they can detect and respond to attacks that are already on the inside. That’s the thinking behind the rush to big data analytics, which organizations are betting will enable that kind of timely response. Gartner believes that, by 2016, fully 25 percent of large companies around the world will have adopted big data analytics for that purpose.
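
As a toy example of that detect-and-respond analytics, assume flow logs have already been summarized into daily outbound byte counts per host (the host names, values and threshold below are all hypothetical). Flagging hosts whose traffic jumps far above their historical baseline is one of the simplest analytics an organization might run:

```python
# Toy baseline-deviation check over summarized flow logs (illustrative only).
from statistics import mean, pstdev

def flag_exfil_candidates(history, today, sigma=3.0):
    """history: {host: [daily outbound byte counts]}; today: {host: bytes so far today}."""
    flagged = []
    for host, baseline in history.items():
        mu, sd = mean(baseline), pstdev(baseline)
        if sd and today.get(host, 0) > mu + sigma * sd:
            flagged.append(host)
    return flagged

history = {"ws-101": [120e6, 98e6, 110e6, 105e6],
           "db-07":  [2.0e9, 2.1e9, 1.9e9, 2.05e9]}
print(flag_exfil_candidates(history, {"ws-101": 5.2e9, "db-07": 2.0e9}))   # -> ['ws-101']
```

Real deployments apply far richer models across many more data sources, which is where the big data investment comes in.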

Whether or not BlackForest and similar tools provide the level of security their developers promise remains to be seen. After all, attackers have proven they are just as intelligent and creative as defenders. But these tools do indicate the direction security needs to go, because the regular way of doing things just ain’t working.

Posted by Brian Robinson on Aug 01, 2014 at 10:55 AM