
Security best practices at the root of FISMA amendments

A bill updating federal information security requirements has passed unanimously in the House and now awaits action in the Senate, raising the possibility that Congress might actually enact some kind of cybersecurity legislation.

The Federal Information Security Amendments Act of 2013 would require agencies to take a risk-based approach to information security, using automated tools for continuous monitoring of civilian, military and intelligence IT systems. It essentially would bring the Federal Information Security Management Act into line with the best practices agencies already are adopting.

Like the current FISMA, it would require annual reports to Congress, and it would be congressional oversight that ultimately would determine its success in improving federal cybersecurity. The question is: Will Congress continue to grade agency performance based on paperwork compliance, or will it measure actual security?

The bill was introduced by Rep. Darrell Issa (R-Calif.) with five bipartisan cosponsors to “provide a comprehensive framework for ensuring the effectiveness of information security controls,” and “effective governmentwide management and oversight of the related information security risks,” for both civilian and national security systems.

It is technology agnostic, leaving the selection of the appropriate hardware and software up to each agency based on guidance and standards developed by the National Institute of Standards and Technology. It defines “adequate security” as “security commensurate with the risk and magnitude of the harm resulting from the unauthorized access to or loss, misuse, destruction or modification of information.”

The bill gives a nod to cloud computing by including services in its definition of systems. NIST would develop standards in cooperation with security agencies, including the National Security Agency, “to assure, to the maximum extent feasible, that such standards and guidelines are complementary with standards and guidelines developed for national security systems,” although the Defense Department and CIA would continue to oversee their own systems. Each agency would have a chief information security officer, either the CIO or a senior official reporting directly to the CIO.

None of this is radically different from FISMA as it now stands, and nothing in the current law prohibits the use of these tools and processes. But FISMA has remained mired in paperwork documenting compliance with the letter of the law rather than improving cybersecurity. And much of the fault for that lies with Congress.

In the early days of FISMA there was a lot of basic and remedial work to be done. Agencies had to create accurate inventories of IT systems, determine their condition and authorize their operation. That authorization did not certify that the systems were secure, only that the agency understood the risks of operating them and accepted those risks.

These were necessary tasks and important steps toward effective security. But FISMA has struggled to get past this stage because it is easier to measure paperwork compliance than security status. Harried administrators and security teams worked diligently to keep Congress off their backs and devoted what resources were left to improving security.

A focus on establishing priorities and automating processes has improved security in recent years, although agencies still struggle to keep up with the bad guys. Codifying these efforts could help if Congress can find a way to measure results rather than process.

Posted by William Jackson on Jun 14, 2013 at 9:39 AM



Those meters that rate password strength work, until they don't

We know the limitations of passwords: They are difficult to scale, and managing truly secure passwords is a headache for administrators and end users. We also know that although there are alternate technologies for online authentication, passwords probably are here to stay.

“Passwords are not going to disappear overnight, or in the next 10 years or 20 years,” said Lujo Bauer, assistant research professor in Carnegie Mellon University’s Electrical & Computer Engineering Department.

So how to make the best of what we are stuck with? One tool increasingly common on public- and private-sector websites is the strength meter, an alternative to stringent password policies that aims to nudge users toward better security. As a user creates a password, the meter provides feedback, such as whether the password is “weak,” “good” or “strong.”
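To make the idea concrete, here is a minimal sketch of such a meter in Python. The scoring rules (length thresholds plus character-class variety) are illustrative assumptions, not the criteria used by any real website or by the meters tested in the Carnegie Mellon study.

```python
# Minimal sketch of a password strength meter. The scoring rules below
# are illustrative assumptions, not the criteria of any real site or of
# the meters used in the study.
import string

def rate_password(password: str) -> str:
    score = 0

    # Reward length, which matters more than most users assume.
    for threshold in (8, 12, 16, 20):
        if len(password) >= threshold:
            score += 1

    # Reward mixing character classes beyond the first one used.
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    if any(classes):
        score += sum(classes) - 1

    if score <= 1:
        return "weak"
    if score <= 3:
        return "good"
    return "strong"

if __name__ == "__main__":
    for pw in ("password", "Tr0ub4dor", "correct horse battery staple"):
        print(f"{pw!r}: {rate_password(pw)}")
```

Even this toy version reflects the broader point of the research: a long passphrase can outrank a shorter password packed with symbols.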

But a study of these tools at Carnegie Mellon suggests that you can only push users so far before you hit the point of diminishing returns.

Using the meters resulted in longer, sometimes better, passwords. But, “there seems to be a limit to the stringency that a user will tolerate,” researchers found. “Were meters too stringent, users might just give up.”

Percentage of passwords broken after 5 trillion guesses:

  • No strength meter: 46.7%
  • Baseline strength meter: 39.4%
  • Meter requiring eight letters, numerals and other characters for a top score: 39.2%
  • Meter requiring 16 letters for a top score: 33.7%
  • Meter awarding only half the score of other meters: 26.3%
  • Meter awarding only one-third the score of other meters: 27.9%

Source: “How Does Your Password Measure Up? The Effect of Strength Meters on Password Creation”

The findings are significant not because they are unexpected — they’re not — but because this apparently is the first large-scale study of a technology that is widely used but not well understood.

Bauer and colleagues at Carnegie Mellon conducted the study with 2,931 subjects who created passwords on sites using one of 14 types of meters with different displays and criteria for determining strength. The only requirement was that the password be at least eight characters long. Strength was evaluated using a simulated password-guessing algorithm, and participants returned to the test site two days later to see how well they remembered their passwords.
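The evaluation idea can be illustrated with a toy example. The sketch below scores a password by how many guesses a very naive cracker needs to hit it; the wordlist and suffix rules are made-up assumptions and are vastly simpler than the guessing algorithm the researchers actually used, but the principle of measuring strength by guess number is the same.

```python
# Toy illustration of measuring password strength by guess number.
# The wordlist and suffix rules are made-up assumptions; the study used a
# far more sophisticated guessing algorithm and trillions of guesses.
WORDLIST = ["password", "letmein", "monkey", "dragon", "baseball"]
SUFFIXES = ["", "1", "123", "!", "2013"]

def candidate_guesses():
    """Yield guesses in the order a naive cracker might try them."""
    for word in WORDLIST:
        for suffix in SUFFIXES:
            yield word + suffix
    for word in WORDLIST:
        for suffix in SUFFIXES:
            yield word.capitalize() + suffix

def guess_number(target: str):
    """Return how many guesses it takes to hit target, or None if it survives."""
    for n, guess in enumerate(candidate_guesses(), start=1):
        if guess == target:
            return n
    return None

for pw in ("password123", "Dragon2013", "correct horse battery staple"):
    print(f"{pw!r}: {guess_number(pw)}")
```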

All of the strength meters resulted in users creating longer, more complex passwords than those created on sites with no meter. But length does not equal strength. Only users at sites using two very stringent meters produced passwords that were significantly more difficult to break.

However, security reached a plateau on the site with the most stringent meter, which gave users very low scores — grading at a rate of one-third of other meters — and required more complexity to get a strong security rating. Apparently the higher requirements frustrated users who gave up trying to please the meter.

Interestingly, the ability to remember a password two days later did not vary significantly according to its strength.

The lesson: Don’t push users too far; take the annoyance factor into account when having users create new passwords.

Bauer, who studies access control systems, had some other practical recommendations for making the most of passwords:

  • Strong passwords do not have to be hard to use. Combinations of words — pass phrases — can provide a high level of security while being easy to remember.
  • Length is a more effective requirement for producing strong passwords than the use of numerals and special characters. Requiring 16 letters tends to produce a stronger password than requiring a combination of eight letters, numbers and other characters (see the rough comparison after this list).
  • Instruction can have a significant impact on password strength. Explain to users why a strong password is needed and what makes it strong.
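
A quick back-of-the-envelope calculation shows why the second recommendation holds. The sketch below compares the raw search space of a 16-letter password with that of an eight-character password drawn from the full keyboard. The alphabet sizes are assumptions for illustration, and real attackers exploit dictionaries and patterns rather than brute force, so the practical gap is smaller, but it still favors length.

```python
# Rough search-space comparison. Alphabet sizes are assumptions:
# 26 lowercase letters vs. roughly 95 printable ASCII characters.
LETTERS = 26
FULL_KEYBOARD = 95

sixteen_letters = LETTERS ** 16    # about 4.4e22 combinations
eight_mixed = FULL_KEYBOARD ** 8   # about 6.6e15 combinations

print(f"16 letters:         {sixteen_letters:.2e}")
print(f"8 mixed characters: {eight_mixed:.2e}")
print(f"The 16-letter space is roughly {sixteen_letters / eight_mixed:,.0f} times larger")
```

The 16-letter space works out to be several million times larger, even before counting uppercase letters or spaces in a passphrase.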

Posted by William Jackson on Jun 11, 2013 at 9:39 AM



The hack-back vs. the rule of law: Who wins?

Cyberspace has often been compared to the Wild West, where six-gun law and posse justice prevailed against rustlers and claim jumpers. But beware of calls for vigilante justice for cyber criminals.

The concept of protecting yourself assertively online is not new. The active defense company CrowdStrike advocates strategies with “flexibility of response actions,” including “deception, containment, tying up adversary resources and creating doubt and confusion while denying them the benefits of their operations.”

The subject has gained new visibility lately with the publication of a report from an independent Commission on the Theft of American Intellectual Property.

The commissioners, who include former high-level intelligence, defense and diplomatic officials, offer a balanced set of recommendations to address the problem of intellectual property theft, which they say could be costing our economy hundreds of billions of dollars annually.

The key is raising the cost for the thieves, they say. “IP theft needs to have consequences, with costs sufficiently high that state and corporate behavior and attitudes that support such theft are fundamentally changed.” Their recommendations include making IP theft a national security issue and strengthening law enforcement and other legal responses; strengthening government acquisition requirements and supply chain security; promoting the rule of law; strengthening diplomatic efforts; and improving cybersecurity.

But what gets attention in the 100-page report is the comment that, “without damaging the intruder’s own network, companies that experience cyber theft ought to be able to retrieve their electronic files or prevent the exploitation of their stolen information.” And, “both technology and law must be developed to implement a range of more aggressive measures that identify and penalize illegal intruders into proprietary networks, but do not cause damage to third parties.”

Hardly a call to vigilante justice. But the Center for Strategic and International Studies’ James A. Lewis offers a warning about taking private retaliation too far. “This is a remarkably bad idea that would harm the national interest,” he says in a recent commentary.

Lewis is a strong advocate for global norms of behavior in cyberspace and use of diplomacy to address international issues. Patience is a safer and more practical way to effect change than direct action, he says. For government to allow private retaliation through means that otherwise would be illegal would undercut U.S. efforts to foster international norms and respect for the rule of law. As a practical matter it could expose U.S. citizens to prosecution under foreign and international law, and there could be other — possibly more embarrassing — consequences.

“In a contest over who can go further in violating the law, despite the bluster of some in the high-tech community, private citizens are no match for the Russian mafia, the Russian Federal Security Service or the People’s Liberation Army in China,” Lewis writes. “This is not a contest American companies can win.”

In the face of increasing cyber threats there is an understandable pent-up desire for an active response, but this response should not cross legal thresholds. In the end, we either have the rule of law or we don’t. That others do not respect this rule does not excuse us from observing it. Admittedly this puts public- and private-sector organizations and individuals at a short-term disadvantage while correcting the situation, but it’s a pill we will have to swallow.

 

Posted by William Jackson on May 31, 2013 at 9:39 AM



Built-in security could start with a common lexicon

It makes sense to buy products and services with some degree of security built in rather than to add security piecemeal as vulnerabilities are found. That is one of the goals of an interagency working group developing plans for cybersecurity requirements in federal acquisitions.

The Joint Working Group on Improving Cybersecurity and Resilience through Acquisition, a cooperative effort of the Defense and Homeland Security departments headed by the General Services Administration, has issued a request for information on how best to include cybersecurity requirements in contracts. Such requirements are not entirely absent from the Federal Acquisition Regulation, but the working group is tasked with making them more consistent — both across government and with industry requirements — and focusing them on risk management rather than boilerplate contract language.

Not that language isn’t important. “The importance of common language cannot be overstated,” the RFI says. “It is apparent that a common lexicon is one of the critical gaps in harmonizing federal acquisition requirements related to cybersecurity.”

Efforts to develop a common lexicon are underway. DHS’ National Initiative for Cybersecurity Careers and Studies, for example, has a cybersecurity glossary intended “to enable clearer communication and common understanding of cybersecurity terms, through use of plain English and annotations on the definitions.” The question is whether a common lexicon can be applied consistently to the acquisition process.

The acquisition effort is part of a presidential initiative, undertaken in the face of congressional gridlock, to improve government and critical infrastructure cybersecurity. A voluntary framework for privately owned critical infrastructure systems is being developed, but additions to the FAR would be mandatory for agencies, although the changes are not expected to be a one-size-fits-all set of requirements.

The working group was formed under Executive Order 13636 on Improving Critical Infrastructure Cybersecurity and Presidential Policy Directive-21 on Critical Infrastructure Security and Resilience, both issued in February. According to the RFI, one of the goals of the orders is to “provide and support government-wide contracts for critical infrastructure systems and ensure that such contracts include audit rights for the security and resilience of critical infrastructure.”

The working group’s job is to make recommendations on the feasibility, benefits and merits of incorporating security standards into contracting requirements. The recommendations are expected to lay the foundation for any standards.

The working group wants to identify internal conflicts between different government cybersecurity requirements as well as conflicts with industry and international standards. Some of the issues it is asking for feedback on are incentives that could be offered to government contractors and suppliers in the face of tight budgets; how closely current commercial standards and best practices meet federal requirements; and how to better match commercial practices with federal needs.

Anyone interested in providing input to the project should respond to the RFI by June 12.

Posted by William Jackson on May 28, 2013 at 9:39 AM