We know the limitations of passwords: They are difficult to scale, and managing truly secure passwords is a headache for administrators and end users alike. We also know that although alternative technologies for online authentication exist, passwords probably are here to stay.
“Passwords are not going to disappear overnight, or in the next 10 years or 20 years,” said Lujo Bauer, assistant research professor in Carnegie Mellon University’s Electrical & Computer Engineering Department.
So how to make the best of what we are stuck with? One tool increasingly common on public- and private-sector websites is the strength meter, an alternative to stringent password policies intended to nudge users toward better security. As a user creates a password, the meter provides real-time feedback, such as whether the password is “weak,” “good” or “strong.”
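In essence, a meter of this kind boils down to a scoring heuristic. The sketch below is a minimal, hypothetical example; the point rules and thresholds are invented for illustration and are not taken from any production meter or from the Carnegie Mellon study:

```python
import string

def rate_password(password: str) -> str:
    """Rate a candidate password as 'weak', 'good' or 'strong'.

    Hypothetical heuristic: award points for length and for each
    character class used (lowercase, uppercase, digits, symbols).
    """
    score = 0
    if len(password) >= 8:
        score += 1
    if len(password) >= 12:
        score += 1
    character_classes = [string.ascii_lowercase, string.ascii_uppercase,
                         string.digits, string.punctuation]
    # One point per character class the password draws from.
    score += sum(1 for cls in character_classes
                 if any(c in cls for c in password))

    if score <= 2:
        return "weak"
    if score <= 4:
        return "good"
    return "strong"
```

A site would call `rate_password` on every keystroke and update the displayed label; the study's 14 meter variants differed mainly in how such scores were computed and displayed.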
But a study of these tools at Carnegie Mellon suggests that you can only push users so far before you hit the point of diminishing returns.
Using the meters resulted in longer, sometimes better, passwords. But, “there seems to be a limit to the stringency that a user will tolerate,” researchers found. “Were meters too stringent, users might just give up.”
[Chart: Percentage of passwords broken after 5 trillion guesses, comparing passwords created with no strength meter; a baseline strength meter; a meter requiring eight letters, numerals and characters for a top score; a meter requiring 16 letters for a top score; a meter awarding only half the score of other meters; and a meter awarding only one-third the score of other meters.]
Source: How Does Your Password Measure Up? The Effect of Strength Meters on Password Creation
The findings are significant not because they are unexpected — they’re not — but because this apparently is the first large-scale study of a technology that is widely used but not well understood.
Bauer and colleagues at Carnegie Mellon conducted the study with 2,931 subjects who created passwords on sites using one of 14 types of meters with different displays and criteria for determining strength. The only requirement was that each password be at least eight characters long. Strength was evaluated with a simulated password-guessing algorithm, and participants returned to the test site two days later to see how well they remembered their passwords.
All of the strength meters resulted in users creating longer, more complex passwords than those created on sites with no meter. But length does not equal strength. Only users at sites using two very stringent meters produced passwords that were significantly more difficult to break.
However, security reached a plateau on the site with the most stringent meter, which scored passwords at only one-third the rate of other meters and required more complexity to earn a strong security rating. Apparently the higher requirements frustrated users, who gave up trying to please the meter.
Interestingly, the ability to remember a password two days later did not vary significantly according to its strength.
The lesson: Don’t push users too far; take the annoyance factor into account when having users create new passwords.
Bauer, who studies access control systems, had some other practical recommendations for making the most of passwords:
- Strong passwords do not have to be hard to use. Combinations of words — pass phrases — can provide a high level of security while being easy to remember.
- Length is a more effective requirement for producing strong passwords than the use of numerals and special characters. Requiring 16 letters tends to produce a stronger password than requiring a combination of eight letters, numbers and other characters.
- Instruction can have a significant impact on password strength. Explain to users why a strong password is needed and what makes it strong.
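The length-versus-complexity recommendation can be checked with back-of-the-envelope entropy arithmetic. The sketch below assumes each character is chosen uniformly at random, which real users do not do, so these figures are upper bounds; but the gap illustrates why a 16-letter requirement tends to beat an eight-character complexity requirement:

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    """Bits of entropy in a password of `length` characters drawn
    uniformly at random from an alphabet of `alphabet_size` symbols."""
    return length * math.log2(alphabet_size)

# 16 lowercase letters vs. 8 characters drawn from all 94
# printable ASCII symbols (letters, digits and punctuation).
sixteen_letters = entropy_bits(26, 16)  # about 75 bits
eight_mixed = entropy_bits(94, 8)       # about 52 bits
```

Roughly 23 extra bits means the longer all-letter password has on the order of ten million times more possible combinations, despite using a much smaller alphabet.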
Posted by William Jackson on Jun 11, 2013 at 9:39 AM
Cyberspace has often been compared to the Wild West, where six-gun law and posse justice prevailed against rustlers and claim jumpers. But beware of calls for vigilante justice for cyber criminals.
The concept of protecting yourself assertively online is not new. The active defense company CrowdStrike advocates strategies with “flexibility of response actions,” including “deception, containment, tying up adversary resources and creating doubt and confusion while denying them the benefits of their operations.”
The subject has gained new visibility lately with the publication of a report from an independent Commission on the Theft of American Intellectual Property.
The commissioners, who include former high-level intelligence, defense and diplomatic officials, offer a balanced set of recommendations to address the problem of intellectual property theft, which they say could be costing our economy hundreds of billions of dollars annually.
The key is raising the cost for the thieves, they say. “IP theft needs to have consequences, with costs sufficiently high that state and corporate behavior and attitudes that support such theft are fundamentally changed.” Their recommendations include making IP theft a national security issue and strengthening law enforcement and other legal responses; strengthening government acquisition requirements and supply chain security; promoting the rule of law; strengthening diplomatic efforts; and improving cybersecurity.
But what gets attention in the 100-page report is the comment that, “without damaging the intruder’s own network, companies that experience cyber theft ought to be able to retrieve their electronic files or prevent the exploitation of their stolen information.” And, “both technology and law must be developed to implement a range of more aggressive measures that identify and penalize illegal intruders into proprietary networks, but do not cause damage to third parties.”
Hardly a call to vigilante justice. But the Center for Strategic and International Studies’ James A. Lewis offers a warning about taking private retaliation too far. “This is a remarkably bad idea that would harm the national interest,” he says in a recent commentary.
Lewis is a strong advocate for global norms of behavior in cyberspace and use of diplomacy to address international issues. Patience is a safer and more practical way to effect change than direct action, he says. For government to allow private retaliation through means that otherwise would be illegal would undercut U.S. efforts to foster international norms and respect for the rule of law. As a practical matter it could expose U.S. citizens to prosecution under foreign and international law, and there could be other — possibly more embarrassing — consequences.
“In a contest over who can go further in violating the law, despite the bluster of some in the high-tech community, private citizens are no match for the Russian mafia, the Russian Federal Security Service or the People’s Liberation Army in China,” Lewis writes. “This is not a contest American companies can win.”
In the face of increasing cyber threats there is an understandable pent-up desire for an active response, but this response should not cross legal thresholds. In the end, we either have the rule of law or we don’t. That others do not respect this rule does not excuse us from observing it. Admittedly this puts public- and private-sector organizations and individuals at a short-term disadvantage while correcting the situation, but it’s a pill we will have to swallow.
Posted by William Jackson on May 31, 2013 at 9:39 AM
It makes sense to buy products and services with some degree of security built-in rather than to add security piecemeal as vulnerabilities are found. That is one of the goals of an interagency working group developing plans for cybersecurity requirements in federal acquisitions.
The Joint Working Group on Improving Cybersecurity and Resilience through Acquisition, a cooperative effort between the Defense and Homeland Security departments and headed by the General Services Administration, has issued a request for information on how best to include cybersecurity requirements in contracts. Such requirements are not entirely absent from Federal Acquisition Regulations, but the working group is tasked with making them more consistent — both across government and with industry requirements — and focusing them on risk management rather than boilerplate contract language.
Not that language isn’t important. “The importance of common language cannot be overstated,” the RFI says. “It is apparent that a common lexicon is one of the critical gaps in harmonizing federal acquisition requirements related to cybersecurity.”
Attempts at developing a common lexicon are being made. DHS’ National Initiative for Cybersecurity Careers and Studies, for example, has a cybersecurity glossary intended “to enable clearer communication and common understanding of cybersecurity terms, through use of plain English and annotations on the definitions.” The question is whether a common lexicon can be applied consistently to the acquisition process.
The acquisition effort is part of a presidential initiative, launched in the face of congressional gridlock, to improve government and critical infrastructure cybersecurity. A voluntary framework for privately owned critical infrastructure systems is being developed, but additions to FAR would be mandatory for agencies, although it is not anticipated that the changes would be a one-size-fits-all set of requirements.
The working group was formed under Executive Order 13636 on Improving Critical Infrastructure Cybersecurity and Presidential Policy Directive-21 on Critical Infrastructure Security and Resilience, both issued in February. According to the RFI, one of the goals of the orders is to “provide and support government-wide contracts for critical infrastructure systems and ensure that such contracts include audit rights for the security and resilience of critical infrastructure.”
The working group’s job is to make recommendations on the feasibility, benefits and merits of incorporating security standards into contracting requirements. The recommendations are expected to lay the foundation for any standards.
The working group wants to identify internal conflicts between different government cybersecurity requirements as well as conflicts with industry and international standards. Some of the issues it is asking for feedback on are incentives that could be offered to government contractors and suppliers in the face of tight budgets; how closely current commercial standards and best practices meet federal requirements; and how to better match commercial practices with federal needs.
Anyone interested in providing input to the project should respond to the RFI by June 12.
Posted by William Jackson on May 28, 2013 at 9:39 AM
Government has been the driving force in the adoption and use of biometrics. Law enforcement has used fingerprints for forensic identification for more than a century, and more recently the U.S. government has required biometrics for identity management through smart government ID cards. Internationally, governments around the world are adopting biometric standards for passports and border controls.
But a panel of government and industry experts told legislators that biometrics might be poised to take off as a consumer technology. Like so many other recent changes, it could be driven by the evolution and convergence of the laptop and smart phone.
“Acceptance will be driven by providing added value,” said Charles H. Romine, director of the IT Laboratory at the National Institute of Standards and Technology.
And where will that added value come? Stephanie Schuckers, director of the Center for Identification Technology Research, a federally funded cooperative research center, is clear about that. “The killer app is the mobile payment system, and the driver is the customer,” she said. The convenience of using a smart phone or other mobile device for fast, secure transactions will create a market for convenient biometric authentication.
John Mears, a board member of the International Biometrics and Identification Association trade group, said rumor has it that Apple’s new iPhone 5S, which might or might not be released this summer, will come with a fingerprint reader. And if Apple can’t build a market for new technology, who can? With an expected 128 GB of storage, the new phone would have ample room to handle biometric templates.
These statements were made at a May 21 hearing of the House Science, Space and Technology subcommittees on research and technology. Given the rapid expansion of life online and the inadequacy of the current user-name-and-password paradigm, the legislators wanted to know why biometrics hasn’t been adopted more rapidly.
There are a number of reasons. For all of its promise, biometrics still is a maturing technology, and although it is practical it is not yet broadly interoperable. And for all of the recent attention paid to online threats, the public is notoriously unwilling to inconvenience itself in the name of better security.
These things will change, and maybe soon. But the legislators seemed to be working with the assumption that biometrics is rock-solid secure technology. It isn’t. There are weaknesses, trade-offs and concerns, just as with all forms of identity verification.
The experts pointed out that for a biometric, such as a fingerprint or a voice analysis, to be effective it must be unique (or close to it) and persistent. And although agencies have been using biometrics for decades, to date there is precious little research on just how unique and unchanging these features are. This is necessary before those accepting biometrics can decide if the features provide the level of certainty they require for a given purpose.
And despite the common idea that a biometric is absolute, matching has always been done on a “close enough” basis. Maybe no one else has your fingerprint, but print-matching applications use only a sampling of data picked up from a reader and stored in a template. How detailed that data is and how closely two scans must match in order to be accepted depends on the level of security an application requires. More security requires more computing capacity, more expense and possibly more inconvenience.
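The “close enough” trade-off can be illustrated with a toy threshold comparison. Everything below is hypothetical: real systems extract minutiae-based templates rather than raw feature vectors, and tune their thresholds empirically, but the security-versus-convenience lever works the same way:

```python
import math

def similarity(template: list[float], sample: list[float]) -> float:
    """Cosine similarity between a stored template and a fresh scan.
    Returns 1.0 for identical vectors, 0.0 for unrelated ones."""
    dot = sum(t * s for t, s in zip(template, sample))
    norms = (math.sqrt(sum(t * t for t in template))
             * math.sqrt(sum(s * s for s in sample)))
    return dot / norms

def accept(template, sample, threshold=0.95):
    """Accept the scan if it is 'close enough' to the enrolled template.

    Raising the threshold means fewer false accepts (more security)
    but more false rejects (less convenience); lowering it does the
    reverse. No setting eliminates both error types.
    """
    return similarity(template, sample) >= threshold
```

An application requiring higher assurance would both store richer templates and demand a higher threshold, at the cost of the computing capacity and user inconvenience the panelists described.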
None of this means that biometrics can’t be a big improvement over user names and passwords. But once the technology matures organizations still will have to decide what levels of risk they are willing to accept in given situations and what expense — in terms of money, time and resources — they are willing to trade for it.
Posted by William Jackson on May 23, 2013 at 9:39 AM