We all know the gears of government grind slowly, but when it comes to the arcane world of government encryption standards, “slowly” can mean something else entirely. When government time meets technology time, sparks can fly.
Take SHA-1, for example. That 160-bit hash algorithm has been at the heart of vital web security protocols such as Secure Sockets Layer (SSL) and Transport Layer Security (TLS) since shortly after it was developed by the National Security Agency in the 1990s. It has also long been part of the Federal Information Processing Standards (FIPS) published by the National Institute of Standards and Technology (NIST).
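The difference in digest size between SHA-1 and its stronger successors is easy to see with Python's standard hashlib module (the message bytes below are just an illustration):

```python
import hashlib

# SHA-1 produces a 160-bit (20-byte) digest; SHA-256 produces 256 bits.
msg = b"example certificate contents"

sha1 = hashlib.sha1(msg).hexdigest()
sha256 = hashlib.sha256(msg).hexdigest()

print(len(sha1) * 4)    # 160 bits
print(len(sha256) * 4)  # 256 bits
```

The longer digest matters because the difficulty of finding two inputs with the same hash grows with the digest size.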
However, it’s been under fire for nearly a decade. In 2005, a team led by cryptographer Xiaoyun Wang in China demonstrated a collision attack against the SHA-1 hash function that was far faster than brute force, a feat that led to a lot of soul searching within the encryption community. Less than a year later, NIST was urging agencies to begin moving away from SHA-1 and toward stronger algorithms.
At the beginning of 2011, NIST went even further and put what seemed the final kibosh on the beleaguered algorithm by stating definitively that “SHA-1 shall not be used for digital signature generation after December 31, 2013.”
But earlier this year, stories began to emerge pointing out that despite the NIST statement, many government entities were still generating new SSL certificates using SHA-1 instead of stronger algorithms.
In a February survey, web services company Netcraft found that fully 98 percent of the SSL certificates in use on the web carried SHA-1 signatures, while less than 2 percent used the 256-bit SHA-256. The company also pointed out that a huge number of those certificates, as originally issued, would still be valid beyond the start of 2017.
It’s not that the security provided by these certificates has so far proven porous, but a so-called collision attack could allow attackers to substitute certificates of their own construction for valid ones, circumventing web browser verification checks.
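To see why collisions matter, consider that a certificate authority’s signature covers a hash of the certificate rather than the certificate bytes themselves. A minimal Python sketch, with made-up certificate contents and a plain digest comparison standing in for real RSA/ECDSA signature verification:

```python
import hashlib

def digest(cert_bytes: bytes) -> bytes:
    # Verifiers check the CA's signature over a *hash* of the certificate,
    # not over the raw certificate bytes.
    return hashlib.sha1(cert_bytes).digest()

def signature_checks_out(cert_bytes: bytes, signed_digest: bytes) -> bool:
    # Stand-in for real signature verification: compare digests.
    return digest(cert_bytes) == signed_digest

legit_cert = b"CN=bank.example, key=..."
signed = digest(legit_cert)  # what the CA effectively signs

# If an attacker could craft forged_cert so that
# digest(forged_cert) == digest(legit_cert), the forgery would pass
# the same check -- that is the essence of a collision attack.
forged_cert = b"CN=attacker.example, key=..."
print(signature_checks_out(forged_cert, signed))  # False here, True under a collision
```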
Such an attack would be time-consuming and would demand a lot of computing power, but the increasingly market-driven nature of the threat industry is making that less of a barrier. Researchers have shown how the price of an SHA-1 attack will rapidly shrink over the next few years.
That’s all driving a sense of inevitability about the continuing use of SHA-1. Companies such as Microsoft and Google said some time ago they would start winding down the use of the algorithm in their products, and now the browser companies are getting on board.
The developers of Chrome, for example, recently said they will start sunsetting the use of SHA-1 beginning with a release due in November, and on Sept. 23 those in charge of Mozilla-based browsers such as Firefox said they also will be “proactively” phasing out their support of certificates that use SHA-1 signatures.
What’s a government agency to think of this? There have certainly been confusing signals along the way. In 2012, the year after it said it wanted agencies to move away from SHA-1, NIST announced the winner in a competition to create a secure hash algorithm that could eventually be the basis of a new federal SHA-3 standard.
But at the same time, NIST downplayed the need for a new standard in the shorter term, saying SHA-2 seemed to be working just fine (though NIST recently issued a request for comments on a new FIPS 202 that will validate the use of SHA-3). Meanwhile, the current version of NIST’s secure hash standard (FIPS PUB 180-4) still lists SHA-1 as valid for use in government applications. At the rate the private sector seems to be moving, however, it seems that will soon be impractical.
Posted by Brian Robinson on Sep 26, 2014 at 10:21 AM
It has been a brutal season for data breaches, from the wholesale theft of customer records numbering in the billions to the exposure of naughty celebrity pictures. More significant to agencies is the case that cost US Investigations Services (USIS) a contract to perform government background checks.
It was bad enough when USIS gained attention as the contractor that vetted NSA leaker Edward Snowden and Washington Navy Yard shooter Aaron Alexis. But in the wake of an IT breach that might have exposed the files of thousands of Homeland Security employees, the Office of Personnel Management in September said “enough,” and dropped the company.
The growing pressure hackers are putting on high-value targets and the volumes of personal and other sensitive information being stolen highlight one of the basic questions of cybersecurity: How do you keep the bad guys out?
Identity management and access control are the front lines of security. The ability to accurately identify users and control what they do within your systems is what separates insiders from outsiders. It has been apparent for some time that the traditional tool for this task – the password – is inadequate for the job, and biometrics is emerging as an alternative.
Which is better? The answer is that neither is adequate for strong, practical security on its own. Each has strengths and weaknesses, and real security requires some combination of these or other technologies.
The password by itself actually is a pretty good tool. It is simple to use, easy to implement and can be reasonably strong. The problem is one of scale. For a user juggling passwords for multiple accounts and for administrators juggling many users, the system quickly becomes unwieldy, and strong security begins to break down. In addition, the steady growth in computing power erodes password security by making dictionary and brute force attacks more practical.
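The brute-force arithmetic behind that erosion is simple to sketch. The guess rate below is a hypothetical figure for illustration, not a benchmark of any real cracking rig:

```python
# Keyspace grows exponentially with length and alphabet size, which is why
# short passwords fall quickly to brute force as hardware improves.
def keyspace(alphabet_size: int, length: int) -> int:
    return alphabet_size ** length

GUESSES_PER_SECOND = 10**10  # hypothetical GPU cracking rig

for length in (6, 8, 12):
    space = keyspace(62, length)  # upper + lower case letters plus digits
    seconds = space / GUESSES_PER_SECOND
    print(f"{length} chars: {space:.2e} candidates, ~{seconds / 86400:.1f} days worst case")
```

An eight-character mixed-case alphanumeric password offers about 2.2 x 10^14 candidates, which falls in hours at that hypothetical rate; adding length helps far more than adding complexity within the same length.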
Biometrics – the use of physical traits such as fingerprints, irises, faces or voices to identify persons – is more complex, but is becoming more practical. It offers the promise of better security based on the premise that there is only one you.
Yet it has its drawbacks. All forms of biometrics operate on the “close enough” principle. Whereas a password must be exact to be accepted, matching a biometric trait requires a judgment about whether there is a proper match. This leaves room for mistakes, either as false positives or false negatives. The algorithms making the decision can be tuned depending on the level of security required. But higher security comes at a cost in the form of increased time or computing power to determine a match and by increasing the possibility that a legitimate biometric will be rejected. And although there is only one you, biometric systems can be susceptible to spoofing. A stolen digital template of a biometric trait could be inserted into the authentication process to authenticate the wrong user.
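The “close enough” principle boils down to comparing a similarity score against a tunable threshold. The scores and thresholds below are illustrative, not drawn from any real matcher:

```python
# Biometric matchers return a similarity score, not an exact match.
# The acceptance threshold trades false accepts against false rejects.
def matches(score: float, threshold: float) -> bool:
    return score >= threshold

genuine_score, impostor_score = 0.83, 0.64  # illustrative scores

lax, strict = 0.60, 0.90
print(matches(genuine_score, lax), matches(impostor_score, lax))        # True True  (false accept)
print(matches(genuine_score, strict), matches(impostor_score, strict))  # False False (false reject)
```

Raising the threshold shuts out the impostor but also begins rejecting the legitimate user, which is exactly the tuning dilemma described above.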
There are other ID management technologies, of course, such as digital certificates, a form of electronic ID vouched for by a trusted party. These can be powerful, but also challenging to manage on a large scale.
The bottom line is, no matter how much these technologies improve, no single tool is likely to be good enough for really practical strong authentication, and it is unlikely that a new and perfect technology will come along any time soon. None of these technologies is a complete failure, either. By combining strengths to offset weaknesses, these common tools can be integrated into multifactor authentication that provides security that is stronger than the sum of its parts.
Government already has a tool that can enable multi-factor authentication, the Personal Identity Verification Card and its military counterpart, the Common Access Card. Taking full advantage of these for access control could go a long way toward improving federal cybersecurity.
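One common way to combine factors is a password (something you know) plus a one-time code from a token (something you have). A minimal sketch using only Python’s standard library: the `hotp` function follows RFC 4226, the algorithm underlying many hardware and software tokens, while the plain SHA-256 password store is a simplification for illustration (real systems use salted, slow password hashes):

```python
import hashlib
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226 HMAC-based one-time password (the basis of TOTP tokens).
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def authenticate(password: str, otp: str,
                 stored_hash: bytes, key: bytes, counter: int) -> bool:
    # Both factors must pass; constant-time comparison avoids timing leaks.
    pw_ok = hmac.compare_digest(hashlib.sha256(password.encode()).digest(), stored_hash)
    otp_ok = hmac.compare_digest(otp, hotp(key, counter))
    return pw_ok and otp_ok
```

A stolen password alone no longer gets an attacker in, and a stolen token alone doesn’t either; that is the “stronger than the sum of its parts” effect.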
Posted by William Jackson on Sep 19, 2014 at 6:56 AM
There are many ways bad guys attack systems, disrupt infrastructures and steal data, but one of the most common uses an entry point that is vital to Internet communications and yet, it seems, carelessly disregarded: the humble, but crucial, SSL.
Secure Sockets Layer is the standard way of establishing an encrypted link between a server and a client, such as a web browser. The transactions at the heart of modern Internet-based communications and commerce – transmitting credit card numbers, personal identifiers such as Social Security numbers, website logins and so on – rely on SSL.
But despite the moniker, SSL is sometimes not that secure. One particular and apparently growing problem is improper SSL validation. That was the focus of the “goto fail” bug discovered early this year (and since patched) in Apple’s iOS and Mac OS X. The vulnerability opened up users of those systems to so-called man-in-the-middle (MITM) attacks, in which those with a “trusted” certificate can insert themselves into a communication stream between systems and read its contents.
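In Python, proper validation is the default behavior of `ssl.create_default_context()`; the vulnerable pattern arises when code explicitly switches it off. A sketch (the `open_tls` helper is illustrative):

```python
import socket
import ssl

# Correct: the default context verifies the certificate chain AND the hostname.
ctx = ssl.create_default_context()
assert ctx.check_hostname and ctx.verify_mode == ssl.CERT_REQUIRED

# The bug class behind "goto fail": validation silently skipped.
# Never do this outside a test harness.
insecure = ssl.create_default_context()
insecure.check_hostname = False
insecure.verify_mode = ssl.CERT_NONE

def open_tls(host: str, context: ssl.SSLContext, port: int = 443) -> ssl.SSLSocket:
    # With the default context, a forged or mismatched certificate
    # raises ssl.SSLCertVerificationError here instead of connecting.
    sock = socket.create_connection((host, port), timeout=10)
    return context.wrap_socket(sock, server_hostname=host)
```

The MITM attacks described above succeed precisely when an app behaves like the `insecure` context: it completes the handshake but never checks who is on the other end.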
Similar concerns are being expressed about Android devices.
Given that there are now some 1.3 million apps in the Google Play store, a million or so of which are free, it would take a long, long time to test each of them to see if they are vulnerable to an MITM attack. Organizations, or individual users, can test a limited number of apps they use without much problem (such as with this method). Testing a wide range of apps to certify that they are OK for people to use is a different matter.
Fortunately, security organizations are starting to catch up with the need. In August, the CERT Coordination Center at Carnegie Mellon University, which works with many companies and government agencies, introduced CERT Tapioca (Transparent Proxy Capture Appliance), a virtual machine that automates MITM analysis.
According to CERT/CC researcher Will Dormann, Tapioca so far is only catching low-hanging fruit, but it at least doesn’t take up any of his time, and it’s already caught several hundred vulnerable applications in just a few weeks of use.
The issue should be getting even more press than it has, particularly in government circles, since there are expectations that Android devices could become more attractive in the public sector with the introduction and further development of Samsung’s Knox containerization technology. Apart from device-specific elements of Knox, which Samsung is keeping to itself, most of the technology could find itself incorporated in Google’s next generation operating system, Android L.
Samsung itself got some criticism late in 2013, when researchers from Ben Gurion University in Israel said they had found a vulnerability on a Galaxy S4 device that was using Knox, but Samsung later said that wasn’t a fault with Knox itself. The company also said Knox offers additional protection against MITM attacks through mobile device management and a feature that allows traffic only from designated and secure apps to be sent via VPN tunnel.
Altogether, it’s not been a good year for SSL. In April, a major vulnerability in OpenSSL, the so-called Heartbleed bug, was revealed, one that had been around for over a year before anyone noticed it. That was also fixed, but it’s still an ongoing concern. Researchers at IBM also recently reported that, though attacks using Heartbleed have quieted down, there might still be as many as 250,000 servers left unpatched.
The OpenSSL Project, prodded by the flak it got from the Heartbleed fiasco, recently published for the first time its policy for handling security issues. Internally, it says, it divides security issues into low, moderate and high severity and will notify the openssl-announce list and update the organization’s home page when any fixes are planned.
SSL users can also get help through a recently started SSL Blacklist, an online and downloadable resource of SSL certificates associated with malware or botnet activities.
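Checking a certificate against such a list amounts to comparing its SHA-1 fingerprint, which is simply the hash of the DER-encoded certificate. A sketch with a placeholder fingerprint (`is_blacklisted` is an illustrative helper, not part of the SSL Blacklist service):

```python
import hashlib
import ssl

# SSL Blacklist publishes SHA-1 fingerprints of certificates seen in
# malware or botnet traffic; the entry below is a made-up placeholder.
BLACKLIST = {
    "0123456789abcdef0123456789abcdef01234567",
}

def fingerprint(der_cert: bytes) -> str:
    # A certificate's SHA-1 fingerprint is the hash of its DER encoding.
    return hashlib.sha1(der_cert).hexdigest()

def is_blacklisted(pem_cert: str) -> bool:
    der = ssl.PEM_cert_to_DER_cert(pem_cert)
    return fingerprint(der) in BLACKLIST
```

A server or proxy could run this check against peer certificates at connection time and drop any match before application data flows.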
None of the potential problems with SSL is all that new. But as attackers become ever more sophisticated in their methods, as does the malware they use to disrupt systems and extract sensitive data, at least those problems seem to be getting more attention, along with tools to address them.
Posted by Brian Robinson on Sep 12, 2014 at 12:12 PM
This week’s high-profile hack is celebrity pictures stolen from iCloud accounts tied to Apple devices. Such overexposure highlights one of the first rules of digital security: If you don’t want to see a photo posted all over the web, don’t take it.
But it also highlights another rule, one that is applicable to agencies whose workers are using mobile devices: If you want to protect digital data, you need to know what data is being collected and where it goes. This is especially important with smartphones and tablets that increasingly rely on the cloud for data backup, raising questions about privacy and security.
Cloud backup is great if your phone is stolen, if you drop your tablet into the swimming pool or if you just upgrade to a new model and want to keep your settings and data. The problem is that backups can happen without users being aware, and not all clouds are secure.
The celebrity iPhone hacks apparently occurred because of a flaw in the device that allowed someone to guess passwords and access data in the account. But the vector and exploit used are not important here. What is important is that data can be exposed when it is moved to the cloud in an unmanaged way.
When it comes to embarrassing photos, everyone is responsible for his or her own security. But if you are using a device for work, you also are responsible to your agency to see that work-related data, even something as prosaic as geolocation, is properly secured. And you can’t do that if you don’t know what is being backed up. Most, if not all, agencies block automatic commercial cloud backups from agency-issued devices and from managed personal devices. But that still leaves a large number of personal devices used informally on the job on which personal and government data are mixed.
Data from iPhones, iPads and iPod touch is backed up on iCloud, which provides 5 GB of storage to users. According to Apple, “iCloud automatically backs up your device over Wi-Fi every day while it’s turned on, locked and connected to a power source.”
Android offers a data backup service with remote cloud storage that provides a restore point for application data and settings. This is application specific, however, and not all Android devices include the backup transport function.
This backup is limited, according to Android. “You cannot read or write backup data on demand and cannot access it in any way other than through the APIs provided by the Backup Manager.” Android also warns, “because the cloud storage and transport service can differ from device to device, Android makes no guarantees about the security of your data while using backup. You should always be cautious about using backup to store sensitive data, such as usernames and passwords.”
Windows Phone 8.1 lets users opt-in to data backup. Users can turn the service on and choose how the phone backs up apps, settings, texts, photos and videos to the cloud.
The bottom line is that agencies and employees should be aware of the backup policy and mechanism of the mobile devices being used on the job and actively manage these options to ensure that sensitive data is not being moved somewhere it could be exposed.
Posted by William Jackson on Sep 05, 2014 at 11:15 AM