SSL remains security weakness despite latest reinforcements

There are many ways bad guys attack systems, disrupt infrastructures and steal data, but one of the most common exploits an entry point that is vital to Internet communications and yet, it seems, carelessly disregarded: the humble, but crucial, SSL.

Secure Sockets Layer is the standard way of establishing an encrypted link between a server and a client, such as a web browser. All transactions that are needed for modern Internet-based communications and commerce – credit card numbers, personal identifiers such as Social Security numbers, website logins etc. – use SSL.

But despite the moniker, SSL is sometimes not that secure. One particular and apparently growing problem is improper SSL validation. That was the focus of the "goto fail" bug discovered early this year (and since patched) in Apple's iOS and Mac OS X. The vulnerability opened up users of those systems to so-called man-in-the-middle (MITM) attacks, in which an attacker with a "trusted" certificate can insert himself into a communication stream between systems and read its contents.
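The failure mode is easy to demonstrate in code. Below is a minimal Python sketch of the difference between a properly validating TLS client context and the anti-pattern that leaves apps open to MITM attacks (the helper name is ours, for illustration only, and is not taken from any of the tools or bugs mentioned here):

```python
import ssl

def make_client_context(verify: bool = True) -> ssl.SSLContext:
    """Build a TLS client context. verify=False reproduces the anti-pattern
    behind many MITM-able apps: no chain or hostname checks at all."""
    if verify:
        # Loads the system trust store, requires a valid chain and
        # enforces hostname matching.
        return ssl.create_default_context()
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False       # skip hostname matching
    ctx.verify_mode = ssl.CERT_NONE  # accept any certificate
    return ctx
```

An interception proxy succeeds against an app exactly when that app behaves like the `verify=False` branch: it will accept the proxy's forged certificate without complaint.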

Similar concerns are being expressed about Android devices.

Given that there are now some 1.3 million apps in the Google Play store, a million or so of which are free, it would take a long, long time to test each of them to see if they are vulnerable to an MITM attack. Organizations, or individual users, can test a limited number of apps they use without much problem (such as with this method). Testing a wide range of apps to certify that they are OK for people to use is a different matter.

Fortunately, security organizations are starting to catch up with the need. In August, the CERT Coordination Center at Carnegie Mellon University, which works with many companies and government agencies, introduced CERT Tapioca (Transparent Proxy Capture Appliance), a virtual machine that automates MITM analysis.

According to CERT/CC researcher Will Dormann, Tapioca so far is only catching low-hanging fruit, but it at least doesn't take up any of his time, and it has already caught several hundred vulnerable applications in just a few weeks of use.

The issue should be getting even more press than it has, particularly in government circles, since there are expectations that Android devices could become more attractive in the public sector with the introduction and further development of Samsung’s Knox containerization technology. Apart from device-specific elements of Knox, which Samsung is keeping to itself, most of the technology could find itself incorporated in Google’s next generation operating system, Android L.

Samsung itself got some criticism late in 2013, when researchers from Ben Gurion University in Israel said they had found a vulnerability on a Galaxy S4 device that was using Knox, but Samsung later said that wasn't a fault with Knox itself. The company also said Knox offers additional protection against MITM attacks through mobile device management and a feature that allows traffic only from designated and secure apps to be sent via VPN tunnel.

Altogether, it's not been a good year for SSL. In April, a major vulnerability in OpenSSL, the so-called Heartbleed bug, was revealed after lurking unnoticed for over a year. That too was fixed, but it remains a concern: researchers at IBM recently reported that, though attacks using Heartbleed have quieted down, as many as 250,000 servers may still be unpatched.

The OpenSSL Project, prodded by the flak it took over the Heartbleed fiasco, recently published for the first time a policy describing how it handles security issues. Internally, it says, it classifies issues as low, moderate or high severity, and it will notify the openssl-announce list and update the organization's home page when fixes are planned.

SSL users can also get help through a recently started SSL Blacklist, an online and downloadable resource of SSL certificates associated with malware or botnet activities.
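Such blacklists key their entries on certificate fingerprints, so checking a server's certificate against one reduces to hashing the DER-encoded certificate and comparing. A Python sketch of the idea (the colon-separated SHA-1 format here is an assumption based on common conventions; match whatever format the blacklist you download actually uses):

```python
import hashlib

def sha1_fingerprint(der_cert: bytes) -> str:
    """SHA-1 fingerprint of a DER-encoded certificate as colon-separated hex,
    the format commonly shown in browser certificate viewers."""
    digest = hashlib.sha1(der_cert).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def is_blacklisted(der_cert: bytes, blacklist: set[str]) -> bool:
    """Compare a certificate's fingerprint against a locally cached blacklist."""
    return sha1_fingerprint(der_cert) in blacklist
```

A gateway or proxy could run this check on every certificate it sees and drop connections whose fingerprints match known malware or botnet infrastructure.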

None of these potential problems with SSL is all that new. But with attackers, and the malware they use to disrupt systems and extract sensitive data, becoming ever more sophisticated, the problems at least seem to be getting more attention, along with tools to address them.

Posted by Brian Robinson on Sep 12, 2014 at 12:12 PM

Do you know where your mobile data is?

This week’s high-profile hack is celebrity pictures stolen from iPhone accounts in the Apple cloud. Such overexposure highlights one of the first rules of digital security: If you don’t want to see a photo posted all over the web, don’t take it.

But it also highlights another rule, one that is applicable to agencies whose workers are using mobile devices: If you want to protect digital data, you need to know what data is being collected and where it goes. This is especially important with smartphones and tablets that increasingly rely on the cloud for data backup, raising questions about privacy and security.

Cloud backup is great if your phone is stolen, if you drop your tablet into the swimming pool or if you just upgrade to a new model and want to keep your settings and data. The problem is that backups can happen without users being aware, and not all clouds are secure.

The celebrity iPhone hacks apparently occurred because of a flaw that allowed someone to repeatedly guess passwords and access data in the accounts. But the vector and exploit used are not important here. What is important is that data can be exposed when it is moved to the cloud in an unmanaged way.

When it comes to embarrassing photos, everyone is responsible for his or her own security. But if you are using a device for work, you also are responsible to your agency for seeing that work-related data, even something as prosaic as geolocation, is properly secured. And you can't do that if you don't know what is being backed up. Most, if not all, agencies block automatic commercial cloud backups from agency-issued devices and from managed personal devices. But that still leaves a large number of personal devices used informally on the job on which personal and government data are mixed.

Data from iPhones, iPads and iPod touch is backed up on iCloud, which provides 5 GB of storage to users. According to Apple, “iCloud automatically backs up your device over Wi-Fi every day while it’s turned on, locked and connected to a power source.”

Android offers a data backup service with remote cloud storage that provides a restore point for application data and settings. This is application specific, however, and not all Android devices include the backup transport function.

This backup is limited, according to Android. “You cannot read or write backup data on demand and cannot access it in any way other than through the APIs provided by the Backup Manager.” Android also warns, “because the cloud storage and transport service can differ from device to device, Android makes no guarantees about the security of your data while using backup. You should always be cautious about using backup to store sensitive data, such as usernames and passwords.”

Windows Phone 8.1 lets users opt in to data backup. Users can turn the service on and choose how the phone backs up apps, settings, texts, photos and videos to the cloud.

The bottom line is that agencies and employees should be aware of the backup policy and mechanism of the mobile devices being used on the job and actively manage these options to ensure that sensitive data is not being moved somewhere it could be exposed.

Posted by William Jackson on Sep 05, 2014 at 11:15 AM

The growing security threat to virtual systems

Security threats to virtualized environments are not a new subject, but the topic should be gaining prominence as organizations, and particularly those in government, virtualize more in order to cut costs and improve IT efficiencies. If agencies don't consider the security implications, they're opening themselves up to a world of hurt.

That’s especially true since the world of malware innovators is not standing still, as a new report from Symantec points out. As fast as defenses are erected, attackers come up with ways to get around them.

One of the more recent exploits involves attacks that are designed to wait out the automatic malware detection and analysis defenses that are increasingly being built into virtual systems. Some trojans will simply wait for multiple mouse clicks to occur before they decrypt themselves and start up their payload, and that can make it all but impossible for automated systems to come to any timely conclusion about the threat.

“Time is on [the malware’s] side,” said Candid Wüest, author of the Symantec report. “If the sample does not behave maliciously within the first five to 10 minutes, the analysis system will most likely deem the file as harmless.”
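One common countermeasure to this wait-it-out tactic is sleep skipping: the analysis system intercepts the sample's delay calls, returns immediately and tallies how much time the sample tried to burn, since a request to stall past the analysis window is itself a red flag. A toy Python sketch of the idea (real sandboxes hook delays at the API or hypervisor level, not with a monkey patch, and the names here are invented for illustration):

```python
import time

class SleepSkipper:
    """Intercepts time.sleep so delays return instantly while the total
    requested delay is recorded -- a telltale sign of stalling malware."""

    def __init__(self):
        self.requested = 0.0          # seconds of sleep the sample asked for
        self._real_sleep = time.sleep

    def __enter__(self):
        time.sleep = self._fake_sleep
        return self

    def __exit__(self, *exc):
        time.sleep = self._real_sleep  # restore the real sleep on exit

    def _fake_sleep(self, seconds):
        self.requested += seconds      # skip the wait, log the intent

    def stalled(self, analysis_window: float = 300.0) -> bool:
        """Did the sample try to sleep past the analysis window?"""
        return self.requested > analysis_window

def suspicious_sample():
    # Stand-in for a sample that stalls 10 minutes before detonating.
    time.sleep(600)
```

Run `suspicious_sample()` inside a `with SleepSkipper() as s:` block and it returns immediately, while `s.stalled()` reports that the sample tried to outlast a five-minute analysis window.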

This in turn has prompted attackers to develop other methods of evading automated analysis on virtual machines, such as focusing more on the user's interaction. The malware waits, for example, for three left-button mouse clicks before executing. In that case, any kind of simulated user interaction, say the use of a CAPTCHA box (a test to determine whether a user is human), could prompt the malware into action.

Those kinds of exploits are harder to patch on virtual machines and require some background monitoring to generate the necessary interaction triggers, Wüest said.

The most complete government guide to virtualization security is NIST's three-year-old SP 800-125, Guide to Security for Full Virtualization Technologies. It says that the security of a full virtualization solution is heavily dependent on the individual security of its components, from the hypervisor and host operating system to applications and storage. Other sound security practices, such as keeping up to date with security patches, are also necessary.

That’s all true, but it doesn’t seem to be enough in the face of what appears to be an inevitable push by malware designers into the virtual space. And it could be that virtualization attacks are now an embedded feature in most malware. Up to 82 percent of the malware tracked by Symantec was able to run on virtual machines.

It’s not as if organizations haven’t been warned. As long ago as 2009, the Cloudburst Attack showed how attackers could go through a guest virtual machine to attack the host, in many ways an IT administrator’s worst nightmare. In 2012, the Crisis malware, which targeted Windows and Mac systems, was shown to also be capable of sneaking onto virtual machines if a specific image was installed on it.

Given the range of threats now facing virtualized environments, Symantec recommends a number of best practices for organizations to follow:

  • Protect the host server, which provides access to virtual machines, with lockdown solutions and host intrusion detection systems along with regular software updates and patches.
  • Protect both the host server and virtual machines running on it with “proactive” components that go beyond classic security such as antivirus scanners.
  • Have administrators enforce proper user access controls to the servers hosting virtual machines and use two-factor authentication or other strong login processes.
  • Make sure the virtual machines are fully integrated into disaster recovery and business continuity plans.
  • Network security tools should also have access to the virtual network traffic between the virtual machines.
  • Snapshots and images of virtual machines need to be included in the patch and upgrade cycle. Unpatched virtual machines are frequent targets of malware.
  • Integrate virtual machines into the security logging and security information and event management (SIEM) systems that are used for all other IT devices.
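The snapshot point in particular lends itself to automation. A hedged sketch of a stale-image check, assuming an inventory that records when each virtual machine image was last patched (the data model here is invented for illustration):

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class VMImage:
    name: str
    last_patched: date

def stale_images(inventory: list[VMImage], today: date,
                 max_age_days: int = 30) -> list[str]:
    """Flag snapshots and images that have fallen out of the patch cycle
    and would boot in a vulnerable state if restored."""
    cutoff = today - timedelta(days=max_age_days)
    return [img.name for img in inventory if img.last_patched < cutoff]
```

A report like this, run against the virtualization platform's snapshot inventory, catches the dormant images that routine patch scans of running machines never see.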

The overall message, which NIST stresses, is that the security of virtual machines and networks needs to be handled just as intensely as that of other IT. Given the rate of innovation on the malware side, that might not be all that’s needed, but it will go a long way.

Posted by Brian Robinson on Aug 29, 2014 at 11:58 AM

Happy birthday HSPD-12; there’s still a long way to go

This month marks the 10th anniversary of Homeland Security Presidential Directive 12 mandating the development and use of an interoperable smart ID card for civilian government employees and contractors. The results of the program so far range from the impressive to the disappointing.

“I would call the programmatic platform a huge success,” said Ken Ammon, chief strategy officer for Xceedium, an ID management software vendor.

As of the first quarter of this year, 5.4 million Personal Identity Verification (PIV) cards have been issued to civilian employees and contractors, accounting for 96 percent of those who need the cards. Given employee turnover and the need to periodically reissue the cards, the coverage is quite good.

The challenge now is having them used as they were intended, as strong, two-factor authentication for both logical and physical access across agencies. This is a multifaceted challenge that is proving to be a much tougher nut to crack than designing and issuing the cards.

HSPD-12 was issued Aug. 27, 2004, by then-President George W. Bush. The heart of the mandate was simple. Inconsistencies in government IDs left the government vulnerable to terrorist attack. “Therefore, it is the policy of the United States to enhance security, increase government efficiency, reduce identity fraud and protect personal privacy by establishing a mandatory, governmentwide standard for secure and reliable forms of identification issued by the federal government to its employees and contractors (including contractor employees).”

The National Institute of Standards and Technology was given six months to produce the standards, which included identity vetting and secure, interoperable digital technology. Eight months after that, agencies would have to require use of the cards, “to the maximum extent practicable,” for access both to physical facilities and IT systems.

The first part of this effort, developing the standard and technical specifications and designing, producing and issuing the PIV cards, is the programmatic success Ammon cited. But the second part, the qualification “to the maximum extent practicable,” has proved to be a speed bump.

In 2011, seven years after the directive, the Government Accountability Office concluded that although substantial progress had been made in issuing PIV cards and fair progress in using them for physical access to government facilities, only limited progress had been made in using them for access to government networks, and minimal progress in cross-agency acceptance.

A year later, increasing the use of PIV and the military’s Common Access Card credentials was identified by the White House as a priority area for improvement. Agencies were given until March 31, 2012, to develop policies for the use of these credentials.

Reasons for the lack of widespread use cited by GAO were not technical, but administrative: Logistics, agency priorities and of course budgets. “According to agency officials, a lack of funding has . . . slowed the use of PIV credentials,” the report stated.

But technology also is an issue, as the card is only one element in any authentication system. Use of the electronic credentials in the cards has to be incorporated into systems already in place, or those systems must be replaced. Under a 2011 White House directive, all new systems under development at agencies must be enabled for PIV credentials and existing systems were to be upgraded by fiscal 2012.

Like many unfunded mandates, this has been a tough one to meet. And in the meantime, technology keeps changing. Mobile computing, for instance, means that many government workers are using tablets and cell phones for work. Technically these should require PIV authentication for government work, but many are not equipped to accommodate that.

HSPD-12 is not a failure, but it could be doing a lot better if strong, two-factor authentication were a higher priority within agencies. However, the rapid pace of technological change makes it unlikely that any government-mandated technology will ever be completely successful. Even so, much more could be done.

Posted by William Jackson on Aug 22, 2014 at 10:16 AM