The challenge of so-called “shadow IT” is the inherent insecurity posed by unsanctioned devices and applications used throughout the enterprise. If IT managers don’t know what they’ve got running on the network, they can’t assess the risk these smartphones and apps pose or what kind of malware is poised to strike at the agency’s systems and data.
Even if users are aware of the potential problems posed by the devices and applications they tote into the workplace, that doesn’t mean they are safe. As the Defense Department recently pointed out, an app doesn’t have to contain actual malware to pose a threat.
In an advisory put out by several of the services, common access card (CAC) users were warned not to use a free application, available on Google Play, that scans the barcode on the front of the ID card and extracts the cardholder’s personal data, such as name, Social Security number, military rank and DOD ID number.
As one memo from the Air Force put it, why would users even need such an app since, presumably, they already know the details embedded in their own cards? And even if there is an innocent reason for scanning other cards (some kind of misplaced curiosity?), there’s no way to know where the scanned information will end up.
The app, called CAC Scan, expands the definition of what should be considered a “risky” app in the bring-your-own-device and shadow IT era, according to mobile security company Lookout. When Lookout analyzed the app, it found no malicious behavior that would trigger any standard security concern; nevertheless, the app accurately decodes the contents of the barcode on the front of the CAC.
The DOD’s advisory focused on the insider threat posed by this app. But a bigger problem, as Lookout engineer Alex Gladd pointed out, is that the scanner saves a history of every barcode its users scan and stores that data in an unencrypted database. A bad guy could use a targeted phishing campaign to get a copy of that database and then extract the sensitive personal information of military members.
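To see why an unencrypted scan history is such a liability, here’s a minimal sketch in Python. The table layout and the sample record are entirely made up; the point is only that a plaintext database yields its contents to anyone holding a copy of the file:

```python
import sqlite3

# Hypothetical layout: the scanner app keeps every decoded barcode
# in a plain SQLite file on device storage -- no key, no encryption.
conn = sqlite3.connect(":memory:")  # stands in for the app's on-device .db file
conn.execute("CREATE TABLE scan_history (scanned_at TEXT, payload TEXT)")
conn.execute(
    "INSERT INTO scan_history VALUES (?, ?)",
    ("2016-07-01T09:14:00", "DOE.JOHN|SSN:000-00-0000|RANK:SSgt|DODID:0000000000"),
)
conn.commit()

# Anyone who exfiltrates a copy of the file can read it with stock tools.
for scanned_at, payload in conn.execute(
        "SELECT scanned_at, payload FROM scan_history"):
    print(scanned_at, payload)
```

Nothing here requires breaking any protection, because there is none to break; the phishing campaign only has to deliver the file.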
Think of the breach of the Office of Personnel Management -- except potentially even worse.
Bad guys, who are never less than innovative, have caught on to the potential of using apparently benign apps from app stores as a front end for their nefarious ends. Benign, when it comes to apps, no longer means what you think it means.
In its advisory about CAC Scan, the Army offers its CAC users a number of general pointers on mobile app security:
- Before downloading, installing or using any application, take a moment to review the “About the Developer” section and visit the developer’s website and assess its content for history, other published apps, professional appearance, etc.
- Apps that purport to allow access to military or government sites should only be installed if they are official apps and downloaded through official channels.
- Perusing user ratings and reviews gives a sense of the veracity of the application’s claims. No app is perfect for every user, but complaints about security should quickly stand out from relatively benign gripes.
- Users who have inadvertently downloaded an app they’re unsure about should inspect the device’s application permissions screen to determine what other applications or information the app can access. A video game, for example, is unlikely to have a legitimate need to access your contacts.
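That last check can even be scripted. A rough sketch, assuming an app’s declared permissions have already been pulled out of its manifest XML (the package name and the permission shortlist below are illustrative inventions, not an official blocklist):

```python
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

# Permissions a simple game has no obvious business requesting (illustrative).
SUSPECT = {
    "android.permission.READ_CONTACTS",
    "android.permission.READ_SMS",
    "android.permission.RECORD_AUDIO",
}

def suspicious_permissions(manifest_xml: str) -> list:
    """Return declared permissions that look out of place for a game."""
    root = ET.fromstring(manifest_xml)
    declared = [p.get(ANDROID_NS + "name") for p in root.iter("uses-permission")]
    return sorted(set(declared) & SUSPECT)

# A made-up manifest for a hypothetical puzzle game.
manifest = """<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.puzzlegame">
  <uses-permission android:name="android.permission.INTERNET"/>
  <uses-permission android:name="android.permission.READ_CONTACTS"/>
</manifest>"""

print(suspicious_permissions(manifest))  # ['android.permission.READ_CONTACTS']
```

The same idea scales to whatever permission policy an agency wants to enforce; the hard part is getting users to run the check at all.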
All well and good, but does DOD -- or any other government agency -- expect all its employees to follow all of this advice? BYOD and shadow IT aren’t going away. What CAC Scan illustrates is the kind of expanded security risk all government agencies, not just the DOD, are now facing.
Posted by Brian Robinson on Jul 05, 2016 at 1:19 PM
The Department of Defense recently reported on its “Hack the Pentagon” pilot project, and you could say there’s both good and bad news. The good news is that the hackers hired to hunt down bugs in the Pentagon’s systems found over 100 vulnerabilities in the three weeks or so they had, beginning April 18. The bad news is that they found over 100 vulnerabilities.
The pilot was the first-ever government program to pay people to hunt down bugs in systems, as a way to more quickly and less expensively shore up cybersecurity. It mirrors successful programs that companies such as Facebook, Google and Microsoft have been running over the past few years.
Defense Secretary Ashton Carter revealed the number of bugs at a recent industry conference. DOD officials had earlier put the figure slightly lower.
So the fact that the more than 1,400 hackers who swarmed the Pentagon’s systems found so many bugs in such a short time is good news, inasmuch as DOD can rectify the buggy systems and make them safe again. All around, the program did its job in boosting Pentagon cybersecurity and probably laid the groundwork for similar programs in the future.
However, finding that many bugs in just a few weeks, particularly when no critical or sensitive systems were included, raises doubts about just how many other vulnerabilities are present in Pentagon systems. By extension, what does that mean also for the security of other government systems?
It’s not an academic question, given the rate at which the black hatters are improving their ability to attack vulnerable systems and access sensitive data. The example of the devastating attack on the Office of Personnel Management’s systems, and the compromise of millions of records there, is only a year old, after all.
Industry researchers have turned up more evidence of just how pervasive and industrialized the cybercriminal efforts have become. An underground marketplace called xDedic is now selling access to compromised servers for as little as $6 each. It has over 70,000 servers from 173 countries belonging to government agencies or corporations up for sale.
As the researchers point out, criminals or state groups can buy the credentials of the remote desktop protocol servers and use them to move deeper into an organization’s networks and systems, or use the servers as a platform for broader attacks, such as distributed denial of service. And all without the owners of said servers knowing what’s going on.
There’s no obvious answer to this kind of market-driven black hattery. You could maybe go in the direction that the government of Singapore has decided to go, by cutting off access to the Internet completely for a fair number of its systems. That’s already done in the U.S. by three-letter agencies for some of their systems, for example, but you take an obvious hit to productivity when you apply that more broadly.
Given the apparent success of the Pentagon pilot, there would seem to be a case for expanding that kind of bounty program. However, that runs into a very government-specific problem, i.e. the lack of money. The Pentagon had $75,000 available for the pilot, and paid the bug hunters up to $15,000 for each discovery, depending on how important the find was.
It’s likely any expanded program would need a lot more, however. Yahoo alone has reportedly paid out some $1.6 million in bounties since 2013. Recently, Google said it had paid $550,000 to 82 people in just the one year its Android Security Rewards program has been running, and it intends to boost rewards even more, to a maximum of $50,000 for such things as discovery of remote exploit chains.
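As back-of-the-envelope arithmetic, those Google figures work out to a fairly modest average per researcher -- well below the new $50,000 ceiling:

```python
# Figures cited above for Google's Android Security Rewards program.
total_paid = 550_000   # dollars paid in the program's first year
researchers = 82       # number of people rewarded

average = total_paid / researchers
print(round(average, 2))  # roughly 6707.32 dollars per researcher
```

Even a tenfold expansion of that average across a government-wide program would still be small money next to the cost of one OPM-scale breach.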
So, hunting bugs is getting to be an expensive endeavor, but maybe that’s what’s needed given the ROI being offered to the bad guys. Paying $6 for a chance at a potentially huge jackpot is a no-brainer for them, which is why such things as xDedic will only become bigger and exploits more available.
Posted by Brian Robinson on Jun 20, 2016 at 3:07 PM
A hackathon is a generic industry term used to describe online or in-person events where people work collaboratively on software development. They don’t always yield perfect solutions, but they often result in major advances on tough problems.
They’re also proving vital to development of security products. The Cloud Security Alliance (CSA) has used them for several years to prove the reliability of various proposals for software-defined perimeter security for cloud-based infrastructures. So far, that SDP security has been impregnable.
Earlier this year, in the fourth hackathon of the series, the CSA, together with Verizon and security solutions company Vidder, tested the viability of a cloud-based high-availability infrastructure using an SDP front end to provide access between compute resources located across multiple public clouds.
First off, the event proved a cloud-based, high-availability infrastructure can be built much more quickly, and for considerably less money, than the equivalent hardware-based version. It also proved SDP is a solid security solution. Offering a $10,000 reward, CSA and its partners invited hackers around the world to try to break the SDP security. Despite some “highly sophisticated” attempts from the 191 identified participants, who generated millions of attacks, it stood firm.
An additional demo was also included to showcase SDP’s capabilities for the U.S. government sector, where security requirements are more stringent than those for more general users of the infrastructure. A cloud-based flight management system for unmanned aerial vehicles was placed on the same network as CSA’s hackathon. Although the network was under constant attack, the UAV application was not disrupted.
SDP is an attractive solution for cloud security because it doesn’t require much new investment. It basically combines the existing device authentication, identity-based access and dynamically provisioned connectivity that most organizations should have in place under a software overlay. The CSA said the SDP model has been shown to stop all forms of network attacks, including distributed denial of service (DDoS), man-in-the-middle, SQL injection and advanced persistent threat.
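One way SDP implementations enforce that overlay is single-packet authorization: a client must present a signed, time-stamped packet before the controller will even respond to it. A minimal sketch of the idea follows; the key handling and packet format here are simplified assumptions, not the CSA wire protocol:

```python
import hashlib
import hmac
import time

SHARED_KEY = b"per-device-seed"   # in a real SDP, provisioned per device

def make_auth_packet(device_id: str, now: float = None) -> bytes:
    """Client side: sign the device identity plus a timestamp."""
    now = time.time() if now is None else now
    msg = f"{device_id}|{int(now)}".encode()
    tag = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()
    return msg + b"|" + tag.encode()

def verify_auth_packet(packet: bytes, max_age: int = 30,
                       now: float = None) -> bool:
    """Controller side: silently drop anything unsigned, tampered or stale."""
    now = time.time() if now is None else now
    try:
        device_id, ts, tag = packet.decode().rsplit("|", 2)
        expected = hmac.new(SHARED_KEY, f"{device_id}|{ts}".encode(),
                            hashlib.sha256).hexdigest()
        fresh = 0 <= int(now) - int(ts) <= max_age
    except (ValueError, UnicodeDecodeError):
        return False
    return fresh and hmac.compare_digest(expected, tag)

pkt = make_auth_packet("laptop-042")
print(verify_auth_packet(pkt))                                # True: signed, fresh
print(verify_auth_packet(pkt.replace(b"laptop", b"rogue-")))  # False: tampered
```

Because unauthenticated traffic gets no reply at all, the protected services are effectively invisible to a network scanner -- which is the property the hackathon attackers ran into.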
Given the cost and resource constraints government is under, agencies see potential in SDP and have already launched various initiatives involving SDP security. Late last year, for example, the Department of Homeland Security selected Waverley Labs to develop the first open source SDP to defend against large and sophisticated DDoS attacks.
The CSA, together with Waverley Labs, has been working for a year on developing open source code for SDP, with the intent of getting information security and network providers to deploy SDP widely for cloud solutions. The goal is to take the inherently open approach of the internet -- a liability for security in the age of the Internet of Things -- and essentially make parts of it dark. In these appropriately named “Dark Clouds,” only those connections that can be definitively authenticated will be allowed.
None of the technologies used for SDP are new, and all of the concepts -- such as geolocation and federation used for connectivity -- are well understood. However, most SDP implementations to date have been highly customized, proprietary and designed for an organization’s specific needs. The push behind the CSA program is to develop a more general approach that can be readily applied across all organizations.
The CSA’s SDP Working Group has launched several use case initiatives, including the open-source DDoS effort and, more directly aimed at government, one that will use SDP to enable virtualized security for cloud infrastructures that complies with the “moderate” level described in the Federal Information Security Management Act. The latest initiative targets SDP that can be deployed for infrastructure as a service.
For anyone looking for examples of what it takes to erect an enterprise SDP infrastructure, Google has detailed the approach it used for its BeyondCorp initiative, which defines how employees and devices across Google access internal applications and data. With SDP, the company said, BeyondCorp essentially sees both internal and external networks as untrusted and allows access by dynamically asserting and enforcing various tiers of access.
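The “dynamically asserting tiers of access” part can be pictured as a pure policy function over device attributes. The tier names and thresholds below are invented for illustration; Google’s actual device-inventory pipeline is far richer:

```python
def access_tier(device: dict) -> str:
    """Map device posture to an access tier, trusting no network location."""
    if not device.get("managed") or not device.get("disk_encrypted"):
        return "untrusted"      # quarantine: no internal apps at all
    if device.get("os_patch_age_days", 999) > 30:
        return "basic"          # low-sensitivity internal apps only
    return "privileged"         # full access to internal applications

print(access_tier({"managed": True, "disk_encrypted": True,
                   "os_patch_age_days": 3}))   # privileged
print(access_tier({"managed": False}))         # untrusted
```

The key design point is that the function never looks at where the request came from -- a device on the corporate LAN and one in a coffee shop are judged by exactly the same posture checks.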
As for CSA, the SDP Working Group hopes to get its analysis of what’s required for SDP security for those use cases, along with architectural and deployment guidelines, published in the next year or two.
Posted by Brian Robinson on Jun 03, 2016 at 1:31 PM
A recent report painted a curious picture of the state of the federal government’s cybersecurity stance a year after the attack on the Office of Personnel Management -- and its massive breach of government employee data -- was revealed.
The report, by the non-profit industry group (ISC)2, suggested overall that government is still struggling with cybersecurity and how to effectively protect its networks, systems and data. Critical offices in many agencies, which by now should understand security imperatives, still aren’t on board.
However, what the report indicated for one key security tool may be the most interesting part.
When it comes to the technologies agencies can use to improve security, a large share of the security and IT professionals surveyed said they are looking to predictive analytics as the most significant and “game-changing” solution available to them. Predictive analytics received over 40 percent of the votes, against just single-digit support for other solutions such as next-generation, identity-based distributed firewalls.
The report itself pointed out that the predictive analytics hype generated by the security industry could be behind that response. No security solution today is complete without at least some mention of a powerful analytics engine at the heart of it that will help the user get ahead of the bad guys and the threats they pose.
Analytics, as in being able to sift through vast amounts of data and flag potential dangers, certainly is a vital tool for security organizations. It provides a way to automate threat detection and allows organizations to more quickly respond to threats and intrusions, which in itself can significantly limit the impact of cyberattacks.
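At its simplest, that kind of automated flagging is statistical outlier detection against an activity baseline. A toy sketch, with invented data and thresholds:

```python
import statistics

# Daily failed-login counts for one account (made-up baseline data).
baseline = [3, 5, 4, 6, 2, 5, 4, 3, 5, 4]

def is_anomalous(today: int, history: list, z_threshold: float = 3.0) -> bool:
    """Flag a value more than z_threshold standard deviations from the mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0   # guard against zero variance
    return abs(today - mean) / stdev > z_threshold

print(is_anomalous(4, baseline))    # False: an ordinary day
print(is_anomalous(60, baseline))   # True: likely a brute-force attempt
```

Real security analytics products layer far more sophisticated models on top, but the underlying move is the same: establish what normal looks like, then surface deviations fast enough for a human or an automated response to act.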
Predictive analytics, on the other hand, promises those organizations an ability that’s a step or two beyond that. As one of the respondents to the (ISC)2 report said, although “the jury is still out,” it’s a key component in getting ahead of the threat and preventing malicious activity rather than just cleaning up after the fact. The verdict on these predictive tools “is coming soon,” this former federal CISO said.
The Department of Homeland Security, for one, certainly seems convinced of the potential. In its fiscal 2016 performance plan, the DHS Office of Inspector General put predictive analytics front and center in preventing terrorism and enhancing security.
It’s not just security that can benefit. Other industries, such as healthcare, also see enormous potential in predictive analytics, and it’s apparently already driving a transformation in the way medical professionals assess their patients’ risk of contracting various diseases and conditions.
There’s no question that big data (itself once a much-hyped term) and analytics are becoming a large part of how organizations set themselves up to respond to cybersecurity threats, particularly as the black hats continue to design more sophisticated threats. Gartner, for example, has regularly projected their uptake by companies over the past few years.
When it comes to predictive analytics, however, some Gartner analysts are less sanguine. The results of predictive analytics don’t make for a convincing argument so far, though there’s always hope.
To be fair, the (ISC)2 report also makes that uncertainty clear. Another respondent to the survey noted that while predictive analytics may help, they can’t be considered a silver bullet because bad guys these days work very hard to mask their activities and to make themselves look like routine users of the network.
So is predictive analytics really the game changer many seem to think it is, or at least could be? It seems likely to be a part of the security toolkit, and possibly even a vital part. But given the way the threat industry has managed to twist and morph itself around defenses so far, it’s unlikely to be the answer.
Unfortunately, even for it to get that far, government organizations need to get much more serious about their security overall. On that issue, at least, the (ISC)2 report seems to be certain: The situation is depressingly bad.
Posted by Brian Robinson on May 20, 2016 at 8:30 AM