The security threat faced by government networks and computer systems should now be obvious to everyone, even if some of the efforts to protect against those threats have been tardy. Threats against critical infrastructure systems, which are just as important to all levels of government, are less well known.
Security vendor Kaspersky Lab has taken a deep dive into the world of industrial control systems (ICS), which form the digital backbone of critical infrastructure systems, and found that it’s a very scary place. Even though the 189 ICS vulnerabilities it found in 2015 are at roughly the same level as in the previous few years, the report said, that’s 10 times more than were discovered in 2010.
The higher numbers can likely be put down to increased attention on ICS security. However, as Kaspersky pointed out, that also means those vulnerabilities likely have been present for years before they were discovered and, presumably, open to exploit that whole time.
Just under half of the vulnerabilities in 2015 were considered critical by Kaspersky, and most of the rest were of “medium severity.” However, exploits for 26 of the vulnerabilities are already available, it said, while for many of the others no exploit code was even necessary to gain unauthorized access to the vulnerable systems. Kaspersky also found that only 85 percent of the published vulnerabilities had been completely fixed, meaning the rest remained at least partially open to attack.
As with other types of cyberattacks, the threats against critical infrastructure systems seem to be getting more sophisticated. The hairs on the back of many people’s necks stood to attention when a likely state-sponsored attack on Ukraine’s power grid in December 2015 was discovered. An analysis said it was the first time such an attack had been made against a nation’s critical infrastructure systems.
Fearful that a similar attack could be leveled against U.S. systems, several senators recently proposed legislation that seeks to guard against that by replacing some of the digital components in the U.S. power grid with analog versions as a first attempt to stiffen the country’s critical infrastructure defenses.
The bad news continues. SentinelOne, another security firm, has found other sophisticated malware targeting at least one energy company. It’s likely a dropper tool used to gain access to carefully targeted network users, and it “exhibits traits seen in previous nation-state rootkits and appears to have been designed by multiple developers with high-level skills and access to considerable resources,” the company said.
In other words, this is another piece of government-sponsored malware aimed at critical infrastructure. What’s more concerning is that the malware, called Furtim, was found on a dark web hacking forum, where such government-sponsored stuff isn’t usually found.
The potential danger of these kinds of attacks has been recognized by the U.S. government for some time, with outfits such as the National Institute of Standards and Technology and the Department of Homeland Security describing various security frameworks and monitoring practices that companies and infrastructure organizations should adopt to boost their cyber defenses.
More specific tools could be on the way. The Defense Advanced Research Projects Agency, for example, will soon kick off its Rapid Attack Detection, Isolation and Characterization Systems (RADICS) program, which is aimed at developing automated systems that will help utilities restore power within seven days of a cyberattack. Part of that program is intended to produce tools that “can localize and characterize malicious software that has gained access to critical utility systems,” according to the broad agency announcement.
The problems posed by the growing, and increasingly sophisticated, attacks on critical infrastructure expand when the Internet of Things is taken into account. With many systems linked through the IoT, new vulnerabilities may be created by the “expanded” critical infrastructure. As Kaspersky Lab points out, business requirements now often dictate that ICS link with external systems and networks.
Protecting the infrastructure from attack will require a new way of thinking about critical systems cybersecurity. The old ways of isolating critical environments and “security through obscurity” can no longer be considered sufficient security controls for ICS, Kaspersky said.
Posted by Brian Robinson on Jul 18, 2016 at 2:25 PM
The challenge of so-called “shadow IT” is the inherent insecurity posed by unsanctioned devices and applications used throughout the enterprise. If IT managers don’t know what they’ve got running on the network, they can’t assess the risk these smartphones and apps pose or what kind of malware is poised to strike at the agency’s systems and data.
Even if users are aware of the potential problems of the devices and applications they are toting in the workplace, that doesn’t mean they are safe. As the Defense Department recently pointed out, actual malware doesn’t have to exist in the apps on a device for them to pose a threat.
In an advisory put out by several of the services, common access card (CAC) users were warned not to use a free application, available on Google Play, that scans the barcode on the front of the ID card and extracts the cardholder’s personal data, such as name, Social Security number, military rank and DOD ID number.
As one memo from the Air Force put it, why would users even need such an app since, presumably, they already know the details embedded in their own cards? And even if there is an innocent reason for scanning other cards (some kind of misplaced curiosity?), there’s no way to know where the scanned information will end up.
The app, called CAC Scan, expands the definition of what should be considered a “risky” app in the bring-your-own-device and shadow IT era, according to mobile security company Lookout. When it analyzed the app, it found no malicious behavior that would trigger any regular security concern, but the app nevertheless accurately decodes the contents of the barcode on the front of the CAC.
The DOD itself was thinking of the insider threat posed by this app. But a bigger problem, as Lookout engineer Alex Gladd pointed out, is that this barcode scanner app saves a history of all of the barcodes its users scan and stores that data in an unencrypted database. A bad guy could use a targeted phishing campaign to get a copy of that database and subsequently extract the sensitive personal information of military members.
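Lookout’s point about unencrypted storage is easy to demonstrate. The sketch below uses an invented table layout (not CAC Scan’s actual database schema) to show why an unencrypted scan history is so dangerous: anyone who obtains a copy of the file can read every record with stock SQLite tooling, no password or key required.

```python
import sqlite3

# Hypothetical schema -- NOT the real CAC Scan database layout -- used
# only to illustrate the risk of storing scan history in the clear.
conn = sqlite3.connect(":memory:")  # stands in for an exfiltrated file
conn.execute("CREATE TABLE scan_history (scanned_at TEXT, raw_barcode TEXT)")
conn.execute(
    "INSERT INTO scan_history VALUES (?, ?)",
    ("2016-07-01T09:15:00", "DOE.JOHN.A|123-45-6789|SGT|1234567890"),
)

# An attacker who phishes a copy of this database needs no decryption
# step at all -- the sensitive fields are sitting in plaintext:
for scanned_at, raw in conn.execute("SELECT * FROM scan_history"):
    name, ssn, rank, dod_id = raw.split("|")
    print(scanned_at, name, ssn, rank, dod_id)
```

Encrypting the history at rest, or simply not storing it, would close exactly the gap Gladd describes.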
Think of the breach of the Office of Personnel Management -- except potentially even worse.
Bad guys, who are never less than innovative, have caught on to the potential of using the apparently benign apps users can download from app stores as a front end for their nefarious ends. Benign, when it comes to apps, no longer means what you think it means.
In its advisory about CAC Scan, the Army offers its CAC users a number of general pointers on mobile app security:
- Before downloading, installing or using any application, take a moment to review the “About the Developer” section and visit the developer’s website and assess its content for history, other published apps, professional appearance, etc.
- Apps that purport to allow access to military or government sites should only be installed if they are official apps and downloaded through official channels.
- Perusing user ratings and reviews gives a sense of the veracity of the application’s claims. Admittedly, no app is perfect for all users, but complaints about security should quickly stand out from other, relatively benign issues.
- Users who have inadvertently downloaded an app they’re unsure about should inspect the device’s application permissions screen to determine what other applications or information the app can access. A video game, for example, is unlikely to have a legitimate need to access your contacts.
All well and good, but does DOD -- or any other government agency -- expect all its employees to follow all of this advice? BYOD and shadow IT aren’t going away. What CAC Scan illustrates is the kind of expanded security risk all government agencies, not just the DOD, are now facing.
Posted by Brian Robinson on Jul 05, 2016 at 1:19 PM
The Department of Defense recently reported on its “Hack the Pentagon” pilot project, and you could say there’s both good and bad news. The good news is that the hackers hired to hunt down bugs in the Pentagon’s systems found over 100 vulnerabilities in the three weeks or so they had, beginning April 18. The bad news is that they found over 100 vulnerabilities.
The pilot was the first-ever government program to pay people to hunt down bugs in systems, as a way to more quickly and less expensively shore up cybersecurity. It mirrors successful programs that companies such as Facebook, Google and Microsoft have been running over the past few years.
Defense Secretary Ashton Carter revealed the number of bugs at a recent industry conference. DOD officials had earlier put the count at a slightly lower figure.
So the fact that the more than 1,400 hackers who swarmed the Pentagon’s systems found so many bugs in such a short time is good news, inasmuch as DOD can rectify the buggy systems and make them safe again. All around, the program did its job in boosting Pentagon cybersecurity and probably laid the groundwork for similar programs in the future.
However, finding that many bugs in just a few weeks, particularly when no critical or sensitive systems were included, raises doubts about just how many other vulnerabilities are present in Pentagon systems. By extension, what does that mean also for the security of other government systems?
It’s not an academic question, given the rate at which the black hatters are improving their ability to attack vulnerable systems and access sensitive data. The example of the devastating attack on the Office of Personnel Management’s systems, and the compromise of millions of records there, is only a year old, after all.
Industry researchers have turned up more evidence of just how pervasive and industrialized the cybercriminal efforts have become. An underground marketplace called xDedic is now selling access to compromised servers for as little as $6 each. It has over 70,000 servers from 173 countries belonging to government agencies or corporations up for sale.
As the researchers point out, criminals or state groups can buy the credentials of the remote desktop protocol servers and then use those to launch broader attacks on an organization’s networks and systems or use them as a platform for broader attacks, such as distributed denial of service. And all without the owners of said servers knowing what’s going on.
There’s no obvious answer to this kind of market-driven black hattery. You could maybe go in the direction that the government of Singapore has decided to go, by cutting off access to the Internet completely for a fair number of its systems. That’s already done in the U.S. by three-letter agencies for some of their systems, for example, but you take an obvious hit to productivity when you apply that more broadly.
Given the apparent success of the Pentagon pilot, there would seem to be a case for expanding that kind of bounty program. However, that runs into a very government-specific problem, i.e. the lack of money. The Pentagon had $75,000 available for the pilot, and paid the bug hunters up to $15,000 for each discovery, depending on how important the find was.
It’s likely any expanded program would need a lot more, however. Yahoo alone has reportedly paid out some $1.6 million in bounties since 2013. Recently, Google said it had paid $550,000 to 82 people in just the one year its Android Security Rewards program has been running, and it intends to boost rewards even more, to a maximum of $50,000 for such things as discovery of remote exploit chains.
So, hunting bugs is getting to be an expensive endeavor, but maybe that’s what’s needed given the ROI being offered to the bad guys. Paying $6 for a chance at a potentially huge jackpot is a no-brainer for them, which is why such things as xDedic will only become bigger and exploits more available.
Posted by Brian Robinson on Jun 20, 2016 at 3:07 PM
A hackathon is a generic industry term used to describe online or in-person events where people work collaboratively on software development. They don’t always yield perfect solutions, but they often result in major advances on tough problems.
They’re also proving vital to development of security products. The Cloud Security Alliance (CSA) has used them for several years to prove the reliability of various proposals for software-defined perimeter security for cloud-based infrastructures. So far, that SDP security has been impregnable.
Earlier this year, in the fourth hackathon of the series, the CSA, together with Verizon and security solutions company Vidder, tested the viability of a cloud-based high availability infrastructure using an SDP front end to provide access between various compute resources located across multiple public clouds.
First off, the event proved a cloud-based, high-availability infrastructure can be produced much more quickly and for considerably less money than the equivalent hardware-based version. It also proved SDP is a solid security solution. Offering a $10,000 reward, CSA and its partners invited hackers around the world to try and break the SDP security. Despite some “highly sophisticated” attempts from the 191 identified participants, who generated millions of attacks, it stood firm.
An additional demo was also included to showcase SDP’s capabilities for the U.S. government sector, where security requirements are more stringent than those for more general users of the infrastructure. A cloud-based flight management system for unmanned aerial vehicles was placed on the same network as CSA’s hackathon. Although the network was under constant attack, the UAV application was not disrupted.
SDP is an attractive solution for cloud security because it doesn’t require much new investment. It basically combines the existing device authentication, identity-based access and dynamically provisioned connectivity that most organizations should have in place under a software overlay. The CSA said the SDP model has been shown to stop all forms of network attacks, including distributed denial of service (DDoS), man-in-the-middle, SQL injection and advanced persistent threat.
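The “dark” behavior that lets SDP shrug off those attacks comes down to a default-drop gateway that responds only to clients that first prove possession of a provisioned device key, commonly via a single-packet-authorization step. The sketch below illustrates that check; the packet layout, device names and 30-second freshness window are assumptions made for the example, not the CSA’s actual specification.

```python
import hashlib
import hmac
import time

# Keys are provisioned to known devices out of band (illustrative name).
DEVICE_KEYS = {"laptop-42": b"pre-shared-device-key"}

def make_auth_packet(device_id: str, key: bytes, now: float) -> bytes:
    # Client signs its identity plus a timestamp with the shared key.
    msg = f"{device_id}|{int(now)}".encode()
    tag = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return msg + b"|" + tag.encode()

def gateway_allows(packet: bytes, now: float, max_skew: int = 30) -> bool:
    device_id, ts, tag = packet.decode().split("|")
    key = DEVICE_KEYS.get(device_id)
    if key is None:
        return False  # unknown device: stay dark, answer nothing
    expected = hmac.new(key, f"{device_id}|{ts}".encode(),
                        hashlib.sha256).hexdigest()
    fresh = abs(now - int(ts)) <= max_skew  # rejects replayed packets
    return fresh and hmac.compare_digest(tag, expected)

now = time.time()
pkt = make_auth_packet("laptop-42", DEVICE_KEYS["laptop-42"], now)
print(gateway_allows(pkt, now))         # fresh, correctly signed packet
print(gateway_allows(pkt, now + 3600))  # same packet replayed an hour later
```

Because unauthenticated traffic gets no reply at all, scanners and DDoS floods see nothing to target, which is how the hackathon infrastructure stayed effectively invisible.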
Given the cost and resource constraints government is under, agencies see potential in SDP and have already launched various initiatives involving SDP security. Late last year, for example, the Department of Homeland Security selected Waverley Labs to develop the first open source SDP to defend against large and sophisticated DDoS attacks.
The CSA, together with Waverley Labs, has been working for a year on developing open source code for SDP, with the intent of getting information security and network providers to deploy SDP widely for cloud solutions. The goal is to take the inherently open approach of the internet -- a liability for security in the age of an Internet of Things -- and essentially make parts of it dark. In these appropriately named “Dark Clouds,” only those connections that can be definitely authenticated will be allowed.
None of the technologies used for SDP are new, and all of the concepts -- such as geolocation and federation used for connectivity -- are well understood. However, most of the SDP implementations up to now have been highly customized, proprietary and designed for an organization’s specific needs. The push behind the CSA program is to develop a more general approach to SDP that can be readily applied across all organizations.
The CSA’s SDP Working Group has launched several use case initiatives, including the open-source DDoS effort and, more directly aimed at government, one that will use SDP to enable virtualized security for cloud infrastructures that complies with the “moderate” level described in the Federal Information Security Management Act. The latest initiative targets SDP that can be deployed for infrastructure as a service.
For anyone looking for examples of what it takes to erect an enterprise SDP infrastructure, Google has detailed the approach it used for its BeyondCorp initiative, which defines how employees and devices across Google access internal applications and data. With SDP, the company said, BeyondCorp essentially sees both internal and external networks as untrusted and allows access by dynamically asserting and enforcing various tiers of access.
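A toy version of that access decision, with trust tiers and per-application requirements invented for illustration (Google’s actual BeyondCorp policy engine is far richer), might look like this:

```python
# Hypothetical trust tiers and app requirements -- not Google's actual
# policy model -- showing the core BeyondCorp idea: access depends on
# the authenticated user and the device's attested trust tier, never on
# which network the request arrives from.
APP_REQUIREMENTS = {
    "expense-reports": "basic",       # any managed device will do
    "source-code": "fully_trusted",   # patched, encrypted, attested device
}
TIER_RANK = {"untrusted": 0, "basic": 1, "fully_trusted": 2}

def allow(user_authenticated: bool, device_tier: str, app: str) -> bool:
    # Note what is absent: no check of source IP or network location.
    # Internal and external networks are treated as equally untrusted.
    if not user_authenticated:
        return False
    required = APP_REQUIREMENTS.get(app)
    if required is None:
        return False  # unknown applications are denied by default
    return TIER_RANK[device_tier] >= TIER_RANK[required]

print(allow(True, "basic", "expense-reports"))  # managed device, low-risk app
print(allow(True, "basic", "source-code"))      # same device, high-risk app
```

The tiers, dynamically asserted per request, are what let the same employee reach low-risk apps from a lightly managed device while sensitive systems stay out of reach.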
As for CSA, the SDP Working Group hopes to get its analysis of what’s required for SDP security for those use cases, along with architectural and deployment guidelines, published in the next year or two.
Posted by Brian Robinson on Jun 03, 2016 at 1:31 PM