The bad news about bug hunting

The Department of Defense recently reported on its “Hack the Pentagon” pilot project, and you could say there’s both good and bad news. The good news is that the hackers hired to hunt down bugs in the Pentagon’s systems found over 100 vulnerabilities in the three weeks or so they had, beginning April 18. The bad news is that they found over 100 vulnerabilities.

The pilot was the first-ever government program to pay people to hunt down bugs in systems, as a way to more quickly and less expensively shore up cybersecurity. It mirrors successful programs that companies such as Facebook, Google and Microsoft have been running over the past few years.

Defense Secretary Ashton Carter revealed the number of bugs at a recent industry conference. DOD officials had earlier put the figure slightly lower.

So the fact that the more than 1,400 hackers who swarmed the Pentagon’s systems found so many bugs in such a short time is good news, inasmuch as DOD can rectify the buggy systems and make them safe again. All around, the program did its job in boosting Pentagon cybersecurity and probably laid the groundwork for similar programs in the future.

However, finding that many bugs in just a few weeks, particularly when no critical or sensitive systems were included, raises doubts about just how many other vulnerabilities are present in Pentagon systems. And, by extension, what does that mean for the security of other government systems?

It’s not an academic question, given the rate at which the black hatters are improving their ability to attack vulnerable systems and access sensitive data. The example of the devastating attack on the Office of Personnel Management’s systems, and the compromise of millions of records there, is only a year old, after all.

Industry researchers have turned up more evidence of just how pervasive and industrialized the cybercriminal efforts have become. An underground marketplace called xDedic is now selling access to compromised servers for as little as $6 each. It has over 70,000 servers from 173 countries belonging to government agencies or corporations up for sale.

As the researchers point out, criminals or state groups can buy the credentials of the remote desktop protocol servers and then use them to attack an organization’s networks and systems directly, or as a platform for broader campaigns such as distributed denial of service. And all without the owners of said servers knowing what’s going on.

There’s no obvious answer to this kind of market-driven black hattery. You could maybe go in the direction that the government of Singapore has decided to go, by cutting off access to the Internet completely for a fair number of its systems. That’s already done in the U.S. by three-letter agencies for some of their systems, for example, but you take an obvious hit to productivity when you apply that more broadly.

Given the apparent success of the Pentagon pilot, there would seem to be a case for expanding that kind of bounty program. However, that runs into a very government-specific problem, i.e. the lack of money. The Pentagon had $75,000 available for the pilot, and paid the bug hunters up to $15,000 for each discovery, depending on how important the find was.

It’s likely any expanded program would need a lot more, however. Yahoo alone has reportedly paid out some $1.6 million in bounties since 2013. Recently, Google said it had paid $550,000 to 82 people in just the one year its Android Security Rewards program has been running, and it intends to boost rewards even more, to a maximum of $50,000 for such things as discovery of remote exploit chains.

So, hunting bugs is getting to be an expensive endeavor, but maybe that’s what’s needed given the ROI being offered to the bad guys. Paying $6 for a chance at a potentially huge jackpot is a no-brainer for them, which is why such things as xDedic will only become bigger and exploits more available.

Posted by Brian Robinson on Jun 20, 2016 at 3:07 PM

Software-defined perimeter security for cloud-based infrastructures

A hackathon is a generic industry term used to describe online or in-person events where people work collaboratively on software development. They don’t always yield perfect solutions, but they often result in major advances on tough problems.

They’re also proving vital to development of security products. The Cloud Security Alliance (CSA) has used them for several years to prove the reliability of various proposals for software-defined perimeter security for cloud-based infrastructures. So far, that SDP security has been impregnable.

Earlier this year, in the fourth hackathon of the series, the CSA, together with Verizon and security solutions company Vidder, tested the viability of a cloud-based, high-availability infrastructure using an SDP front end to provide access between compute resources located across multiple public clouds.

First off, the event proved that a cloud-based, high-availability infrastructure can be produced much more quickly and for considerably less money than the equivalent hardware-based version. It also proved SDP is a solid security solution. Offering a $10,000 reward, CSA and its partners invited hackers around the world to try to break the SDP security. Despite some “highly sophisticated” attempts from the 191 identified participants, who generated millions of attacks, it stood firm.

An additional demo was also included to showcase SDP’s capabilities for the U.S. government sector, where security requirements are more stringent than those for more general users of the infrastructure. A cloud-based flight management system for unmanned aerial vehicles was placed on the same network as CSA’s hackathon. Although the network was under constant attack, the UAV application was not disrupted.

SDP is an attractive solution for cloud security because it doesn’t require much new investment. It basically combines, under a software overlay, the device authentication, identity-based access and dynamically provisioned connectivity that most organizations should already have in place. The CSA said the SDP model has been shown to stop all forms of network attacks, including distributed denial of service (DDoS), man-in-the-middle, SQL injection and advanced persistent threat.
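
To make that overlay idea concrete, here’s a minimal, hypothetical Python sketch of the pattern: a controller grants connectivity only when both the device and the user’s identity-based entitlement check out. The names and data are invented for illustration and don’t come from any particular SDP product.

from dataclasses import dataclass

# Invented example data: device fingerprints on file and identity-based entitlements.
KNOWN_DEVICES = {"laptop-4411": "sha256:ab12"}
USER_ENTITLEMENTS = {"alice": {"payroll-app", "hr-portal"}}

@dataclass
class AccessRequest:
    user: str
    device_id: str
    device_fingerprint: str
    service: str

def authorize(req: AccessRequest) -> bool:
    """Allow only if the device is known AND the user is entitled to the service."""
    device_ok = KNOWN_DEVICES.get(req.device_id) == req.device_fingerprint
    user_ok = req.service in USER_ENTITLEMENTS.get(req.user, set())
    return device_ok and user_ok

def provision_connection(req: AccessRequest) -> str:
    # A real SDP controller would instruct a gateway to open a per-session,
    # encrypted tunnel; this sketch just reports the decision.
    if authorize(req):
        return f"tunnel opened: {req.user}/{req.device_id} -> {req.service}"
    return "default deny: the service stays dark to this caller"

print(provision_connection(AccessRequest("alice", "laptop-4411", "sha256:ab12", "payroll-app")))
print(provision_connection(AccessRequest("mallory", "laptop-9999", "sha256:ff00", "payroll-app")))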

Given the cost and resource constraints government is under, agencies see potential in SDP and have already launched various initiatives involving SDP security. Late last year, for example, the Department of Homeland Security selected Waverley Labs to develop the first open source SDP to defend against large and sophisticated DDoS attacks.

The CSA, together with Waverley Labs, has been working for a year on developing open source code for SDP, with the intent of getting information security and network providers to deploy SDP widely for cloud solutions. The goal is to take the inherently open approach of the internet -- a liability for security in the age of the Internet of Things -- and essentially make parts of it dark. In these appropriately named “Dark Clouds,” only those connections that can be definitively authenticated will be allowed.
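
One common way SDP implementations keep services dark is single-packet authorization, in which a gateway silently drops every packet that doesn’t carry a valid cryptographic tag, so unauthorized scanners never even see an open port. The sketch below is a simplified, hypothetical illustration of that idea, not the wire format of any particular product.

import hashlib
import hmac
import os
import time

SHARED_KEY = os.urandom(32)  # in practice, provisioned out of band to authorized clients

def spa_tag(client_id: str, timestamp: int, key: bytes) -> bytes:
    """Compute the authorization tag the client sends in its first packet."""
    msg = f"{client_id}:{timestamp}".encode()
    return hmac.new(key, msg, hashlib.sha256).digest()

def gateway_accepts(client_id: str, timestamp: int, tag: bytes) -> bool:
    # Reject stale packets and anything whose tag doesn't verify;
    # everything else is dropped without a response.
    if abs(time.time() - timestamp) > 30:
        return False
    expected = spa_tag(client_id, timestamp, SHARED_KEY)
    return hmac.compare_digest(expected, tag)

now = int(time.time())
print(gateway_accepts("alice-laptop", now, spa_tag("alice-laptop", now, SHARED_KEY)))  # True
print(gateway_accepts("scanner", now, b"\x00" * 32))  # False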

None of the technologies used for SDP are new, and all of the concepts -- such as geolocation and federation used for connectivity -- are well understood. However, most of the SDP implementations up to now have been highly customized, proprietary and designed for an organization’s specific needs. The push behind the CSA program is to develop a more general approach to SDP that can be readily applied across all organizations.

The CSA’s SDP Working Group has launched several use case initiatives, including the open-source DDoS effort and, more directly aimed at government, one that will use SDP to enable virtualized security for cloud infrastructures that complies with the “moderate” level described in the Federal Information Security Management Act. The latest initiative targets SDP that can be deployed for infrastructure as a service.

For anyone looking for examples of what it takes to erect an enterprise SDP infrastructure, Google has detailed the approach it used for its BeyondCorp initiative, which defines how employees and devices across Google access internal applications and data. With SDP, the company said, BeyondCorp essentially sees both internal and external networks as untrusted and allows access by dynamically asserting and enforcing various tiers of access.
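
As a rough, hypothetical illustration of those tiers of access (the tier names, signals and thresholds below are invented, not Google’s actual policy), the decision can be computed per request from user and device state rather than from network location:

def trust_tier(device_managed: bool, disk_encrypted: bool, patched: bool, mfa_used: bool) -> int:
    """Higher is more trusted; note that network location is deliberately not an input."""
    tier = 0
    if device_managed and disk_encrypted:
        tier = 1
    if tier == 1 and patched:
        tier = 2
    if tier == 2 and mfa_used:
        tier = 3
    return tier

# Each internal application demands a minimum tier (values invented for illustration).
APP_MINIMUM_TIER = {"wiki": 1, "source-code": 2, "prod-admin": 3}

def allowed(app: str, **signals: bool) -> bool:
    return trust_tier(**signals) >= APP_MINIMUM_TIER[app]

print(allowed("source-code", device_managed=True, disk_encrypted=True, patched=True, mfa_used=False))  # True
print(allowed("prod-admin", device_managed=True, disk_encrypted=True, patched=True, mfa_used=False))   # False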

As for CSA, the SDP Working Group hopes to get its analysis of what’s required for SDP security for those use cases, along with architectural and deployment guidelines, published in the next year or two.

Posted by Brian Robinson on Jun 03, 2016 at 1:31 PM

Is predictive analytics really a game changer?

A recent report painted a curious picture of the state of the federal government’s cybersecurity stance a year after the attack on the Office of Personnel Management, and its massive breach of government employee data, was revealed.

The report, by the non-profit industry group (ISC)2, suggested overall that government is still struggling with cybersecurity and how to effectively protect its networks, systems and data. Critical offices in many agencies, which by now should understand security imperatives, still aren’t on board.

However, what the report indicated about one key security tool may be the most interesting part.

When it comes to the technologies agencies can use to improve security, a large wedge of the security and IT professionals surveyed said they are looking to predictive analytics as the most significant and “game-changing” solution available to them. Predictive analytics received over 40 percent of the votes, against just single-digit percentages for other solutions such as next-generation, identity-based distributed firewalls.

The report itself pointed out that the predictive analytics hype generated by the security industry could be behind that response. No security solution today is complete without at least some mention of a powerful analytics engine at the heart of it that will help the user get ahead of the bad guys and the threats they pose.

Analytics, as in being able to sift through vast amounts of data and flag potential dangers, certainly is a vital tool for security organizations. It provides a way to automate threat detection and allows organizations to more quickly respond to threats and intrusions, which in itself can significantly limit the impact of cyberattacks.
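
As a toy illustration of that sifting-and-flagging role (the data and threshold are invented, and real security analytics platforms use far richer features and models), even a few lines of Python can automate the obvious triage by surfacing the account whose behavior sits far outside the norm:

from statistics import median

# Invented example data: failed login counts per account over some window.
failed_logins = {"alice": 2, "bob": 3, "carol": 1, "dave": 4, "eve": 87}

counts = list(failed_logins.values())
med = median(counts)
mad = median(abs(n - med) for n in counts)  # median absolute deviation, robust to outliers

def modified_z(n: int) -> float:
    """Standard modified z-score; values above roughly 3.5 are commonly treated as outliers."""
    return 0.6745 * (n - med) / mad if mad else 0.0

flagged = [user for user, n in failed_logins.items() if modified_z(n) > 3.5]
print(flagged)  # ['eve']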

Predictive analytics, on the other hand, promise those organizations an ability that’s a step or two beyond that. As one of the respondents to the (ISC)2 report said, although “the jury is still out,” it’s a key component in getting ahead of the threat and preventing malicious activity rather than just cleaning up after the fact. The verdict on these predictive tools “is coming soon,” this former federal CISO said.

The Department of Homeland Security, for one, certainly seems convinced of the potential. In its fiscal 2016 performance plan, the DHS Office of Inspector General put predictive analytics front and center in preventing terrorism and enhancing security.

It’s not just security that can benefit. Other industries, such as healthcare, also see enormous potential in predictive analytics, and it’s apparently already driving a transformation in the way medical professionals assess their patients’ risk of contracting various diseases and conditions.

There’s no question that big data (itself once a much-hyped term) and analytics are becoming a large part of how organizations set themselves up to respond to cybersecurity threats, particularly as the black hats continue to design more sophisticated threats. Gartner, for example, has regularly projected their uptake by companies over the past few years.

When it comes to predictive analytics, however, some Gartner analysts are less sanguine. The results of predictive analytics don’t make for a convincing argument so far, though there’s always hope.

To be fair, the (ISC)2 report also makes that uncertainty clear. Another respondent to the survey noted that while predictive analytics may help, they can’t be considered a silver bullet because bad guys these days work very hard to mask their activities and to make themselves look like routine users of the network.

So is predictive analytics really the game changer many seem to think it is, or at least could be? It seems likely to be a part of the security toolkit, and possibly even a vital part.  But given the way the threat industry has managed to twist and morph itself around defenses so far, it’s unlikely to be the answer.

Unfortunately, even for it to get that far, government organizations need to get much more serious about their security overall. On that issue, at least, the (ISC)2 report seems to be certain: The situation is depressingly bad.

Posted by Brian Robinson on May 20, 2016 at 8:30 AM

[Image: IBM quantum computer]

Prep for next-gen encryption should start yesterday

The National Institute of Standards and Technology is getting nervous about quantum computers and what they might mean for the cryptographic systems that protect both public and private data. Once seen as far off -- if not borderline science fiction -- quantum computing now seems a much closer reality.

A few days ago, IBM announced that students, researchers and “general science enthusiasts” can now use a cloud-based, 5-qubit quantum computing platform, which it calls the IBM Quantum Experience, to see how algorithms and various experiments work with a quantum processor.

IBM sees its approach to quantum computers as a first draft of how a universal quantum computer, which can be programmed to perform any computing task, will eventually be built. Getting people to experiment with this early quantum processor will, it clearly hopes, give it pointers on how to proceed in building quantum applications.

Though that universal computer doesn’t exist today, IBM said it envisions a medium-sized quantum processor of 50-100 qubits being possible in the next decade. Putting early access to a quantum processor in the hands of anybody with a desktop, laptop or even mobile device represents, IBM said grandly, “the birth of quantum cloud computing.”

There’s been a raft of recent announcements aimed at quantum computing, which has clearly emerged from its conceptual stage. Intel, for example, said it would put as much as $50 million over the next 10 years into QuTech, a research unit at the Technical University of Delft, to see how to marry Delft’s quantum computing work to Intel’s expertise in making chips.

NASA is one government agency that has gone full bore into quantum computing, hooking up with Canadian quantum computing company D-Wave to see how its systems can be used to solve difficult problems and advance artificial intelligence and machine learning. Google, which is also committing resources to quantum computing, is a partner in this NASA venture.

No one is saying quantum computing will be a major industry any time soon. Apart from IBM’s venture, which sits at the very low end of the quantum computing scale, it’s not something the general public will be able to take advantage of for years. However, from NIST’s perspective, that 10-year horizon is still frighteningly close when it comes to developing quantum-resistant encryption.

Current public key encryption methods, such as RSA, depend on the difficulty of factoring very large numbers. With the current generation of computers, even large supercomputers, that factoring takes a very long time. For all intents and purposes, therefore, existing encryption schemes are considered very sound.
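
A toy sketch of that dependence, using textbook-sized numbers (real keys use primes hundreds of digits long, at which point the brute-force factoring below becomes hopeless), shows how completely the private key falls out once the modulus is factored:

# Toy RSA with tiny textbook primes -- never remotely this small in practice.
p, q = 61, 53
n = p * q                 # 3233, the public modulus
e = 17                    # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)       # private exponent (modular inverse; Python 3.8+)

msg = 42
cipher = pow(msg, e, n)
assert pow(cipher, d, n) == msg  # decryption with the private key works

def trial_factor(n: int) -> tuple:
    """Brute-force factoring: instant here, infeasible for a 2048-bit modulus."""
    f = 2
    while n % f:
        f += 1
    return f, n // f

# An attacker who factors n recovers the private key immediately.
pp, qq = trial_factor(n)
d_recovered = pow(e, -1, (pp - 1) * (qq - 1))
print(pow(cipher, d_recovered, n))  # 42 -- the plaintext, recovered without the key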

Quantum computing throws that confidence to the wind, however. By manipulating qubits -- units of quantum information analogous to classical computing bits -- computers can take advantage of quantum entanglement to do certain calculations very quickly, including those needed to break encryption schemes.

Earlier this year, the Massachusetts Institute of Technology and the University of Innsbruck said they had assembled a quantum computer that could eventually break the RSA (Rivest-Shamir-Adleman) public key encryption, the most popular form of encryption in the world.

NIST is getting nervous because it believes pulling any kind of quantum-resistant cryptography together to take the place of RSA and other forms of encryption might take too long, given the scores of people who would need to test and scrutinize such cryptosystems. Things have to start happening now if those trusted cryptosystems are to be developed in time.

At the end of April, NIST kicked off this effort to develop quantum-resistant cryptography with an initial report detailing the status of quantum computing research, in which it made its concerns clear.

“While in the past it was less clear that large quantum computers are a physical possibility, many scientists now believe it to be merely a significant engineering challenge,” NIST said in the report.

Crypto-breaking quantum computers might not arrive for another 20 years, but it took almost that long to deploy the public key cryptographic infrastructure we have now, NIST said. It will take a significant effort to ensure a smooth path from what we have now to “post-quantum cryptography,” and that effort has to start now.

When standards for quantum-resistant cryptography become available, NIST said it will reassess how close the quantum threat is to affecting existing cryptography and then decide whether to deprecate or withdraw the affected standards. Agencies “should therefore be prepared to transition away from these algorithms as early as 10 years from now.”

Is NIST overreacting? You might have been able to say that a couple of years ago but, as recent events have shown, the era of full-blown quantum computing looks to be arriving far sooner than people thought. If other areas of computing are any guide, that timeframe will likely only get shorter.

Posted by Brian Robinson on May 06, 2016 at 2:08 PM