A hackathon is a generic industry term used to describe online or in-person events where people work collaboratively on software development. They don’t always yield perfect solutions, but they often result in major advances on tough problems.
They’re also proving vital to the development of security products. The Cloud Security Alliance (CSA) has used them for several years to prove the reliability of various proposals for software-defined perimeter (SDP) security for cloud-based infrastructures. So far, that SDP security has been impregnable.
Earlier this year, in the fourth hackathon of the series, the CSA, together with Verizon and security solutions company Vidder, tested the viability of a cloud-based high-availability infrastructure using an SDP front end to provide access between various compute resources located across multiple public clouds.
First off, the event proved a cloud-based, high-availability infrastructure can be produced much more quickly and for considerably less money than the equivalent hardware-based version. It also proved SDP is a solid security solution. Offering a $10,000 reward, CSA and its partners invited hackers around the world to try to break the SDP security. Despite some “highly sophisticated” attempts from the 191 identified participants who generated millions of attacks, it stood firm.
An additional demo was also included to showcase SDP’s capabilities for the U.S. government sector, where security requirements are more stringent than those for more general users of the infrastructure. A cloud-based flight management system for unmanned aerial vehicles was placed on the same network as CSA’s hackathon. Although the network was under constant attack, the UAV application was not disrupted.
SDP is an attractive solution for cloud security because it doesn’t require much new investment. It basically combines the existing device authentication, identity-based access and dynamically provisioned connectivity that most organizations should have in place under a software overlay. The CSA said the SDP model has been shown to stop all forms of network attacks, including distributed denial of service (DDoS), man-in-the-middle, SQL injection and advanced persistent threat.
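The "authenticate before connect" pattern behind that model can be sketched in a few lines. The following is a hypothetical illustration, not the CSA's actual protocol: all names, keys and the 30-second freshness window are invented for the example. The point is simply that a service gated this way stays dark to anyone who cannot present a signed token binding a known device to a known identity.

```python
import hmac, hashlib, time

SHARED_KEY = b"provisioned-out-of-band"    # per-device key (assumption)
AUTHORIZED = {("laptop-042", "alice")}     # device/identity pairs allowed

def sign(device_id: str, user_id: str, ts: int) -> str:
    """Produce an HMAC tag binding device, identity and timestamp."""
    msg = f"{device_id}|{user_id}|{ts}".encode()
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()

def gate(device_id: str, user_id: str, ts: int, tag: str) -> bool:
    """Admit only a fresh, validly signed, authorized request; drop the rest."""
    if abs(time.time() - ts) > 30:    # stale timestamp: reject replays
        return False
    if not hmac.compare_digest(tag, sign(device_id, user_id, ts)):
        return False                  # bad signature: drop silently
    return (device_id, user_id) in AUTHORIZED

now = int(time.time())
print(gate("laptop-042", "alice", now, sign("laptop-042", "alice", now)))      # authorized pair
print(gate("laptop-042", "mallory", now, sign("laptop-042", "mallory", now)))  # unknown identity
```

Because unauthenticated traffic gets no response at all, an attacker scanning the network sees nothing to probe, which is what defeats the DDoS and scanning-based attacks mentioned above.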
Given the cost and resource constraints government is under, agencies see potential in SDP and have already launched various initiatives involving SDP security. Late last year, for example, the Department of Homeland Security selected Waverley Labs to develop the first open source SDP to defend against large and sophisticated DDoS attacks.
The CSA, together with Waverley Labs, has been working for a year on developing open source code for SDP, with the intent of getting information security and network providers to deploy SDP widely for cloud solutions. The goal is to take the inherently open approach of the internet -- a liability for security in the age of an Internet of Things -- and essentially make parts of it dark. In these appropriately named “Dark Clouds,” only those connections that can be definitely authenticated will be allowed.
None of the technologies used for SDP are new, and all of the concepts -- such as geolocation and federation used for connectivity -- are well understood. However, most of the SDP implementations up to now have been highly customized, proprietary and designed for an organization’s specific needs. The push behind the CSA program is to develop a more general approach to SDP that can be readily applied across all organizations.
The CSA’s SDP Working Group has launched several use case initiatives, including the open-source DDoS effort and, more directly aimed at government, one that will use SDP to enable virtualized security for cloud infrastructures that complies with the “moderate” level described in the Federal Information Security Management Act. The latest initiative targets SDP that can be deployed for infrastructure as a service.
For anyone looking for examples of what it takes to erect an enterprise SDP infrastructure, Google has detailed the approach it used for its BeyondCorp initiative, which defines how employees and devices across Google access internal applications and data. With SDP, the company said, BeyondCorp essentially sees both internal and external networks as untrusted and allows access by dynamically asserting and enforcing various tiers of access.
As for CSA, the SDP Working Group hopes to get its analysis of what’s required for SDP security for those use cases, along with architectural and deployment guidelines, published in the next year or two.
Posted by Brian Robinson on Jun 03, 2016 at 1:31 PM
A recent report painted a curious picture of the federal government’s cybersecurity stance a year after the revelation of the attack on the Office of Personnel Management and its massive breach of government employee data.
The report, by the non-profit industry group (ISC)2, suggested overall that government is still struggling with cybersecurity and how to effectively protect its networks, systems and data. Critical offices in many agencies, which by now should understand security imperatives, still aren’t on board.
However, what the report indicated for one key security tool may be the most interesting part.
When it comes to the technologies agencies can use to improve security, a large wedge of the security and IT professionals surveyed said they are looking to predictive analytics as the most significant and “game-changing” solution available to them. Predictive analytics received over 40 percent of the votes, against just single-digit support for other solutions such as next-generation, identity-based distributed firewalls.
The report itself pointed out that the predictive analytics hype generated by the security industry could be behind that response. No security solution today is complete without at least some mention of a powerful analytics engine at the heart of it that will help the user get ahead of the bad guys and the threats they pose.
Analytics, as in being able to sift through vast amounts of data and flag potential dangers, certainly is a vital tool for security organizations. It provides a way to automate threat detection and allows organizations to more quickly respond to threats and intrusions, which in itself can significantly limit the impact of cyberattacks.
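At its simplest, that kind of automated flagging means baselining normal activity and alerting on sharp deviations. The sketch below is a toy: the data, the failed-login scenario and the three-sigma threshold are all invented for illustration, and real security analytics platforms use far richer models.

```python
import statistics

# Hypothetical hourly counts of failed logins under normal conditions.
baseline = [102, 98, 110, 95, 105, 99, 101, 97]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(count: int, z_threshold: float = 3.0) -> bool:
    """Flag an observation that deviates from the baseline by more
    than z_threshold standard deviations."""
    return abs(count - mean) / stdev > z_threshold

print(is_anomalous(104))   # a typical hour: not flagged
print(is_anomalous(450))   # a burst of failures: flagged for review
```

Predictive analytics, discussed next, aims to go a step further: rather than flagging an attack in progress, it tries to infer from patterns in the data where an attack is likely to occur before it happens.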
Predictive analytics, on the other hand, promises those organizations an ability that’s a step or two beyond that. As one of the respondents to the (ISC)2 report said, although “the jury is still out,” it’s a key component in getting ahead of the threat and preventing malicious activity rather than just cleaning up after the fact. The verdict on these predictive tools “is coming soon,” this former federal CISO said.
The Department of Homeland Security, for one, certainly seems convinced of the potential. In its fiscal 2016 performance plan, the DHS Office of Inspector General put predictive analytics front and center in preventing terrorism and enhancing security.
It’s not just security that can benefit. Other industries, such as healthcare, also see enormous potential in predictive analytics, and it’s apparently already driving a transformation in the way medical professionals assess their patients’ risk of contracting various diseases and conditions.
There’s no question that big data (itself once a much-hyped term) and analytics are becoming a large part of how organizations set themselves up to respond to cybersecurity threats, particularly as the black hats continue to design more sophisticated threats. Gartner, for example, has regularly projected their uptake by companies over the past few years.
When it comes to predictive analytics, however, some Gartner analysts are less sanguine. The results of predictive analytics don’t make for a convincing argument so far, though there’s always hope.
To be fair, the (ISC)2 report also makes that uncertainty clear. Another respondent to the survey noted that while predictive analytics may help, they can’t be considered a silver bullet because bad guys these days work very hard to mask their activities and to make themselves look like routine users of the network.
So is predictive analytics really the game changer many seem to think it is, or at least could be? It seems likely to be a part of the security toolkit, and possibly even a vital part. But given the way the threat industry has managed to twist and morph itself around defenses so far, it’s unlikely to be the answer.
Unfortunately, even for it to get that far, government organizations need to get much more serious about their security overall. On that issue, at least, the (ISC)2 report seems to be certain: The situation is depressingly bad.
Posted by Brian Robinson on May 20, 2016 at 8:30 AM
The National Institute of Standards and Technology is getting nervous about quantum computers and what they might mean for the cryptographic systems that protect both public and private data. Once seen as far off -- if not borderline science fiction -- quantum computing now seems a much closer reality.
A few days ago, IBM announced that students, researchers and “general science enthusiasts” can now use a cloud-based, 5-qubit quantum computing platform, which it calls the IBM Quantum Experience, to see how algorithms and various experiments work with a quantum processor.
IBM sees its approach to quantum computers as a first draft of how a universal quantum computer, which can be programmed to perform any computing task, will eventually be built. Getting people to experiment with this early quantum processor will, it clearly hopes, give it pointers on how to proceed in building quantum applications.
Though that universal computer doesn’t exist today, IBM said it envisions a medium-sized quantum processor of 50-100 qubits being possible in the next decade. Putting early access to a quantum processor in the hands of anybody with a desktop, laptop or even mobile device represents, IBM said grandly, “the birth of quantum cloud computing.”
There’s been a raft of recent announcements aimed at quantum computing, which has clearly emerged from its conceptual stage. Intel, for example, said it would put as much as $50 million over the next 10 years into QuTech, a research unit at the Technical University of Delft, to see how to marry Delft’s quantum computing work to Intel’s expertise in making chips.
NASA is one government agency that has gone full bore into quantum computing, hooking up with Canadian quantum computing company D-Wave to see how its systems can be used to solve difficult problems and advance artificial intelligence and machine learning. Google, which is also committing resources to quantum computing, is a partner in this NASA venture.
No one is saying quantum computing will be a major industry any time soon. IBM’s venture apart (and a 5-qubit processor is about as modest as quantum computing gets), it’s not something the general public will be able to take advantage of any time soon. However, from NIST’s perspective, that 10-year horizon is still frighteningly close when it comes to developing quantum-resistant encryption.
Current encryption methods depend on the difficulty of factoring very large numbers, such as those that enable RSA public key encryption. With the current generation of computers, even large supercomputers, that takes a very long time. For all intents and purposes, therefore, existing encryption schemes are considered very sound.
Quantum computing throws that confidence to the wind, however. By manipulating qubits -- units of quantum information analogous to classical computing bits -- computers can take advantage of quantum entanglement to do certain calculations very quickly, including those needed to break encryption schemes.
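A toy example makes the stakes concrete. The numbers below are tiny textbook values; real RSA keys use primes hundreds of digits long, where the brute-force search in `factor` is infeasible for any classical machine but would not be for a large quantum computer running Shor’s algorithm.

```python
def factor(n: int) -> tuple:
    """Trial division: exponential in the bit length of n, which is
    exactly the hardness RSA relies on."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    raise ValueError("no nontrivial factors")

p, q = 61, 53                  # secret primes
n = p * q                      # public modulus: 3233
e = 17                         # public exponent
phi = (p - 1) * (q - 1)        # computable only if you know p and q
d = pow(e, -1, phi)            # private exponent (Python 3.8+)

m = 65                         # message
c = pow(m, e, n)               # encrypt with the public key
assert pow(c, d, n) == m       # decrypt with the private key

# An attacker who can factor n recovers the private key outright:
p2, q2 = factor(n)
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))
assert pow(c, d2, n) == m      # decryption without ever being given d
```

Scale `n` up to 2048 bits and `factor` would run for longer than the age of the universe on classical hardware; a sufficiently large quantum computer changes that calculus entirely, which is the scenario NIST is planning for.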
Earlier this year, the Massachusetts Institute of Technology and the University of Innsbruck said they had assembled a quantum computer that could eventually break the RSA (Rivest-Shamir-Adleman) public key encryption, the most popular form of encryption in the world.
NIST is getting nervous because it believes pulling any kind of quantum-resistant cryptography together to take the place of RSA and other forms of encryption might take too long, because scores of people would be involved in testing and scrutinizing such cryptosystems. Things have to start happening now if those trusted cryptosystems are to be developed in time.
At the end of April, NIST kicked off this effort to develop quantum-resistant cryptography with an initial report detailing the status of quantum computing research, in which it made its concerns clear.
“While in the past it was less clear that large quantum computers are a physical possibility, many scientists now believe it to be merely a significant engineering challenge,” NIST said in the report.
Crypto-breaking quantum computers might not arrive for another 20 years, but it took almost that long to deploy the public key cryptographic infrastructure we have now, NIST said. It will take a significant effort to ensure a smooth path from what we have now to “post-quantum cryptography,” and that effort has to start now.
When standards for quantum-resistant cryptography become available, NIST said it will reassess how close the quantum threat is to affecting existing cryptography and then decide whether to deprecate or withdraw the affected standards. Agencies “should therefore be prepared to transition away from these algorithms as early as 10 years from now.”
Is NIST overreacting? You might have been able to say that a couple of years ago but, as recent events have shown, the era of full-blown quantum computing looks to be arriving far sooner than people thought. If other areas of computing are any guide, that timeframe will likely only get shorter.
Posted by Brian Robinson on May 06, 2016 at 2:08 PM
Talk about security these days often focuses on technology -- the tools agencies can deploy to keep intruders out of their networks and systems or, if they do get in, to mitigate the damage from those intrusions. Very little of that discussion addresses users and how important they are to that security.
Officials, of course, point to the vigilance required of government employees who are assailed by phishing scams to get them to cough up their personal access details. Although agencies do try to educate their workers about these dangers, it’s questionable just how effective all of that is. There’s a reason social engineering attacks such as phishing are still so popular with the black hats -- they work.
Anecdotally, many federal employees are less than inspired by their agencies’ attempts to educate them about cybersecurity. Security training, when it’s given, seems to be more about meeting policy or compliance requirements. It’s often limited to just a few hours each year, and there’s rarely any follow up to drive the lessons home.
That’s unfortunate, because user confidence can be a big ally in agency efforts to make networks more secure. When employees have confidence in their agency’s overall security, they tend to pay more attention to what they can do to help improve it.
Perhaps, therefore, warning bells should be ringing over a recent survey that shows government employee confidence in agency cybersecurity is basically shot.
Over two years, a “confident or very confident” survey response about whether agencies can protect their information systems from intrusion has gone from 65 percent to just 35 percent. Confidence about whether agencies can protect the employees’ own personal information dropped even further, from 58 percent to 28 percent.
This is after an “annus horribilis” for government security in 2015, which saw major breaches at the Office of Personnel Management, the IRS and other agencies. Millions of government employee records were compromised as a result.
Then again, confidence goes both ways when it comes to security. Another survey of organizations around the world showed they lacked confidence in their employees’ cybersecurity skills, because employees frequently used a single password for different applications and shared their passwords with coworkers. Around one-fifth of those workers said they would sell their passwords to an outsider.
And then there’s the confidence -- or lack thereof -- that organizations have in the security of their trading partners and suppliers. The stolen network credentials of an HVAC vendor were behind the infamous breach of retailer Target’s systems in 2013, which ended up compromising some 40 million customer debit and credit card accounts, and the company still hasn’t fully recovered.
A recent study showed that just over half of the respondents had a high confidence in their partners’ ability to protect data and access details. Think of the government’s increasing reliance on contractors and subcontractors and what any gap in security at those organizations could mean for agencies.
Good security, we are told over and over, is built on trust. Usually that refers to the trust between immediate partners in a secure data exchange, but it must also take into account the broader environment of providers and users. When that trust, as reflected in confidence, erodes as dramatically as some of these surveys seem to suggest, then it will take more than security tools to fix it.
Posted by Brian Robinson on Apr 22, 2016 at 6:03 AM