Prep for next-gen encryption should start yesterday

The National Institute of Standards and Technology is getting nervous about quantum computers and what they might mean for the cryptographic systems that protect both public and private data. Once seen as far off -- if not borderline science fiction -- quantum computing now seems a much closer reality.

A few days ago, IBM announced that students, researchers and “general science enthusiasts” can now use a cloud-based, 5-qubit quantum computing platform, which it calls the IBM Quantum Experience, to see how algorithms and various experiments work with a quantum processor.

IBM sees its approach to quantum computers as a first draft of how a universal quantum computer, which can be programmed to perform any computing task, will eventually be built. Getting people to experiment with this early quantum processor will, it clearly hopes, give it pointers on how to proceed in building quantum applications.

Though that universal computer doesn’t exist today, IBM said it envisions a medium-sized quantum processor of 50-100 qubits being possible in the next decade. Putting early access to a quantum processor in the hands of anybody with a desktop, laptop or even mobile device represents, IBM said grandly, “the birth of quantum cloud computing.”

There’s been a raft of recent announcements around quantum computing, which has clearly emerged from its conceptual stage. Intel, for example, said it would put as much as $50 million over the next 10 years into QuTech, a research unit at the Technical University of Delft, to see how to marry Delft’s quantum computing work to Intel’s expertise in making chips.

NASA is one government agency that has gone full bore into quantum computing, hooking up with Canadian quantum computing company D-Wave to see how its systems can be used to solve difficult problems and advance artificial intelligence and machine learning. Google, which is also committing resources to quantum computing, is a partner in this NASA venture.

No one is saying quantum computing will be a major industry any time soon. IBM’s venture apart -- and a 5-qubit processor is about as modest as quantum computing gets -- it’s not something the general public will be able to take advantage of for practical work in the near future. However, from NIST’s perspective, that 10-year horizon is still frighteningly close when it comes to developing quantum-resistant encryption.

Current public key encryption methods, such as RSA, depend on the difficulty of factoring very large numbers. With the current generation of computers, even large supercomputers, that takes a very long time. For all intents and purposes, therefore, existing encryption schemes are considered very sound.
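
To make that dependence concrete, here is a toy sketch in Python, using deliberately tiny, made-up numbers, of how recovering an RSA private key reduces to factoring the public modulus. It is an illustration of the principle, not a real implementation; production RSA uses moduli of 2048 bits or more, far beyond trial division.

```python
# Toy illustration (not real cryptography) of why RSA's security rests on
# factoring: once the public modulus n is factored, the private key falls out.
# All numbers here are tiny, assumed values chosen for readability.

p, q = 61, 53                  # secret primes
n = p * q                      # public modulus (3233)
phi = (p - 1) * (q - 1)        # Euler's totient, kept secret
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent (modular inverse, Python 3.8+)

message = 42
ciphertext = pow(message, e, n)            # encrypt: m^e mod n
assert pow(ciphertext, d, n) == message    # decrypt: c^d mod n

# An attacker who can factor n rebuilds d and reads the ciphertext.
def trial_factor(n):
    """Brute-force factoring -- feasible only because n is tiny."""
    for candidate in range(2, int(n ** 0.5) + 1):
        if n % candidate == 0:
            return candidate, n // candidate

p2, q2 = trial_factor(n)
d_recovered = pow(e, -1, (p2 - 1) * (q2 - 1))
assert pow(ciphertext, d_recovered, n) == message   # private key fully recovered
```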

Quantum computing throws that confidence to the wind, however. By manipulating qubits -- units of quantum information analogous to classical computing bits -- quantum computers can take advantage of superposition and entanglement to do certain calculations very quickly, including those needed to break encryption schemes.
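
The specific calculation a quantum computer accelerates here is order finding: given the modulus n and a base a, find the smallest r such that a raised to the r leaves a remainder of 1 when divided by n. Shor’s algorithm does this in polynomial time; the classical brute-force version sketched below (Python, toy numbers only) takes exponentially long as n grows, which is exactly why large RSA moduli remain safe from ordinary computers. The factor-recovery step is standard number theory, not the quantum part.

```python
# Classical sketch of the order-finding step that Shor's algorithm speeds up.
# Brute force works only because the modulus is tiny; for real RSA moduli the
# search space is astronomically large.

from math import gcd

def find_order(a, n):
    """Smallest r > 0 such that a**r % n == 1 (brute-force search)."""
    value, r = a % n, 1
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

def factor_via_order(n):
    """Recover a nontrivial factor of n from the order of some base a --
    the same number-theoretic reduction Shor's algorithm relies on."""
    for a in range(2, n):
        if gcd(a, n) != 1:
            return gcd(a, n)              # lucky hit: a already shares a factor
        r = find_order(a, n)
        if r % 2 == 0:
            x = pow(a, r // 2, n)
            for candidate in (gcd(x - 1, n), gcd(x + 1, n)):
                if 1 < candidate < n:
                    return candidate
        # otherwise this base fails; try the next one

print(factor_via_order(3233))             # 3233 = 61 * 53, so this prints 61
```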

Earlier this year, the Massachusetts Institute of Technology and the University of Innsbruck said they had assembled a quantum computer that could eventually break the RSA (Rivest-Shamir-Adleman) public key encryption, the most popular form of encryption in the world.

NIST is getting nervous because it believes that developing quantum-resistant cryptography to take the place of RSA and other forms of encryption might take too long, since any candidate cryptosystems must be tested and scrutinized by scores of people. Things have to start happening now if those trusted cryptosystems are to be developed in time.

At the end of April, NIST kicked off this effort to develop quantum-resistant cryptography with an initial report detailing the status of quantum computing research, in which it made its concerns clear.

“While in the past it was less clear that large quantum computers are a physical possibility, many scientists now believe it to be merely a significant engineering challenge,” NIST said in the report.

Crypto-breaking quantum computers might not arrive for another 20 years, but it took almost that long to deploy the public key cryptographic infrastructure we have now, NIST said. It will take a significant effort to ensure a smooth path from what we have now to “post-quantum cryptography,” and that effort has to start now.

When standards for quantum-resistant cryptography become available, NIST said it will reassess how close the quantum threat is to affecting existing cryptography and then decide whether to deprecate or withdraw the affected standards. Agencies “should therefore be prepared to transition away from these algorithms as early as 10 years from now.”

Is NIST overreacting? You might have been able to say that a couple of years ago but, as recent events have shown, the era of full-blown quantum computing looks to be arriving far sooner than people thought. If other areas of computing are any guide, that timeframe will likely only get shorter.

Posted by Brian Robinson on May 06, 2016 at 2:08 PM


Confidence: The secret sauce for security

Talk about security these days often focuses on technology -- the tools agencies can deploy to keep intruders out of their networks and systems or, if they do get in, to mitigate the damage from those intrusions. Very little of that discussion is devoted to users and how important they are to that security.

Officials, of course, point to the vigilance required of government employees who are assailed by phishing scams to get them to cough up their personal access details. Although agencies do try to educate their workers about these dangers, it’s questionable just how effective all of that is. There’s a reason social engineering attacks such as phishing are still so popular with the black hats -- they work.

Anecdotally, many federal employees are less than inspired by their agencies’ attempts to educate them about cybersecurity. Security training, when it’s given, seems to be more about meeting policy or compliance requirements. It’s often limited to just a few hours each year, and there’s rarely any follow-up to drive the lessons home.

That’s unfortunate, because user confidence can be a big ally in agency efforts to make networks more secure. When employees have confidence in their agency’s overall security, they tend to pay more attention to what they can do to help improve it.

Perhaps, therefore, warning bells should be ringing over a recent survey that shows government employee confidence in agency cybersecurity is basically shot.

Over two years, the share of employees who said they were “confident or very confident” that their agencies could protect their information systems from intrusion has fallen from 65 percent to just 35 percent. Confidence that agencies can protect employees’ own personal information dropped to an even lower level, from 58 percent to 28 percent.

This is after an “annus horribilis” for government security in 2015, which saw major breaches at the Office of Personnel Management, the IRS and other agencies. Millions of government employee records were compromised as a result.

Then again, confidence goes both ways when it comes to security. Another survey of organizations around the world showed they lacked confidence in their employees’ cybersecurity skills, because employees frequently used a single password for different applications and shared their passwords with coworkers. Around one-fifth of those workers said they would sell their passwords to an outsider.

And then there’s the confidence -- or lack thereof -- that organizations have in the security of their trading partners and suppliers. The stolen network credentials of an HVAC vendor were behind the infamous breach of retailer Target’s systems in 2013, which ended up compromising some 40 million customer debit and credit card accounts, and the company still hasn’t fully recovered.

A recent study showed that just over half of the respondents had a high confidence in their partners’ ability to protect data and access details. Think of the government’s increasing reliance on contractors and subcontractors and what any gap in security at those organizations could mean for agencies.

Good security, we are told over and over, is built on trust. Usually that refers to the trust between immediate partners in a secure data exchange, but it must also take into account the broader environment of providers and users. When that trust, as reflected in confidence, erodes as dramatically as some of these surveys seem to suggest, then it will take more than security tools to fix it.

Posted by Brian Robinson on Apr 22, 2016 at 6:03 AM


Government slow to mount defense against APTs

That advanced persistent threats are now the biggest cybersecurity problem government agencies face will not be news to many people. What may still be surprising, however, is just how long this problem has existed. The FBI believes at least one group of hackers has been using APTs against government agencies for at least the past five years -- and possibly much longer.

An alert posted online warned that the FBI “has obtained and validated information regarding a group of malicious cyber actors who have compromised and stolen sensitive information from various government and commercial networks,” and posted a list of domains the group had used to infiltrate networks and systems in the United States and abroad “since at least 2011.”

The domains have also been used to host malicious files, the FBI said, often through embedded links in spearphishing emails, and any activity related to the listed domains should be considered an indication of a compromise that needs mitigation.
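
Acting on an alert like that is mostly a matter of sweeping existing logs for the published indicators. The sketch below (Python, with hypothetical file names and a made-up log format) shows a minimal version of that check against DNS or proxy logs; it illustrates the idea rather than any particular agency’s tooling.

```python
# Minimal sketch: sweep a DNS/proxy log for hits against indicator domains.
# File names and log format are hypothetical; adapt to your own environment.

# indicators.txt: one known-bad domain per line (e.g., from an FBI alert)
with open("indicators.txt") as f:
    bad_domains = {line.strip().lower() for line in f if line.strip()}

hits = []
# dns.log: whitespace-separated lines, with the queried domain in column 3
with open("dns.log") as f:
    for lineno, line in enumerate(f, 1):
        fields = line.split()
        if len(fields) < 3:
            continue
        domain = fields[2].lower().rstrip(".")
        # flag exact matches and subdomains of an indicator
        if domain in bad_domains or any(domain.endswith("." + d) for d in bad_domains):
            hits.append((lineno, domain))

for lineno, domain in hits:
    print(f"possible compromise indicator at dns.log line {lineno}: {domain}")
```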

The group identified by the FBI is believed to be a hacking unit called APT6, which various sources consider likely to be a nation-state-sponsored group based in China. A Chinese-sponsored group was also thought to be behind last year’s breach of the Office of Personnel Management’s systems, which compromised millions of government worker records.

The FBI alert highlights the still-yawning gap between the sophistication of those who want to get into government systems, and the government’s ability to defend against these APT attacks. The Department of Homeland Security’s Einstein and Continuous Diagnosis and Mitigation (CDM) programs, for example, have been touted as government’s main efforts to get effective security tools into agencies, but until now they’ve been based on known-signature detection, which is useless against APTs.
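
The difference between the two approaches is easy to see in miniature. The deliberately simplified sketch below (Python, invented data) contrasts a signature check, which catches only files whose hash is already on a known-bad list, with a crude behavioral check that flags a host deviating sharply from its own baseline. Einstein and the CDM tools are far more sophisticated than this; the point is only why signatures alone miss novel tradecraft.

```python
# Simplified contrast between signature-based and behavior-based detection.
# All data below is invented for illustration.

import hashlib
import statistics

# --- Signature detection: catches only what is already known ---
KNOWN_BAD_HASHES = {"0" * 64}   # placeholder for sha256 digests of known malware

def signature_match(file_bytes: bytes) -> bool:
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES

# A brand-new implant has no entry on the list, so it sails through:
print(signature_match(b"novel implant, never seen before"))       # False

# --- Behavioral detection: flags deviation from a baseline ---
baseline_mb = [12, 9, 14, 11, 10, 13, 12, 9, 11, 10, 12, 13, 11, 10]  # daily MB out
today_mb = 420   # slow-and-low exfiltration suddenly ramps up

mean = statistics.mean(baseline_mb)
stdev = statistics.stdev(baseline_mb)
if today_mb > mean + 3 * stdev:
    print(f"anomaly: {today_mb} MB sent vs baseline {mean:.1f} +/- {stdev:.1f} MB")
```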

It was only this year that the DHS said Einstein would soon include tools that could track unknown threats, while the still-deploying CDM contract also only recently added behavior-based, non-signature tools to its product list.

Meanwhile, major government agencies continue to struggle to harden their network protection, even in the face of fallout from breaches. A recent internal review at the State Department, for example, found that the U.S. passport and visa Consular Consolidated Database was vulnerable to cyberattacks. The database is ranked as an unclassified but sensitive system, and it contains hundreds of millions of U.S. citizen passport and visa records that, if compromised, could threaten national security.

The potential danger of APTs is even greater considering the expansion of enterprise network boundaries. It used to be that those boundaries were fairly well known and could therefore be well protected, but with the advent of mobile technology it’s become increasingly difficult to know just what network endpoints, and potential points of attack, exist at any time.

Now add the fact that organizations, particularly government agencies, are expanding their use of outside contractors, who themselves subcontract to other companies, all of whom at some point might have access to agency networks. If that access is not well tracked, hackers could steal contractors’ credentials and use them to get inside agency networks.

The security company Bomgar has looked at the risk posed by third-party suppliers and found it to be quite high. While organizations tend to have a fairly high level of trust in their vendors, Bomgar said, only a third of those surveyed knew the number of logins to their networks made by third parties. Two-thirds admitted to having been breached in the past year because that vendor access was somehow compromised.

The inability to “trust but verify” stems from the fact that so few organizations have the right technology in place. As Bomgar’s chief executive Matt Dircks pointed out, without the capacity to “granularly control access and establish an audit trail of who is doing what on your network,” you can’t protect yourself from those third-party security holes.
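
What granular control and an audit trail can look like in practice is simple in principle: every third-party action gets tied to an identity, a timestamp and a record. The sketch below (Python, with hypothetical names and a placeholder log file) shows the bare-bones version of wrapping vendor-facing operations in an audit log; real privileged-access-management products add session recording, approval workflows and credential vaulting on top.

```python
# Bare-bones idea of an audit trail for third-party access.
# Names and storage are hypothetical; real PAM tools do far more.

import json
import time

AUDIT_LOG = "vendor_audit.log"   # append-only log file (placeholder path)

def audited(action_name):
    """Decorator that records who did what, and when, before running it."""
    def wrap(func):
        def inner(vendor_id, *args, **kwargs):
            entry = {
                "time": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
                "vendor": vendor_id,
                "action": action_name,
                "args": [repr(a) for a in args],
            }
            with open(AUDIT_LOG, "a") as log:
                log.write(json.dumps(entry) + "\n")
            return func(vendor_id, *args, **kwargs)
        return inner
    return wrap

@audited("open_remote_session")
def open_remote_session(vendor_id, target_host):
    # placeholder for whatever actually grants the session
    return f"session for {vendor_id} on {target_host}"

print(open_remote_session("hvac-contractor-42", "hq-billing-01"))
```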

The APT threat, already large, will only grow. A recent report by the Institute for Critical Infrastructure Technology, a security think tank, pointed out that “at least” 100 APT groups are currently operational worldwide -- some state sponsored like APT6, but others run by criminals and mercenaries. It lists many of those groups, along with their histories, targets and methods of operation.

This conglomeration of hacktivists, state-sponsored hackers and for-hire cyber attackers is continuously targeting American corporations, organizations, universities and government networks, ICIT said, and it is winning “because the United States lacks proper cyber hygiene and has yet to expedite a path to a cybersecurity-centric culture.”

That mindset could change, with the Obama Administration’s long-term strategy laid out earlier this year in its comprehensive Cybersecurity National Action Plan. How quickly that and other efforts will actually make a difference isn’t clear, however. Meanwhile, as the FBI APT6 alert shows, bad stuff has been (and probably still is) working away inside government networks, and there’s more on the way.

Posted by Brian Robinson on Apr 11, 2016 at 11:49 AM


Secure code before or after sharing?

The White House wants federal agencies to share more of their custom code with each other, and also to provide more of it to the open source community. That kind of reuse and open source development of software could certainly cut costs and produce more capable software in the future, but is it also an opening for more bugs and insecure code?

The new draft policy, issued in support of the administration’s 2014 Open Government Partnership, is aimed at improving the way custom-developed code is acquired and distributed in the future. Before moving forward with this new policy, the government wants to know just how it would “fuel innovation, lower costs, benefit the public and meet operational and mission needs of covered agencies” as well as what effect it could have on the software development market.

One thing the draft policy doesn’t address directly is what impact government code could have on the security of any open source software that results. John Pescatore, director of emerging security trends at the SANS Institute, is one of those who has expressed concerns. In comments about the draft, he points out that government’s testing of its own code for vulnerabilities “has been minimal and inconsistent.”

That’s sparked an interesting back and forth about the government’s role regarding code released to the open source community. Pescatore believes scanning for vulnerabilities before code is released wouldn’t be that big of a deal. Others, however, think that responsibility belongs to the open source community, which has long maintained that “the more eyes, the more secure” open source code is.
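
What scanning before release might look like in practice is not elaborate. The sketch below (Python) shows a minimal pre-release gate that runs a static analysis scanner over a source tree and blocks publication if it reports findings. The scanner command is a placeholder, since the right tool depends on the language and the agency’s toolchain, and the assumption that a nonzero exit code means findings would need checking against whichever scanner is actually used.

```python
# Minimal pre-release gate: run a static analysis scanner over the source
# tree and refuse to publish if it reports findings. The scanner command is
# a placeholder -- substitute whatever tool fits the codebase.

import subprocess
import sys

SCANNER_CMD = ["security-scanner", "--recursive", "src/"]   # hypothetical tool

def scan_before_release() -> bool:
    """Return True if the tree is clean enough to release."""
    result = subprocess.run(SCANNER_CMD, capture_output=True, text=True)
    if result.returncode != 0:            # assumption: nonzero exit means findings
        print("release blocked, scanner output follows:\n", result.stdout)
        return False
    return True

if __name__ == "__main__":
    sys.exit(0 if scan_before_release() else 1)
```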

Well, yes and no. That was the argument behind OpenSSL, for example, and yet a vulnerability that went unnoticed for years led to the global Heartbleed scare and fears of widespread data leaks and breaches.

However, it’s also true that open source code has consistently been found to be more secure than most proprietary code, though it’s not infallible by any means. In the case of government code released to open source, it will be interesting to see which would be the best way to go -- especially considering that some of that code may find its way back into government use at other agencies. So, sanitize before release, or trust to the community to eventually secure it?

Pescatore, at least, has doubts. Software is software, he believes, whether open source or proprietary. And if simple vulnerabilities are not removed before releasing it, “it is bad software.”

Posted by Brian Robinson on Mar 24, 2016 at 8:24 AM