Talk about security these days often focuses on technology -- the tools agencies can deploy to keep intruders out of their networks and systems or, if they do get in, to mitigate the damage from those intrusions. Very little discussion is spent on users and how important they are to that security.
Officials, of course, point to the vigilance required of government employees who are assailed by phishing scams to get them to cough up their personal access details. Although agencies do try to educate their workers about these dangers, it’s questionable just how effective all of that is. There’s a reason social engineering attacks such as phishing are still so popular with the black hats -- they work.
Anecdotally, many federal employees are less than inspired by their agencies’ attempts to educate them about cybersecurity. Security training, when it’s given, seems to be more about meeting policy or compliance requirements. It’s often limited to just a few hours each year, and there’s rarely any follow-up to drive the lessons home.
That’s unfortunate, because user confidence can be a big ally in agency efforts to make networks more secure. When employees have confidence in their agency’s overall security, they tend to pay more attention to what they can do to help improve it.
Perhaps, therefore, warning bells should be ringing over a recent survey that shows government employee confidence in agency cybersecurity is basically shot.
Over two years, the share of employees saying they were “confident or very confident” that their agencies can protect information systems from intrusion has fallen from 65 percent to just 35 percent. Confidence that agencies can protect employees’ own personal information dropped even further, from 58 percent to 28 percent.
This is after an “annus horribilis” for government security in 2015, which saw major breaches at the Office of Personnel Management, the IRS and other agencies. Millions of government employee records were compromised as a result.
Then again, confidence goes both ways when it comes to security. Another survey of organizations around the world showed they lacked confidence in their employees’ cybersecurity skills, because employees frequently used a single password for different applications and shared their passwords with coworkers. Around one-fifth of those workers said they would sell their passwords to an outsider.
And then there’s the confidence -- or lack thereof -- that organizations have in the security of their trading partners and suppliers. The stolen network credentials of an HVAC vendor were behind the infamous breach of retailer Target’s systems in 2013, which ended up compromising some 40 million customer debit and credit card accounts, and the company still hasn’t fully recovered.
A recent study showed that just over half of the respondents had high confidence in their partners’ ability to protect data and access details. Think of the government’s increasing reliance on contractors and subcontractors, and what any gap in security at those organizations could mean for agencies.
Good security, we are told over and over, is built on trust. Usually that refers to the trust between immediate partners in a secure data exchange, but it must also take into account the broader environment of providers and users. When that trust, as reflected in confidence, erodes as dramatically as some of these surveys seem to suggest, then it will take more than security tools to fix it.
Posted by Brian Robinson on Apr 22, 2016 at 6:03 AM
That advanced persistent threats are now the biggest cybersecurity problem government agencies face will not be news to many people. What may still be surprising, however, is just how long this problem has existed. The FBI believes at least one group of hackers has been using APTs against government agencies for at least the past five years -- and possibly much longer.
An alert posted online warned that the FBI “has obtained and validated information regarding a group of malicious cyber actors who have compromised and stolen sensitive information from various government and commercial networks,” and posted a list of domains the group had used to infiltrate networks and systems in the United States and abroad “since at least 2011.”
The domains have also been used to host malicious files, the FBI said, often through embedded links in spearphishing emails, and any activity related to the listed domains should be considered an indication of a compromise that needs mitigation.
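The FBI’s guidance, that any activity related to the listed domains should be treated as a possible compromise, amounts to a straightforward indicator-of-compromise check against network logs. A minimal sketch in Python, using made-up placeholder domains rather than the actual list from the alert:

```python
# Placeholder domains stand in for the FBI's published list.
FLAGGED_DOMAINS = {"update-checker.example.net", "cdn-sync.example.org"}

def scan_proxy_log(lines):
    """Return log lines that mention any flagged domain --
    each hit is a potential indicator of compromise."""
    return [line for line in lines
            if any(domain in line for domain in FLAGGED_DOMAINS)]
```

In practice this kind of check runs continuously against proxy, DNS and firewall logs, and a hit triggers incident response rather than just a report.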
The group identified by the FBI is thought to be a hacking unit called APT6, which various sources think is likely a nation-state-sponsored group based in China. It was a Chinese-sponsored group that was thought to have breached the systems at the Office of Personnel Management last year and compromised millions of government worker records.
The FBI alert highlights the still-yawning gap between the sophistication of those trying to get into government systems and the government’s ability to defend against these APT attacks. The Department of Homeland Security’s Einstein and Continuous Diagnostics and Mitigation (CDM) programs, for example, have been touted as government’s main efforts to get effective security tools into agencies, but until now they’ve been based on known-signature detection, which is useless against APTs.
It was only this year that the DHS said Einstein would soon include tools that could track unknown threats, while the still-deploying CDM contract also only recently added behavior-based, non-signature tools to its product list.
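The difference between the two detection styles is easy to see in miniature. The hash values and threshold below are hypothetical, and real behavior-based tools model far more than one metric, but the contrast holds:

```python
# Toy contrast between signature-based and behavior-based detection.
KNOWN_BAD_HASHES = {"deadbeef", "c0ffee00"}  # hypothetical signature database

def signature_detect(file_hash):
    # Catches only malware already in the database -- a novel APT
    # implant with a never-before-seen hash sails straight through.
    return file_hash in KNOWN_BAD_HASHES

def behavior_detect(bytes_sent_today, baseline_mean, baseline_std):
    # Flags activity that deviates sharply from the host's own
    # historical baseline, whether or not the tool behind it is known.
    return bytes_sent_today > baseline_mean + 3 * baseline_std
```

A signature check is cheap and precise but blind to anything new; the behavioral check trades some false positives for the ability to notice an unknown implant quietly exfiltrating data.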
Meanwhile, major government agencies continue to struggle to harden their network protection, even in the face of fallout from breaches. A recent internal review at the State Department, for example, found that the U.S. passport and visa Consular Consolidated Database was vulnerable to cyberattacks. That’s ranked as an unclassified but sensitive system that contains hundreds of millions of U.S. citizen passport and visa records that, if compromised, could threaten national security.
The potential danger of APTs is even greater considering the expansion of enterprise network boundaries. It used to be those were fairly well known and could therefore be well protected, but with the advent of mobile technology it’s become increasingly difficult to know just what network endpoints, and potential points of attack, exist at any time.
Now add the fact that organizations, particularly government agencies, are expanding the use of outside contractors, who themselves subcontract to other companies, all of whom at some point might have access to agency networks. If that access is not well tracked, hackers can steal contractors’ credentials and walk straight into agency networks.
The security company Bomgar has looked at the risk posed by third-party suppliers and found it to be quite high. While organizations tend to have a fairly high level of trust in their vendors, Bomgar said, only a third of those surveyed knew the number of logins to their networks made by third parties. Two-thirds admitted to having been breached in the past year because that vendor access was somehow compromised.
The inability to “trust but verify” comes down to the fact that so few organizations have the right technology in place. As Bomgar’s chief executive Matt Dircks pointed out, without the capacity to “granularly control access and establish an audit trail of who is doing what on your network,” you can’t protect yourself from those third-party security holes.
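Dircks’ point about audit trails can be made concrete with a small sketch. The vendor names and record layout here are hypothetical, and a real deployment would use an append-only store with tamper protection rather than an in-memory list:

```python
import datetime

AUDIT_LOG = []  # in practice, an append-only, tamper-evident store

def record_access(vendor, resource, action):
    """Append one audit entry: who touched what, and when."""
    AUDIT_LOG.append({
        "vendor": vendor,
        "resource": resource,
        "action": action,
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })

def vendor_activity(vendor):
    """Answer the question most surveyed firms couldn't:
    what has this third party actually done on our network?"""
    return [entry for entry in AUDIT_LOG if entry["vendor"] == vendor]
```

Even this trivial level of bookkeeping would let an organization count third-party logins, which two-thirds of Bomgar’s respondents could not do.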
The APT threat, already large, will only grow. A recent report by the Institute for Critical Infrastructure Technology, a security think tank, pointed out that “at least” 100 APT groups are currently operational worldwide -- some state sponsored like APT6, but others run by criminals and mercenaries. It lists many of those groups, along with their histories, targets and methods of operation.
This conglomeration of hacktivists, state-sponsored hackers and for-hire cyber attackers is continuously targeting American corporations, organizations, universities and government networks, ICIT said, and is winning “because the United States lacks proper cyber hygiene and has yet to expedite a path to a cybersecurity-centric culture.”
That mindset could change, with the Obama Administration’s long-term strategy laid out earlier this year in its comprehensive Cybersecurity National Action Plan. How quickly that and other efforts will actually make a difference isn’t clear, however. Meanwhile, as the FBI APT6 alert shows, bad stuff has been (and probably still is) working away inside government networks, and there’s more on the way.
Posted by Brian Robinson on Apr 11, 2016 at 11:49 AM
The White House wants federal agencies to share more of their custom code with each other, and also to provide more of it to the open source community. That kind of reuse and open source development of software could certainly cut costs and produce more capable software in the future, but is this also an opening for more bugs and insecure code?
The new draft policy, issued in support of the administration’s 2014 Open Government Partnership, is aimed at improving the way custom-developed code is acquired and distributed in the future. Before moving forward with this new policy, the government wants to know just how it would “fuel innovation, lower costs, benefit the public and meet operational and mission needs of covered agencies” as well as what effect it could have on the software development market.
One thing the draft policy doesn’t address directly is what impact government code could have on the security of any open source software that results. John Pescatore, director of emerging security trends at the SANS Institute, is one of those who has expressed concerns. In comments about the draft, he points out that government’s testing of its own code for vulnerabilities “has been minimal and inconsistent.”
That’s sparked an interesting back-and-forth about the government’s role regarding code released to the open source community. Pescatore believes scanning for vulnerabilities before code is released wouldn’t be that big of a deal. Others, however, think that responsibility belongs to the open source community, which has long maintained that “the more eyes, the more secure” open source code is.
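The kind of pre-release scanning Pescatore has in mind need not be elaborate to catch the most obvious problems. A deliberately naive sketch (the patterns here are illustrative; real static-analysis tools parse code properly and go far deeper):

```python
# Hypothetical risky-call patterns; a toy stand-in for a real SAST tool.
RISKY_PATTERNS = {
    "strcpy(": "unbounded copy -- classic buffer overflow",
    "gets(": "reads input without a length limit",
    "eval(": "arbitrary code execution on untrusted input",
}

def pre_release_scan(source):
    """Flag obviously dangerous calls before code leaves the building."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for pattern, why in RISKY_PATTERNS.items():
            if pattern in line:
                findings.append((lineno, pattern, why))
    return findings
```

Running even a crude gate like this before publication would address the “simple vulnerabilities” Pescatore argues should never reach the open source community in the first place.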
Well, yes and no. That was the argument behind OpenSSL, for example, and yet a vulnerability that went unnoticed for years led to the global Heartbleed scare and fears of widespread data leaks and breaches.
However, it’s also true that open source code has consistently been found to be more secure than most proprietary code, though it’s not infallible by any means. In the case of government code released to open source, it will be interesting to see which would be the best way to go -- especially considering that some of that code may find its way back into government use at other agencies. So, sanitize before release, or trust to the community to eventually secure it?
Pescatore, at least, has doubts. Software is software, he believes, whether open source or proprietary. And if simple vulnerabilities are not removed before releasing it, “it is bad software.”
Posted by Brian Robinson on Mar 24, 2016 at 8:24 AM
The possibilities and problems of quantum computing have figured more in science fiction than they have in government security, but that is gradually starting to change. The impact of quantum computing on cracking encryption schemes has long been debated, at least in concept, but now some are calling for government to take a more active role in mitigating that possibility.
The push for some action may get stronger after a recent announcement that computer scientists at the Massachusetts Institute of Technology and the University of Innsbruck had assembled a quantum computer that could eventually break RSA (Rivest-Shamir-Adleman) public key encryption, the most popular form of encryption in the world. What’s more, they did it with a calculation that used just five quantum bits (qubits), far fewer than had been thought necessary.
A qubit is a unit of quantum information, analogous to the on/off bit used in classical computing, although in the “spooky” universe of quantum mechanics (as Einstein put it) a qubit can be in both states at the same time. It’s by manipulating that property that quantum computers can do some kinds of computation very efficiently, such as factoring very large numbers.
Current encryption methods, such as RSA, depend on the difficulty of doing all that number crunching. A public key is the product of two very large prime numbers, known only to the key provider, and cracking the encryption requires factoring, or breaking down, the key to reveal those two numbers. That’s very hard and would require years’ worth of computations with classical computing, even with the help of a large parallel computer.
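To see why factoring is the hard part, consider a toy version of the problem. The primes below are tiny illustrations; real RSA moduli run to 2,048 bits or more, which puts this brute-force loop hopelessly beyond the reach of classical machines:

```python
from math import isqrt

def trial_factor(n):
    """Brute-force factoring: the work a classical attacker must do."""
    for p in range(2, isqrt(n) + 1):
        if n % p == 0:
            return p, n // p  # recovered the two secret primes
    return None

# A toy public modulus built from two small primes.
p, q = 1009, 1013
n = p * q  # 1022117 -- trivially factorable at this size, and only this size
```

The loop runs in time proportional to the square root of n, so every additional digit in the modulus multiplies the attacker’s work; that asymmetry between easy multiplication and hard factoring is the entire basis of RSA’s security.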
It’s not as if quantum computers that can break public key encryption will be here tomorrow. The MIT/Innsbruck effort was aimed at developing a method to factor the number 15, which had been thought to require 12 qubits. That was considered the smallest number needed to meaningfully demonstrate Peter Shor’s quantum factoring algorithm, which was developed several decades ago.
And building the quantum computer, which requires a complicated setup of lasers, gases and such things as ion traps, was not simple. However, the MIT/Innsbruck team built their system to scale so that it can eventually handle much larger prime numbers. The fact that they reduced the resources required for that work by a factor of three should make that easier.
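Shor’s algorithm reduces factoring to finding the period of a^x mod N; only that period-finding step needs the quantum hardware, and everything around it is classical arithmetic. A sketch of the classical scaffolding, with the period found here by slow brute force rather than by a quantum computer, factoring 15 just as the MIT/Innsbruck machine did:

```python
from math import gcd

def order(a, n):
    """Smallest r with a**r % n == 1 -- the step a quantum
    computer speeds up exponentially; done naively here."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    """Classical pre- and post-processing of Shor's algorithm."""
    r = order(a, n)
    if r % 2:
        return None  # need an even period; retry with another base
    half = a ** (r // 2)
    # gcd of half +/- 1 with n yields the factors (barring a trivial result,
    # in which case a real implementation retries with a different base).
    return sorted({gcd(half - 1, n), gcd(half + 1, n)})
```

For n = 15 with base a = 7, the period is 4, and the two gcd computations recover the factors 3 and 5.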
A quantum computer capable of factoring the numbers behind RSA and other encryption methods may still be another decade away, but that’s substantially less than the 20 to 30 years many had figured it would take. Some experts are already concerned that there may not be enough time to prepare adequately for the arrival of those large-enough quantum computers.
At a meeting last year, for example, computer security specialists discussed what cryptographic schemes would be required to resist quantum computers. Some openly worried that there wasn’t enough time -- given all the detailed discussion between governments and industry that will be needed -- to develop the proper protections.
At the meeting, Stephen Jordan, a physicist at the National Institute of Standards and Technology, stressed that you need a lot of people to scrutinize and test any cryptosystem for flaws if it is to be trusted, which “takes a long time.”
Some parts of government are not waiting, at least to set things in motion. At the beginning of this year, the National Security Agency’s Information Assurance Directorate published a FAQ aimed at giving national security system (NSS) developers the information they’ll need to begin planning and budgeting for new cryptography that is quantum resistant.
The IAD warned that, especially in cases where government information needs to be protected for many decades, “the potential impact of adversarial use of a quantum computer is known and without effective mitigation is devastating to NSS.”
One thing the MIT/Innsbruck team proved is that the development of quantum computers that can break very complex encryption is no longer theoretical.
“It might still cost an enormous amount of money to build, [and] you won’t be building a quantum computer and putting it on your desktop anytime soon,” Isaac Chuang, professor of physics and professor of electrical engineering and computer science at MIT, said in announcing the team’s accomplishment. “But now it’s much more an engineering effort and not a basic physics question.”
Posted by Brian Robinson on Mar 11, 2016 at 6:52 AM