The White House wants federal agencies to share more of their custom code with each other, and also to provide more of it to the open source community. That kind of reuse and open source development of software could certainly cut costs and produce more capable software in the future, but is this also an opening for more bugs and insecure code?
The new draft policy, issued in support of the administration’s 2014 Open Government Partnership, is aimed at improving the way custom-developed code is acquired and distributed in the future. Before moving forward with this new policy, the government wants to know just how it would “fuel innovation, lower costs, benefit the public and meet operational and mission needs of covered agencies” as well as what effect it could have on the software development market.
One thing the draft policy doesn’t address directly is what impact government code could have on the security of any open source software that results. John Pescatore, director of emerging security trends at the SANS Institute, is one of those who has expressed concerns. In comments about the draft, he points out that government’s testing of its own code for vulnerabilities “has been minimal and inconsistent.”
That’s sparked an interesting back and forth about the government’s role regarding code released to the open source community. Pescatore believes scanning for vulnerabilities before code is released wouldn’t be that big of a deal. Others, however, think that responsibility belongs to the open source community, which has long maintained that “the more eyes, the more secure” open source code is.
Well, yes and no. That was the argument behind OpenSSL, for example, and yet a vulnerability that went unnoticed for years led to the global Heartbleed scare and fears of widespread data leaks and breaches.
However, it’s also true that open source code has consistently been found to be more secure than most proprietary code, though it’s not infallible by any means. In the case of government code released to open source, it will be interesting to see which would be the best way to go -- especially considering that some of that code may find its way back into government use at other agencies. So, sanitize before release, or trust to the community to eventually secure it?
Pescatore, at least, has doubts. Software is software, he believes, whether open source or proprietary. And if simple vulnerabilities are not removed before releasing it, “it is bad software.”
Posted by Brian Robinson on Mar 24, 2016 at 8:24 AM
The possibilities and problems of quantum computing have figured more in science fiction than they have in government security, but that is gradually starting to change. The impact of quantum computing on cracking encryption schemes has long been debated, at least in concept, but now some are calling for government to take a more active role in mitigating that possibility.
The push for some action may get stronger after a recent announcement that computer scientists at the Massachusetts Institute of Technology and the University of Innsbruck had assembled a quantum computer that could eventually break RSA (Rivest-Shamir-Adleman) public key encryption, the most popular form of encryption in the world. What’s more, they did it with a calculation that used just five quantum bits (qubits), far fewer than had been thought necessary.
A qubit is a unit of quantum information, analogous to the on/off bit used in classical computing, although in the “spooky” universe of quantum mechanics (as Einstein put it) a qubit can be in both states at the same time. It’s by manipulating that property that quantum computers can do some kinds of computation very efficiently, such as factoring very large numbers.
Current encryption methods, such as RSA, depend on the difficulty of doing all that number crunching. A public key is the product of two very large prime numbers, known only to the key provider, and cracking the encryption requires factoring, or breaking down, the key to reveal those two numbers. That’s very hard and would require years’ worth of computations with classical computing, even with the help of a large parallel computer.
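The idea can be seen with deliberately tiny numbers (real RSA moduli are hundreds of digits long; these values are for illustration only):

```python
# Toy illustration: an RSA public modulus is the product of two primes.
p, q = 61, 53            # secret primes, known only to the key holder
n = p * q                # public modulus: 3233

# "Cracking" the key means recovering p and q from n alone.
# Trial division works instantly at this scale, but its cost grows
# super-polynomially with the size of n -- which is what keeps RSA
# safe against classical computers.
def factor(n):
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1

print(factor(3233))      # (53, 61)
```

With a 2048-bit modulus, the same search becomes infeasible for any classical machine, which is exactly the asymmetry quantum factoring threatens to erase.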
It’s not as if quantum computers that can break public key encryption will be here tomorrow. The MIT/Innsbruck effort was aimed at developing a method to factor the number 15, which was thought to require 12 qubits. That was considered the smallest number needed to meaningfully demonstrate Peter Shor’s quantum factoring algorithm, which he developed in 1994.
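Shor’s insight was to reduce factoring to period-finding, the one step a quantum computer does exponentially faster. The reduction itself can be walked through classically. A minimal sketch for N = 15 (the classical `order` loop below is precisely the part that blows up for large numbers and that a quantum computer replaces):

```python
from math import gcd

def order(a, n):
    """Smallest r > 0 with a**r % n == 1 -- the period a quantum computer finds fast."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    """Classical walk-through of Shor's reduction: factoring via order-finding."""
    r = order(a, n)
    if r % 2:
        return None          # need an even period; retry with a different a
    y = pow(a, r // 2, n)
    return gcd(y - 1, n), gcd(y + 1, n)

print(shor_classical(15, 7))   # period of 7 mod 15 is 4 -> factors (3, 5)
```

The quantum hardware's job is only the period-finding; everything before and after runs on an ordinary computer, which is why qubit counts, not the surrounding math, are the bottleneck.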
And building the quantum computer, which requires a complicated setup of lasers, gases and such things as ion traps, was not simple. However, the MIT/Innsbruck team built their system to scale so that it can eventually handle much larger prime numbers. The fact that they reduced the resources required for that work by a factor of three should make that easier.
A quantum computer capable of factoring the numbers behind RSA and other encryption methods may still be another decade away, but that’s substantially less than the 20 to 30 years many had figured it would take. Some experts are already concerned that there may not be enough time to prepare adequately for the arrival of those large-enough quantum computers.
At a meeting last year, for example, computer security specialists discussed what cryptographic schemes would be required to resist quantum computers. Some openly worried that there wasn’t enough time -- given all the detailed discussion between governments and industry that will be needed -- to develop the proper protections.
At the meeting, Stephen Jordan, a physicist at the National Institute of Standards and Technology, stressed that you need a lot of people to scrutinize and test any cryptosystem for flaws if it is to be trusted, which “takes a long time.”
Some parts of government are not waiting, at least to set things in motion. At the beginning of this year, the National Security Agency’s Information Assurance Directorate published a FAQ aimed at giving national security system (NSS) developers the information they’ll need to begin planning and budgeting for new cryptography that is quantum resistant.
The IAD warned that, especially in cases where government information needs to be protected for many decades, “the potential impact of adversarial use of a quantum computer is known and without effective mitigation is devastating to NSS.”
One thing the MIT/Innsbruck team proved is that the development of quantum computers that can break very complex encryption is no longer theoretical.
“It might still cost an enormous amount of money to build, [and] you won’t be building a quantum computer and putting it on your desktop anytime soon,” Isaac Chuang, professor of physics and professor of electrical engineering and computer science at MIT, said in announcing the team’s accomplishment. “But now it’s much more an engineering effort and not a basic physics question.”
Posted by Brian Robinson on Mar 11, 2016 at 6:52 AM
Mobile security is assumed to be critical to an agency’s overall IT security, but details on the effectiveness of such programs are scarce, making it hard to assess the overall risk from mobile devices.
A study by the Ponemon Institute and cybersecurity company Lookout of nearly 600 IT and security executives at major organizations, including those in the public sector, shows the risk from mobile devices is great and increasing. In fact, the majority of the study respondents believe mobile is a root cause of breaches.
Some 83 percent say mobile devices are susceptible to hacking, and over two-thirds said it was certain or likely that their organization had a data breach caused by employees accessing sensitive and confidential information using mobile devices.
At the same time, only 33 percent of the respondents said their organization was vigilant in protecting data from unauthorized access. Even more startling, nearly 40 percent didn’t even consider protection of that data on mobile devices to be a priority.
Perhaps that’s not surprising when, according to the study, most of these IT security professionals didn’t know what their employees were really accessing on their devices. Those who said they did know thought the data was mostly email and text, when, in fact, personally identifiable information, customer records and confidential and classified documents made up a large part of it.
One of the biggest problems for security pros is translating this kind of information into the hard dollar damage that executive leaders look for to put a price on breaches. Ponemon takes a tilt at that figure, concluding that dealing with mobile devices with malware on them could cost over $26 million for the organizations in the study.
The inconsistent thinking over the utility of mobile devices and the security problems they pose is not new. A survey in 2014 by the Government Business Council found that 72 percent of federal government employees back then said they used mobile devices for work, and over half saw mobile security as one of the major challenges to expanding use of mobile. Yet less than one-third used any kind of mobile security app.
Despite all of this seeming inattention to mobile security, things seem to be improving. Last year, the Office of Management and Budget put out a cybersecurity memo that directly addressed mobile security, and the National Institute of Standards and Technology came out with a draft guide for securing mobile devices -- both moves indicating the importance of keeping mobile devices and the data they hold secure.
What, then, to make of the recent kerfuffle over the FBI getting a court order requiring Apple to break the strong encryption on an iPhone used by one of the terrorists who gunned down government workers in San Bernardino, Calif., in December?
The merits of the FBI’s argument (or of Apple’s pushback against that order) aside, this argument has implications for overall mobile security. If the FBI wins the debate and Apple must write iOS code that allows the FBI and other law enforcement and intelligence agencies to break into phones, that weaker security could compromise every other mobile user.
Strong encryption has been proposed as a universal solution for protecting data on mobile devices. It might not stop the most determined attacker, but it will prevent most of the bad actors from stealing whatever data is on a device. The Obama Administration itself has pushed for encryption, and the Ponemon report in its study found it was the most preferred means of securing data.
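What that protection looks like in practice can be sketched with the third-party `cryptography` package. Fernet is authenticated symmetric encryption (AES-CBC plus an HMAC); the point of the example is the one in the text: without the key, data at rest on a lost or stolen device is unreadable.

```python
# Minimal sketch of encrypting data at rest (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, held in the device's keystore,
                              # never stored alongside the data it protects
f = Fernet(key)

token = f.encrypt(b"case file: sensitive PII")   # ciphertext on disk
assert f.decrypt(token) == b"case file: sensitive PII"
```

Any workaround mandated for investigators -- a second key, an escrow scheme, weakened key storage -- becomes part of this design, which is why security specialists argue it weakens every user, not just the targets of a warrant.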
Recently, however, Bloomberg reported on what it called a “secret meeting” at the White House around Thanksgiving last year, where senior national security officials ordered government agencies to develop encryption workarounds so that investigators could get to user data as they needed.
All of this seems to throw the issue of mobile security risk -- one of the most important government IT issues -- into doubt, once again. With malware and the attackers who use it becoming ever more sophisticated and capable, any weaknesses will be found out and exploited. For agencies and mobile users, conflicting messages over security sow doubt and confusion.
So, where to now?
This blog was changed Feb. 29 to include Lookout, Ponemon Institute's partner in the mobile risk study.
Posted by Brian Robinson on Feb 26, 2016 at 10:46 AM
Is this the year when software-defined anything (SDx) becomes the template for federal agency IT security? It’s been knocking at the door for a while, and the spending outlook for government IT in President Barack Obama’s recent budget proposals could finally be the opening it needs.
In calling for a 35 percent increase in cybersecurity spending to $19 billion, the White House also proposed a $3.1 billion revolving fund to upgrade legacy IT throughout the government. Venting his frustration, and no doubt that of many others in the administration and Congress, Obama talked about ancient COBOL software running Social Security systems, archaic IRS systems and other old, broken machines and software at federal agencies.
That’s not a new story. Agency IT managers will readily tell you about the problems they have with trying to maintain legacy technology and the way that sucks up funds and manpower. They say they have too little time to focus on what they feel their jobs are really about, which is delivering better services to their users.
Security is just one item among many they must address, but it’s become a much more urgent one after a 2015 that saw major breaches at the Office of Personnel Management and elsewhere. That point was driven home again this year when the IRS revealed that over 100,000 attempts using stolen Social Security numbers had succeeded in generating the personal identification numbers used by tax payers to electronically file and pay taxes.
The revolving IT Modernization Fund in the White House budget proposal would pay for projects that will be prioritized based on the extent to which they lower the overall security risk of federal IT systems. The savings achieved by shifting to more cost-effective and scalable platforms will be recycled back into the fund.
Cost-effectiveness and scalability are among the main advantages that proponents put forward for SDx architectures, along with agility in response to security threats. As threats become more targeted, more sophisticated and more numerous, protecting networks gets more difficult. With IT staff overwhelmed by just the legacy systems they have to keep running, organizations face much greater risk of damage from those attacks.
By simplifying infrastructure management with the software overlay that software-defined networking (SDN) brings, IT and security managers get a much better way of identifying when they are being attacked and a faster and more focused way of responding.
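The appeal is easiest to see in a sketch. Rather than reconfiguring dozens of devices by hand, a security team pushes one flow rule to the controller, which programs every switch it manages. The rule schema and endpoint below are illustrative only, not any specific controller’s API:

```python
import json

def block_rule(attacker_ip, priority=100):
    """Build a drop rule for traffic from a suspected attacker (hypothetical schema)."""
    return {
        "match": {"ipv4_src": attacker_ip},
        "actions": [],            # empty action list = drop the traffic
        "priority": priority,
    }

rule = block_rule("203.0.113.7")
payload = json.dumps(rule)
# An operator would POST `payload` to the controller's flow-rule endpoint
# (e.g., https://controller.example.gov/flows -- a hypothetical URL), and the
# controller would push the drop rule network-wide in one operation.
```

That single point of control is what gives SDN its speed of response; it is also, of course, a high-value target that has to be secured in its own right.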
In a poll conducted earlier last year, ESG Research identified a significant percentage of enterprise security professionals who said they would use SDN to address network security across a wide range of different scenarios.
Researchers at the Idaho National Lab have already developed a proof-of-concept that uses SDx to emulate the use and security of the laboratory’s business systems. It’s already delivered “amazing outcomes” and demonstrates how SDx can be used to improve security, repeatability of processes and consistency in results, they said.
The future will only bring more security challenges for government, as the Internet of Things takes hold. That will introduce thousands of new avenues that attackers will use to try and penetrate networks. Given the kind of benefits that the IoT is expected to bring to government organizations, the trick will be in securing networks without limiting the facility of IoT.
One approach that won’t work is simply throwing the solution du jour at the problem, which has been the traditional answer. Bolting on more point-to-point, single-purpose devices simply won’t scale fast enough to deal with vulnerabilities and will be too costly. Those devices are also themselves proving more vulnerable than people thought, with Cisco joining Juniper and Fortinet in the list of manufacturers whose advanced firewalls apparently suffer from potential software problems.
Right now, the only viable solution in this brave new world of security seems to be through some kind of software-defined approach. It’s not a silver bullet by any means, and it must be part of an overall approach to security. IT and security professionals must also be convinced that it will provide for the kind of subtleties and granularity needed to weed out modern threats.
If -- and in an election year, it’s a big if -- Obama’s budget proposals make headway in Congress, SDx could prove the best way to tackle the security problems that otherwise threaten to overwhelm government.
Posted by Brian Robinson on Feb 12, 2016 at 10:46 AM