Another year is drawing to a close, and Congress — locked in partisan gridlock and unable to fulfill its most basic responsibilities — again has failed to update any of the nation’s cybersecurity laws.
The need for cybersecurity reform is cited repeatedly by lawmakers, IT professionals and privacy advocates, but nothing is done. According to a recent report from the Congressional Research Service, “More than 50 statutes address various aspects of cybersecurity either directly or indirectly, but there is no overarching framework legislation in place. While revisions to most of those laws have been proposed over the past few years, no major cybersecurity legislation has been enacted since 2002.”
Fortunately, you don’t have to depend on Congress to secure your systems. The National Institute of Standards and Technology can’t tell you what to do, but it can tell you how to do it. Under the Federal Information Security Management Act, NIST continues to update and expand security guidance for government systems and for private-sector organizations that choose to use it.
So far this year, NIST has published 10 new or updated special publications in its 800 series of computer security documents, released drafts of nine more, and issued five Interagency Reports on cybersecurity.
These publications contain specifications, guidelines and requirements for securing government systems, and they offer flexible, frequently updated information on technology-agnostic standards and best practices. Each agency must decide for itself what security features and controls to implement and how to do it, but NIST makes available the information needed to make those decisions.
Highlights of this year’s work include:
The fourth revision of SP 800-53, the foundational catalog of security and privacy controls for federal IT systems. First published in 2005 and last updated in 2009, the latest revision is the most comprehensive to date and focuses on designing and acquiring trustworthy systems that have security built in.
The revised SP 800-124 guidelines for managing mobile device security. These sharpen the focus of the original 2008 publication, excluding laptops and low-end cell phones to zero in on high-end phones and tablets. The publication also explains the security concerns inherent in mobile devices and recommends centralized management technologies to address those risks.
A draft of the new guidelines for supply chain risk management in SP 800-161. With increasing government reliance on off-the-shelf hardware and software produced through a global supply chain, agencies need a better understanding of the possible risks. The publication gives guidance for identifying, assessing and mitigating these threats.
A new version of SP 800-40, with revised guidelines for enterprise patch management. Patching vulnerabilities is critical to maintaining security, but managing the process at the enterprise level is complex. This document describes the challenges as well as the technology available for meeting them; a rough sketch of the kind of check such tools automate follows this list.
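The hard part of patch management is scale and workflow rather than the arithmetic of the check itself, which can be sketched in a few lines. The Python below is a minimal, hypothetical illustration of the version comparison such tools automate across an inventory; the hosts, packages and advisory data are invented, not drawn from SP 800-40.

```python
# A minimal sketch of the version check at the heart of enterprise patch
# management. Inventory and advisory data are invented stand-ins for the
# asset and vulnerability feeds a real tool would consume.

def parse_version(version: str) -> tuple:
    """Turn a dotted version string like '2.4.4' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

# Hypothetical snapshot of software installed across the enterprise.
inventory = {
    "web-01": {"httpd": "2.4.4", "openssl": "1.0.1"},
    "db-01": {"postgres": "9.2.3", "openssl": "1.0.2"},
}

# Hypothetical advisories: package -> lowest version containing the fix.
advisories = {"openssl": "1.0.2", "httpd": "2.4.6"}

def unpatched(inventory, advisories):
    """Yield (host, package, installed, required) for out-of-date software."""
    for host, packages in inventory.items():
        for pkg, installed in packages.items():
            required = advisories.get(pkg)
            if required and parse_version(installed) < parse_version(required):
                yield host, pkg, installed, required

for host, pkg, installed, required in unpatched(inventory, advisories):
    print(f"{host}: {pkg} {installed} should be upgraded to {required} or later")
```

Keeping even this trivial check current across thousands of hosts, vendors and version schemes is the enterprise-scale problem the document addresses.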
Production of these documents does not ensure adequate security for government IT systems. But any agency with the will to assess and manage the risks in its systems can get the up-to-date information it needs to do its job. That’s not necessarily an easy job, but help is available.
Posted by William Jackson on Nov 22, 2013 at 11:50 AM
The relationship between the United States and China in cyberspace has been anything but chummy lately. Many in this country see China as a major source of sophisticated attacks against our commercial and government infrastructures. China responds that the attacks are not coming from it, and that it is being hacked as well.
This has resulted in a poisonous atmosphere that the EastWest Institute calls a “serious challenge” to the friendship and prosperity of both countries. “Such accusations and arguments have fueled escalations so that the relationship is now strained, making even routine dialog apprehensive,” says a report produced for EWI’s recent World Cyberspace Cooperation Summit IV. “Neither side is comfortable with the policies and practices of the other.”
The paper, written by Karl Frederick Rauscher and Zhou Yonglin, offers what they call “practical, down to earth guidance” for normalizing cyber relations between the two countries. What it boils down to is, “stuff happens”: cyberspace is no different from any other political or diplomatic domain, and each country should accept that.
The report does not address who is responsible for launching attacks against whom, and nowhere does it suggest that either side stop hacking the other. But it does acknowledge that unrestrained hacking for criminal or political purposes strains relationships. Both the United States and China are rich in potential targets and attack platforms, and the prevailing tone of discussion between them has been one of suspicion and blame. Ten recommendations are offered to help establish trust and develop effective countermeasures to improve cybersecurity.
The initial recommendations establish a framework of trust, based both on formal policy and behavior. “Each party is evaluated based on adherence to its stated policy and plan of action.” These are basic steps, the authors say, but basics to date have been neglected, creating a crisis environment.
The remaining recommendations define how each country addresses threats and national interests in cyberspace. The most interesting are:
- Separate critical humanitarian assets in cyberspace. This would remove noncombatants from the line of fire in a cyberwar, much like giving institutions such as hospitals special status in a war zone so that they are not attacked.
- De-clutter espionage expectations. Basically, this means accept the fact that espionage will occur in cyberspace and that national security assets will be targets, just as in the real world. We might not like it, but we have learned to live with it in the three-dimensional world, and can live with it online as well.
- Prepare sufficiently, react quickly and summarize seriously. In other words, defend adequately rather than just complain after a breach.
What the report essentially recommends is extending existing models for political and diplomatic relationships into cyberspace. These models rest on the recognition that every nation will act in its own self-interest. Those interests often will conflict, but conflict can be managed when each side knows what to expect. Frank statements of national self-interest in cyberspace would let both countries anticipate each other’s behavior and decide what is acceptable.
Human history demonstrates that political and diplomatic relationships can fail, resulting in military action. But it also shows that these relationships can avoid war, as in the case of the major superpowers since 1945. The recommendations in the report might not stop any hacking, but they could help produce a healthier environment for addressing the issue.
Posted by William Jackson on Nov 08, 2013 at 12:42 PM
How can you tell if you are making any progress if you don’t know where you are or where you’re going? That is the situation cybersecurity professionals find themselves in, according to a paper being released this week by the EastWest Institute (EWI).
With no reliable measurements of the scope of the cybersecurity problem, it is impossible to know whether security is working. “We do not have even an order-of-magnitude estimate of some of the most basic aspects of the cybersecurity problem that can be validated,” say the authors of the paper, “Measuring the Cybersecurity Problem.” The paper proposes an international voluntary scheme for gathering and interpreting meaningful statistical data about attacks, breaches and incidents in cyberspace.
“While these recommendations are primarily for the private sector, governments can benefit significantly from their implementation,” the authors say.
The paper is being released at the World Cyberspace Cooperation Summit IV, being held this week at Stanford University in California. Although this is the fourth annual summit produced by EWI, the name has changed this year to reflect changes in focus. What had been cybersecurity summits is now a cyberspace cooperation summit.
“We are discussing key areas of cyberspace cooperation,” said Harry Raduege, chairman of Deloitte LLP's Center for Cyber Innovation. “We are discussing what is possible.”
The EastWest Institute is an international think tank focusing on multilateral cooperation. It identified cybersecurity as a critical international issue about four years ago and initiated the cyber summits in Dallas in 2010. High-level industry and government officials attended subsequent summits in London and New Delhi, and this year’s summit returns to the United States, in Silicon Valley.
The gatherings have produced a number of papers on subjects ranging from the reliability of undersea cables to rules for government conflicts in cyberspace, but the most important result to date has been the relationships established, said Raduege, a retired Air Force general and former director of the Defense Information Systems Agency. “Just the fact that we’re getting to know each other is important,” he said. “The first step is figuring out who the key players are who can make things happen.”
He described the summits as track 2 diplomacy, informal talks that identify areas of international agreement that can be passed on to traditional diplomatic channels for development. Issues this year include critical infrastructure protection as well as the economic and legal impacts of cybersecurity.
Determining impact requires metrics, and despite the billions of dollars being spent on cybersecurity, there are no adequate metrics for it. That lack spurred the proposal for setting up a way to measure the problem. The paper makes three recommendations:
- The private sector should establish a trusted environment for gathering worldwide statistical data that supports measurements of the cybersecurity problem.
- Private-sector companies should voluntarily provide statistical data to this trusted entity, which could use the data to produce meaningful statistics.
- Qualified subject-matter experts should develop statistical methods for analyzing this data. This could provide a quantitative framework for reliable benchmarks; a toy sketch of such pooled analysis follows this list.
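What that pooled analysis might look like in the simplest case can be sketched in a few lines of Python. All of the figures below are invented for illustration; the paper itself prescribes no particular method.

```python
# A toy sketch of the pooled analysis the paper envisions: firms report
# incident counts to a trusted entity, which publishes aggregate benchmarks
# without exposing any single contributor. All figures are hypothetical.
import math
import statistics

# Hypothetical monthly incident counts reported by participating firms.
reports = [12, 7, 40, 3, 95, 22, 8, 15, 60, 5]

total = sum(reports)
print(f"participants: {len(reports)}")
print(f"median incidents per firm: {statistics.median(reports)}")
print(f"mean incidents per firm: {statistics.mean(reports):.1f}")

# Even an order-of-magnitude estimate of the total would be the kind of
# validated baseline the authors say is missing today.
print(f"total incidents: roughly 10^{round(math.log10(total))}")
```

A real scheme would of course need anonymization, consistent definitions of an “incident” and far more sophisticated statistics, which is exactly why the paper calls for qualified subject-matter experts to design the methods.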
One of the most interesting topics likely to come up at this year’s summit is not on the formal agenda: Friction between the United States and much of the rest of the world generated by reports of National Security Agency surveillance of cyberspace. “What the impact of these reports will be has yet to be learned,” Raduege said. “It will be very revealing to see and hear from those who are attending.”
Posted by William Jackson on Nov 06, 2013 at 11:34 AM
So, just how bad was the roll-out of the Affordable Care Act portal, HealthCare.gov? It depends.
As a production launch, it was really bad. Failures on the site frustrated users and burned up a lot of goodwill that the administration could ill afford to lose.
As a beta test, it still was pretty bad, but that’s what beta tests are for. There is no disgrace in finding problems in an application during testing. “This is pretty standard,” said David Lindsay, senior security product manager at Coverity, a development testing company. The failure with HealthCare.gov is that the beta test effectively began Oct. 1, rather than two months earlier.
Any large application project requires testing throughout the planning and development phases. This begins with the individual components as they are developed and continues as they are assembled into working parts. Even when the entire application is completed, “there is still going to be a need for production testing,” Lindsay said. “A final pass before things go out the door.”
There is a paradox in large project developments, however. The more complex the application, the more testing it needs. But the more complex it is, the later all of the parts come together and the less time there is for final testing. And it is not enough to squirt bits at the software on a laboratory testbed. It should be exposed to real users who can put it through its paces to expose the unintended consequences of development decisions. This is the beta test, the last stop before production release and the last chance to fix problems before you disappoint users and delight antagonists.
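What even a minimal beta-phase smoke test looks like can be sketched briefly. The Python below fires synthetic concurrent requests at a staging deployment and tallies failures; the host and paths are hypothetical, and synthetic traffic is no substitute for the real users a true beta requires, but a harness like this run months earlier would at least have surfaced the most basic breakage.

```python
# A minimal smoke-test sketch: hit a staging site with concurrent
# synthetic requests and report failures. The host and paths below are
# hypothetical placeholders, not HealthCare.gov endpoints.
import concurrent.futures
import urllib.request

BASE = "https://staging.example.gov"               # hypothetical staging host
PATHS = ["/", "/login", "/plans", "/eligibility"]  # hypothetical pages

def fetch(path: str) -> tuple:
    """Request one page and return (path, HTTP status or error message)."""
    try:
        with urllib.request.urlopen(BASE + path, timeout=10) as resp:
            return path, resp.status
    except OSError as err:  # URLError and timeouts are both OSError subclasses
        return path, f"FAILED: {err}"

# Replay the page list 50 times (200 requests), a dozen at a time, to
# simulate a burst of concurrent users.
with concurrent.futures.ThreadPoolExecutor(max_workers=12) as pool:
    results = list(pool.map(fetch, PATHS * 50))

failures = [r for r in results if r[1] != 200]
print(f"{len(results)} requests, {len(failures)} failures")
for path, status in failures[:10]:
    print(f"  {path}: {status}")
```

A real beta would layer actual user sessions, full enrollment workflows and back-end database load on top of anything this simple.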
HealthCare.gov certainly qualifies as a complex application. It serves as a gateway for 36 state programs and must securely access sensitive information in multiple state and federal databases. The results will determine whether and what kind of health care millions of people have access to, so the stakes are high.
Beta testing a large-scale public Web application is not simple, but the application can be rolled out quietly and gradually in a number of locations. Given the current projected fix date of Nov. 30 for the site, this testing phase should have begun no later than Aug. 1 to be ready for production by Oct. 1. But the current fix is being done on a crash basis, which is not a good way to develop and repair software, so a July beta release would have made better sense.
This would not have been simple to do. All of the states involved and the insurance companies offering coverage would have needed to have their parts ready three months early. That kind of scheduling has to be built into the project from the beginning, not brought up as an afterthought. Synchronizing the release date with the launch date is a recipe for failure, and that is what happened with HealthCare.gov.
There was time to do it right. The Affordable Care Act was passed in early 2010. The task was complicated, however, by opponents of the program who helped talk many states out of offering their own healthcare exchanges, making the federal portal that much more complex. Those opponents share much of the blame for the botched launch of the site. But that does not excuse those who planned and developed the system for failing to do it right.
Posted by William Jackson on Nov 01, 2013 at 6:20 AM