
Locking down cloud applications

Cloud computing offers a new approach to IT, leveraging shared resources to maximize productivity and cut overhead. But new approaches bring new threats. How can agencies minimize risk in this environment?

The Cloud Security Alliance and the Software Assurance Forum for Excellence in Code (SAFECode) have collaborated to identify a set of best practices for developing applications that meet the unique security requirements of cloud computing. The resulting paper, Practices for Secure Development of Cloud Applications, applies established methods of producing secure code to the architectural requirements of the cloud.

“For cloud computing to reach its true potential, all parties involved – both consumers and providers – will need new ways of thinking about security needs and related standards,” the paper says.

Eric Baize, senior director of the product security office at EMC Corp., who participated in the study for SAFECode, said the new guidelines are an addendum to the existing security practices identified in SAFECode’s Fundamental Practices for Secure Software Development.

About 70 percent of cloud development work overlaps with other application environments, Baize said. The remaining 30 percent differs primarily because the cloud is a multitenant environment, one that requires explicit trust boundaries because software serving one tenant can be invoked by another.

The CSA and SAFECode working group spent about six months reviewing existing development practices to identify gaps that should be filled for the cloud environment. Representatives from member companies shared their experiences and lessons learned to identify a consistent set of practices that address issues in the cloud. The working group focused on the platform-as-a-service model and identified a basic set of threat areas that needed to be addressed differently in the cloud:

  • Data breaches: Compromises in the virtual infrastructure can pose threats to co-tenants in the cloud, and techniques such as SQL injection threaten more serious consequences when multiple applications share an underlying database system. A flaw in one application could expose them all. (The first sketch following this list shows the standard defense.)
  • Data leakage and data loss: When data is kept in the cloud, the system needs to be designed, implemented and deployed so that it can withstand attacks at various levels of the multitier architecture. Changes to data should be detectable and traceable, and the data should be restorable. If encryption is used to protect data, at what layer is it performed, and how are keys managed? (The second sketch below illustrates one common answer.)
  • Insecure interfaces and APIs: Improperly designed application programming interfaces can create vulnerabilities when used by third parties.
  • Denial of service: This can occur at several layers, expanding the attack surface in a cloud environment.
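To make the data breach item concrete, here is a minimal sketch of the defense the secure-coding literature prescribes for SQL injection in a shared-database environment: parameterized queries plus an explicit tenant identifier on every lookup. The schema, table and function names are illustrative assumptions, not taken from the CSA/SAFECode paper.

```python
import sqlite3

# Illustrative multitenant table; the schema is an assumption for this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER, tenant_id TEXT, customer TEXT, amount REAL)")
conn.execute("INSERT INTO invoices VALUES (1, 'tenant-a', 'acme', 100.0)")
conn.execute("INSERT INTO invoices VALUES (2, 'tenant-b', 'acme', 250.0)")

def get_invoices(tenant_id, customer):
    # Two defenses in one query:
    #  - the ? placeholders keep attacker-supplied input out of the SQL
    #    text entirely, blocking injection;
    #  - the tenant_id predicate enforces the multitenant trust boundary,
    #    so a flaw reached through one tenant's application cannot read
    #    rows belonging to another.
    cur = conn.execute(
        "SELECT id, amount FROM invoices WHERE tenant_id = ? AND customer = ?",
        (tenant_id, customer),
    )
    return cur.fetchall()

print(get_invoices("tenant-a", "acme"))         # [(1, 100.0)]
print(get_invoices("tenant-a", "' OR '1'='1"))  # [] -- injection attempt is inert
```

Tenant filtering in application code is only one layer; stronger isolation, such as separate schemas or databases per tenant, shrinks the blast radius further.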
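The layering and key-management questions raised in the data leakage item are easiest to see in code. Below is a minimal envelope-encryption sketch, assuming the third-party Python cryptography package; in production the master key would live in a hardware security module or key-management service rather than in process memory. Because Fernet tokens are authenticated, tampering with stored ciphertext is detected at decryption time, which speaks to the requirement that changes to data be detectable.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Envelope encryption: each tenant's data is encrypted with its own data
# key, and the data key is stored only in wrapped (encrypted) form under
# a master key. Here the master key is an in-memory stand-in for an HSM
# or external key-management service.
master = Fernet(Fernet.generate_key())

def new_wrapped_data_key():
    # Generate a fresh data key and return it wrapped; only the wrapped
    # form should ever be written to storage.
    return master.encrypt(Fernet.generate_key())

def encrypt_record(wrapped_key, plaintext):
    data_key = master.decrypt(wrapped_key)  # unwrap just in time
    return Fernet(data_key).encrypt(plaintext)

def decrypt_record(wrapped_key, ciphertext):
    # Raises cryptography.fernet.InvalidToken if the ciphertext was
    # altered -- tampering is detectable, not silent.
    data_key = master.decrypt(wrapped_key)
    return Fernet(data_key).decrypt(ciphertext)

wrapped = new_wrapped_data_key()
token = encrypt_record(wrapped, b"tenant-a record 42")
assert decrypt_record(wrapped, token) == b"tenant-a record 42"
```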

The paper describes the security practices in the context of the unique requirements of the cloud. Recommendations are mapped to specific threats to provide detailed illustrations of the security issues they resolve, with specific action items for development and security teams.

Like many best practices, those identified for secure development of cloud applications are often common sense. “For us, it’s not a surprise,” Baize said of the recommendations. “I don’t expect this to be a surprise to anybody.”

Posted by William Jackson on Dec 13, 2013 at 8:14 AM



DHS drafts network protection plan, but Congress dithers

The Homeland Security Department is the lead agency in the government’s effort to protect the nation’s privately owned critical infrastructure, but the department still is struggling to define its relationship with entities over which it has little or no authority. This is not all DHS’ fault; Congress and the National Security Agency have hampered DHS efforts and sown distrust of government in industry.

Essential to the job of protecting the privately owned networks and utilities vital to the nation’s security is information sharing. This always has been a touchy subject. Government is unwilling to share what it sees as sensitive information with the private sector without security clearances, and industry is leery of exposing confidential information to government. Reports that the NSA has been helping itself to information from private networks haven’t helped the situation. On top of that, inadequate funding and day-to-day budgeting make it difficult for DHS to develop and execute a coherent program.

Technical guidance for baseline cybersecurity in critical infrastructure networks is being developed by the National Institute of Standards and Technology, but NIST is not a regulatory agency and cannot create the necessary relationship between government and industry.

Congress, in its 2012 DHS appropriation, ordered the department’s National Protection and Programs Directorate to provide a report on efforts to streamline information sharing. But this was not addressed in the department’s annual report to Congress.

“The department has reorganized divisions, altered programmatic activities, and reviewed current and past NPPD Office of Infrastructure Protection outreach efforts to federal, state, and local governments and private-sector partners ... ,” DHS wrote by way of explanation in its reply to a Government Accountability Office assessment of the report.

DHS is in the middle of developing a new National Infrastructure Protection Plan (NIPP) under Presidential Policy Directive 21 on Critical Infrastructure Security and Resilience. This will replace the original NIPP created in 2006 and supply a framework for streamlining the sharing of information between government and industry.

“Additional progress will be made during calendar year 2014,” with the creation of a formal communications program, DHS said.

Let’s hope so. To date there has been too little coordinated effort between industry and DHS. Under the current structure, DHS can only offer its help to privately owned systems. While this help has sometimes been accepted, too often it has been after the fact, when an organization is responding to or recovering from a breach. The president can go only so far in correcting this situation. Executive Office policy can set out goals, but it cannot replace legislation.

There is nothing wrong with voluntary cooperation, as far as it goes. But if the networks supporting the nation’s power systems and other critical infrastructure are to be protected, there needs to be a clear legal framework laying out the authorities, responsibilities and liabilities of both government and industry, one that enables and requires cooperation and information sharing. This is something Congress has failed to provide.

Posted by William Jackson on Dec 06, 2013 at 12:05 PM



Improving federal cybersecurity need not depend on Congress

Another year is drawing to a close, and Congress — locked in partisan gridlock and unable to fulfill its most basic responsibilities — again has failed to update any of the nation’s cybersecurity laws.

The need for cybersecurity reform is cited repeatedly by lawmakers, IT professionals and privacy advocates, but nothing is done. According to a recent report from the Congressional Research Service, “More than 50 statutes address various aspects of cybersecurity either directly or indirectly, but there is no overarching framework legislation in place. While revisions to most of those laws have been proposed over the past few years, no major cybersecurity legislation has been enacted since 2002.”

Fortunately, you don’t have to depend on Congress to secure your systems. The National Institute of Standards and Technology, which can’t tell you what to do, can tell you how to do it. Under the Federal Information Security Management Act, NIST continues to update and expand security guidance for government and for private sector IT systems that choose to use it.

So far this year, NIST has published 10 new or updated special publications in its 800 series of computer security documents, released drafts of nine more and issued five Interagency Reports on cybersecurity.

These publications contain specs, guidelines and requirements for securing government systems, and they offer flexible, frequently updated information on technology-agnostic standards and best practices. Each agency must decide for itself what security features and controls to implement and how to do it, but NIST makes available the information needed to make those decisions.

Highlights of this year’s work include:

The fourth revision of SP 800-53, the foundational catalog of security and privacy controls for federal IT systems. The catalog was first published in 2005 and last updated in 2009; the latest revision is the most comprehensive to date and focuses on designing and acquiring trustworthy systems that have security built in.
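To give a flavor of how a security team actually consumes the catalog, here is a hedged sketch of a tailoring worksheet for tracking control implementation status. The control identifiers are real SP 800-53 controls; the structure and statuses are purely illustrative, not anything the publication prescribes.

```python
# Hypothetical tailoring worksheet; only the control IDs come from SP 800-53.
controls = {
    "AC-2":  ("Account Management",                "implemented"),
    "AU-2":  ("Audit Events",                      "partially implemented"),
    "SC-28": ("Protection of Information at Rest", "planned"),
    "SI-2":  ("Flaw Remediation",                  "implemented"),
}

# List the gaps an assessor would flag during authorization.
for cid, (name, status) in sorted(controls.items()):
    if status != "implemented":
        print(f"{cid} ({name}): {status}")
```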

The revised SP 800-124 guidelines for management of mobile device security. These sharpen the focus of the original 2008 publication, excluding laptops and low-end cell phones to zero in on high-end phones and tablets. The guidelines also explain the security concerns inherent in mobile devices and recommend centralized management technologies to address those risks.

A draft of the new guidelines for supply chain risk management in SP 800-161. With increasing government reliance on off-the-shelf hardware and software produced through a global supply chain, agencies need a better understanding of the possible risks. The publication gives guidance for identifying, assessing and mitigating these threats.

A new version of SP 800-40, with revised guidelines for enterprise patch management. Patching vulnerabilities is critical to maintaining security, but managing the process at the enterprise level is complex. This document describes the challenges as well as the technology available for meeting them.
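The process SP 800-40 describes scales far beyond any one tool, but the inventory step at its core is easy to illustrate. The sketch below, a small-scale stand-in for the enterprise products the document surveys, uses pip’s outdated-package report to enumerate what is installed and which updates are pending.

```python
import json
import subprocess

# Ask pip which installed Python packages have newer releases available.
# Enterprise patch management covers operating systems and applications,
# not just one language's packages, but the first step looks the same:
# enumerate installed versions, compare against available ones and queue
# the gaps for testing and deployment.
report = subprocess.run(
    ["pip", "list", "--outdated", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout

for pkg in json.loads(report):
    print(f"{pkg['name']}: {pkg['version']} -> {pkg['latest_version']}")
```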

Production of these documents does not ensure adequate security for government IT systems. But any agency with the will to assess and manage the risks in its systems can get the up-to-date information it needs to do its job. That’s not necessarily an easy job, but help is available.

Posted by William Jackson on Nov 22, 2013 at 11:50 AM



Can't the United States and China just get along in cyberspace?

The relationship between the United States and China in cyberspace has been anything but chummy lately. Many in this country see China as a major source of sophisticated attacks against our commercial and government infrastructures. China responds that it is not the source and that it, too, is being hacked.

This has resulted in a poisonous atmosphere that the EastWest Institute calls a “serious challenge” to the friendship and prosperity of both countries. “Such accusations and arguments have fueled escalations so that the relationship is now strained, making even routine dialog apprehensive,” says a report produced for EWI’s recent World Cyberspace Cooperation Summit IV. “Neither side is comfortable with the policies and practices of the other.”

The paper, written by Karl Frederick Rauscher and Zhou Yonglin, offers what they call “practical, down to earth guidance” for normalizing cyber relations between the two countries. What it boils down to is “stuff happens”: Cyberspace is no different from any other political or diplomatic domain, and each country should accept that.

The report does not address who is responsible for launching attacks against whom, and nowhere does it suggest that either side stop hacking the other. But it does acknowledge that unrestrained hacking for criminal or political purposes strains relationships. Both the United States and China are rich in potential targets and attack platforms, and the prevailing tone of discussion between them has been one of suspicion and blame. Ten recommendations are offered to help establish trust and develop effective countermeasures to improve cybersecurity.

The initial recommendations establish a framework of trust, based both on formal policy and behavior. “Each party is evaluated based on adherence to its stated policy and plan of action.” These are basic steps, the authors say, but basics to date have been neglected, creating a crisis environment.

The remaining recommendations define how each country addresses threats and national interests in cyberspace. The most interesting are:

  • Separate critical humanitarian assets in cyberspace. This would remove noncombatants from the line of fire in a cyberwar, much like giving institutions such as hospitals special status in a war zone so that they are not attacked.
  • De-clutter espionage expectations. Basically, this means accepting that espionage will occur in cyberspace and that national security assets will be targets, just as in the real world. We might not like it, but we have learned to live with it in the three-dimensional world, and we can live with it online as well.
  • Prepare sufficiently, react quickly and summarize seriously. In other words, defend adequately rather than simply complaining after a breach occurs.

What the report essentially recommends is extending existing models for political and diplomatic relationships into cyberspace. These models are based on a recognition that every nation will act in its own self-interest. Those interests often will conflict, but conflict can be managed when each side knows what to expect. Frank statements of each country’s self-interest in cyberspace would provide that baseline and let each side decide what it considers acceptable.

Human history demonstrates that political and diplomatic relationships can fail, resulting in military action. But it also shows that these relationships can avoid war, as in the case of the major superpowers since 1945. The recommendations in the report might not stop any hacking, but they could help produce a healthier environment for addressing the issue.

Posted by William Jackson on Nov 08, 2013 at 12:42 PM