How agencies can take social media through the FISMA process
- By (ISC)2 Government Advisory Board Executive Writers Bureau
- Nov 05, 2013
As they did with many cloud-based technologies, agencies have rushed to social media platforms — Facebook, Twitter, Flickr, YouTube and others — often without involvement from the CIO or chief information security officer.
While social media services may be freely available, government agencies are weighing two questions: 1) are the services subject to the Federal Information Security Management Act, and 2) what should an agency do to understand the risk it is assuming with social media? Several agencies are finding the answer by simply following the National Institute of Standards and Technology’s existing FISMA process.
FISMA is a broad law that covers federal agencies as well as organizations that process, store, transmit or disseminate information on their behalf. Therefore, when looking at an agency’s social media use, it is important to ask the question: “Does this platform process, store, transmit or disseminate information on behalf of the agency?” If the answer is “yes,” FISMA most likely applies. If the social media platform is in the cloud, the Federal Risk and Authorization Management Program may also apply. Agencies should always consult their CISO and legal departments when determining the applicability of FISMA and FedRAMP.
Understand the impact
The first step in NIST’s risk management framework is to understand the impact of the information and the system on the agency and the activities it supports. When social media is used only for information dissemination, often this results in a “Low” impact categorization, unless there is a special need for integrity or availability. Such needs may arise if social media is being relied upon during national emergencies or similar situations.
If social media is being used as an interactive platform for internal agency work, agencies should consider at least a “Moderate” impact rating for the system. No agency, of course, should use freely available social media tools such as Facebook or Twitter for “High” impact missions or information. (You wouldn’t coordinate the nuclear missile defense system through Facebook, for example.) FedRAMP reinforces this position by only recognizing Low and Moderate impact clouds.
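The categorization logic described above follows the FIPS 199 “high water mark” rule: the overall impact level is the highest of the confidentiality, integrity and availability impact levels. A minimal sketch, with illustrative (not official) function and level names:

```python
# FIPS 199-style "high water mark" categorization sketch.
# The overall system impact is the highest of the three
# security-objective impact levels.

LEVELS = {"low": 1, "moderate": 2, "high": 3}

def categorize(confidentiality: str, integrity: str, availability: str) -> str:
    """Return the overall impact level as the high water mark."""
    return max(confidentiality, integrity, availability,
               key=lambda level: LEVELS[level.lower()])

# A public-dissemination feed: low C, low I, low A -> "low"
print(categorize("low", "low", "low"))       # low
# Relied upon during emergencies: availability rises -> "moderate"
print(categorize("low", "low", "moderate"))  # moderate
```

Under this rule, a special availability need alone is enough to pull an otherwise Low system up to Moderate.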
Baselines, tailoring, scoping
Once the impact is known, a control baseline should be selected from NIST Special Publication 800-53 or FedRAMP, depending on the implementation. This baseline should then be scoped to ensure that only existing capabilities are addressed with the controls. Other controls should be tailored based on the capabilities offered by the social media provider or platform. A control should not be scoped out of the baseline just because it cannot be met. In these situations, the risk should be determined as part of the risk assessment process.
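The scoping and tailoring step can be thought of as a filter over the selected baseline, with one important rule: an unmet control is never simply dropped. A minimal sketch, where the control IDs and the provider-capability data are hypothetical placeholders rather than an actual SP 800-53 catalog:

```python
# Sketch of scoping/tailoring a control baseline against a
# social media provider's capabilities. Control IDs and the
# "provider_supports" mapping are illustrative only.

baseline = ["AC-2", "AU-2", "CP-9", "SC-13"]      # selected per impact level
provider_supports = {"AC-2": True, "AU-2": True,  # what the platform offers
                     "CP-9": False, "SC-13": False}

tailored, for_risk_assessment = [], []
for control in baseline:
    if provider_supports.get(control, False):
        tailored.append(control)          # implement via provider features
    else:
        # Do NOT scope a control out just because it cannot be met;
        # carry it into the risk assessment instead.
        for_risk_assessment.append(control)

print(tailored)              # ['AC-2', 'AU-2']
print(for_risk_assessment)   # ['CP-9', 'SC-13']
```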
Assessment, risk, authorization
Once the baseline is determined, the most challenging aspect of the process begins. Typically, a cloud provider would be assessed by a third-party assessment organization (3PAO) or, if hosted internally, the platform would be assessed by an independent assessor of the agency’s choosing. Many social media platform owners will not allow assessments or will state in their terms of service that testing their system is prohibited.
Organizations have two options when confronted with these challenges. First, they can find out if the social media platform has an existing assessment completed as part of a similar audit process, such as ISO or Sarbanes-Oxley. These results can often be used to map back to the NIST controls and assessments. Many social media providers have also clashed with the Federal Trade Commission over violations of privacy and security. As a result, most of these organizations have agreed to build robust security programs and perform ongoing audits as part of their agreements with the FTC. If an agency can get information related to the security programs, it can greatly help the assessment process.
If an agency is unable to get assessment information, it should conduct its own assessment with any information it can get from the provider, even if information is limited. Any control that does not have evidence should be considered “not in place,” as the agency cannot say with any level of certainty that the provider has implemented the control. The results of the assessments collected or performed are a key component of the overall risk assessment.
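The evidence rule above can be sketched as a simple determination over whatever documentation the agency collects. The evidence sources shown (ISO certifications, FTC-mandated audit reports) are illustrative examples, not a prescribed format:

```python
# Sketch: any control without supporting evidence is recorded as
# "not in place," since the agency cannot say with certainty that
# the provider has implemented it. Evidence sources are illustrative.

evidence = {
    "AC-2": ["ISO 27001 certificate"],
    "AU-2": ["FTC-mandated audit report"],
    "CP-9": [],   # no evidence obtained from the provider
}

assessment = {control: ("in place" if sources else "not in place")
              for control, sources in evidence.items()}

print(assessment["CP-9"])  # not in place
```

Controls marked “not in place” then feed directly into the risk assessment described next, where the authorizing official decides whether the residual risk is acceptable.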
The risk assessment should take the impact from the first step — Low or Moderate impact — and determine the likelihood of exploitation by a threat and the expected overall impact to the organization. In social media platforms used for public dissemination, it is likely that all residual risks will be low and will simply be accepted by an agency’s authorizing official. Moderate- or high-risk findings should be reviewed to determine if the social media platform can be used a different way to lessen the impact or if the organization is willing to accept the risk.
Finally, organizations should monitor their social media platforms like any other IT system. In many cases, agencies will not have the same luxury of automation they have with internal or contracted systems. Automated scanning, patching and reporting are not possible with the majority of free social media platforms. Agencies should therefore apply compensating controls and ensure responsible parties are following any system status pages, emails, alerts or blogs that may give insight as to the state of the social media system.
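A compensating control for this lack of automation can be as simple as routinely scanning provider status entries for signs of trouble. A minimal sketch, in which the feed entries and keyword list are illustrative; a real monitor would pull from the provider’s actual status page, mailing list or blog:

```python
# Sketch of a compensating monitoring control: flag provider
# status-page entries that a responsible party should review.
# Keywords and feed entries are illustrative only.

ALERT_KEYWORDS = ("outage", "degraded", "incident", "maintenance")

def needs_attention(entries):
    """Return the status entries containing any alert keyword."""
    return [e for e in entries
            if any(word in e.lower() for word in ALERT_KEYWORDS)]

feed = ["All systems operational",
        "Investigating degraded API performance",
        "Scheduled maintenance this weekend"]

print(needs_attention(feed))
```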
FISMA can help agencies better understand the risks they are assuming when using social media. The frameworks and methodologies already exist for assessment and risk management; agencies must simply use them effectively.