Trusted computing can help agencies manage geographically dispersed servers and emerging OpenStack deployments.
Federal IT professionals used to be able to depend on keeping their information safe in secure on-premises data centers, but times have changed. Traditional data centers offered consistent control, visibility and security; today’s cloud-based environments deliver those capabilities in new forms, a shift IT administrators wrestle with every day.
This is particularly problematic for those managing two facets of today’s cloud-driven environment: geographically dispersed servers and emerging OpenStack deployments. Each poses its own challenges, including the need for assurance that systems adhere to location-specific laws and to security requirements in general.
Employing trusted computing can help overcome these challenges. Through trusted computing, administrators can verify the trustworthiness of the virtual infrastructure, identify the location of servers and audit for compliance with federal mandates. They can also gain better visibility into, and understanding of, their OpenStack deployments.
The process starts with what’s known as the “hardware root of trust,” a unique measurement that lets managers tell whether the platform has been modified and where a particular cloud server is located, allowing them to enforce location-based restrictions. In the words of the National Cybersecurity Center of Excellence, the hardware root of trust “determines the integrity of the compute hardware and restricts the workload to cloud servers within a location.” It’s the foundation for the National Institute of Standards and Technology’s “Trusted Geolocation in the Cloud” proof-of-concept implementation.
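In TPM-based trusted computing, the root of trust works by “extending” cryptographic measurements into platform configuration registers: each new measurement is hashed together with the register’s previous contents, so any change to the boot chain produces a different final digest. A minimal sketch of that extend-and-verify pattern (the component strings and the “golden” value here are illustrative placeholders, not real firmware hashes):

```python
import hashlib

def extend(pcr, measurement):
    """Extend a PCR: new value = SHA-256(old value || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

def measure_boot_chain(components):
    """Fold each boot component's measurement into a single PCR value."""
    pcr = b"\x00" * 32  # PCRs start zeroed at power-on
    for component in components:
        pcr = extend(pcr, hashlib.sha256(component).digest())
    return pcr

# Hypothetical boot chain: firmware, bootloader, hypervisor image
golden = measure_boot_chain([b"firmware-v1.2", b"bootloader-v3", b"hypervisor-v9"])

# A modified hypervisor yields a different final digest, so the platform
# can no longer attest to the expected measurement.
tampered = measure_boot_chain([b"firmware-v1.2", b"bootloader-v3", b"hypervisor-evil"])
print(golden != tampered)  # True: tampering is detectable
```

Because the extend operation is one-way and order-sensitive, a platform cannot “un-measure” a component after the fact, which is what makes the final register value usable as evidence of integrity.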
Trust in the cloud
Clouds know no geographic boundaries, but governments do: many have specific laws governing how applications and data in the cloud must be secured and where they may reside. Under the Federal Information Security Management Act, for example, certain federal applications must run only within the United States. As a result, compliance can be challenging.
Security challenges arise because data and workloads can migrate from system to system, as well as across cloud deployments that span national boundaries. It is critical that these environments be managed correctly. Cloud users need to assess solutions that can help them manage their security, risk and compliance requirements while maintaining the efficiencies the cloud provides.
Trusted Geolocation in the Cloud allows administrators to identify the location of a particular server, verify its trustworthiness and set configuration management and policy enforcement parameters to ensure the server is adhering to geographic restrictions. Periodic audits can be performed – automatically, minimizing the need for human intervention – to ensure that the server remains trustworthy and continues to adhere to regulations.
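The placement-and-audit loop described above reduces to a simple policy check: each server reports an attested trust status and a geolocation tag, and a workload may land on (and remain on) a server only if both satisfy policy. A sketch under assumed data shapes (the server records and the “U.S.-only” rule are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    trusted: bool  # platform measurements matched known-good values
    country: str   # geolocation tag attested alongside the measurements

def eligible(server, allowed_countries):
    """A server may host a restricted workload only if it is both
    trusted and located in an allowed country."""
    return server.trusted and server.country in allowed_countries

def audit(placements, allowed_countries):
    """Periodic audit: return workloads whose current host no longer
    satisfies the trust or location policy."""
    return [w for w, s in placements.items()
            if not eligible(s, allowed_countries)]

us_only = {"US"}
servers = [
    Server("east-1", trusted=True,  country="US"),
    Server("eu-3",   trusted=True,  country="DE"),
    Server("west-2", trusted=False, country="US"),
]
print([s.name for s in servers if eligible(s, us_only)])  # ['east-1']
```

Running `audit` on a schedule catches drift after initial placement, such as a host whose measurements stop matching or a workload migrated across a border.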
Additionally, Trusted Geolocation in the Cloud is a highly efficient and necessary means of verifying trusted servers, regardless of where they are located. It’s an ideal solution for federal managers dealing with highly distributed workloads.
The OpenStack challenge
The benefits of trusted computing are not limited to geography, however. Trusted computing can also help federal IT professionals gain insight into and control over their on-premises or cloud OpenStack deployments.
The U.S. government’s adoption of OpenStack as a cloud platform has gained significant traction over the last few years, but many users remain concerned about their ability to verify their virtualization stacks. Trusted computing minimizes that problem by allowing managers to integrate secure controls and workload management protocols into their OpenStack environments.
The hardware root of trust provides the foundation for machines running OpenStack. It enables the machines to attest that they are running the correct version of the infrastructure and virtualization software and that the software hasn’t been tampered with or changed.
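In practice, this is the mechanism behind OpenStack Nova’s TrustedFilter, a scheduler filter that queried an external attestation service before placing instances on a host (the filter was deprecated and later removed, so this is a historical illustration; section and option names varied by release, and the endpoint below is hypothetical):

```ini
# nova.conf (illustrative; exact options depend on the OpenStack release)
[filter_scheduler]
enabled_filters = ComputeFilter,TrustedFilter

[trusted_computing]
attestation_server = attestation.example.gov
attestation_port = 8443
attestation_api_url = /OpenAttestationWebServices/V1.0
```

Flavors could then request attested hosts via the extra spec `trust:trusted_host=trusted`, so the scheduler would place those instances only on hosts the attestation service reported as trusted.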
As an acknowledgement of OpenStack’s growing popularity, many vendors are now actively supporting the software through their trusted computing solutions. Given that, it’s becoming increasingly easy for federal IT managers to identify products to help them verify their OpenStack deployments.
Two steps forward
All of this may sound complex, but in truth there are really only two key steps toward implementing trusted computing. The first is likely obvious: managers must procure systems that support trusted computing technologies.
The second step involves a slight shift in workplace culture. IT professionals must work with their security teams to clearly understand and delineate the policies that should be put in place; verification can then be built around those specific policies. This should happen at the very beginning and serve as part of the foundation of a trusted computing initiative.
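One way to make that delineation concrete is to write the policies down as data, so the security team can review them and automated verification can consume them unchanged. A minimal sketch with invented workload classes and rules:

```python
# Illustrative policy table: workload class -> placement requirements.
# The workload names, trust flags and country sets are assumptions.
POLICIES = {
    "tax-records":    {"require_trusted": True,  "allowed_countries": {"US"}},
    "public-website": {"require_trusted": False, "allowed_countries": {"US", "CA", "DE"}},
}

def compliant(workload, host_trusted, host_country):
    """Check one workload placement against the agreed policy table."""
    policy = POLICIES[workload]
    if policy["require_trusted"] and not host_trusted:
        return False
    return host_country in policy["allowed_countries"]

print(compliant("tax-records", host_trusted=True, host_country="US"))  # True
print(compliant("tax-records", host_trusted=True, host_country="DE"))  # False
```

Keeping the policy table separate from the enforcement code means the rules can be agreed on up front and audited later without reading the implementation.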
There’s a reason why NIST and the National Cybersecurity Center of Excellence have been pushing trusted computing as of late. As government agencies continue to move to the cloud, federal IT professionals need to do everything they can to ensure that their infrastructure, and the data it houses, remains secure. Trusted computing does that, making it a solution that every federal IT manager should take into consideration.