Bob Otto

COMMENTARY

Is a private cloud the smartest thing to do?

Cloud computing’s emergence has not been without controversy. Many people, particularly those in the public sector, have argued that shared infrastructure isn’t reliable or secure enough for mission-critical work.

Their advocacy for “private clouds” — internally developed and hosted cloud infrastructure — has generated its own controversy. Specifically, some argue that private-cloud advocates miss the point of cloud computing, which is to shift consumption to more elastic, commoditized external sources. This allows internal teams to focus their more extensive domain knowledge on solving organization-specific challenges.


Related coverage:

Cloud security fears outweigh savings, but perhaps not for long

CBP moving e-mail to DHS’ private cloud


As a result, some view private clouds as simply the justification used by traditionalists and “server huggers” to avoid necessary change. And in some cases, they have a point. Undoubtedly, some advocates for private clouds are just trying to preserve the status quo within their organizations.

However, that unfortunate reality shouldn’t diminish the potential of private clouds, which can be used to drive the very IT changes that some people fear.

Need for real-world strategies

Opinions differ on how large a role cloud computing can play in the enterprise. We can agree that security and other concerns will make some aspects of government computing off-limits to the public cloud for the foreseeable future. This means that many of our most critical applications and systems will not realize the cloud’s benefits, which include greater scalability, higher throughput, automated provisioning and support, and lower operating costs.

This points to the need for a hybrid strategy — relying on external resources where possible but using trusted internal resources where needed. Although a private-cloud option may not produce the same level of benefits as a public cloud, our research has shown that it still delivers a sizable return on investment. Furthermore, it lays the foundation for modernizing IT operations so they are more accountable and better able to support dynamic requirements.

What’s often overlooked is that cloud computing is also a de facto set of standards and architecture for more scalable and efficient computing, as well as a procurement model. Key technical components include virtualization, multicore commodity servers, multitenant application models, robust peer-to-peer networking and flexible storage.

Regardless of where the hosting and provisioning occurs, cloud-compliant applications and systems must be designed and configured to meet these requirements. In other words, managing your internal and external infrastructure to the same standards will simplify and streamline your operations.
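To make the “same standards everywhere” idea concrete, here is a minimal sketch in Python. The ServiceSpec class, the target names and the rendering logic are illustrative assumptions rather than any real tool or API; the point is only that one declarative description of a workload can be applied to either an internal or an external environment.

```python
from dataclasses import dataclass

@dataclass
class ServiceSpec:
    """Provider-agnostic description of a cloud-compliant workload (illustrative only)."""
    name: str
    vcpus: int
    memory_gb: int
    storage_gb: int
    multitenant: bool = True  # designed to share infrastructure safely

def render(spec: ServiceSpec, target: str) -> dict:
    """Translate one spec into settings for a hypothetical internal or external environment."""
    base = {
        "service": spec.name,
        "vcpus": spec.vcpus,
        "memory_gb": spec.memory_gb,
        "storage_gb": spec.storage_gb,
    }
    if target == "private":
        # Internally hosted: same standards, placed on agency-owned, virtualized servers.
        base["placement"] = "agency-datacenter"
    elif target == "public":
        # Externally hosted: same spec, delivered by a commodity provider.
        base["placement"] = "external-provider"
    else:
        raise ValueError(f"unknown target: {target}")
    return base

# One definition, two hosting environments.
payroll = ServiceSpec(name="payroll", vcpus=8, memory_gb=32, storage_gb=500)
print(render(payroll, "private"))
print(render(payroll, "public"))
```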

Does your agency need pay-per-use?

One of the major knocks against private clouds is the belief that they introduce extraneous or costly features that you don’t need to manage an internally hosted system. As Gartner analyst Andrea Di Maio asked in a recent blog post:

“… [Is] building a private cloud the smartest thing to do? Does the business really need all the scalability, elasticity, pay-per-use delivery style? … Reality is that for different kinds of applications, security requirements, workloads, different services may be needed to get the best possible value for money. Investing on building one’s own private cloud means investing capital, skills and credibility on one single basket and, for how good that can be, this may prevent (some) from seizing better cloud service opportunities as they become available.”

The answer to Di Maio’s first question is yes. Consider this analogy: What do you call a manufacturer with minimal insight into its costs, inventory or demand? Bankrupt, of course. Unfortunately, that is too frequently the situation in IT, where we often lack a metrics-based understanding of demand, consumption and other constraints. As a result, we provision on the basis of guesstimates, which leads to resource imbalances, performance bottlenecks or both.
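As an illustration of what metrics-based provisioning can replace, consider the following sketch. The workload names and figures are made up, but the arithmetic is the point: comparing metered demand against provisioned capacity shows at a glance which systems are over- or under-provisioned, something guesstimate-based planning cannot do.

```python
# Hypothetical metered data: provisioned capacity vs. average demand, in vCPUs.
workloads = {
    "tax-processing": {"provisioned": 64, "avg_demand": 58},
    "public-website": {"provisioned": 32, "avg_demand": 6},
    "records-archive": {"provisioned": 16, "avg_demand": 15},
}

for name, w in workloads.items():
    utilization = w["avg_demand"] / w["provisioned"]
    if utilization < 0.3:
        status = "over-provisioned (idle capacity)"
    elif utilization > 0.85:
        status = "near capacity (bottleneck risk)"
    else:
        status = "balanced"
    print(f"{name}: {utilization:.0%} utilized -> {status}")
```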

The ultimate goal should be a hybrid strategy — public, community and private clouds — so that the environment can be optimized for specific requirements and constraints. The reality is that not all applications are suitable for the public cloud. However, nearly all systems will benefit from the cost, agility and performance advantages of a cloud-based environment. 

It may not be possible — or even desirable, as Di Maio suggests — to fully duplicate the level of automation and standardization offered by the public cloud. However, as cloud technology matures, much of the expertise and tooling needed to build a private cloud is now available off the shelf.

Ultimately, the ability to segment, provision and monitor consumption at a more granular level is fundamental to managing IT like a business. Organizations are not monolithic; they have widely divergent priorities, resources and requirements. Shifting toward a cloud-based environment internally will allow you to accommodate those differences as effectively as possible.
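A brief, hypothetical example of the kind of granular, business-style view this enables: aggregating metered usage by organizational unit into a simple showback report. The units, rates and records below are illustrative assumptions only.

```python
from collections import defaultdict

# Hypothetical metering records: (business unit, resource, units consumed).
usage_records = [
    ("benefits-division", "cpu_hours", 1200),
    ("benefits-division", "storage_gb_months", 800),
    ("field-operations", "cpu_hours", 300),
    ("field-operations", "storage_gb_months", 2500),
]

# Illustrative unit rates for a showback (not chargeback) report.
rates = {"cpu_hours": 0.05, "storage_gb_months": 0.02}

totals = defaultdict(float)
for unit, resource, amount in usage_records:
    totals[unit] += amount * rates[resource]

for unit, cost in sorted(totals.items()):
    print(f"{unit}: ${cost:,.2f} of metered consumption this month")
```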

Reader Comments

Mon, Aug 29, 2011 John Annapolis

Creating a ‘cloud’ is tough, but not impossible. What’s important is scaling it appropriately for the opportunity and requirements at hand. The big issue is that some systems will always be internally hosted, managed and provisioned. As such, do you aspire to provide world-class quality of service for these mission-critical systems, or do you let them operate on substandard infrastructure? As a reminder, David said the same thing about SOA earlier, and he was right, but as we all know, those challenges were not insurmountable.

Thu, Aug 25, 2011

See David Linthicum’s ‘Why you’re not ready to create a private cloud’ (http://www.infoworld.com/d/cloud-computing/why-youre-not-ready-create-private-cloud-458). The notion that the expertise for running large-scale, secure private clouds is widely available is a fallacy. The majority of data center teams don’t currently have the internal experience required to execute effectively on private-cloud architectures. It’s nearly impossible to build and maintain that expertise in-house, because most of it comes from on-the-job training and learning. If teams have never run multitenant systems at scale, any configuration change in any part of the technology stack that is not calculated correctly leads to amplified errors and magnified recovery challenges. Also, if the hardware and software used are not purpose-built from the ground up for cloud operations, you will inherit the limitations of traditional client/server architectures and experience recurring downtime. Even those with expertise who try to scale legacy client/server architectures have this problem. See http://www.zdnet.com/blog/microsoft/outage-hits-microsoft-crm-online-office-365-customers/10359
