Recently at the Defrag conference I moderated a panel looking at the cloud. One of the panelists works for a traditional data center operator that is moving to providing cloud solutions (although, as we will see, one could argue about that). During his presentation, the panelist reiterated some FUD I’ve heard many times before from traditional players: companies looking to use the cloud need to be concerned about vendors who use cheap commodity hardware and who aren’t “proven long term providers”.

When questioning the presenter, I pointed out that Google has become powerful by using commodity hardware en masse, in an approach that treats each individual component as unimportant, since each is an infinitesimally small part of the total infrastructure. While it was pointed out that Google in fact uses commodity hardware with some specific technology built into it, my point still stands – if a cloud provider has sufficient scale and redundancy, then the quality of individual components can almost be forgotten – the provider relies on the fact that its infrastructure can “self-heal” in the event of any one box failing.
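To make the “self-heal” point concrete, here is a minimal sketch of the control-loop idea: keep a fleet at a desired size and simply replace any box that fails its health check, rather than repairing it. Every name and number below is hypothetical and not tied to any particular provider’s API.

```python
import time

DESIRED_NODES = 100          # target fleet size; any one box is ~1% of total capacity
CHECK_INTERVAL_SECONDS = 30  # how often the loop reconciles actual state with desired state


def provision_node(node_id):
    """Stand up a fresh commodity node (hypothetical; details vary by provider)."""
    return {"id": f"node-{node_id}", "healthy": True}


def reconcile(fleet, next_id):
    """Drop failed nodes and provision replacements until the fleet is back to size."""
    healthy = [node for node in fleet if node["healthy"]]
    while len(healthy) < DESIRED_NODES:
        healthy.append(provision_node(next_id))
        next_id += 1
    return healthy, next_id


if __name__ == "__main__":
    fleet = [provision_node(i) for i in range(DESIRED_NODES)]
    next_id = DESIRED_NODES
    while True:
        fleet, next_id = reconcile(fleet, next_id)  # a dead box is replaced, not repaired
        time.sleep(CHECK_INTERVAL_SECONDS)
```

The point is architectural: with enough scale and automation, the failure of any single machine is handled by a routine loop rather than treated as an incident.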

The second piece of CloudWash we heard was that companies have no real need for instant provisioning, and that an email to a provider with a 24-hour turnaround should generally be sufficient. This sort of approach really annoys me – it muddies the waters around what is, and what isn’t, cloud. A major part of the benefit of the cloud is that it can scale at will – whenever significant lag or manual intervention is required to do that, much of the value of the cloud is lost.
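By way of contrast, here is what instant provisioning looks like in practice: a single API call that returns capacity in minutes. The endpoint, key and server size below are purely illustrative and do not refer to any specific vendor’s API.

```python
import requests

API_BASE = "https://api.example-cloud.com/v1"  # hypothetical provider endpoint
API_KEY = "YOUR_API_KEY"                       # placeholder credential


def provision_servers(count, size="standard-2cpu-4gb"):
    """Request `count` standardised virtual servers in a single API call."""
    response = requests.post(
        f"{API_BASE}/servers",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"count": count, "size": size},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["server_ids"]


if __name__ == "__main__":
    # Scale out by asking for more servers, not by raising a ticket and waiting a day.
    print(provision_servers(10))
```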

It seems appropriate, then, to (yet again) remind people what does, and what does not, constitute cloud. Fortunately, the always impressive Simon Wardley recently posted this checklist:

IF :-

  • your data centre is full of racks or containers each with volumes of highly commoditised servers
  • you’ve stripped out almost all physical redundancy because frankly it’s too expensive and only exists because of legacy architectural principles due to the high MTTR for replacement of equipment
  • you’re working on the principle of volume operations and provision of standardised “good enough” components with defined sizes of virtual servers
  • the environment is heavily automated [read “allows for deployment of infrastructure automatically via API”]
  • you’re working hard to drive even greater standardisation and cost efficiencies
  • you don’t know where applications are running in your data centre and you don’t care
  • you don’t care if a single server dies

It’s a worthwhile list and one that is helpful in the face of the FUD of some vendors. As for the “trusted provider” comment, this is yet another subject that comes up repeatedly. Yes, I absolutely agree that customers need to perform due diligence and ensure that providers are robust, offer good SLAs and provide support when needed – but to suggest that traditional vendors somehow have a sole franchise on quality is plain wrong. I know of traditional vendors whose performance is shocking and whose cloud offerings are laughable. Similarly, I know of startups whose performance, service and professionalism give customers ultimate confidence. One should judge a vendor by its performance, not by some arbitrary measure.

It’s time to end the FUD – it does nothing for the credibility of our industry and is ultimately unhelpful, even for the originator of the FUD.

