Cross-posted from Diginomica

The other day I wrote a post reflecting on a couple of conversations I’ve been a part of in recent months around the validity, or otherwise, of the entire “cloud broker” concept. The rationale behind cloud brokerages is that there is a disconnect between what vendors ideally want to sell and what customers ideally want to buy. That might be a timing disconnect, a pricing one, a currency one or whatever. In traditional industries where such a disconnect exists, the opportunity arises for a third party to build a business brokering between the two parties; we see it occur again and again in industries as diverse as electricity, oil and, yes, pork bellies.

The conversation greatly expanded when Simon Wardley wrote a post opining on the importance of open source standards. In the post he reiterated his long-held view that the industry should settle on the de facto standard of Amazon Web Services, and that the open source AWS clones are the most logical and appropriate way of creating a successful competitive market. Wardley believes that full interoperability is required and that simple API compatibility is not sufficient. As he put it:

For semantic interoperability then systems have to behave in the same way which is also why a reference model (running code) must be the standard and not a piece of paper.
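To make Wardley’s distinction concrete, consider a minimal, entirely hypothetical Python sketch (the provider classes and the terminate_instance method are my own invention, not any real cloud SDK). Two providers can expose an identical API signature yet behave differently, which is precisely the gap that API compatibility alone leaves open and that a running reference model closes:

```python
# Hypothetical illustration only: two made-up providers expose the *same*
# API signature yet behave differently -- API compatibility without
# semantic interoperability.

class ProviderA:
    def terminate_instance(self, instance_id: str) -> str:
        # Synchronous semantics: the instance is gone when the call returns.
        print(f"{instance_id}: deleted immediately")
        return "terminated"


class ProviderB:
    def terminate_instance(self, instance_id: str) -> str:
        # Asynchronous semantics: the call merely schedules deletion,
        # so the instance lingers (and may keep billing) for a while.
        print(f"{instance_id}: queued for deletion, still running")
        return "shutting-down"


# Identical call sites, different outcomes. Code written against ProviderA
# can silently misbehave on ProviderB, which is why a running reference
# implementation pins down behavior in a way a paper specification cannot.
for provider in (ProviderA(), ProviderB()):
    print(provider.terminate_instance("i-1234"))
```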

Wardley goes on to state his personal view that CloudStack is the ideal reference standard due to its aim of AWS compatibility, its Apache Software Foundation backing and its history of large-scale implementations. I tend to disagree with Wardley, not because I believe some other stack will become the standard of choice, but because I disagree with the very notion that a cloud standard is a requirement for the success of cloud computing. This is a theme I was reminded of when reading a post from Phil Wainewright in which he talks about The Open Group’s attempt to promulgate some kind of “cloud standard” under its Platform 3.0 banner. The initiative aims to identify:

…a set of new platform capabilities, and architecting and standardizing an IT platform by which enterprises can reap the business benefits of Platform 3.0. It is our intention that these capabilities will enable enterprises to:

  • Process data “in the Cloud”
  • Integrate mobile devices with enterprise computing
  • Incorporate new sources of data, including social media and sensors in the Internet of Things
  • Manage and share data that has high volume, velocity, variety and distribution
  • Turn the data into usable information through correlation, fusion, analysis and visualization

The forum will bring together a community of industry experts and thought leaders whose purpose it will be to meet these goals, initiate and manage programs to support them, and promote the results.

To which I groaned. For two different reasons.

The only standard that matters is one that is hewn at the coal face of IT practice

As I said, somewhat flippantly, in a comment to Phil’s post: if a standard flaps its wings in a forest somewhere, but no one in the world actually cares or adopts it, does it really exist? Put another way, the only standards that matter are those that rise naturally through industry use, not those created by big committees filled with stuffed shirts. Despite not agreeing with Wardley’s view that we should all settle on the AWS standard and move on, I do believe that, should a particular use case or organizational requirement determine that a standard reference architecture is necessary, that standard should be one which has earned its place through real-world adoption.

For enterprises that require an interoperable series of public and private cloud infrastructures for a particular problem-set, clearly a standard architecture is important. I’m not as quick as Wardley to anoint the AWS approach as the correct route, however; one could easily argue that OpenStack, with its widespread following on both the sell side and the buy side, could also be considered the de facto standard.

Beyond a detailed argument over which standard is the one to beat, however, it strikes me that the coming reality of enterprise IT will make standards less important.

The rise of multi-cloud and the distributed organization

I recently wrote a review of a session I took part in at the recent GigaOm Structure conference. The session sought to draw some conclusions as to the makeup of enterprise infrastructure going forward. The basic premise was that organizations will be less focused on the infrastructure interoperability that lets them move workloads between discrete pools of resources, and will instead focus on discrete best-of-breed solutions for particular needs. I believe that the organization of the future will be a much more modular place, with individual teams coming and going on a project-driven basis. If this is indeed the case, the spread of IT assets will be far more organic than the monolithic stack that currently typifies big IT.

To this point, multi-cloud is a far more mature and pragmatic approach to cloud infrastructure and, in my view, directly delivers on the “composable enterprise” notion that Warner Music Group CTO Jonathan Murray spoke of at Structure. An organization embracing a multi-cloud strategy will likely have infrastructure resources spanning multiple vendors, potentially both public and private, and likely across a variety of operating stacks. In the multi-cloud scenario it’s no shock to see an organization using AWS, some public OpenStack and a private cloud built on one of the stack products.
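In practice, the glue for this kind of multi-cloud setup is often a client-side abstraction library rather than a wire-level standard. As a rough sketch (the credentials, region and auth URL below are placeholders), Apache Libcloud can drive both an AWS account and a private OpenStack cloud from one script, with no requirement that the two stacks interoperate with each other:

```python
# A rough multi-cloud sketch using Apache Libcloud. The credentials, region
# and auth URL are placeholders. One client-side abstraction spans AWS and
# OpenStack without the two stacks being interoperable with each other.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

# Amazon EC2 for one workload...
Ec2Driver = get_driver(Provider.EC2)
aws = Ec2Driver("AWS_ACCESS_KEY", "AWS_SECRET_KEY", region="us-east-1")

# ...and a private OpenStack cloud for another.
OpenStackDriver = get_driver(Provider.OPENSTACK)
openstack = OpenStackDriver(
    "username",
    "password",
    ex_force_auth_url="https://keystone.example.com:5000",
    ex_force_auth_version="2.0_password",
    ex_tenant_name="my-tenant",
)

# The same calls work against both pools of infrastructure.
for cloud in (aws, openstack):
    for node in cloud.list_nodes():
        print(cloud.name, node.name, node.state)
```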

This multi-cloud view is one I’m seeing borne out by real-world examples. At Structure we heard from PayPal, which is using both VMware and OpenStack at scale. Commentators who hold the orthodox view that interoperability is a core requirement within an IT establishment were left speechless by the news that not only does PayPal use both products, but at least 20% of its production workloads run on OpenStack, and it envisages a continuing multi-cloud infrastructure makeup for the foreseeable future.

Even Netflix, the absolute poster child for AWS, is by default a multi-cloud consumer: it uses many of its own tools to augment the AWS portfolio and is actively looking to vendors outside of AWS for products that meet its needs. Scuttlebutt holds that Netflix is taking a deep look at Google Compute Engine, yet another infrastructure option that isn’t interoperable with AWS, to host some of its workloads.

Looking to the future – A world relaxed about standards

While it may be anathema to the many people who spend their days in meetings of different standards groups, the changing landscape of the enterprise generally, and a more mature view of enterprise IT in particular, are making standards less relevant. While there are undoubtedly specific organizational use cases where a common architecture is necessary, this will be achieved through market forces: an AWS/Eucalyptus approach perhaps, or a combination of public and private OpenStack clouds. But for the majority of organizations in the future, interoperability will take a back seat to highly specific niche offerings that meet a particular requirement.

Ben Kepes

Ben Kepes is a technology evangelist, an investor, a commentator and a business adviser. Ben covers the convergence of technology, mobile, ubiquity and agility, all enabled by the Cloud. His areas of interest extend to enterprise software, software integration, financial/accounting software, platforms and infrastructure as well as articulating technology simply for everyday users.
