Matt Asay (a good guy and a respected technology thinker) recently wrote an article positing that serverless computing is one of the biggest threats to containers. For those who don’t geek out on this stuff constantly, serverless computing was popularized by Amazon Web Services (AWS) with its Lambda product, introduced a few years ago. Since then, all of the public cloud vendors have expanded the scope and scale of their serverless offerings.
Which takes us through to today and a post Asay wrote last week. The post, which coincided with KubeCon, the event celebrating all things Kubernetes and containers, referenced a tweet from Brian Leroux, the founder of Begin. Leroux’s tweet, one assumes designed to heckle the Kubernetes crowd, was as follows:
Things I won’t be doing today. Provision[ing] an instance, spawn[ing] additional instances, us[ing] ssh to investigate an instance, [or] roll[ing] upgrades to a fleet of instances.
Of course, his tweet was designed to elicit questions about how he had found such a magical existence and, happy to help, Leroux explained that by leveraging serverless technologies he was able to:
- Pay only for services utilized
- Focus on business logic only
- Know immediately where any issues are because of per-function isolation
- Deploy upgrades seamlessly in seconds
His tweet was somewhat jarring, especially given that several thousand people were meeting up on the other side of the world to extol the virtues of Kubernetes and celebrate just how empowering containers are generally.
I get it, serverless is shiny and new and reminds us all of rainbow-farting unicorns
I was sitting in the audience at re:Invent, Amazon Web Services’ annual user conference, a few years ago when AWS CTO Werner Vogels introduced Lambda. The idea that developers could abstract everything related to infrastructure away to a third party was alluring. Having the ability to forget about servers forever, and to instead think from an input/action perspective, was amazing. Virtual machines were a nice improvement on physical servers (no more blinking lights but, hey, at least we got to forget about racking and stacking hardware), and cloud in its first iteration was a development of VMs, but serverless goes beyond that. With VMs you still need to think about operating systems and all of that good stuff; serverless takes all of that away once and for all.
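To make that input/action idea concrete, here is a minimal sketch of what a serverless function looks like in practice. It follows the shape of an AWS Lambda Python handler (the platform supplies the `event` and `context` arguments when it invokes you); the event fields and the local invocation at the bottom are illustrative assumptions, not anything from a real deployment.

```python
import json


def lambda_handler(event, context):
    """Pure input/action thinking: receive an event, return a response.

    No server, OS, or runtime host to provision, patch, or ssh into --
    the platform handles all of that and only bills for the invocation.
    """
    # "name" is a hypothetical field for this sketch.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }


# Invoked locally here purely for illustration; in production the
# cloud provider calls the handler in response to an event source
# (an HTTP request, a queue message, a file upload, and so on).
print(lambda_handler({"name": "serverless"}, None))
```

The operational surface area shrinks to the function body itself, which is precisely why Leroux’s list above contains no instances to provision, inspect, or upgrade.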
I’ve had many a conversation in which I’ll be talking about the future of infrastructure and suggest that serverless is a big part of it and that many, many developers will increasingly leverage serverless in the future.
New approaches don’t nullify old ones
Throughout human history, epochs have risen and, generally, in doing so have subsumed the ones before. There aren’t too many toga-wearing Greeks around the traps these days, and the Roman Empire, while awesome in its heyday, is no more. Ditto for the Philistines, the Assyrians, and the Ming and Qing dynasties. Epochs come and, importantly for this point, epochs go.
Technology doesn’t work this way. Sure, cloud arose, is growing super fast, and is becoming the default way of building new applications. But as I like to point out to people, there are still huge numbers of mission-critical applications running on mainframe computers, as there are on physical servers. Ten years from now, while the numbers may be a bit lower, that fact will still remain.
The reality on the ground
At the same time as many of my friends were geeking out on Kubernetes in Copenhagen, and Leroux was happily enjoying serverless in San Francisco, I was sitting in Las Vegas where I was chairing the Interop ITX conference. Now Interop is one of the last traditional vendor-neutral conferences on the calendar. While conference season is awash with vendor events, and there are hundreds of “Growth Hacking,” “Disruption,” and “The Future of Business” events, there are very few IT-centric events that don’t fall under the banner of a particular vendor.
As such, Interop is a really good bellwether for what real practitioners in the real world are actually doing (and by real world I don’t mean Silicon Valley startups or other tech companies, I mean organizations in traditional industries – manufacturing, banking, engineering, etc.).
A few of the sessions at Interop revolved around containers generally and Kubernetes more specifically. Presenters, in a display of best practice, ascertained the awareness of the audience about Kubernetes and the level of adoption of containers among the crowd. While everyone had heard of containers and Kubernetes and was very interested in hearing more about it (they were in the session, after all) the dearth of raised hands when the presenters enquired as to adoption was telling.
Remember that these practitioners are a good cross-section of the IT world. The fact that they were attending Interop is an indication that professional development is top of mind for them. These aren’t dinosaurs who resist change at all costs; rather, these are progressive practitioners who balance progress with the realities of day-to-day life in existing organizations.
And that reality is that, while there are certainly greenfield opportunities that will potentially find a good fit with new technology approaches, there are massive numbers of existing applications that, to a greater or lesser extent, will remain in place for the foreseeable future.
Sometimes changing paradigms can go too far
I’m a serverless fanboy; I think it’s an amazing thing. But I can say that from a conceptual and purist perspective. An IT practitioner who is looking to develop software faster than before looks at things through a different set of lenses.
As Asay pointed out in his post, while containers are, at least to an extent, a development of existing paradigms, serverless is something completely new. Serverless fundamentally changes the way developers think about software and operations teams run that software. As he wrote:
For an enterprise world steeped in virtual machines, containers have been a revelation, in large part because, while different, they still use familiar metaphors. Containers deploy in minutes instead of hours, taking far less time to boot up—among other things—but they leave developers working with servers. Serverless breaks that server metaphor.
And therein lies the problem. The ability to work in the context of a completely broken metaphor relies on many things coming together – organizational maturity, skills availability, culture, and leadership – none of which are slam dunks. I would posit that a move from VMs to containers, for example, depends on far fewer of these variables being in place than a move from VMs (or even containers) to serverless.
Frankly, it strikes me as a little flippant, and even disrespectful of the thousands of IT practitioners out there working in constrained environments, to suggest anything else. Even Asay reflects upon this fact when he writes that:
for developers who have had to live in VM land, the container shift is evolutionary, not revolutionary. That’s a good thing, overall.
Serverless will crush Kubernetes about as much as x86 servers crushed mainframes – that is, not much at all. IBM and others are still enjoying revenue from mainframe sales, the companies that offer tools for mainframe applications are doing OK and, as I pointed out before, the flight I’m about to board has many touch points enabled by mainframe technologies. Sure, serverless is amazing and will see increasing adoption, especially by the most forward-looking organizations. But Silicon Valley success does not equate to overall domination, and if I were the CNCF, the foundation that leads the Kubernetes initiative, I wouldn’t be panicking any time soon.