Software as a Service, Infrastructure as a Service, on-demand platforms – all these things are examples of what we conveniently lump into the category of “Cloud Computing.” For over 15 years I’ve been advocating for the benefits that the cloud can bring. Lower cost, reduced barriers to entry, an ability to deliver agility – all are very real advantages of using cloud-based solutions rather than more traditional approaches.

For the first few years that I was working as an independent industry expert, I was one of a small number of people advocating for the move to the cloud. Part of a small global cohort that loosely coalesced into a group known as The Clouderati, we felt that we were revolutionaries at war with the orthodox view of the industry.

Indeed, I can remember countless industry conferences where we, The Clouderati, would gather together. We did so in the face of traditional technology vendors who told us that we were charlatans. These vendors were quick to suggest that the moves we were pushing would not drive the benefits that we claimed and were, in fact, dangerous for credible organisations to take heed of.

Fast forward to today and the world has, indeed, borne out the predictions we made over a decade ago. Almost every organisation takes advantage of cloud in some form – from those doing their office productivity through Microsoft’s Office 365 or Google Workspace to those buying infrastructure from global mega-vendor Amazon Web Services or hometown hero Catalyst IT, cloud is now the default way to deliver technology.

But, despite all the benefits that cloud brings, as with any technology decision, people need to assess the flip side: the unintended consequences and the potential collateral damage that any decision may cause. Before anyone thinks that I might have changed my tune and be turning away from a cloud-first approach, this article is in no way intended to lessen the positive impacts that people understand cloud can bring. Rather, it is an educational piece, intended to help people make informed decisions.

I was thinking about unintended consequences recently, after I read an article about someone who suffered almost unimaginable unintended consequences from a life firmly embedded in the cloud. To cut a long story short, Mark is a caring father who also happens to live his life in the cloud – from photo sharing to his calendar, from his phone service to document creation, Mark does it all in the cloud, and in this case, with Google.

Mark also had a personal situation where his son had some health issues – specifically, a possible infection of his genitals. Given the world was in the midst of a pandemic, and physical examinations were unavailable, Mark did what any technologically aware individual would and booked a virtual appointment. For this virtual consultation, he messaged the clinician a photo he had taken of the affected area. There’s plenty of detail over at the original article, but smart readers will guess the unintended consequence of photographing and sharing said image of a child’s genitals – Google’s incredibly smart, if very binary, AI-powered abuse-detection algorithm came into play.

Said algorithm decided that by sharing an image of a youngster’s genitals, Mark must obviously be a pedophile. As such, and in a well-intentioned attempt to weed out exploitative behavior, Google flagged Mark’s account and froze it. Mark, who had over the previous decade or more increasingly come to rely upon Google to fuel his digital life, was hamstrung – from his phone service to his contact list, from the totality of his online images to his document history, Mark was entirely locked out of his digital life.

To make matters worse, since Google hosted his email address, Mark was unable to reset passwords or change notification addresses for all his other digital services. The nuking of his Google account thus had the unintended consequence of cascading down into almost every digital service he used.

Now of course, in an ideal world, a company like Google, which takes advantage of machine-based automation to scan billions of images for child abuse material, would also realise that human intervention within these automated processes is a wise safeguard. In this sort of ideal world, Google would use the old military approach of trust but verify: let the automated algorithm flag suspicious content, but then have a real-life human being look at that flag. This human would be able to understand nuance and context in a way that an algorithm cannot and, in this example, would obviously have seen that there was a very genuine reason for this image and that, as such, Mark’s digital life should not be vaporized.

Alas, there is another trait that goes alongside these technology masters of the universe’s desire to automate everything: an almost unwavering confidence in the answers that algorithms produce. Like modern-day religious converts, these companies seem to have the attitude that if the algorithm flags something as dodgy, it must indeed be dodgy. An ability to separate the signal from the noise is sadly lacking from these folks.


Ben Kepes

Ben Kepes is a technology evangelist, an investor, a commentator and a business adviser. Ben covers the convergence of technology, mobile, ubiquity and agility, all enabled by the Cloud. His areas of interest extend to enterprise software, software integration, financial/accounting software, platforms and infrastructure as well as articulating technology simply for everyday users.
