Much of the work I do revolves around talking to organizations about how they’ll start their journey to the cloud – barriers to adoption, cultural issues and security concerns are typically the main topics of interest. Recently, however, I was pointed in the direction of a forum discussion that raised an issue that, while not unique to the cloud, is well worth talking about.
In the post, the IT practitioner wrote:
I’ve been involved on the periphery of several RFP responses, Statements of Work etc. that relate in one way or another to government agencies and large corporations outsourcing their infrastructure hosting to a provider. I’m not talking here about equipment co-location, but rather scenarios where the hosting provider owns the tin and truly delivers the Data Centre Infrastructure as a Service, including the storage. No doubt this is a growing trend both here in NZ and globally.
Nowhere during my involvement, which as I said has not been in-depth, have I come across descriptions, or requirements for that matter, on the “exit approach”
So I’ve been sitting here pondering, when the relationship between the customer and provider comes to an end after say, three, five or 10 years, and the customer has 100, 200, 600 TeraBytes of data sitting on the equipment (a SAN one would expect) owned by the incumbent provider; how on earth would they go about migrating this to the new infrastructure or archiving it off for later access?
Keen to hear people’s thoughts on this.
It really is a valid point. We all spend a lot of time worrying about getting our organization to start using a cloud service, but what happens when we amass vast quantities of data on that service and want to shift? Network limitations are such that this can cause real issues. It’s interesting to note that many cloud vendors allow customers to send data on tape or drive to the vendor as the fastest practicable way to migrate data across – if this approach needs to be taken at the start of a cloud relationship, imagine how much more important it is after a period of usage.
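To see why network limitations bite so hard here, a quick back-of-envelope calculation helps. This is only a sketch: it assumes decimal terabytes, a fully utilized link with no protocol overhead, retries or throttling, so real transfers would take longer.

```python
def transfer_days(terabytes: float, link_gbps: float) -> float:
    """Days to move `terabytes` of data over a link of `link_gbps`
    gigabits per second, assuming 100% sustained utilization."""
    bits = terabytes * 1e12 * 8          # TB -> bytes -> bits
    seconds = bits / (link_gbps * 1e9)   # bits / (bits per second)
    return seconds / 86400               # seconds per day

# The 600 TB figure from the forum post, over a dedicated 1 Gbps link:
print(f"{transfer_days(600, 1):.1f} days")   # roughly 55 days
print(f"{transfer_days(600, 10):.1f} days")  # roughly 5.5 days at 10 Gbps
```

Even with an uncontended 1 Gbps pipe, the hypothetical 600 TB exit takes the better part of two months, which is why shipping physical media remains attractive.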
I’d be keen to understand whether organizations are thinking about this issue and, if so, what they’ve done to resolve any potential issues. Feel free to comment below.
I’ve wondered about this from a slightly different point of view: ideally cloud contracts would include clauses about these end-of-term arrangements. Does this actually happen in practice?
Just as important is the format in which the data is presented on export, for instance table relationships in the case of a database migration. These questions are rarely asked at the outset, but they can result in a lot of time spent migrating to a new product. Some big cloud-based database vendors, it seems, conveniently leverage this to dissuade people from moving to competitors’ offerings.
I’m working with two “cloud database” suppliers at this moment who do include contractual provision for their clients’ exit strategies, both in terms of data availability and format. What is less certain is what happens if these suppliers fail: despite what the contract says about clients owning their data, the practicalities of getting at your data when dealing with receivers and other creditors may take longer to sort out than business continuity allows.
Whatever technology is used, organisations need to manage their data risks. Ensuring data policies are in place and tested is a governance responsibility, analogous to ensuring there are robust financial controls in place. It’s a shame boards do not give it as much attention, but then I suspect they don’t usually have the skills available to do so. The IT profession needs to step up.