An Expensive Big Data Cloud Mistake

The move to the cloud continues to accelerate. While the cloud offers many advantages, it is necessary to exercise caution and mitigate its risks while pursuing those advantages. One mistake that can be quite costly is moving code and processes to the cloud without an increased focus on efficiency.

A short time ago, I wrote about some of the challenges that analytics organizations face when migrating to the cloud. When I was asked how those challenges apply more broadly, it became clear that they apply directly to a number of other areas, including the processing and analysis of big data.

Our Code Is “Efficient Enough”!

Historically, in an on-premises environment, teams working with big data weren’t necessarily known for the efficiency of their processes so much as for the results those processes generated. In practice, processing was “free” because paid-for equipment was already sitting on the data center floor, ready to be utilized. Big data processing was often run during off-peak times, making use of what would otherwise have been idle capacity while enabling the extraction of value from big data. This was a win for everyone.

A highly inefficient process that used a lot of extra temporary disk space and CPU didn’t always raise concerns as long as it completed and released those resources before other processes needed them. This led to a baseline state where efficiency only needed to be “good enough” and the focus when building big data processes was on clearing two low bars:

  1. It completed within the required timeframe
  2. It didn’t conflict with other critical processes

I can speak from personal experience coding “good enough” processes on this one!

Sticking with “good enough” efficiency for big data processes can lead to disaster on the cloud. Why? Because in a cloud environment you’ll explicitly pay for every byte and CPU cycle used. A big advantage of the cloud for processing big data is the ability to access powerful systems while paying only for what you use. The associated disadvantage is that you’ll literally pay for everything you use. Suddenly, those “good enough” processes have a hard, tangible cost. The inefficiency is no longer hidden by unused, pre-paid, otherwise idle capacity.
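To make that tangible, here is a minimal sketch in Python of how per-usage billing exposes inefficiency. The per-terabyte and per-CPU-hour rates, and the job figures, are purely illustrative assumptions, not any provider’s actual pricing.

```python
# Illustrative only: the rates below are made-up placeholders, not any
# cloud provider's real pricing. The point is that under pay-per-use
# billing, every extra byte scanned and CPU-hour consumed becomes cost.

ASSUMED_PRICE_PER_TB_SCANNED = 5.00   # hypothetical $ per TB scanned
ASSUMED_PRICE_PER_CPU_HOUR = 0.05     # hypothetical $ per CPU-hour

def cloud_job_cost(tb_scanned: float, cpu_hours: float) -> float:
    """Estimate the pay-per-use cost of one run of a big data job."""
    return (tb_scanned * ASSUMED_PRICE_PER_TB_SCANNED
            + cpu_hours * ASSUMED_PRICE_PER_CPU_HOUR)

# A "good enough" job that rescans far more data than it needs to...
inefficient = cloud_job_cost(tb_scanned=40, cpu_hours=2_000)
# ...versus a tuned version of the same job.
tuned = cloud_job_cost(tb_scanned=8, cpu_hours=400)

print(f"Inefficient run: ${inefficient:,.2f}")
print(f"Tuned run:       ${tuned:,.2f}")
print(f"Difference over a year of daily runs: ${(inefficient - tuned) * 365:,.2f}")
```

On-premises, both versions of the job would have drawn on the same pre-paid hardware; under pay-per-use billing, the gap between them is billed run after run.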

On The Cloud, “Good Enough” Isn’t “Good Enough”

I once met with the leader of a team at a major cloud provider that worked heavily with big data. Her team was tasked with moving everything it did to the cloud so that an example could be set for the company’s clients. The migration was mostly painless and seamless, at least initially. The team’s data was migrated, and the team started doing its work against the cloud platform. Everything seemed to be going extremely well … until the first bill came!

The team hadn’t thought through how much their work would cost under the new model. Prior to the migration, the team had been charged a set monthly fee to access internal systems. Within those systems, they were free to use all the resources they needed as long as their usage didn’t conflict with other needs. The team was not concerned with efficiency beyond getting to “good enough”.

That first month’s bill represented a substantial portion of the annual processing budget, which set off a scramble to get things under control. Reality struck: the team’s computing costs were no longer fixed, and inefficient processing was no longer “free”. The bill demonstrated just how true this was! The team began testing processes on small data samples, having efficiency experts review code before it was deployed, and carefully considering what a process might cost before hitting “submit”. After some pain and effort, the team got its cloud costs under control.
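A lightweight way to institutionalize that pre-submit discipline is to extrapolate from a trial run on a small sample before launching the full job. The sketch below is hypothetical: the rates, sample fraction, and budget threshold are assumptions for illustration, not part of any platform’s API.

```python
# Hypothetical sketch of the team's new pre-submit habit: run the job on a
# small sample, extrapolate resource usage to the full data set, and only
# submit the full run if the estimated cost fits the approved budget.
# Rates, sample fraction, and budget are illustrative placeholders.

ASSUMED_PRICE_PER_TB_SCANNED = 5.00   # hypothetical $ per TB scanned
ASSUMED_PRICE_PER_CPU_HOUR = 0.05     # hypothetical $ per CPU-hour

def extrapolated_cost(sample_tb: float, sample_cpu_hours: float,
                      sample_fraction: float) -> float:
    """Scale resource usage measured on a sample up to the full data set."""
    full_tb = sample_tb / sample_fraction
    full_cpu_hours = sample_cpu_hours / sample_fraction
    return (full_tb * ASSUMED_PRICE_PER_TB_SCANNED
            + full_cpu_hours * ASSUMED_PRICE_PER_CPU_HOUR)

# Usage measured from a trial run against a 1% sample of the data.
estimate = extrapolated_cost(sample_tb=0.4, sample_cpu_hours=20, sample_fraction=0.01)
BUDGET_PER_RUN = 100.00  # hypothetical approved budget per full run

if estimate > BUDGET_PER_RUN:
    print(f"Estimated ${estimate:,.2f}/run exceeds ${BUDGET_PER_RUN:,.2f} -- tune before submitting.")
else:
    print(f"Estimated ${estimate:,.2f}/run is within budget -- safe to submit.")
```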

When Migrating To The Cloud, Focus On Efficiency Up Front

The lesson to be gained from the prior story is that if big data processing is to be migrated to a cloud environment, then code efficiency must become a major focus. “Good enough” code on the cloud can lead to budget-breaking bills that can also cost people their jobs. While few people ignored efficiency altogether in the past, it was often easy to get away with minimal focus on it. If every byte and CPU cycle will be added to your bill, the focus on the efficiency of big data processing becomes absolutely crucial.

Make a point to offer, if not require, code efficiency training for anyone who creates big data processes. It can also make sense to create roles focused solely on code and process efficiency. People in these roles are tasked with tuning and blessing any big data process before it is released for use. The rest of the team can focus on getting the necessary logic down in code, while the efficiency experts focus on optimizing that base code. The worst thing to do is to migrate to the cloud without placing additional focus on efficiency. That can be a very costly mistake!

https://www.iianalytics.com/