Cloud computing is hot, but cloud wastage is chilling

‘Doing the right things’ and ‘doing the things right’ are two nearly identical phrases; yet, by swapping a few words around, the meaning changes drastically – especially in the world of Information Technology.

Take cloud computing. Migrating to the cloud falls under ‘doing the right things’; using the cloud effectively and efficiently falls under ‘doing the things right’. And while the former is certainly a good place to start, it is the latter that lays the foundations for sustainable value-add.

By the end of 2020, about 83 percent of enterprise workloads will be running in the cloud, and worldwide public cloud revenue will be around $257.9 billion, growing to $307 billion in 2021 and $364 billion in 2022, according to Gartner. Compared to five years ago, you would be hard pressed to find an organization reporting doubts about cloud adoption. In fact, 55 percent of organizations are currently using multiple public clouds, according to an IDG survey – indicating that the business and cost benefits of moving to the cloud are gaining recognition. Fast forward five years and the statistics may well still show an upward trend, with perhaps 98 percent of enterprise workloads on the cloud, public cloud revenue around $600 billion and multi-cloud adoption around 80 percent. However, this is only one side of the story.

Adoption ≠ optimization

Against the backdrop of fast-growing adoption of cloud technologies, one major challenge and concern most CTOs have wrestled with over the past few years is cloud wastage. Look at any of the 2020 surveys from Gartner, ParkMyCloud or Flexera: they all point to the same biggest challenge – cost savings. The increasing focus on this area over the last two years is understandable. When adopting a new technology, the first goal is to ensure that it solves most, if not all, of the technical challenges it sets out to address. Secondly, we want to ensure that adopting the technology fulfills the business’s goals.

Thirdly, we look at how much it will cost to transition to or adopt the newer technology. Once we have ascertained these and have run the application for – generally – up to a year, we start thinking about optimization. Even the ever-emerging technology of Artificial Intelligence works on the same principle: first learn, then run.

It is estimated that 30 percent of organizations’ cloud spend is wasted. Let’s take only IaaS (Infrastructure-as-a-Service) into consideration, with 2020 revenue forecast by Gartner at $50.4 billion. Broken down, the IaaS model consists of compute (virtual or bare metal); storage; networking; and network services such as firewalls. Assuming two thirds of Gartner’s estimated spend – roughly $33.6 billion – goes on compute, 30 percent of that puts total wastage at around $10 billion.

Slimming down the wastage

There are various ways to reduce this cloud wastage. Below are some of the factors to consider:

  • Utilize the ‘Cost Effective’ Region – Cloud providers price the same instances differently across regions in the same country. If you can compromise on a few milliseconds of latency, in many cases you could be looking at cost savings of 20-25 percent. For example, the same AWS services cost roughly 20 percent less when run from Oregon rather than Los Angeles, and similar savings are available on Azure when switching from Wyoming to Washington. For companies willing to think outside the box – or state, at least! – there may be compelling cost efficiencies to be gained.
  • Utilize the Economics of ‘Supply and Demand’ – Cloud service providers like AWS and Azure offer ‘Spot’ instances: unused VM capacity available at a much lower price (up to 90 percent less) than the ‘On Demand’ or ‘Pay-as-you-go’ model. The principle is simple supply and demand: the cloud provider has more supply than demand, so it sells the spare capacity at a steep discount. The only catch is that a ‘Spot’ instance can be taken back by the cloud provider when demand increases. That said, there are various strategies for using Spot instances effectively:
  • Use ‘defined duration workloads’ – On AWS, when you request a Spot instance you can stipulate that it remain available to you for a fixed period (between 1 and 6 hours) and not be taken back by the cloud provider even if demand increases. With savings of 40-80 percent on the hourly cost – depending on the number of hours blocked – this setup can be very effective for development and PoC environments in the cloud.
  • Combine ‘On Demand’ and ‘Spot’ instances – On AWS this is referred to as a ‘Spot Fleet’. A Spot Fleet attempts to launch enough Spot and On Demand instances to meet the target capacity you specify. For example, if you need to run 5 application servers behind a load balancer, you don’t need all 5 of them under ‘On Demand’ pricing: choose a combination of 3 ‘On Demand’ and 2 ‘Spot’ instances. The ‘On Demand’ instances ensure you always have capacity, while the ‘Spot’ instances cut the cost (a minimal sketch of such a request appears at the end of this section).
  • Optimize Data Storage Cost – Storage is one of the cheapest cloud services and also the most ignored. Cloud providers define different storage tiers, and you should decide which to use. Newly created data is typically accessed heavily for the first 30 to 60 days, depending on the business scenario; after that it is rarely referenced or, in the worst case, never touched again. Log files are a typical example: they generate a huge amount of data but are only relevant for 30 to 60 days. Even credit reports are fetched afresh by financial institutions after 90 days.

To manage storage cost effectively, define lifecycle policies for the data. Keep data in the ‘Frequent Access’ tier for 30 days, then move it to ‘Infrequent Access’ and finally to long-term archival. By implementing this data management practice you can gain notable cost savings: for AWS’s US East region, the monthly cost of holding 100 GB in each storage class is as follows:

  • S3 Standard – $0.023 per GB * 100 GB = $2.30
  • S3 Standard – Infrequent Access – $0.0125 per GB * 100 GB = $1.25
  • S3 Glacier – $0.004 per GB * 100 GB = $0.40
  • S3 Glacier Deep Archive – $0.00099 per GB * 100 GB = $0.099

In addition to this, also consider the following (a minimal lifecycle sketch covering these rules follows the list):

  • Deleting files that are not required
  • Deleting old versions of the files and only keeping the current version
  • Deleting incomplete multipart file uploads
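
As a concrete illustration of the lifecycle approach above, here is a minimal sketch in Python using boto3. It transitions objects to ‘Infrequent Access’ after 30 days and to deep archive after 90, expires older versions and aborts incomplete multipart uploads. The bucket name, prefix and day counts are placeholders and assumptions you would tune to your own data, not recommendations.

import boto3

s3 = boto3.client("s3")

# Minimal lifecycle sketch for a hypothetical log bucket:
# frequent access -> infrequent access -> deep archive,
# plus clean-up of old versions and incomplete multipart uploads.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-log-bucket",                      # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-and-clean-up-logs",
                "Filter": {"Prefix": "logs/"},        # placeholder prefix
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "DEEP_ARCHIVE"},
                ],
                # Keep only the current version; expire older versions after 30 days.
                "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
                # Drop multipart uploads that never completed.
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    },
)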
  • Utilize Enterprise Discounts – Both AWS and Azure have discount programs for companies investing in bulk with them for a minimum of 3 years. Your organization is required to make an upfront monetary commitment for each year of the agreement. Typically, a large spend over a long period earns good discounts from the cloud providers. However, if you don’t spend the committed amount, you still have to pay the shortfall (it is part of the contract). In the case of Azure, an enterprise agreement may yield savings ranging from 15 percent up to 45 percent.
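
To make the mixed ‘On Demand’ and ‘Spot’ pattern referenced above concrete, here is a minimal sketch in Python using boto3 that requests a fleet of 5 instances, 3 of them guaranteed as On Demand. The AMI, IAM fleet role, subnet and instance type are placeholders, not recommendations; treat this as an illustration of the request shape rather than a production setup.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Sketch of a Spot Fleet request targeting 5 instances in total:
# 3 On Demand (guaranteed capacity) and 2 Spot (cost savings).
# All identifiers below (AMI, role ARN, subnet) are placeholders.
response = ec2.request_spot_fleet(
    SpotFleetRequestConfig={
        "IamFleetRole": "arn:aws:iam::123456789012:role/spot-fleet-role",
        "AllocationStrategy": "lowestPrice",
        "TargetCapacity": 5,               # total instances needed
        "OnDemandTargetCapacity": 3,       # portion guaranteed as On Demand
        "LaunchSpecifications": [
            {
                "ImageId": "ami-0123456789abcdef0",        # placeholder AMI
                "InstanceType": "m5.large",
                "SubnetId": "subnet-0123456789abcdef0",    # placeholder subnet
            }
        ],
        "Type": "maintain",                # replace Spot capacity if it is reclaimed
    }
)
print(response["SpotFleetRequestId"])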

Continuous optimizations are key

This is by no means a complete list of every optimization strategy for cloud utilization across all enterprises, but these are the most common ones we have seen across the organizations we work with. Here are some key takeaways for maintaining your cost optimization approach:

  • Keep a dedicated set of people responsible for ensuring that cloud expenditure stays within its limits – this is just as essential as the technical optimizations themselves.
  • Define budgets and set alerts that fire when you approach the threshold (a minimal sketch follows this list).
  • The resources on the cloud are infinite but the organization’s spend is not. It is the responsibility of every individual involved to ensure that every penny spent on the cloud is worth it.
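
For the budgets-and-alerts takeaway above, here is a minimal sketch in Python using boto3 that creates a monthly cost budget and emails a subscriber when actual spend crosses 80 percent of the limit. The account ID, budget amount, threshold and email address are placeholder assumptions.

import boto3

budgets = boto3.client("budgets")

# Minimal sketch: a $10,000 monthly cost budget that alerts at 80 percent
# of actual spend. Account ID, amount and email address are placeholders.
budgets.create_budget(
    AccountId="123456789012",
    Budget={
        "BudgetName": "monthly-cloud-spend",
        "BudgetLimit": {"Amount": "10000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "cloud-costs@example.com"}
            ],
        }
    ],
)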

Eklove Mohan, Senior Director – Technology, Synechron
