Top 3 public cloud costs to watch

By Alex Hawkes | 24 July 2023

“We moved to the cloud and now everything is more expensive!” 

Unfortunately, this is a common complaint from companies that have made the leap to a public cloud-based architecture, and a slowdown from the adoption growth seen during the pandemic is hitting providers big and small as organisations rationalise cloud spend in the face of rising costs.

The good news is these costs can typically be reined in without too much disruption or the need to retreat from the cloud. But doing so boils down to having a thorough, granular understanding of your cloud usage, and that is often easier said than done.

Lift-and-shift can be a recipe for disaster 

There are plenty of articles and research pieces out there showing diminishing ROI for cloud investments, but a lot of the pain is inadvertently self-inflicted.

Overspend can happen in any part of cloud provisioning but tends to be most prevalent for compute, usually because the migration was not properly planned. In the rush to move applications to a shiny new cloud environment, many businesses make missteps that leave the cloud infrastructure under-optimised.

Applications that were designed and built for an on-premises data centre need significant refactoring to be cloud-ready. A ‘lift-and-shift’ approach can create inefficiencies and break the architecture, as code written for one environment won’t work as well, or at all, in another.

There can also be a period of double spend, when a company is paying for the new cloud environment but cannot decommission the old environment until the migration is complete. Without proper planning the project can drag on and hit frequent snags, so during this period the new environment needs to be rationalised so that costs track actual usage and you are not paying for storage and compute that sits idle.

1. Getting compute provisioning right

Aside from the above issues of moving applications into the cloud without refactoring them for the new environment, the main culprits for inefficiencies are inadequate monitoring of compute usage and lack of accountability and spending discipline.  

Part of the challenge when it comes to selecting a cloud provider for IaaS or PaaS is the sheer volume of product choice. For compute alone, the top three providers - AWS, Azure, and Google Cloud - have over 30 different options between them, and that’s before we even start looking at container services built around Docker and Kubernetes.

Which option is relevant will depend on your specific virtual machine (VM) use case, as each provider also offers customisation of CPU and memory, and new products with different features are added all the time.

This intimidating level of choice can result in some missteps as companies spin up environments but then find more elegant solutions later on. This is why monitoring solutions are essential, with a burgeoning industry dedicated to solving this exact problem of keeping track of your cloud assets. 

To give some idea, cloud cost optimisation and alerting tools such as Datadog, Spot by NetApp, Splunk, VMware or Yotascale are becoming commonplace, with some claiming to deliver average monthly savings of 33%.

These cloud monitoring tools can also help you break through the complexity wall: so many products have been adopted by so many stakeholders that it becomes difficult to understand which services are legitimately being consumed, and which are still being billed for despite not being in use. Context behind consumption is essential for budgeting, planning and managing cloud spend.
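
To make that concrete, here is a minimal sketch of the kind of query such tools run under the hood, using AWS’s Cost Explorer API through boto3 to break down last month’s spend by service. It assumes boto3 is installed and the credentials in use have Cost Explorer access; the other providers expose similar billing APIs.

```python
# Minimal sketch: break down last month's AWS spend by service using the
# Cost Explorer API. Assumes boto3 is installed and the credentials in use
# have Cost Explorer access; the date range is simply "last calendar month".
from datetime import date, timedelta

import boto3

ce = boto3.client("ce")  # Cost Explorer

end = date.today().replace(day=1)                  # first day of this month
start = (end - timedelta(days=1)).replace(day=1)   # first day of last month

response = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if amount > 0:
        print(f"{service:<45} ${amount:,.2f}")
```

Grouping by cost-allocation tag or linked account instead of service is how that consumption gets tied back to the teams responsible for it.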

For companies that run a lot of batch compute jobs, or jobs that can easily be interrupted and restarted, spot instances are worth a look: this is spare compute capacity that the CSP auctions off in real time so it can maximise usage of its own assets. Ideal use cases include video rendering or big data analytics jobs. The caveat is that the provider can also reclaim the capacity with little notice.
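
As a rough illustration, the sketch below requests Spot capacity through AWS’s EC2 API via boto3 when launching an interruptible batch worker; the AMI ID, instance type and region are placeholders, and the same idea applies to the other providers’ preemptible offerings.

```python
# Minimal sketch: launch an interruptible batch worker on EC2 Spot capacity
# via boto3. The AMI ID, instance type and region are placeholders; the
# provider can reclaim the instance with only a short warning.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI with the batch job baked in
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            # One-off request: terminate rather than relaunch if reclaimed.
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)

print("Spot instance ID:", response["Instances"][0]["InstanceId"])
```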

2. Getting storage provisioning right

As with compute, cloud storage offerings can be bewildering and prices change depending on the size of the objects or assets you are storing, how long you store the objects during the month, and the storage class, which covers features such as ease of access or how frequently an object is accessed during the month. 

It is cheaper to put data into deep or ‘cold’ storage, which is accessed no more than a couple of times a year, than it is to retain data that needs to be accessed frequently and with minimal latency. But the penalties for getting this wrong can be expensive, so it is imperative to understand the context of the data and make sure it is stored appropriately.
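
One practical lever here is a lifecycle rule that automatically shifts ageing objects into colder tiers. As a minimal sketch, assuming an S3 bucket and boto3, the rule below moves objects to an infrequent-access class after 30 days and to deep archive after a year; the bucket name, prefix and thresholds are illustrative.

```python
# Minimal sketch: an S3 lifecycle rule that moves objects into colder,
# cheaper storage classes as they age. The bucket name, prefix and day
# thresholds are illustrative assumptions.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-analytics-archive",   # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-reports",
                "Status": "Enabled",
                "Filter": {"Prefix": "reports/"},
                "Transitions": [
                    # Rarely read after a month: move to infrequent access.
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    # Effectively cold after a year: move to deep archive.
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```

The flip side is that archive tiers typically add retrieval fees and delays, which is exactly the penalty for getting the context of the data wrong.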

The main cloud providers will also charge additional fees for adding tags or other index features to storage buckets, but these can help you organise and structure your data. 
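
Used well, those tags also double as cost-allocation labels, so storage spend can be traced back to a team or project. A minimal sketch, again assuming S3 and boto3, with hypothetical bucket and tag names:

```python
# Minimal sketch: cost-allocation tags on an S3 bucket via boto3.
# Bucket name and tag keys/values are hypothetical; note that this call
# replaces any tag set already on the bucket.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_tagging(
    Bucket="example-analytics-archive",   # placeholder bucket
    Tagging={
        "TagSet": [
            {"Key": "team", "Value": "data-platform"},
            {"Key": "cost-centre", "Value": "cc-1042"},
            {"Key": "environment", "Value": "production"},
        ]
    },
)
```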

  • AWS publishes the pricing structure for its S3 storage buckets
  • Microsoft Azure has a page dedicated to its Blob storage pricing
  • Google Cloud has a page explaining its data storage options

It should be noted that pricing for storage changes in line with demand and as providers introduce new features to differentiate their services. 

In 2022, Google announced a significant price increase across several products as it changed the structure of its offerings. “Some of these changes will provide new, lower-cost options and features for Google Cloud products. Other changes will raise prices on certain products,” the company said. 

But while costs in general are rising, some providers are bucking the trend. According to market intelligence firm Liftr, average prices of on-demand compute instances at AWS rose 23% in the year to the start of 2023.

In comparison, Liftr saw Azure on-demand compute prices decline by 9.1% in 2022, although Azure pricing did go up in 2020.

The cloud ecosystem extends to dozens of providers beyond the top three, however, and smaller, more specialist players such as Wasabi and Backblaze have made a name for themselves with enterprise storage options that, if not more affordable, are at least more predictable cost-wise.

The physical location of your selected storage is also important as it is tightly coupled to other cloud functions, such as compute, and moving data between instances or to a private data centre can incur significant costs. Although most cloud providers let you upload as much data as you like for free (ingress), there is a charge for extracting that data (egress). 

Want to know how you can save up to 50% on your egress charges? Download our e-book to find out.

3. Understanding egress charges

Charges for transferring data from the cloud to your private data centre or on-premises location can range from 6c to 26c per GB. The specific cost depends on factors such as the cloud service provider and their policies, as well as the location, geography, and nature of the data being transferred, says Neil Templeton, SVP of Console Connect. He adds that if you're moving larger volumes of data to a different region, the fee may increase accordingly.

Data egress fees are often described as ‘the hidden fees’ in cloud provisioning because they are billed in arrears and are often not budgeted for - a nightmare for financial planning. 

Depending on how your infrastructure is set up, applications, workloads, and users may be able to extract considerable amounts of data from your cloud instances and run up hefty bills before anyone realises the costs incurred. In large organisations with a global spread of offices it can be particularly challenging to monitor and manage data egress and the associated fees. 

Of course, each CSP has its own pricing framework for egress fees, and generally speaking, egress fees will vary depending on the volume of data you’re moving and where it goes. 

Location and geography are also important. Transferring data between availability zones or within a region incurs lower fees, while transferring data across regions or continents attracts the highest fees.
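
To put rough numbers on this, the back-of-the-envelope sketch below estimates a monthly egress bill from the volume transferred and an assumed per-GB rate. The rates are purely illustrative figures within the 6c-26c range quoted above, not any provider’s published tariff.

```python
# Minimal sketch: back-of-the-envelope egress cost estimate. The per-GB
# rates are illustrative assumptions, not any provider's published tariff.

ILLUSTRATIVE_RATES_PER_GB = {
    "same-region": 0.01,       # e.g. between availability zones
    "cross-region": 0.09,      # to another region on the same CSP
    "to-on-premises": 0.12,    # out to a private data centre
}

def estimate_egress_cost(gb_per_month: float, destination: str) -> float:
    """Return an estimated monthly egress charge in US dollars."""
    return gb_per_month * ILLUSTRATIVE_RATES_PER_GB[destination]

# Example: 20 TB a month pulled back to an on-premises analytics cluster.
print(f"Estimated egress bill: ${estimate_egress_cost(20_000, 'to-on-premises'):,.2f}")
```

Even at these modest assumed rates, 20 TB a month pulled back on-premises comes to around $2,400, which is how egress earns its ‘hidden fee’ reputation.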

The solution is to bypass the public internet and the CSP’s inter-regional architecture by establishing direct physical connections between your on-premises network and the cloud provider’s network (or between two clouds), rather than relying on the CSP’s provided network.

Software Defined Cloud Interconnects (SDCI) like Console Connect offer a more flexible way for businesses to privately connect to a cloud provider, and come pre-integrated with public clouds so a lot of the heavy lifting with network configuration and management is already taken care of. 

Organisations also benefit from the pay-as-you-go pricing structure and flexibility of SDCIs: rather than a long-standing contract for a private connection or leased line with a set capacity to the cloud provider, with Console Connect you only pay for the time the dedicated link is active, as well as for the data transferred out of the cloud.

 

Keeping track of cloud spend

Many businesses are hitting, or have gone through, the complexity wall, with so many cloud services adopted by so many stakeholders that they are struggling to operationalise them within existing budgets and resources. 

A big part of this puzzle is understanding which services are legitimately in use and then monitoring them to understand the financial impact. For projects in flight, planning is paramount to ensure cloud deployments are not under-optimised, while accountability and spending discipline need to become part of the business fabric.

 
