Unforeseen Costs Of The Public Cloud & How To Avoid Them
By The Console Connect Team | 16 January 2020
The idea that migrating assets from the private data center to the public cloud saves money has long been debunked as a myth. Most of the gains come in the form of agility, efficiency, and optimization, which may eventually translate into revenue, but direct cost savings were never really the point. Now that many organizations have a few years' experience in the cloud, however, one thing is becoming clear: if not managed correctly, public cloud assets can become very expensive indeed.
If you're only just getting into the cloud you may find it very agreeable price-wise, but a word to the wise: hidden costs are everywhere, and they tend to appear once you're locked in.
It costs very little to move your data into the cloud, and in many cases a cloud provider will help you transport, or ingress, your data into its infrastructure. Depending on the amount of data you have to move, however, you may be looking at tying up a circuit for a significant period of time. If we're talking petabytes, the transfer could take weeks or even months, and while that data is in transit it will be unavailable.
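To get a feel for the timescales involved, here is a back-of-envelope calculation in Python. The data volume, link speed, and utilization figures are illustrative assumptions, not provider quotes.

```python
# Rough transfer time for a bulk data migration (illustrative figures only).

def transfer_days(data_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Days to move data_tb terabytes over a link_gbps circuit,
    assuming the link sustains `efficiency` of its rated throughput."""
    bits = data_tb * 1e12 * 8                    # terabytes -> bits
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 86_400                      # seconds -> days

# 1 PB over a 10 Gbps circuit at 70% utilization: roughly 13 days.
print(f"{transfer_days(1000, 10):.1f} days")
```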
The public internet might not cut it here, and a dedicated circuit sized for the job could end up costly as well as inefficient once the migration is done. One option is to use an SDN-powered service like Console Connect to flex the bandwidth up on a connection until the transfer is complete and then flex it back down, or even cease the circuit, so you only pay for what you need.
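As a concrete illustration, here is a minimal sketch of what flexing a circuit around a migration might look like. The base URL, endpoint path, field names, and credentials below are hypothetical placeholders, not the real Console Connect API; consult your provider's documentation for the actual calls.

```python
# Hypothetical sketch: resize an SDN circuit around a bulk transfer.
# The URL, endpoint, and JSON fields are placeholders, not a real provider API.
import requests

API = "https://api.sdn-provider.example/v1"    # placeholder base URL
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credentials

def set_bandwidth(connection_id: str, mbps: int) -> None:
    """Resize a connection; billing tracks the provisioned rate."""
    resp = requests.patch(
        f"{API}/connections/{connection_id}",
        json={"bandwidth_mbps": mbps},
        headers=HEADERS,
    )
    resp.raise_for_status()

set_bandwidth("conn-1234", 10_000)  # flex up to 10 Gbps for the migration
# ... run the bulk transfer here ...
set_bandwidth("conn-1234", 100)     # flex back down once the data has landed
```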
The main cost is in the egress
The real costs hit when you want to move your data out of the cloud. Again, the more data you have in the cloud, the more expensive it is to get it out, or egress it.
There are a few ways to minimize the impact here, however.
As much as 80% of enterprise data is typically unstructured. The other 20% lives in some kind of database and, by its nature, is accessed and modified frequently. One approach is to free up your on-premises storage by moving the bulk of your data, the unstructured portion, into the cloud, and to leave the transactional, structured data on site where you can access it without a price penalty.
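As a sketch of what that tiering might look like in practice, the snippet below walks a local volume and pushes long-untouched unstructured files to object storage. S3 and boto3 are assumed purely for illustration; the mount point, bucket name, extension list, and 90-day cutoff are all hypothetical.

```python
# Minimal tiering sketch: push unstructured files that haven't been touched
# in 90 days into object storage, keeping hot transactional data on site.
# Bucket, path, extensions, and cutoff are assumptions for illustration.
import os
import time
import boto3

s3 = boto3.client("s3")
BUCKET = "example-archive-bucket"                  # hypothetical bucket
UNSTRUCTURED = {".pdf", ".docx", ".mp4", ".tiff", ".log"}
CUTOFF = time.time() - 90 * 86_400                 # 90 days ago

for root, _dirs, files in os.walk("/data"):        # hypothetical mount point
    for name in files:
        path = os.path.join(root, name)
        if (os.path.splitext(name)[1].lower() in UNSTRUCTURED
                and os.path.getatime(path) < CUTOFF):
            s3.upload_file(path, BUCKET, path.lstrip("/"))  # tier to cloud
```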
You could also look at data that is rarely accessed and might only be kept around for compliance purposes. In this case it makes sense to store it in the cloud, as you won't be pulling it back very often.
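For that compliance-only data, a storage lifecycle rule can shift it into a cheap archive tier automatically. Here is a minimal example using S3's lifecycle configuration API via boto3; the bucket name, prefix, and 30-day threshold are assumptions for illustration.

```python
# Sketch: transition rarely accessed compliance data to a cold archive tier.
# Bucket name, prefix, and the 30-day threshold are illustrative assumptions.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "compliance-to-deep-archive",
            "Filter": {"Prefix": "compliance/"},
            "Status": "Enabled",
            "Transitions": [{
                "Days": 30,
                "StorageClass": "DEEP_ARCHIVE",
            }],
        }]
    },
)
```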
Another challenge is moving workloads between clouds. It's widely acknowledged that many organizations will use multiple cloud providers, but these don't necessarily play nicely with each other. Each public cloud provider uses its own storage protocols, which means rewriting scripts as you move from one to another; migrating data between these disconnected protocols can increase egress costs as well.
Federate your data
One way of avoiding this is to federate your data in some way. You could use a network edge appliance to cache your most active data and metadata on premises, ensuring that most of your file access stays local and thus reducing egress costs.
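The sketch below captures that idea in miniature: a read-through cache that serves repeat reads from local disk, so only cold misses incur an egress charge. The bucket, cache directory, and use of S3 are illustrative assumptions; a real edge appliance would also handle metadata, eviction, and consistency.

```python
# Toy read-through cache: repeat reads come from local disk, so only
# cache misses are billed as cloud egress. Names are hypothetical.
import os
import boto3

s3 = boto3.client("s3")
BUCKET = "example-archive-bucket"
CACHE_DIR = "/var/cache/cloud-objects"

def read_object(key: str) -> bytes:
    local = os.path.join(CACHE_DIR, key.replace("/", "_"))
    if os.path.exists(local):                     # cache hit: no egress charge
        with open(local, "rb") as f:
            return f.read()
    body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()  # miss: billed
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(local, "wb") as f:                  # populate cache for next time
        f.write(body)
    return body
```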
You can also aggregate native storage from different providers into a single storage repository, effectively abstracting the access layer and using whichever backend best suits the application, the workload, and the cost at any given moment.
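In code, that abstraction might look something like the following sketch: a common interface over multiple object stores, with a simple routing rule that picks a backend by egress price. The provider classes and per-GB rates are invented for illustration.

```python
# Sketch of an abstracted storage access layer. Providers and their
# per-GB egress prices below are illustrative assumptions.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    egress_per_gb: float                      # $/GB to read data out

    @abstractmethod
    def get(self, key: str) -> bytes: ...
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

class ProviderA(ObjectStore):
    egress_per_gb = 0.09                      # assumed list price
    def get(self, key): ...                   # provider-specific client call
    def put(self, key, data): ...

class ProviderB(ObjectStore):
    egress_per_gb = 0.05                      # assumed list price
    def get(self, key): ...
    def put(self, key, data): ...

def cheapest_for_reads(stores: list[ObjectStore]) -> ObjectStore:
    """Route a read-heavy workload to the store with the lowest egress rate."""
    return min(stores, key=lambda s: s.egress_per_gb)
```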
This approach works best in multi-cloud environments, and with interconnection fabrics like Console Connect now having Points of Presence in so many cloud data centers, it's possible to spin up connections into those clouds almost instantly to meet demand.
Finally, monitor your usage and put some reporting tools in place. You will then be able to identify which parts of your business are incurring costs when it comes to cloud access, and either work with them to optimize usage or put a chargeback mechanism in place so they can be re-billed for access.
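As one example of that reporting step, the snippet below uses the AWS Cost Explorer API via boto3 to break a month's spend down by a cost-allocation tag (a hypothetical "team" tag), which is the raw material for a chargeback mechanism.

```python
# Break monthly spend down by a cost-allocation tag for chargeback.
# The "team" tag and the date range are assumptions for illustration.
import boto3

ce = boto3.client("ce")
report = ce.get_cost_and_usage(
    TimePeriod={"Start": "2020-01-01", "End": "2020-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "team"}],
)
for group in report["ResultsByTime"][0]["Groups"]:
    tag_value = group["Keys"][0]              # e.g. "team$platform"
    cost = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{tag_value}: ${float(cost):.2f}")
```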