Google Cloud Outage Serves As A Reminder About Redundancy

By The Console Connect Team | 6 June, 2019


The large-scale outage of Google Cloud services in early June demonstrated the challenges, and some of the lingering concerns, around moving to an all-cloud infrastructure.

On June 2, Google released a status update that said “high levels of network congestion in the eastern USA” were affecting multiple services in Google Cloud, G Suite and YouTube. The knock-on effect was that customers of Google Cloud services with instances in the eastern US were also hit by the outage, with Snapchat, Uber, Vimeo and Discord feeling the brunt. All told, millions of users of several services worldwide felt the pain for several hours.


Although it seems like businesses are making a mass migration to public cloud services, this shift is not necessarily at the expense of on-premises infrastructure and applications. Indeed, while public cloud compute and storage have lowered the barriers to starting new internet-based businesses, the migration trend in general is far more cautious and will play out over many years, given the slow-moving nature of traditional enterprise IT.

The importance of a multi-cloud strategy

What this means in practice is that enterprises are operating, and will continue to operate, not just a mix of public and private clouds but also multiple different public clouds.

The recent Google Cloud outage serves as a reminder of why this strategy is important, and why it can be dangerous to put all your eggs in one basket.

Business leaders want to reduce total cost of ownership (TCO) and increase the agility and scalability of on-premises storage and compute by extending their data infrastructure to the public cloud. In terms of connectivity, the network mix has to shift in parallel.

The network has to support multi-cloud access

The on-premises data centers and public clouds now need to talk to each other. The biggest drivers of increased bandwidth demand come partly from business software applications being consumed in the public cloud alongside those in the private data center, and partly from the need to move workloads seamlessly between public and private cloud platforms while maintaining a consistent architecture across both environments.

Understandably, many conversations in the industry focus on the fact that the enterprise today is very nervous about dependencies on a single cloud. Despite Amazon's strong market leadership, the growth rates of Microsoft Azure and, more recently, Google Cloud make it clear that all three will offer strong competitive differentiation depending on the workload.

According to RightScale’s 2018 State of the Cloud report, companies on average use almost five public and private clouds, and 81 per cent of survey respondents with over 1,000 employees have a multi-cloud strategy in place. Within this demographic, 51 per cent have what they describe as a hybrid cloud strategy (a mix of public and private clouds), while 21 per cent use multiple public clouds and 10 per cent use multiple private clouds.

As a result, businesses need to deploy data center interconnection that is as flexible and agile as the dynamic digital assets they already rely on, in order to protect themselves against service failures and outages.
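To make the failover idea concrete, here is a minimal sketch of the kind of health-check logic an application might use to route around a single-provider outage. The endpoint URLs are hypothetical placeholders, not real services, and a production setup would more likely rely on DNS failover or a load balancer than application code like this.

```python
import urllib.request

# Hypothetical health-check endpoints for the same service deployed
# in two different public clouds (these hostnames are illustrative only).
ENDPOINTS = [
    "https://service.us-east.example-cloud-a.com/healthz",
    "https://service.us-east.example-cloud-b.com/healthz",
]

def first_healthy(endpoints, timeout=2):
    """Return the first endpoint that answers its health check, or None."""
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return url
        except OSError:
            # Timeouts, DNS failures and connection errors all count as
            # an unhealthy endpoint; fall through to the next provider.
            continue
    return None
```

If the primary provider's region is congested or down, as in the June 2 incident, traffic simply moves to the next entry in the list, which is the essence of the multi-cloud redundancy argument above.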

