Common interoperability challenges for multi-cloud
By Alex Hawkes | 27 June 2022
As cloud technology has matured, hybrid and multi-cloud approaches have become commonplace. Enterprises have discovered that not all applications are suited to the public cloud, so on-premises data centres remain in use, and that each public cloud has its own strengths and weaknesses, driving businesses to mix and match to suit their needs. The result is a set of interoperability challenges for multi-cloud environments.
The adoption of multi-cloud has enabled organisations to take a best-of-breed approach to cloud infrastructure, and according to the Flexera 2022 State of the Cloud Report, enterprises are broadly embracing this multi-cloud strategy, with an estimated 89% of companies adopting a multi-cloud approach.
Yet 45% of respondents to the same survey said their apps are siloed on different clouds, which speaks to the difficulty of getting different clouds to talk to each other and the obstacles to migrating workloads from one environment to another with minimal effort.
While yesterday’s challenge was vendor lock-in, today’s multi-vendor infrastructures and multi-cloud architectures bring their own challenges for network engineers, who are faced with proprietary systems, different interfaces and specialist requirements.
Cloud interoperability versus portability
When talking about multi-cloud environments:
- Interoperability means the ability of two distinct cloud environments to talk to each other - to exchange information in a way that both can understand.
- Cloud portability is the ability to move applications and data from one cloud environment to another with minimal disruption.
Although the two terms are not synonymous, cloud interoperability has become a catch-all for getting two distinct clouds to work together, an endeavour that requires shared processes, APIs, containers and data models across the multi-cloud environment so that application components can communicate.
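To make the distinction concrete, the sketch below shows interoperability in its simplest form: two clouds exchanging data through their own SDKs, with the integration code doing the translation. The bucket and object names are hypothetical placeholders and credentials are assumed to be configured locally; portability, by contrast, would mean moving the application or dataset itself so it runs natively in the other cloud.

```python
# A minimal interoperability sketch: read an object from AWS S3 and write it to
# Google Cloud Storage. The bucket and object names are hypothetical, and
# credentials for both SDKs are assumed to be configured in the environment.
import boto3
from google.cloud import storage


def copy_s3_object_to_gcs(s3_bucket: str, key: str, gcs_bucket: str) -> None:
    """Exchange data between two clouds: the integration code does the translation."""
    # Fetch the object from S3 using AWS's SDK.
    s3 = boto3.client("s3")
    body = s3.get_object(Bucket=s3_bucket, Key=key)["Body"].read()

    # Write the same bytes to a GCS bucket using Google's SDK.
    gcs = storage.Client()
    gcs.bucket(gcs_bucket).blob(key).upload_from_string(body)


copy_s3_object_to_gcs("legacy-aws-bucket", "reports/2022-06.csv", "analytics-gcp-bucket")
```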
Cloud-to-cloud communication
Each cloud platform has its own underlying technology stack, including workflows, processes, APIs and more, and each cloud provider has evolved along its own path over the last 20 years. Initially there was little incentive for cloud providers to take a cohesive or standardised approach.
But as multi-cloud became the norm, and enterprises adopted two, three, or more clouds, demand for interoperability grew, resulting in some strategic partnerships.
For example, Microsoft and Oracle announced a partnership to ease interoperability across their respective cloud services, enabling customers to migrate and run the same enterprise workloads across both Microsoft Azure and Oracle Cloud.
The aim was to allow customers to connect Azure services, such as Analytics and AI, to Oracle Cloud services, including Autonomous Database. This was achieved in part via high-speed dedicated links between their respective data centres, which meant the partnership was restricted to regions able to provide a data conduit between the two clouds.
In another example, a partnership between Hewlett Packard Enterprise (HPE) and Google Cloud combined a range of HPE products and services with Google Cloud's expertise in containerised applications and the multi-cloud portability enabled by Google's Anthos.
Containers in multi-cloud interoperability
HPE and Google were focusing on what they call a hybrid cloud for containers, which allows companies to create containerised workloads; and because Anthos works across multiple cloud providers, those workloads could run on AWS or Azure, not just on Google Cloud.
Yet these strategic partnerships remain far from a standardised approach to multi-cloud management.
In the absence of a common standard, containers have emerged as a key tool in multi-cloud environments. Containers abstract applications from the physical environment in which they run, and are themselves managed across multiple layers of abstraction. For example, Anthos can be used as a meta-control plane for containers deployed in Google Kubernetes Engine.
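As a hedged sketch of that portability in practice, the snippet below applies one containerised workload definition, unchanged, to Kubernetes clusters running in different clouds. The kubeconfig context names and the container image are hypothetical, and the clusters are assumed to already exist.

```python
# A portability sketch: one containerised workload definition applied, unchanged,
# to Kubernetes clusters in different clouds. The kubeconfig context names
# ("gke-cluster", "eks-cluster") and the container image are hypothetical.
from kubernetes import client, config


def build_deployment(name: str, image: str) -> client.V1Deployment:
    """A single workload definition, independent of the cloud it will run in."""
    container = client.V1Container(name=name, image=image)
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": name}),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": name}),
        template=template,
    )
    return client.V1Deployment(
        api_version="apps/v1", kind="Deployment",
        metadata=client.V1ObjectMeta(name=name), spec=spec,
    )


deployment = build_deployment("demo-api", "ghcr.io/example/demo-api:1.0")

# The same object is created in each cluster; only the kubeconfig context changes.
for context in ("gke-cluster", "eks-cluster"):
    config.load_kube_config(context=context)
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```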
Meanwhile, attempts at standardisation at the meta-control layer have been made through projects like OpenStack, an open-source cloud infrastructure project that helps with multi-cloud management.
But as the saying goes, the wonderful thing about standards is that there are so many to choose from, and we are still some way from a standardised approach to multi-cloud management at any layer of abstraction.
The importance of APIs in multi-cloud
A hallmark of cloud adoption, and by extension multi-cloud, is the decentralisation of IT. The IT function no longer owns business applications; these are now chosen, procured and owned by individual business functions.
This leaves the IT, infrastructure and network functions with the challenge of acting as business enablers and innovators, connecting the disparate clouds.
Generally speaking, Application Programming Interfaces (APIs) exist to ensure that connectivity between different assets remains strong enough to enable efficient collaboration even as workflows and processes change. According to the State of the API Integration report 2021 from Cloud Elements, speed in meeting business demands (44%) and innovation (40%) are the key drivers for leveraging APIs.
Without access to APIs, partners can’t easily interact with an organisation’s data and business, and collaboration and connectivity to infrastructure assets will be compromised, leaving these organisations vulnerable to competition.
In short, APIs have emerged as the most accessible way for businesses to extract value from their data and develop innovative business models, combining with enterprise connectivity to unlock data from otherwise isolated systems.
Integration with these separate technologies is typically done via APIs accessed through point-to-point connections whenever the need arises. But the frequency with which these systems change is increasing, with some shifting weekly, daily, or hourly. Not only is it difficult for engineers to keep up with the pace of change using a point-to-point model, but the resultant system is prone to single points of failure and requires a great deal of time and resources to maintain.
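One common way to contain that churn, sketched below under illustrative names rather than any vendor's real integration layer, is to wrap each provider-specific API in a thin adapter behind a stable interface, so that when a provider changes its API only the corresponding adapter needs updating rather than every point-to-point call site.

```python
# An illustrative adapter layer (class and method names are made up, not any
# vendor's real integration product): each provider-specific API is wrapped once,
# so a change in one provider's interface only touches its adapter.
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """The stable interface the rest of the business integrates against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...


class AwsObjectStore(ObjectStore):
    def __init__(self, bucket: str) -> None:
        import boto3  # the provider SDK stays isolated inside the adapter
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)


class GcpObjectStore(ObjectStore):
    def __init__(self, bucket: str) -> None:
        from google.cloud import storage  # the provider SDK stays isolated inside the adapter
        self._bucket = storage.Client().bucket(bucket)

    def put(self, key: str, data: bytes) -> None:
        self._bucket.blob(key).upload_from_string(data)


def archive_report(store: ObjectStore, report: bytes) -> None:
    # Business logic talks to the interface, not to any one cloud's API.
    store.put("reports/latest.bin", report)
```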
API-led connectivity is helping cloud-hungry businesses overcome significant investment challenges. Enterprises need software to connect to clouds, but they also need the network interconnect underneath.
The premise of Console Connect was that the vast majority of enterprises would not be able to sustain the breadth and depth of infrastructure required to connect to all of the availability zones across all of the cloud providers. First, there is a high barrier to entry to that automated connectivity, with physical infrastructure costs the enterprise can't bear. Second, there is the software complexity associated with each of the clouds.
According to Paul Gampe, CTO of Console Connect, the provider is building an API ecosystem that abstracts away the rate of change in those cloud API endpoints and provides a consistent experience: a stable API endpoint for the enterprise that is, as far as possible, unified across clouds.
Each cloud platform needs such a different form of connectivity that it requires a completely different implementation, which has led Console Connect to refactor its cloud automation into independent microservices.
The result is that whether an organisation is connecting into Azure or Alibaba, it sees essentially the same API endpoints on Console Connect, even though the underlying workflows are wildly different.
As discussed, this divergence is a by-product of the fact that there is no incentive for cloud providers to offer a simple, single, defined open API for connectivity. However, the benefit of a Network-as-a-Service platform like Console Connect is that it abstracts all of that away from the customer's perspective. The API endpoints look almost identical and can be carefully and strategically orchestrated so that productivity and impact are optimised.
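As a closing illustration, the hypothetical sketch below (not Console Connect's actual API) shows the general pattern being described: the caller sees one stable request shape, while per-cloud adapters stand in for the services that translate it into each provider's very different workflow.

```python
# A hypothetical sketch of the pattern described above, NOT Console Connect's
# actual API: the caller sees one stable request shape, while per-cloud adapter
# functions stand in for the microservices that handle each provider's workflow.
from dataclasses import dataclass


@dataclass
class ConnectionRequest:
    cloud: str            # e.g. "azure" or "alibaba"
    region: str
    bandwidth_mbps: int


def provision_azure(request: ConnectionRequest) -> dict:
    # Placeholder for the Azure-specific workflow (ExpressRoute circuits, peering, etc.).
    return {"status": "pending", "cloud": "azure", "region": request.region}


def provision_alibaba(request: ConnectionRequest) -> dict:
    # Placeholder for the Alibaba-specific workflow (Express Connect, etc.).
    return {"status": "pending", "cloud": "alibaba", "region": request.region}


def provision_connection(request: ConnectionRequest) -> dict:
    """The stable front-end endpoint; only the adapter behind it differs per cloud."""
    adapters = {"azure": provision_azure, "alibaba": provision_alibaba}
    return adapters[request.cloud](request)


# The caller's experience is identical regardless of the destination cloud.
provision_connection(ConnectionRequest("azure", "westeurope", 500))
provision_connection(ConnectionRequest("alibaba", "eu-central-1", 500))
```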