The real cost of latency

By Alex Hawkes | 7 November 2022

In networking, latency - sometimes referred to as 'lag' - is the delay between a client request and the service provider's response. In a cloud environment, this could be the gap between a developer or end-user client request and the cloud service provider's response. In a multi-cloud setup, it could be the delay between one application in a cloud instance talking to another application in a different cloud instance.

But no matter which type, latency can have a real impact on an organisation.


What is latency?


In physical terms, network latency is the delay between communicating entities, and it is affected by a number of factors, including geographic distance, the number of routers on the path (referred to as 'hops') and transitions between hardware and software.

The greater the geographic distance, and the more layers or assets the data must pass through, the higher the latency and the slower the response time.
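Distance alone puts a hard floor under latency: light in optical fibre travels at roughly two-thirds of its speed in a vacuum, about 200,000 km per second. A back-of-the-envelope sketch in Python (the distances are illustrative great-circle figures; real fibre routes are longer and add router hops on top):

```python
# Propagation delay floor: light in fibre covers roughly 200,000 km/s.
FIBRE_KM_PER_S = 200_000

# Approximate great-circle distances; real fibre paths are longer.
for route, km in [("London -> New York", 5_600), ("London -> Sydney", 17_000)]:
    one_way_ms = km / FIBRE_KM_PER_S * 1000
    print(f"{route}: ~{one_way_ms:.0f} ms one-way, ~{2 * one_way_ms:.0f} ms round trip")
```

Even in this best case, a London to Sydney round trip costs around 170 ms before a single router hop is counted.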

Generally speaking, lower latency is desirable, but it comes at a cost, so it may not always be required. In some cases, though, low latency is a necessity - financial trading and online gaming are two prime examples - and the investment is justified by the business model.


Also, while it sounds like they could be related, latency and network capacity are two different things, and high latency can actually erode the benefits of spending more on network capacity. To take an extreme example, there would be little point in upgrading a 100Mbps connection to a 1Gbps connection if the latency is very high, because protocols such as TCP can only keep a limited amount of data in flight at once: a long round trip caps the usable throughput no matter how big the link.
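To see why, here is a back-of-the-envelope sketch (assuming, for illustration, a classic 64 KB TCP receive window with no window scaling): the maximum throughput of a single flow is roughly the window size divided by the round-trip time, whatever the link's raw capacity.

```python
# Max single-flow TCP throughput ~= window size / round-trip time.
WINDOW_BYTES = 64 * 1024  # illustrative 64 KB window, no window scaling

for rtt_ms in (5, 50, 200):
    throughput_mbps = (WINDOW_BYTES * 8) / (rtt_ms / 1000) / 1_000_000
    print(f"RTT {rtt_ms:3d} ms -> at most ~{throughput_mbps:5.1f} Mbps per flow")
```

At 200 ms of round-trip latency, that single flow tops out at around 2.6 Mbps - the 1Gbps upgrade would sit almost entirely idle.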

In more real-world terms, the biggest bugbear created by high latency is inconsistent network performance. It's not just slower speeds all the time; it's a passable experience one moment and a showstopper the next, with no way of knowing when, or how badly, the latency will fluctuate.
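One way to see this fluctuation for yourself is to sample latency repeatedly and look at the spread, not just the average. A minimal sketch using only Python's standard library, timing TCP handshakes as a rough round-trip proxy (example.com is a placeholder host):

```python
import socket
import statistics
import time

def connect_time_ms(host, port=443, timeout=5):
    """Time one TCP handshake to the host - a rough round-trip proxy."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

samples = [connect_time_ms("example.com") for _ in range(10)]
print(f"min {min(samples):.1f} ms, max {max(samples):.1f} ms, "
      f"jitter (stdev) {statistics.stdev(samples):.1f} ms")
```

A high standard deviation across samples is exactly the inconsistency described above, even when the average looks acceptable.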

 

How can you measure network latency?

Simple ways of measuring network latency include Time to First Byte and Round Trip Time; the latter can be measured with the humble ping command. Both are illustrated in the sketch after the list below.

  • Time to First Byte: Time to First Byte (TTFB) records the time it takes for the first byte of data to reach the client from the server after the connection is established. It covers the time the web server takes to process the request and create a response, plus the time the response takes to travel back to the client, so it also incorporates server response time.
  • Round Trip Time: Round Trip Time (RTT) is the time that it takes the client to send a request and receive the response from the server.
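As a minimal sketch of both measurements, using only Python's standard library (example.com is a placeholder host, and the ping flags shown are the Linux/macOS ones):

```python
import subprocess
import time
import http.client

def ttfb_ms(host, path="/"):
    """TTFB: time from sending the request (after connecting) to the first response bytes."""
    conn = http.client.HTTPSConnection(host, timeout=10)
    conn.connect()                  # establish the connection first
    start = time.perf_counter()
    conn.request("GET", path)
    response = conn.getresponse()   # headers arrive - first bytes from the server
    response.read(1)
    elapsed_ms = (time.perf_counter() - start) * 1000
    conn.close()
    return elapsed_ms

def rtt_report(host, count=4):
    """RTT: lean on the system ping command and return its summary output."""
    result = subprocess.run(["ping", "-c", str(count), host],
                            capture_output=True, text=True)
    return result.stdout

print(f"TTFB to example.com: {ttfb_ms('example.com'):.1f} ms")
print(rtt_report("example.com"))
```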

 

How does latency affect business?

Networks are becoming more complex. The 'work from anywhere' culture created by the pandemic has moved employees and consumers away from concentrated network access points, like offices and towns, to a long tail of scattered connectivity. Wireless - whether 5G, 4G or WiFi - is increasingly part of the network mix, and IoT is adding to this with Bluetooth and other short-range technologies.

Meanwhile, the adoption of cloud and multi-cloud by enterprises means more network assets to connect to and more networks as a result.

The result is a complex spider web of connectivity, spanning networks operated by different providers, each with its own SLAs and capabilities.

This can be a real headache for a business, especially one with high-performance computing use cases, which require low network latency to function. But even for less demanding applications, high enough latency can degrade performance, or cause the application to fail altogether.

For enterprises this can be particularly noticeable internally when using cloud-based communications applications. High-latency network access to Zoom or Skype will result in choppy video and audio, making for a poor user experience.

High latency on the network used by a cloud gaming or streaming media company can render the service unusable for consumers, who would have to contend with sluggish controls and stuttering video.

Latency issues would spell disaster for a high-frequency trading company, where the entire business is predicated on first-mover advantage - a millisecond can make all the difference.

 

Importance of latency when it comes to accessing cloud services

All of these problems can be magnified when it comes to cloud services, for a number of reasons.

The number of router hops on the way to the target server is one of the main issues. Because cloud providers' data centres can be physically located anywhere in the world, it is worth finding out their geographic location and working out whether you will be impacted by distance and the number of hops.
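If you want to check the hop count yourself, the system traceroute tool (tracert on Windows) lists every router on the path, as in this small sketch (example.com stands in for your provider's endpoint):

```python
import platform
import subprocess

def trace(host):
    """Print each router hop on the path to the host via the system tool."""
    tool = "tracert" if platform.system() == "Windows" else "traceroute"
    result = subprocess.run([tool, host], capture_output=True, text=True)
    print(result.stdout)

trace("example.com")  # substitute your cloud region's endpoint
```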

In a multi-cloud environment, latency in data exchanges between cloud services operated by different providers across the internet can be much higher. Fixing this across multiple cloud services can drive up costs, or it can leave an organisation effectively locked into a single provider.

Furthermore, once you get to the platform and infrastructure level in a cloud environment, you can experience greater variability in service delivery due to virtualisation. Multiple layers of software can introduce packet delays, especially if virtual machines (VMs) are on separate networks.

Which applications require low network latency?

As we have discussed, low latency is more crucial for specific industries and applications.

  • Streaming analytics applications: Streaming analytics applications, such as real-time auctions, online betting, and multiplayer games, consume and analyse large volumes of data from various sources and produce outputs to be actioned in real-time. Network lag can mean financial consequences.
  • Real-time data management: Enterprise applications often merge data from different sources, like other software, transactional databases, cloud, and sensors, and need to process data changes in real time. Network latency can interfere with application performance.
  • API integration: Different systems communicate with each other using an application programming interface (API). System processing often stops until an API returns a response, so network latency creates application performance issues. This might be pertinent in a ticket booking application, for example - see the sketch after this list.
  • Video-enabled remote applications: Some applications, such as medical cameras and drones for search-and-rescue, require an operator to control a machine remotely over video. In these instances, a high-latency network could be a matter of life and death.
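To illustrate the API point above: in a hypothetical ticket-booking flow, each blocking call must finish before the next begins, so per-call network latency adds up directly in the user's wait time (the endpoint URLs below are placeholders, not a real API):

```python
import time
import urllib.request

# Hypothetical booking flow: three sequential, blocking API calls.
ENDPOINTS = [
    "https://example.com/api/search",    # placeholder URLs
    "https://example.com/api/reserve",
    "https://example.com/api/confirm",
]

start = time.perf_counter()
for url in ENDPOINTS:
    try:
        urllib.request.urlopen(url, timeout=5).read()
    except Exception as exc:  # a slow or failed call stalls the whole flow
        print(f"{url} failed: {exc}")
total_ms = (time.perf_counter() - start) * 1000
print(f"End-to-end booking latency: {total_ms:.0f} ms")
```

If each call carries 200 ms of network latency, the user waits at least 600 ms before the booking completes - before any server processing time is even counted.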

 

How to resolve latency issues

For any company with a requirement to keep latency in check, the public internet is a great example of what not to use. The traditional public internet is a mishmash of networks from different operators stitched together, and service is best effort rather than guaranteed.

When it comes to the internet, you cannot guarantee any particular latency, jitter, or pathing whatsoever. This makes it entirely unsuitable as a way for enterprises to connect their private data centres to their public cloud assets. It's also less than ideal for providing access to cloud-based apps and SaaS solutions, especially those that are mission critical.

In a multi-cloud world, many considerations have inter-dependencies, and latency may be one of them - perhaps when sending data from storage on an AWS instance to compute on a Google Cloud instance, or when the organisation's users need low-latency access to the data within the Google Cloud instance to derive real-time value from it.

Connectivity is also an important component of the data storage and lifecycle management process, as well as for getting the most value out of your resources. There’s no point setting up a real-time compute instance if you can’t get data into or out of the instance in a low-latency (timely) manner - your connectivity becomes the bottleneck.

The requirement for low-latency, high-reliability connectivity between all the different cloud environments in use is a highly relevant use case for a private network - one that gives your enterprise connectivity unaffected by the fluctuations of the public internet, resulting in a far more stable and secure connection.

A good private connectivity provider will also have a highly redundant network that reroutes your workloads in case of outages.

Through these direct connections, an organisation has a lower-latency path to the cloud that delivers more reliable and consistent network performance for those important applications and services. It is an excellent fit for organisations with latency-sensitive workloads, or those requiring intermittent or unpredictable connectivity to the cloud, such as media and broadcast providers.

But setting up and managing such connections is a mammoth task, and Network-as-a-Service (NaaS) offerings, such as Console Connect, have emerged to help organisations manage and optimise their network assets from a single pane of glass.

The on-demand, flexible and scalable private connections established via NaaS are one way to ensure consistently reliable and high-quality delivery of services and applications. Private links can be configured to prioritise certain classes of service, ensuring applications that are more latency-sensitive have their specific demands met.
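At the application edge, one common way to express that priority is to mark packets with a DSCP code point that the network can act on. A minimal sketch marking a socket's traffic as Expedited Forwarding (IP_TOS is available on Unix-like platforms; whether any given network honours the marking depends on the provider, and example.com is a placeholder host):

```python
import socket

# DSCP Expedited Forwarding (value 46) sits in the upper six bits of
# the IP TOS byte, so it is shifted left by two: 46 << 2 == 0xB8.
DSCP_EF = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF)
sock.connect(("example.com", 443))  # placeholder host
sock.close()
```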
