What AI workloads really need from your network

By Lily Bennett | 7 May, 2025

The rapid advancement of generative AI has brought with it new challenges and complexities - particularly when it comes to networking. As organisations globally rush to leverage large language models (LLMs) to transform their operations, it’s imperative to understand that AI isn’t just about algorithms and data science - it’s also about the network that underpins it all.

To unpack this further, our CTO, Paul Gampe, shares his insights into two critical AI use cases: training LLMs and delivering low-latency inference. What becomes clear is that a robust, high-performance, and secure network isn’t just a nice-to-have - it’s essential.

LLM training: Moving massive volumes of data securely and efficiently

LLM training is a data-intensive process. Unlike general-purpose generative AI models trained on open internet data (such as ChatGPT or Gemini), enterprise-grade models often require domain-specific knowledge that public training data simply doesn’t contain. This means training on proprietary data sets.

Take, for example, an enterprise needing to migrate five petabytes of data from one major cloud provider to another to leverage the cost efficiencies of specialised hardware. That’s not terabytes - it’s petabytes. This data is essential for training large language models to generate meaningful responses.

Transferring such massive datasets requires a highly reliable, high-bandwidth network connection to ensure speed and consistency. Moving this volume of data over the public internet would have been painfully slow, unreliable, and prohibitively expensive due to egress charges.

Instead, the enterprise used a private, dedicated 10Gbps link provided by Console Connect, which allowed very large volumes of data to be moved at a consistent rate between different cloud providers. The result? A seamless and secure migration in under a month, saving the company $75,000 in egress fees.
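For intuition, a back-of-envelope calculation shows why link speed dominates a migration like this. Here is a minimal Python sketch of the arithmetic (it assumes decimal petabytes and a fully saturated link - a best case that real transfers only approach with parallel streams, while compression and deduplication can shrink the bytes that actually have to move):

```python
# Rough transfer-time estimate for a bulk cloud-to-cloud migration.
# Assumes 1 PB = 10**15 bytes and a fully saturated link; real-world
# throughput is lower, but compression and deduplication reduce the
# data that actually crosses the wire.

DATASET_BYTES = 5 * 10**15  # 5 PB


def transfer_days(link_gbps: float) -> float:
    """Days to move the dataset at a given sustained link speed."""
    bits = DATASET_BYTES * 8
    seconds = bits / (link_gbps * 10**9)
    return seconds / 86_400


for gbps in (1, 10, 100):
    print(f"{gbps:>3} Gbps: ~{transfer_days(gbps):.0f} days at line rate")
```

At 1 Gbps the raw transfer alone would take well over a year; a dedicated 10 Gbps link brings it down to a matter of weeks. That is why sustained, uncontended bandwidth - not burst speed - is the figure that matters at this scale.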

“It may sound trivial, but you need to put the data next to the compute. And to do that at scale, you need high-bandwidth, reliable connectivity, which a dedicated, private, direct connection will provide,” says Paul.

The lesson here is that training AI models isn’t just about having access to GPUs - it’s about ensuring your network can reliably deliver that data to where the training of the LLM will take place.

Inference: Delivering real-time performance with guaranteed latency

Once your model is trained, you enter the inference stage – when the AI begins answering questions and generating responses in real time. This is what most end users interact with, whether through a chatbot, a voice assistant, or a feature embedded in a SaaS platform.

And when it comes to inference, latency matters. If an AI assistant takes too long to respond, users abandon the experience.

“Your lived experience with AI, whether that’s with ChatGPT or a custom chatbot, is only as good as the speed and reliability of your connection to the model,” Paul explains.

The challenge of using the public internet? Consistent latency and bandwidth can’t be guaranteed: competing traffic (think video calls, YouTube streaming, or general business usage) can degrade performance unpredictably.
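One way to make that unpredictability concrete is to measure tail latency rather than averages. Below is a minimal Python sketch (the endpoint URL is a hypothetical placeholder for your own inference service):

```python
import statistics
import time
import urllib.request

# Hypothetical health/inference endpoint -- substitute your own URL.
ENDPOINT = "https://inference.example.com/health"

samples_ms = []
for _ in range(50):
    start = time.perf_counter()
    urllib.request.urlopen(ENDPOINT, timeout=5).read()
    samples_ms.append((time.perf_counter() - start) * 1000)

samples_ms.sort()
p50 = statistics.median(samples_ms)
p95 = samples_ms[int(len(samples_ms) * 0.95) - 1]
print(f"p50: {p50:.1f} ms   p95: {p95:.1f} ms")
```

Over the public internet, the p95 figure tends to drift far above the median as competing traffic ebbs and flows; over a dedicated link the two stay close together, which is exactly what a responsive chatbot needs.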

This is where platforms like Console Connect offer guaranteed bandwidth and Quality of Service, allowing enterprises to isolate and prioritise traffic specifically for AI inference workloads.
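At the application level, prioritisation usually begins with marking traffic so the network can tell it apart from everything else. As a minimal sketch of that mechanism, the Python snippet below sets the DSCP Expedited Forwarding class on a socket - noting that markings like this only take effect on links (private or provider-managed) configured to honour them, which the public internet generally is not, and that the hostname here is hypothetical:

```python
import socket

# DSCP Expedited Forwarding (EF) has code point 46; DSCP occupies the
# top six bits of the IP TOS byte, hence the two-bit left shift.
DSCP_EF = 46 << 2  # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF)

# Packets from this socket now carry the EF marking, which QoS-aware
# network equipment can queue ahead of bulk traffic.
sock.connect(("inference.internal.example", 8443))  # hypothetical host
```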

In another real-world example, a global enterprise deployed an internal chatbot hosted in Google Cloud and integrated it with their CRM (a SaaS application) via Slack. Dedicated cloud routing enabled fast, reliable responses across platforms - something the public internet could not deliver consistently.

Without direct cloud-to-cloud connectivity, inference times would have been much longer. Leveraging a CloudRouter® gave the company control over performance and latency, ensuring the chatbot remained responsive. It’s a strong reminder that when deploying AI such as chatbots, connectivity across multiple locations - cloud and SaaS environments alike - is critical to the user experience.

Key network requirements for AI workloads

Whether you're training models or deploying them into production for inference, here are some of the key networking capabilities to consider:

  • High bandwidth: Essential for moving petabyte-scale data sets, particularly in LLM training where performance hinges on proximity to compute.
  • Guaranteed latency and QoS: Critical for inference workloads, especially in customer-facing applications where speed directly impacts user experience.
  • Secure, private connectivity: Helps organisations avoid the unpredictable security posture of the public internet, meet compliance obligations, and protect sensitive data.
  • Multi-cloud connectivity: Many AI workloads rely on hybrid or multi-cloud strategies. The ability to interconnect across clouds and SaaS platforms is essential.
  • Scalability: As AI adoption grows, so does the need for more powerful connections. Enterprises need to future-proof with pathways from 10Gbps to 100Gbps and beyond.

Maximise your AI strategy

Despite the hype, integrating generative AI into an IT strategy continues to present significant challenges for businesses. According to IBM, only 47% of IT leaders said their AI projects were profitable in 2024. That’s because running these models is resource- and energy-intensive, requiring significant infrastructure, compute power, and bandwidth.

“We’re at the point in the hype cycle where expectations are high, but the return is still being proven,” says Paul.

(Figure: the Gartner hype cycle)

Organisations therefore face a dilemma: they can’t risk falling behind by ignoring Gen AI, but a misstep in strategy can be just as costly. Successfully implementing an AI strategy requires a smart, holistic approach - not just which models enterprises train or use, but how they connect and support those models with the right networking infrastructure to maximise performance and value.

Final thought: Gen AI is a network story too

In the rush to adopt AI, it’s easy to overlook the role of the network. But as Paul Gampe makes clear, whether you’re streaming telemetry data for model training or delivering chatbot responses in milliseconds, your network can be the difference between success and failure.

To stay ahead in the age of AI, organisations must rethink their connectivity – not as a background service, but as a core enabler of innovation.
