Key Highlights
- Backblaze upgrades infrastructure to handle AI-driven traffic spikes.
- Shift from 100G to 400G networking reflects changing data flow patterns.
- Growth of AI-focused providers like CoreWeave and Lambda is driving unpredictable traffic patterns.
Artificial intelligence is reshaping how data moves within data centers, forcing companies like Backblaze to rethink long-standing networking strategies. Unlike traditional cloud traffic, which follows relatively stable, predictable patterns, AI workloads generate bursty, highly variable data flows, particularly during training cycles.
These sudden spikes can overwhelm infrastructure built for steady-state demand. According to Backblaze, even a 400-gigabit-per-second inbound transfer can balloon into terabits per second of internal traffic, as the data is replicated and distributed across storage arrays, databases, and compute systems that must communicate simultaneously.
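As a rough illustration of that amplification effect, consider a back-of-the-envelope model (the hop count and coding overhead below are hypothetical, not figures from Backblaze): each ingested byte typically traverses several internal transfers and carries redundancy overhead, so internal bandwidth scales multiplicatively with ingress rate.

```python
# Illustrative sketch (hypothetical parameters): why a single inbound
# stream multiplies into much larger internal traffic once data is
# replicated and coordinated across backend systems.

INGRESS_GBPS = 400       # client-facing transfer rate
INTERNAL_HOPS = 4        # hypothetical: load balancer -> storage ->
                         # metadata DB -> compute coordination
CODING_OVERHEAD = 1.2    # hypothetical erasure-coding/parity overhead

def internal_traffic_gbps(ingress_gbps, hops, overhead):
    """Total internal bandwidth consumed when each ingested byte
    traverses `hops` internal transfers, each inflated by redundancy
    `overhead`."""
    return ingress_gbps * hops * overhead

total = internal_traffic_gbps(INGRESS_GBPS, INTERNAL_HOPS, CODING_OVERHEAD)
print(f"{INGRESS_GBPS} Gbps ingress -> {total / 1000:.2f} Tbps internal")
# -> 400 Gbps ingress -> 1.92 Tbps internal
```

Even with modest assumptions, a 400 Gbps transfer crosses into terabit-per-second territory internally, which is why steady-state capacity planning breaks down.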
Infrastructure Upgrades to Handle Bursty Traffic
To manage this shift, Backblaze has been replacing its 100-gigabit network links with 400-gigabit connections, significantly increasing throughput and capacity. The company is also deploying higher-density switching infrastructure, including upgraded Arista Networks systems, enabling more devices to connect while maintaining performance.
This transition reflects the need for networks that can scale dynamically rather than rely on predictable usage patterns. Engineers at Backblaze note that traffic loads can vary dramatically even within the same week, making capacity planning more complex than ever.
Rise of Neoclouds and Changing Traffic Patterns
The growth of AI-focused cloud providers such as CoreWeave and Lambda is a key driver behind these changes. These “neoclouds” specialize in GPU-based workloads and are generating traffic patterns that differ significantly from traditional hyperscale cloud environments.
According to industry data, neocloud revenues reached $25 billion in 2025 and are projected to grow rapidly in the coming years. Their appeal lies in flexible contracts, faster provisioning, and lower GPU costs than those of hyperscalers.
However, this rapid growth is also exposing gaps in networking infrastructure. Studies show that many neocloud providers still lag in areas such as connectivity, redundancy, and traffic management.
Geographic Shifts and Latency Considerations
AI workloads are also influencing where infrastructure is deployed. Proximity between storage and compute resources has become critical, as closer distances reduce latency and improve throughput.
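The latency benefit of proximity is ultimately bounded by physics: light in optical fiber travels at roughly two-thirds the vacuum speed of light, so propagation delay grows linearly with distance. A minimal sketch (fiber velocity factor is a typical assumed value, and real links add switching and queuing delay on top):

```python
# Back-of-the-envelope: round-trip propagation delay over fiber.
# The 0.68 velocity factor is a typical assumption (refractive
# index ~1.47); actual links vary and add switching/queuing delay.

C_KM_PER_MS = 299_792.458 / 1000   # speed of light, km per millisecond
FIBER_FACTOR = 0.68                # assumed fraction of c in fiber

def fiber_rtt_ms(distance_km):
    """Round-trip propagation delay in milliseconds, propagation only."""
    one_way = distance_km / (C_KM_PER_MS * FIBER_FACTOR)
    return 2 * one_way

# Same-campus vs. metro vs. cross-country storage/compute placement:
for km in (1, 50, 4000):
    print(f"{km:>5} km -> {fiber_rtt_ms(km):.3f} ms RTT")
```

A cross-country round trip costs tens of milliseconds before any processing happens, while co-located storage and compute stay well under a millisecond, which is why clustering storage next to GPU capacity matters for AI pipelines.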
Regions like Virginia and California are emerging as key hubs for AI infrastructure, prompting companies like Backblaze to expand their presence there. This clustering effect is creating a feedback loop, attracting more providers and increasing demand for both compute and storage.
The rise of AI is fundamentally altering data center networking requirements. For companies like Backblaze, adapting to unpredictable, high-intensity traffic is no longer optional but essential. As AI adoption accelerates, the industry will need to continue evolving its infrastructure to keep pace with increasingly complex and dynamic workloads.
