
Understanding Ultra Ethernet: Three Types of Networks Explained


Ultra Ethernet (UE) is an emerging standard designed to meet the growing needs of high-performance computing (HPC), data centers, and AI workloads. In the recent paper “Ultra Ethernet’s Design Principles and Architectural Innovations”, the authors (who are also lead contributors to the UE 1.0 specification) describe how UE is structured to support three fundamental types of networks.

The diagram below (from the paper) illustrates these three categories:

  • Local Network (Scale-Up, Purple)
  • Backend Network (Scale-Out, Blue)
  • Frontend Network (Green)

Local Network (Scale-Up, Purple)

The local network connects CPUs with accelerators such as GPUs, FPGAs, or specialized AI processors.
Key characteristics:

  • Typical technologies: CXL, NVLink, or Ethernet
  • Deployment range: up to 10 meters
  • Latency goal: <1 microsecond

These networks are usually node-level or rack-level connections, enabling extremely low-latency communication for tightly coupled computing environments.


Backend Network (Scale-Out, Blue)

The backend network connects computing devices—primarily accelerators—into a high-performance cluster.
Key characteristics:

  • Transmission distance: up to 150 meters
  • Latency target: <10 microseconds
  • Often grouped together with frontend networks under the broader “scale-out” label

UE supports two deployment models:

  1. Converged – backend and frontend combined on the same physical network.
  2. Separated – backend and frontend deployed as independent networks.

This flexibility makes backend networks the core target of UE 1.0, optimized for high-bandwidth (400+ Gbps) and large-message transmission.
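The two deployment models can be captured in a tiny configuration sketch. This is purely illustrative: the Python names below are mine, not from the UE specification.

```python
from enum import Enum

class UEDeployment(Enum):
    """Illustrative names for the two UE 1.0 deployment models."""
    CONVERGED = "backend and frontend share one physical network"
    SEPARATED = "backend and frontend run as independent networks"

# e.g. a cluster inventory might record which fabric layout is in use
# (hypothetical example, not part of any UE API):
cluster = {"name": "ai-pod-1", "fabric": UEDeployment.SEPARATED}
```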


Frontend Network (Traditional Data Center, Green)

The frontend network represents the traditional data center fabric, handling both:

  • East-West traffic (between servers within the data center)
  • North-South traffic (between the data center and the outside world)

Key characteristics:

  • Transmission distance: up to 1500 meters
  • Latency: typically >100 microseconds

While critical for connecting the data center to external networks, frontend networks have different design priorities compared to backend or local connections.
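Taken together, the three categories form a simple taxonomy by physical reach. A minimal sketch, assuming the distance limits quoted above (the function name is my own, not from the spec):

```python
def classify_link(distance_m: float) -> str:
    """Map a link's reach to the UE network category it falls into,
    using the approximate distance limits quoted in the paper."""
    if distance_m <= 10:
        return "local"      # scale-up: <1 microsecond latency goal
    if distance_m <= 150:
        return "backend"    # scale-out: <10 microsecond latency target
    if distance_m <= 1500:
        return "frontend"   # traditional DC: typically >100 microseconds
    raise ValueError("beyond the ranges discussed for Ultra Ethernet")
```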


UE 1.0 Design Goals

According to the paper, Ultra Ethernet 1.0 is primarily optimized for backend networks, rather than local or frontend networks.

Core assumptions:

  • Bandwidth of 400+ Gbps
  • Medium link lengths (10–150 meters)
  • Support for large message transfers

Priorities:

  1. Low-cost, high bandwidth
  2. Scalability for ultra-large systems
  3. Secondary factors: header size and per-packet latency
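A quick back-of-the-envelope calculation shows why per-packet latency can be a secondary factor for the large transfers UE 1.0 targets: at 400 Gbps, the wire time of a big message dwarfs a 10-microsecond latency budget. (The 1 GiB message size below is my own example, not a figure from the paper.)

```python
def serialization_ms(message_bytes: int, link_gbps: float = 400) -> float:
    """Time to clock a message onto the wire, ignoring protocol overhead."""
    return message_bytes * 8 / (link_gbps * 1e9) * 1e3

# A 1 GiB transfer occupies a 400 Gbps link for ~21.5 ms,
# roughly 2000x the 10-microsecond backend latency target.
wire_ms = serialization_ms(1 << 30)
```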

Future Roadmap: Beyond Backend Networks

While UE 1.0 centers on backend performance, future versions of Ultra Ethernet are expected to broaden scope:

  • Local network optimizations – targeting ultra-low latency and efficiency for small packet transfers.
  • Frontend network improvements – focusing on simplified operations, scalability, and data center-wide adaptability.

This evolution reflects UE’s ambition to unify networking technologies across HPC, AI, and cloud infrastructure, while tailoring features to specific deployment needs.


Conclusion

Ultra Ethernet introduces a layered view of networking—local, backend, and frontend—each with unique roles in modern computing systems.

  • Local networks deliver sub-microsecond latency for CPU-accelerator links.
  • Backend networks power scale-out AI and HPC clusters with massive bandwidth.
  • Frontend networks provide the broader data center connectivity.

By prioritizing backend performance in UE 1.0, the standard sets a foundation for scalable AI and HPC systems, while leaving room for future optimizations in local and frontend deployments.

As the demand for AI supercomputers, hyperscale data centers, and exascale HPC systems continues to grow, Ultra Ethernet is poised to play a central role in shaping the next generation of networking infrastructure.
