Introduction and Objectives

This testbed is designed to support advanced research in network performance, Software-Defined Networking (SDN), Network Function Virtualization (NFV), and 5G deployment.

It leverages four Dell PowerEdge servers (each running the Proxmox VE hypervisor) interconnected by P4-programmable switches, with an Open vSwitch (OVS) bridge on each server. The environment will enable experiments with SDN (through OVS and SDN controllers), NFV (via virtual network functions on the VMs), and a 5G standalone network (with disaggregated RAN and core components).

All servers run Proxmox VE (Debian-based), and each is equipped with a SmartNIC (programmable NIC) to offload packet processing and support P4 programs in hardware.

One server also provides WAN connectivity through a VyOS router VM (using a Wi-Fi uplink to the campus network), and another runs a BIND 9 DNS service to support name resolution for the on-premises OKD cluster.

Physical Topology and Components

Proxmox VE

All Proxmox servers are part of a single Proxmox cluster (for ease of management, enabling features like VM live migration across servers). They share the management network for cluster coordination. VMs on any server can reach VMs on another server through the leaf-spine fabric.
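
As a rough illustration of how the cluster could be formed over the management network, the Python sketch below wraps Proxmox's pvecm tool. The cluster name and management IP addresses are assumptions for illustration only, not values from the actual testbed.

```python
#!/usr/bin/env python3
"""Minimal sketch: form the Proxmox cluster over the management network.
Cluster name and management IPs are illustrative assumptions."""
import subprocess

CLUSTER_NAME = "testbed"                        # assumed cluster name
FIRST_NODE = "10.0.0.11"                        # assumed management IP of the first node
OTHER_NODES = ["10.0.0.12", "10.0.0.13", "10.0.0.14"]

def run_on(host: str, *cmd: str) -> None:
    """Run a command on a node over SSH (assumes key-based root access)."""
    subprocess.run(["ssh", f"root@{host}", *cmd], check=True)

def main() -> None:
    # Create the cluster on the first node...
    run_on(FIRST_NODE, "pvecm", "create", CLUSTER_NAME)
    # ...then join the remaining nodes, pointing them at the first node.
    # Note: `pvecm add` may prompt for the peer node's root password.
    for node in OTHER_NODES:
        run_on(node, "pvecm", "add", FIRST_NODE)
    # Confirm membership.
    run_on(FIRST_NODE, "pvecm", "status")

if __name__ == "__main__":
    main()
```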

Management Ethernet Switch (dedicated)

A dedicated management Ethernet switch connects the Integrated Dell Remote Access Controller (iDRAC) out-of-band management ports of all servers on an isolated management network (for remote power/reset and monitoring).
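
For remote power control, the iDRACs expose the standard Redfish API on this isolated network. The following Python sketch shows one way a server could be power-cycled out-of-band; the iDRAC address and credentials are placeholder assumptions.

```python
#!/usr/bin/env python3
"""Minimal sketch: power-cycle a server out-of-band via its iDRAC, using the
standard Redfish API over the isolated management network. The iDRAC address
and credentials are illustrative assumptions."""
import requests

IDRAC = "https://192.168.100.11"   # assumed iDRAC address on the management network
AUTH = ("root", "calvin")          # assumed credentials; change in production
RESET_URL = f"{IDRAC}/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset"

def power_cycle() -> None:
    # Request a forced restart; other ResetType values include "On" and "GracefulShutdown".
    resp = requests.post(
        RESET_URL,
        json={"ResetType": "ForceRestart"},
        auth=AUTH,
        verify=False,   # iDRACs commonly ship with self-signed certificates
        timeout=30,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    power_cycle()
```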

Spine-Leaf Architecture

All servers have their primary Proxmox host NICs (the SmartNICs) connected to ports on individual leaf switches. These leaf switches, in turn, are interconnected via a high-speed spine switch, forming a classic spine-leaf topology.

Each server connects to its respective leaf switch through the SmartNIC, which supports P4-programmable hardware offloads. The OVS bridge on each Proxmox host bridges the internal VMs to the physical SmartNIC interface, which uplinks to the leaf switch. This architecture allows traffic from VMs on different servers to be routed through the spine switch, enabling scalable, low-latency east-west communication.
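
As a minimal sketch of this wiring on a single host, the Python snippet below creates an OVS bridge and attaches the SmartNIC port as the uplink toward the leaf switch. The bridge name (vmbr1) and interface name (enp65s0f0) are assumptions for illustration and will differ per host.

```python
#!/usr/bin/env python3
"""Minimal sketch: create an OVS bridge on a Proxmox host and attach the
SmartNIC port as the uplink toward the leaf switch. Bridge and interface
names are illustrative assumptions, not taken from the actual testbed."""
import subprocess

BRIDGE = "vmbr1"          # assumed OVS bridge name used by the VMs on this host
UPLINK = "enp65s0f0"      # assumed SmartNIC port facing the leaf switch

def sh(*cmd: str) -> None:
    """Run a command and fail loudly if it returns non-zero."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def main() -> None:
    # Create the bridge if it does not already exist.
    sh("ovs-vsctl", "--may-exist", "add-br", BRIDGE)
    # Attach the physical SmartNIC port as the uplink to the leaf switch.
    sh("ovs-vsctl", "--may-exist", "add-port", BRIDGE, UPLINK)
    # Show the resulting configuration for verification.
    sh("ovs-vsctl", "show")

if __name__ == "__main__":
    main()
```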

The SmartNICs on each server can filter, route, or encapsulate packets in hardware using their P4-programmable pipeline before sending them out. This effectively distributes switching and network logic between the edge (SmartNICs) and the fabric (leaf and spine switches). The programmable nature of both the NICs and the switches provides flexibility for implementing SDN policies, slicing, and advanced telemetry in the data center fabric.

VyOS Wi-Fi WAN - External Network Gateway

One of the servers has an external Wi-Fi card that links to the university’s Wi-Fi network. This interface is passed through to a VyOS VM, which acts as the gateway router for the testbed.

The VyOS VM has two interfaces: one connects to the Wi-Fi WAN (providing Internet access via DHCP from the campus network), and the other connects to the Proxmox OVS bridge (LAN). This VM provides NAT, firewalling, and routing between the testbed’s internal LAN and the outside network.
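
A minimal sketch of the corresponding VyOS configuration is shown below as a Python script that emits set-commands. The interface names, LAN subnet, and NAT rule number are assumptions, and the exact NAT syntax varies slightly between VyOS releases.

```python
#!/usr/bin/env python3
"""Minimal sketch: emit VyOS set-commands for the testbed gateway VM.
Interface names, the LAN subnet, and the NAT rule number are illustrative
assumptions; exact NAT syntax differs slightly between VyOS releases."""

WAN_IF = "eth0"               # assumed interface bound to the Wi-Fi uplink
LAN_IF = "eth1"               # assumed interface bridged to the Proxmox OVS LAN
LAN_SUBNET = "10.10.0.0/24"   # assumed internal testbed subnet
LAN_GATEWAY = "10.10.0.1/24"  # assumed gateway address on the LAN side

commands = [
    # WAN side: take an address from the campus network via DHCP.
    f"set interfaces ethernet {WAN_IF} address dhcp",
    # LAN side: static gateway address for the testbed subnet.
    f"set interfaces ethernet {LAN_IF} address '{LAN_GATEWAY}'",
    # Source NAT (masquerade) so internal VMs can reach the Internet.
    f"set nat source rule 100 outbound-interface '{WAN_IF}'",
    f"set nat source rule 100 source address '{LAN_SUBNET}'",
    "set nat source rule 100 translation address masquerade",
]

if __name__ == "__main__":
    # Paste the output into a VyOS configure session, then 'commit; save'.
    print("configure")
    for cmd in commands:
        print(cmd)
    print("commit")
    print("save")
```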

Kubernetes/OpenShift (OKD) Cluster

The research project will deploy Aether 5G components on a Kubernetes cluster. We will create several VMs to act as control-plane and worker nodes for this cluster, for instance three control-plane VMs and two worker VMs (depending on resource needs) distributed across all four servers.
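
As a sketch of how these node VMs might be provisioned on the Proxmox hosts, the Python snippet below wraps the qm command-line tool. The VM IDs, sizing, storage pool, and bridge name are assumptions, and in practice each VM would be created on its designated host.

```python
#!/usr/bin/env python3
"""Minimal sketch: create the Kubernetes node VMs with Proxmox's `qm` tool.
VM IDs, sizing, storage pool, and bridge name are illustrative assumptions;
run the command on whichever host should own each VM."""
import subprocess

BRIDGE = "vmbr1"        # assumed OVS bridge the node VMs attach to
STORAGE = "local-lvm"   # assumed local storage pool on each host

NODES = [
    # (vmid, name, cores, memory_mb)
    (201, "okd-cp-1", 4, 16384),
    (202, "okd-cp-2", 4, 16384),
    (203, "okd-cp-3", 4, 16384),
    (211, "okd-worker-1", 8, 32768),
    (212, "okd-worker-2", 8, 32768),
]

def create(vmid: int, name: str, cores: int, memory: int) -> None:
    subprocess.run([
        "qm", "create", str(vmid),
        "--name", name,
        "--cores", str(cores),
        "--memory", str(memory),
        "--net0", f"virtio,bridge={BRIDGE}",
        "--scsi0", f"{STORAGE}:120",   # 120 GB system disk (assumed size)
    ], check=True)

if __name__ == "__main__":
    for vm in NODES:
        create(*vm)
```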

NOTE:

  1. The BIND 9 DNS VM provides the DNS records required for this cluster’s operation (see the sketch after this list).
  2. Pod networking will be handled by Calico, which is separate from the physical topology; to exchange pod and service routes with the fabric, Calico will peer with it over BGP.
  3. A load balancer (for example, HAProxy or kube-vip) is required to distribute API traffic across the control-plane nodes.
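
As referenced in note 1 above, the Python sketch below renders the DNS records OKD typically expects (API, internal API, and wildcard ingress for apps). The base domain, cluster name, and VIP addresses are placeholder assumptions.

```python
#!/usr/bin/env python3
"""Minimal sketch: render the BIND 9 records OKD typically expects (API,
internal API, and wildcard ingress). The base domain, cluster name, and
addresses are illustrative assumptions."""

BASE_DOMAIN = "testbed.example.edu"   # assumed base domain served by BIND 9
CLUSTER = "okd"                       # assumed cluster name
API_VIP = "10.10.0.50"                # assumed load-balancer/VIP for the API
INGRESS_VIP = "10.10.0.51"            # assumed VIP for the ingress routers

records = f"""
; --- OKD cluster records (fragment of the {BASE_DOMAIN} zone) ---
api.{CLUSTER}      IN A {API_VIP}
api-int.{CLUSTER}  IN A {API_VIP}
*.apps.{CLUSTER}   IN A {INGRESS_VIP}
"""

if __name__ == "__main__":
    # Append the output to the zone file, then reload BIND (e.g. `rndc reload`).
    print(records.strip())
```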

Storage Network (Optional)

If shared storage (such as Ceph) is used for the VMs or containers, a separate VLAN or even dedicated links for storage traffic may be warranted. However, for this design, we assume local storage on each server for simplicity.

Central Unit (CU)/Distributed Unit (DU)

https://docs.aetherproject.org/master/onramp/gnb.html#gnodeb-setup

User Equipment (UE)

Precision Time Protocol (PTP) Synchronization

SmartNIC DPU
