Aether is an open source 5G edge cloud platform that supports enterprise deployments of Private 5G.

Quick Start

https://docs.aetherproject.org/master/onramp/start.html#quick-start

Prep Environment

Ubuntu Server 20.04 or 22.04 (as opposed to Ubuntu Desktop), with at least 4 CPU cores and 16GB of RAM.

To install Aether OnRamp, you must be able to run sudo without a password. You can set this up by adding a NOPASSWD entry to the sudoers file:

sudo EDITOR=vim visudo
your_username ALL=(ALL) NOPASSWD: ALL
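To confirm that passwordless sudo is in effect, one quick check (our suggestion, not from the OnRamp docs) is:

sudo -k                                      # invalidate any cached credentials
sudo -n true && echo "passwordless sudo OK"  # -n fails rather than prompting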

There should also be no firewall running on the server. You can verify this is the case by executing the following, which should report Status: inactive:

$ sudo ufw status
Status: inactive

Your server should use systemd-networkd to configure the network, which you can verify by typing:

systemctl status systemd-networkd.service
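If the service is running, a more script-friendly check (standard systemctl usage, not OnRamp-specific) prints simply active:

$ systemctl is-active systemd-networkd
active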

OnRamp depends on Ansible, which you can install on your server as follows:

sudo apt install sshpass python3-venv pipx make git
pipx install --include-deps ansible
pipx ensurepath
source ~/.bashrc
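Once installed, a quick sanity check is to confirm the ansible binary is on your PATH; pipx typically installs it under ~/.local/bin:

which ansible
ansible --version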

Download Aether OnRamp

https://docs.aetherproject.org/master/onramp/start.html#download-aether-onramp

Once ready, clone the Aether OnRamp repo on this target deployment server:

git clone --recursive https://github.com/opennetworkinglab/aether-onramp.git
cd aether-onramp
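A simple listing of the directory (abridged here; exact contents may vary by version) confirms the clone and shows the key artifacts discussed next:

$ ls
Makefile  deps  hosts.ini  vars  ...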

Taking a quick look at your aether-onramp directory, there are four things to note:

  1. The deps directory contains the Ansible deployment specifications for all the Aether subsystems. Each of these subdirectories (e.g., deps/5gc) is self-contained, meaning you can execute the Make targets in each individual directory. Doing so causes Ansible to run the corresponding playbook. For example, the installation playbook for the 5G Core can be found in deps/5gc/roles/core/tasks/install.yml.

  2. The Makefile in the main OnRamp directory includes the per-subsystem Makefiles, meaning all the individual steps required to install Aether can be managed from this main directory. The Makefile includes comments listing the key Make targets defined by the included Makefiles. Importantly, the rest of this guide assumes you are working in the main OnRamp directory, not in the individual subsystem directories.

  3. File vars/main.yml defines all the Ansible variables you will potentially need to modify to specify your deployment scenario. This file is the union of all the per-component vars/main.yml files you find in the corresponding deps directories. This top-level variable file overrides the per-component var files, so you will not need to modify the latter. Note that the vars directory contains several variants of main.yml, where we think of each as specifying a blueprint for a different configuration of Aether. The default main.yml (which is equivalent to main-quickstart.yml, except with non-default settings commented out) gives the blueprint for the Quick Start deployment described in this section. We’ll substitute the other blueprints in later sections.

  4. File hosts.ini (host inventory) is Ansible’s way of specifying the set of servers (physical or virtual) that Ansible targets with various installation playbooks. The default version of hosts.ini included with OnRamp is simplified to run everything on a single server (the one you’ve cloned the repo onto), with additional lines you may eventually need for a multi-node cluster commented out.

Set Target Parameters

https://docs.aetherproject.org/master/onramp/start.html#set-target-parameters

The Quick Start deployment described in this section requires that you modify two sets of parameters to reflect the specifics of your target deployment.

hosts.ini

The first set is in file hosts.ini, where you will need to give the IP address and login credentials for the server you are working on. At this stage, we assume the server you downloaded OnRamp onto is the same server you will be installing Aether on.

node1  ansible_host=10.76.28.113 ansible_user=aether ansible_password=aether ansible_sudo_pass=aether

In this example, address 10.76.28.113 and the three occurrences of the string aether need to be replaced with the appropriate values.

Note that if your server is set up to use SSH keys instead of passwords, update hosts.ini to reference your private key instead, adjusting the path and filename as appropriate:

node1  ansible_host=10.76.28.113 ansible_user=aether ansible_ssh_private_key_file=~/.ssh/id_rsa
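If your public key is not yet installed on the target server, ssh-copy-id (standard OpenSSH tooling, not OnRamp-specific) will install it; the username and address here match the earlier example:

ssh-copy-id -i ~/.ssh/id_rsa.pub aether@10.76.28.113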

vars/main.yml

The second set of parameters is in vars/main.yml, where the two lines currently reading

data_iface: ens18

need to be edited to replace ens18 with the device interface for your server, and the line specifying the IP address of the Core’s AMF needs to be edited to reflect your server’s IP address:

amf:
   ip: "10.76.28.113"

You can learn your server’s IP address and interface using the Linux ip command:

yanboyang713@aether:~/aether-onramp$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether bc:24:11:a6:4b:08 brd ff:ff:ff:ff:ff:ff
    altname enp0s18
    inet 192.168.88.20/24 brd 192.168.88.255 scope global ens18
       valid_lft forever preferred_lft forever
    inet6 fe80::be24:11ff:fea6:4b08/64 scope link
       valid_lft forever preferred_lft forever
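If you prefer a one-liner, the following standard Linux idiom (not OnRamp-specific; the gateway shown is illustrative) reports the interface and source address used to reach an external host:

$ ip route get 8.8.8.8
8.8.8.8 via 192.168.88.1 dev ens18 src 192.168.88.20 uid 1000
    cache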

Troubleshooting Hint Due to a limitation in gNBsim (the RAN emulator introduced later in this section), it is necessary for your server to be configured with IPv6 enabled (as the inet6 line in the example output indicates is the case for interface ens18). If IPv6 is not enabled, the emulated RAN will not successfully connect to the AMF.
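One way to verify IPv6 is enabled is a generic sysctl query (per-interface variants such as net.ipv6.conf.ens18.disable_ipv6 also exist); a value of 0 means IPv6 is enabled:

$ sysctl net.ipv6.conf.all.disable_ipv6
net.ipv6.conf.all.disable_ipv6 = 0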

Note that vars/main.yml and hosts.ini are the only two files you need to modify for now, but there are additional config files that you may want to modify as we move beyond the Quick Start deployment. We’ll identify those files throughout this section, for informational purposes, and revisit them in later sections.

At this point, the only verification step you can take is to type the following:

make aether-pingall

The output should show that Ansible is able to securely connect to all the nodes in your deployment, which is currently just the one that Ansible knows as node1.
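For each node Ansible can reach, the output includes a SUCCESS record from the underlying ping module, similar to the following:

node1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}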

Install Kubernetes

https://docs.aetherproject.org/master/onramp/start.html#install-kubernetes

The next step is to bring up an RKE2 Kubernetes cluster on your target server. Do this by typing:

make aether-k8s-install

Note that the Ansible playbooks triggered by this (and other) Make targets will output red results from time to time (indicating an exception or failure), but as long as Ansible keeps progressing through the playbook, such output can be safely ignored.

Many of the tasks specified in the various Ansible playbooks result in calls to Kubernetes, either directly via kubectl, or indirectly via helm. This means that you may want to run some combination of the following commands to verify that the right things happened:

kubectl get pods --all-namespaces
helm repo list
helm list --namespace kube-system

The first reports the pods currently running across all Kubernetes namespaces; the second shows the known set of repos you are pulling charts from; and the third shows the version numbers of the charts currently deployed in the kube-system namespace.

If you are not familiar with kubectl (the CLI for Kubernetes), we recommend starting with the official Kubernetes tutorials (https://kubernetes.io/docs/tutorials/).

If you are interested in seeing the details about how Kubernetes is customized for Aether, look at deps/k8s/roles/rke2/templates/master-config.yaml. Of particular note, we have instructed Kubernetes to allow services on ports ranging from 2000 to 36767, and we are using the multus and canal CNI plugins.
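Another useful sanity check at this stage is to confirm the node itself is Ready (the node name, roles, and version shown here are illustrative and will vary with your hostname and RKE2 release):

$ kubectl get nodes
NAME    STATUS   ROLES                       AGE   VERSION
node1   Ready    control-plane,etcd,master   5m    v1.28.9+rke2r1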

Install SD-Core

https://docs.aetherproject.org/master/onramp/start.html#install-sd-core

We are now ready to bring up the 5G version of the SD-Core. To do that, type:

make aether-5gc-install

kubectl will now show the omec namespace running (in addition to kube-system), with output similar to the following:

yanboyang713@aether:~/aether-onramp$ kubectl get pods -n omec
NAME                          READY   STATUS    RESTARTS   AGE
amf-79b7d7c58c-g9dpc          1/1     Running   0          2m10s
ausf-768fdc8d68-rblxv         1/1     Running   0          4m41s
init-net-jmcr7                1/1     Running   0          4m42s
kafka-0                       1/1     Running   0          4m41s
metricfunc-85bfbdb74d-ncjmd   1/1     Running   0          4m41s
mongodb-0                     1/1     Running   0          4m41s
mongodb-1                     1/1     Running   0          3m39s
mongodb-arbiter-0             1/1     Running   0          4m41s
nrf-6d844646c-4bkh6           1/1     Running   0          4m41s
nssf-84697647d4-t592m         1/1     Running   0          4m41s
pcf-778544f4d8-6g4n7          1/1     Running   0          4m41s
sctplb-689bb6dd57-7dc8q       1/1     Running   0          4m41s
sd-core-zookeeper-0           1/1     Running   0          4m41s
simapp-6bf8f4b765-xq777       1/1     Running   0          4m41s
smf-58c9b47f5-mp5sg           1/1     Running   0          4m41s
udm-b987c785d-b66kb           1/1     Running   0          4m41s
udr-668849d4cf-t8wws          1/1     Running   0          4m41s
upf-0                         5/5     Running   0          4m41s
webui-798c755b7b-jdrjn        1/1     Running   0          4m41s

If you see problematic pods that are not getting into the Running state, a reset usually corrects the problem. Type:

make aether-resetcore
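After a reset (or the initial install), rather than repeatedly polling kubectl get pods, you can block until every pod in the namespace reports Ready; this is generic kubectl usage, with a timeout we picked arbitrarily:

kubectl wait --namespace omec --for=condition=Ready pod --all --timeout=300s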

Once running, you will recognize pods that correspond to many of the microservices discussed in Chapter 5. For example, amf-79b7d7c58c-g9dpc implements the AMF. Note that for historical reasons, the Aether Core runs in a namespace called omec rather than sd-core.

If you are interested in seeing the details about how SD-Core is configured, look at deps/5gc/roles/core/templates/radio-5g-values.yaml. This is an example of a values override file that Helm passes along to Kubernetes when launching the service. Most of the default settings will remain unchanged, with the main exception being the subscribers block of the omec-sub-provision section. This block will eventually need to be edited to reflect the SIM cards you actually deploy. We return to this topic in the section describing how to bring up a physical gNB.
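To locate that block without opening an editor, a simple grep works (the 10-line context window is an arbitrary choice):

grep -n -A 10 "subscribers:" deps/5gc/roles/core/templates/radio-5g-values.yaml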

Run Emulated RAN Test

https://docs.aetherproject.org/master/onramp/start.html#run-emulated-ran-test

We can now test SD-Core with emulated traffic by typing:

make aether-gnbsim-install
make aether-gnbsim-run

Note that you can re-execute the aether-gnbsim-run target multiple times, where the results of each run are saved in a file within the Docker container running the test. You can access that file by typing:

docker exec -it gnbsim-1 cat summary.log

If successful, the output should look like the following:

time="2024-10-07T04:07:21Z" level=info msg="Profile Name: profile2 , Profile Type: pdusessest" category=Summary component=GNBSIM
time="2024-10-07T04:07:21Z" level=info msg="Ue's Passed: 5 , Ue's Failed: 0" category=Summary component=GNBSIM
time="2024-10-07T04:07:21Z" level=info msg="Profile Status: PASS" category=Summary component=GNBSIM

This particular test, which runs the cryptically named pdusessest profile, emulates five UEs, each of which: (1) registers with the Core, (2) initiates a user plane session, and (3) sends a minimal data packet over that session. In addition to displaying the summary results, you can also open a shell in the gnbsim-1 container, where you can view the full trace of every run of the emulation, each of which has been saved in a timestamped file:

yanboyang713@aether:~/aether-onramp$ docker exec -it gnbsim-1 bash
3d416de1045d:/gnbsim/bin# ls
gnbsim                          gnbsim1-20241007T040649.config  summary.log
gnbsim.log                      gnbsim1-20241007T040649.log
3d416de1045d:/gnbsim/bin# more gnbsim1-20241007T040649.log
2024-10-07T04:06:55Z [INFO][GNBSIM][App] App Name: GNBSIM
2024-10-07T04:06:55Z [INFO][GNBSIM][App] Setting log level to: info
2024-10-07T04:06:55Z [INFO][GNBSIM][GNodeB][gnb1] GNodeB IP:  GNodeB Port: 9487
2024-10-07T04:06:55Z [INFO][GNBSIM][GNodeB][UserPlaneTransport] User Plane transport listening on: 172.20.0.2:2152
2024-10-07T04:06:55Z [INFO][GNBSIM][GNodeB] Current range selector value: 65
2024-10-07T04:06:55Z [INFO][GNBSIM][GNodeB] Current ID range start: 1090519040 end: 1107296255
2024-10-07T04:06:55Z [INFO][GNBSIM][GNodeB][ControlPlaneTransport] Connected to AMF, AMF IP: 192.168.88.20 AMF Port: 38412
2024-10-07T04:06:55Z [INFO][GNBSIM][GNodeB][ControlPlaneTransport] Wrote 61 bytes
2024-10-07T04:06:55Z [INFO][GNBSIM][Stats] Received Event: MSG_OUT:  &{2024-10-07 04:06:55.222792096 +0000 UTC m=+0.050126819  1 0}

Troubleshooting Hint If summary.log is empty, it means the emulation did not run due to a configuration error. To debug the problem, open a bash shell on the gNBsim container (as shown in the preceding example), and look at gnbsim.log. Output that includes failed to connect amf and err: address family not supported by protocol indicates that your server does not have IPv6 enabled.

Troubleshooting Hint If summary.log reports Ue's Passed: 0 , Ue's Failed: 5, then it may be the case that SD-Core did not come up cleanly. Type make aether-resetcore, and after verifying that all pods are running with kubectl, run gNBsim again.

Another possibility is that you have multiple SD-Cores running in the same broadcast domain. This causes ARP to behave in unexpected ways, which interferes with OnRamp’s ability to establish a route to the UPF pod.
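If you suspect this, two generic Linux checks (not OnRamp-specific) are to inspect the ARP table for unexpected entries and to actively probe for duplicate responders; the interface name and target address below are placeholders you will need to adapt:

ip neigh show                                       # inspect the local ARP table
sudo arping -D -I ens18 -c 3 <address-to-probe>     # duplicate address detection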

If you are interested in the config file that controls the test, including the option of enabling other profiles, take a look at deps/gnbsim/config/gnbsim-default.yaml. We return to the issue of customizing gNBsim in a later section, but for now there are some simple modifications you can try. For example, the following code block defines a set of parameters for pdusessest (also known as profile2):

- profileType: pdusessest         # UE Initiated Session
  profileName: profile2
  enable: true
  gnbName: gnb1
  execInParallel: false
  startImsi: 208930100007487
  ueCount: 5
  defaultAs: "192.168.250.1"
  perUserTimeout: 100
  plmnId:
    mcc: 208
    mnc: 93
  dataPktCount: 5
  opc: "981d464c7c52eb6e5036234984ad0bcf"
  key: "5122250214c33e723a5dd523fc145fc0"
  sequenceNumber: "16f3b3f70fc2"

You can edit ueCount to change the number of UEs included in the emulation (currently limited to 100), and you can set execInParallel to true to emulate those UEs connecting to the Core in parallel (rather than serially). You can also change the defaultAs: "192.168.250.1" variable to specify the target of the ICMP Echo Request packets sent by the emulated UEs; selecting the IP address of a real-world server (e.g., 8.8.8.8) is a good test of end-to-end connectivity, as shown below. Finally, you can change the amount of information gNBsim outputs by modifying logLevel in the logger block at the end of the file. For any changes you make, just rerun make aether-gnbsim-run to see the effects; you do not need to reinstall gNBsim.
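For example, to point the emulated UEs at a public server, the relevant line in deps/gnbsim/config/gnbsim-default.yaml would become:

defaultAs: "8.8.8.8"    # target of the UEs' ICMP Echo Requests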

Clean Up

We recommend continuing on to the next section before wrapping up, but when you are ready to tear down your Quick Start deployment of Aether, simply execute the following commands:

make aether-gnbsim-uninstall
make aether-5gc-uninstall
make aether-k8s-uninstall

Note that while we stepped through the system one component at a time, OnRamp includes compound Make targets. For example, you can uninstall everything covered in this section by typing:

make aether-uninstall
