This guide documents the complete setup of OKD (the community distribution of Kubernetes that powers Red Hat OpenShift) as a Single-Node OpenShift (SNO) cluster on a virtual machine hosted by Proxmox VE.
Prerequisites
- Proxmox testbed with hardware virtualization (Intel VT-x / AMD-V) enabled in the BIOS/UEFI.
- Sufficient Resources:
- CPU: Minimum 8 vCPUs recommended for SNO.
- RAM: Minimum 32GB RAM recommended for SNO.
- Storage: Minimum 120GB-150GB fast storage (SSD/NVMe) for the OKD VM.
- Administrative Machine (Client): A Linux machine (e.g., Ubuntu, Fedora) to run openshift-install, oc, podman, and other client tools. This will be referred to as your “Admin Client Machine”.
- Red Hat Pull Secret: Obtain a pull secret from Red Hat OpenShift Cluster Manager (a free Red Hat developer account is sufficient). This is needed for some certified operators and images.
- Admin access to the local PowerDNS server, with reverse DNS configured.
- An SSH public key.
- A working Kea DHCP server: every node must be assigned a hostname, which can be done via DHCP reservations (see the sketch after this list).
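For example, a minimal sketch of a Kea DHCPv4 reservation that pins the node's IP and hostname; the MAC address is a placeholder you must replace with your VM's NIC address, and the excerpt omits the rest of the subnet definition:
{
  "Dhcp4": {
    "subnet4": [{
      "subnet": "192.168.1.0/24",
      "reservations": [{
        // placeholder MAC: use your Proxmox VM's NIC address
        "hw-address": "bc:24:11:00:00:51",
        "ip-address": "192.168.1.51",
        "hostname": "okd4sno.okd.lan"
      }]
    }]
  }
}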
Phase 1: Preparation on Admin Client Machine
All commands in this phase are executed on your Admin Client Machine: a cloud-init-enabled VM (not an LXC container) with a 20 GB disk, 8 GB (8192 MB) of RAM, and 2 CPU cores.
- OS: Ubuntu 24.04
- CPU Type: x86-64-v3
- IP: 192.168.1.50/24
- Gateway: 192.168.1.4
- PowerDNS:
- DNS Domain: okd.admin.testbed.com
- DNS Servers: 192.168.1.23
Set Environment Variables
Define the OKD version and architecture for consistency.
export OKD_VERSION=4.20.0-okd-scos.15
export ARCH="$(uname -m)" # x86_64 on Intel/AMD
echo "$ARCH"NOTE:
- Check for the latest stable SCOS release: https://github.com/okd-project/okd/releases
- Releases tagged ec are early-candidate (preview) builds.
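If you just want the newest published tag, one quick check from the Admin Client (assumes jq is installed):
curl -s https://api.github.com/repos/okd-project/okd/releases/latest | jq -r .tag_name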
Download OpenShift Client (oc)
curl -L "https://github.com/okd-project/okd/releases/download/${OKD_VERSION}/openshift-client-linux-${OKD_VERSION}.tar.gz" -o oc.tar.gz
tar zxf oc.tar.gz
chmod +x oc kubectl # kubectl is also included
sudo mv oc kubectl /usr/local/bin/ # Optional: Move to PATH for global access
oc version
Download OpenShift Installer (openshift-install)
curl -L "https://github.com/okd-project/okd/releases/download/${OKD_VERSION}/openshift-install-linux-${OKD_VERSION}.tar.gz" -o openshift-install-linux.tar.gz
tar zxvf openshift-install-linux.tar.gz
chmod +x openshift-install
sudo mv openshift-install /usr/local/bin/ # Optional: Move to PATH
openshift-install version
Create a directory for your installation files and the install-config.yaml file.
mkdir okd-sno-install
cd okd-sno-install
Get CentOS Stream CoreOS / SCOS Live ISO
The installer can print the correct CoreOS (SCOS) live ISO URL matching the OKD version.
export ISO_URL="$(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.metal.formats.iso.disk.location')"
export ISO_SHA256="$(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.metal.formats.iso.disk.sha256')"
echo "Downloading ISO from: ${ISO_URL}"
echo "Downloading ${ISO_SHA256}"curl -L "${ISO_URL}" -o fcos-live.iso
# compute the ISO hash
sha256sum fcos-live.iso
# verify against the expected hash (prints "OK" if it matches)
echo "${ISO_SHA256} fcos-live.iso" | sha256sum -c -Note: This fcos-live.iso will be uploaded to Proxmox later.
Prepare install-config.yaml
First, prepare an SSH public key for the core user.
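If you do not have a key yet, a minimal sketch (the file name and comment are examples; the SSH session later in this guide uses ~/.ssh/id_rsa, so adjust paths to whichever key you actually use):
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N "" -C "okd-sno"
cat ~/.ssh/id_ed25519.pub  # paste this value into sshKey below
Then create install-config.yaml with the following content: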
apiVersion: v1
baseDomain: okd.lan # Your local base domain
metadata:
  name: okd4sno # Your cluster name
compute:
- name: worker
  replicas: 0 # Essential for SNO
controlPlane:
  name: master
  replicas: 1 # Essential for SNO
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 192.168.1.0/24 # Your Proxmox VM network subnet
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {} # For bare-metal/VM installations not managed by a cloud provider
bootstrapInPlace:
  # IMPORTANT: Identify your Proxmox VM's target disk for installation.
  # This can be /dev/vda, /dev/sda, or a more stable WWN path.
  # Example for a VirtIO disk, often /dev/vda:
  installationDisk: /dev/sda
  # Example using WWN (more robust; get this from the Proxmox VM's disk details or from the FCOS live env):
  # installationDisk: /dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2
pullSecret: '<PASTE_YOUR_PULL_SECRET_JSON_HERE>' # Replace with your actual pull secret
sshKey: |
  ssh-rsa AAAA...your_public_ssh_key_here # Replace with your public SSH key
Important Notes for install-config.yaml:
- baseDomain and metadata.name: These will form your cluster’s FQDNs (e.g., api.okd4sno.okd.lan).
- machineNetwork.cidr: Ensure this matches the subnet your OKD VM will reside in.
- installationDisk:
- For Proxmox VirtIO disks, this is typically /dev/vda. For SCSI, it might be /dev/sda.
- Using /dev/disk/by-id/wwn-0x… is more robust if disk names might change. You can identify the correct WWN or device path by booting the FCOS Live ISO on the target VM and using commands like lsblk or ls /dev/disk/by-id (see the sketch after this list).
- pullSecret: Paste the entire JSON string from Red Hat.
- sshKey: Your public SSH key to access the core user on the FCOS node.
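As referenced in the notes above, a short sketch for identifying the target disk from the FCOS live environment on the VM:
lsblk -o NAME,SIZE,TYPE,WWN           # show disks with their WWNs
ls -l /dev/disk/by-id/ | grep -i wwn  # map wwn-0x... symlinks to /dev/sdX or /dev/vdX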
Generate Single Node Ignition Configuration
This command uses the install-config.yaml in the current directory.
# Still in okd-sno-install directory
openshift-install create single-node-ignition-config
This will create bootstrap-in-place-for-live-iso.ign in the current directory.
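A quick sanity check that the Ignition file was generated and is valid JSON (assumes jq is installed):
ls -l bootstrap-in-place-for-live-iso.ign
jq . bootstrap-in-place-for-live-iso.ign > /dev/null && echo "ignition file is valid JSON"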
Embed Ignition into FCOS Live ISO
We need coreos-installer for this, which can be run via Podman.
# Ensure you are in the directory containing fcos-live.iso and the bootstrap ignition file
# (which should be okd-sno-install if you followed above)
podman run --privileged --pull always --rm \
-v /dev:/dev -v /run/udev:/run/udev -v "$PWD":"$PWD" -w "$PWD" \
quay.io/coreos/coreos-installer:release \
iso ignition embed -fi bootstrap-in-place-for-live-iso.ign fcos-live.iso
"Writing manifest to image destination" is the podman pull output; if the command then exits without errors, the embed succeeded.
NOTE:
- If you don't want to embed the Ignition config into the ISO, you can also install manually; all nodes can then boot from the same unmodified ISO image.
sudo coreos-installer install /dev/sda --ignition-file /path/to/master.ign --copy-network
sudo reboot
- --copy-network tells coreos-installer to carry over the network configuration that is active in the live environment (the ISO-booted system) into the installed OS on disk.
Embed static networking (optional, if you do not have DHCP)
Create static.nmconnection (adjust interface-name, IP, gateway, DNS):
[connection]
id=static
type=ethernet
interface-name=ens18
autoconnect=true
[ipv4]
method=manual
address1=192.168.1.51/24,192.168.1.4
dns=192.168.1.23;8.8.8.8;
dns-search=okd.lan;
may-fail=false
[ipv6]
method=disabled
How to confirm the NIC name (ens18 vs something else) when booted into the live ISO:
ip -br link
Embed static networking
podman run --privileged --pull always --rm \
-v /dev:/dev -v /run/udev:/run/udev -v "$PWD":"$PWD" -w "$PWD" \
quay.io/coreos/coreos-installer:release \
iso network embed -f -k static.nmconnection fcos-live.iso
Again, "Writing manifest to image destination" is the podman pull output; an error-free exit means the embed succeeded.
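To double-check what was embedded, coreos-installer can print the Ignition config back out of the ISO (a read-only check, so the /dev mounts are not needed here):
podman run --pull always --rm \
 -v "$PWD":"$PWD" -w "$PWD" \
 quay.io/coreos/coreos-installer:release \
 iso ignition show fcos-live.iso | jq -r '.ignition.version'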
Phase 2: Proxmox VM Creation and FCOS Installation
Upload Modified ISO to Proxmox
Upload the modified fcos-live.iso (the one with the embedded Ignition config) to your Proxmox VE “ISO Images” storage (e.g., local storage).
scp -p ./fcos-live.iso root@192.168.1.15:/var/lib/vz/template/iso/
Create the OKD Virtual Machine in Proxmox
Configure the VM with the following settings (a qm CLI sketch follows this list):
- General:
- Name: okd4sno-vm (or similar)
- Guest OS Type: Linux, Version: 6.x - 2.6 Kernel (or latest)
- System:
- Machine: q35
- BIOS: OVMF (UEFI)
- EFI Storage: Select your Proxmox storage for the EFI disk, and Uncheck Pre-Enroll keys.
- Enable Qemu Agent.
- Disks:
- Create a virtual hard disk (VirtIO Block or SCSI) with at least 120GB-150GB on fast storage. This is the disk you specified in installationDisk (e.g., /dev/sda).
- CPU:
- Cores: 8 (or more)
- Type: host (for best performance)
- Memory:
- 32768 MiB (32GB) or more. Disable “Ballooning Device”.
- Network:
- Model: VirtIO (paravirtualized)
- Bridge: Your Proxmox bridge connected to your LAN (e.g., vmbr0).
- IP: 192.168.1.51/24
- Gateway: 192.168.1.4
- PowerDNS:
- DNS Domain: okd.lan
- DNS Servers: 192.168.1.23
- CD/DVD Drive:
- Select the modified fcos-live.iso you uploaded.
- Boot Order:
- Set the CD/DVD drive (with the FCOS ISO) as the first boot device.
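If you prefer the Proxmox CLI over the GUI, a rough qm equivalent of the settings above (VM ID 101 and the storage names local-lvm/local are assumptions; adjust to your environment):
qm create 101 \
  --name okd4sno-vm --machine q35 --bios ovmf \
  --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=0 \
  --cores 8 --cpu host --memory 32768 --balloon 0 \
  --scsihw virtio-scsi-pci --scsi0 local-lvm:150 \
  --net0 virtio,bridge=vmbr0 \
  --ide2 local:iso/fcos-live.iso,media=cdrom \
  --boot order='ide2;scsi0' \
  --agent enabled=1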
NOTE: before you start the VM, set up DNS first (see Phase 3).
Phase 3: DNS and Cluster Access
DNS Setup (PowerDNS)
For this guide, let’s assume the OKD VM gets the IP 192.168.1.51 (via DHCP reservation or manually configured static IP if you adapted the Ignition).
Create the okd.lan zone and add the OKD records
root@PowerDNS:~# pdnsutil zone create okd.lan ns1.okd.lan
Dec 30 09:46:59 [bindbackend] Done parsing domains, 0 rejected, 0 new, 0 removed
Creating empty zone 'okd.lan'
Also adding one NS record
Add the required records (SNO points them all at the single node IP)
root@PowerDNS:~# pdnsutil rrset add okd.lan api.okd4sno.okd.lan A 192.168.1.51
Dec 30 09:49:13 [bindbackend] Done parsing domains, 0 rejected, 0 new, 0 removed
New rrset:
api.okd4sno.okd.lan. 3600 IN A 192.168.1.51
root@PowerDNS:~# pdnsutil rrset add okd.lan api-int.okd4sno.okd.lan A 192.168.1.51
Dec 30 09:49:48 [bindbackend] Done parsing domains, 0 rejected, 0 new, 0 removed
New rrset:
api-int.okd4sno.okd.lan. 3600 IN A 192.168.1.51
root@PowerDNS:~# pdnsutil rrset add okd.lan '*.apps.okd4sno.okd.lan' A 192.168.1.51
Dec 30 09:50:22 [bindbackend] Done parsing domains, 0 rejected, 0 new, 0 removed
New rrset:
*.apps.okd4sno.okd.lan. 3600 IN A 192.168.1.51
Point the PowerDNS Recursor at the okd.lan zone
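The exact recursor setting depends on your layout; a minimal sketch, assuming the authoritative pdns server listens on 127.0.0.1:5300 (a common split setup; note that recent Recursor releases use recursor.yml instead of recursor.conf):
# /etc/powerdns/recursor.conf
forward-zones=okd.lan=127.0.0.1:5300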
(Optional but nice) Add a node hostname too
root@PowerDNS:~# pdnsutil rrset add okd.lan okd4sno.okd.lan A 192.168.1.51
Dec 30 09:52:38 [bindbackend] Done parsing domains, 0 rejected, 0 new, 0 removed
New rrset:
okd4sno.okd.lan. 3600 IN A 192.168.1.51
OKD 4.4+ does not require separate etcd SRV records in DNS, which simplifies your lab setup.
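To confirm everything landed in the zone, pdnsutil can dump it:
pdnsutil list-zone okd.lan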
Restart
systemctl restart pdns
systemctl status pdns --no-pager
systemctl restart pdns-recursor
systemctl status pdns-recursor --no-pager
Validate before installing
dig +short api.okd4sno.okd.lan @192.168.1.23
dig +short api-int.okd4sno.okd.lan @192.168.1.23
dig +short console-openshift-console.apps.okd4sno.okd.lan @192.168.1.23
Set hostname (optional, if you don't have a DHCP server)
hostnamectl
sudo hostnamectl set-hostname okd4sno.okd.lan
Monitor Installation
Monitor Installation from SNO host
- Start the VM. It will boot from the modified FCOS Live ISO.
On CoreOS, you typically cannot log in from the VM console with a password. The system is designed for SSH key–based login.
Login method: SSH as core from your Admin Client
ubuntu@OKD-Admin-Client-Machine:~/okd-sno-install$ ssh -i ~/.ssh/id_rsa core@192.168.1.51
The authenticity of host '192.168.1.51 (192.168.1.51)' can't be established.
ED25519 key fingerprint is SHA256:+FaXbzR18oR+HTOUlGDNAeBblhjc8EYkRb4pEQo8ys8.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.1.51' (ED25519) to the list of known hosts.
This is the bootstrap node; it will be destroyed when the master is fully up.
The primary services are release-image.service followed by bootkube.service.
To watch their status, run e.g. journalctl -b -f -u release-image.service -u bootkube.service
Common patterns:
- node-image-pull.service: downloads payload images (can take a while)
- release-image.service: prepares the release image
- bootkube.service: brings up the initial control-plane components
You are successfully logged in. That message is normal for bootstrap-in-place: the system starts in a temporary “bootstrap” mode and then transitions to the final single-node control-plane. Because the Ignition config is embedded and bootstrapInPlace.installationDisk is set, FCOS should automatically install itself to the specified disk (/dev/sda) and then reboot.
Watch the bootstrap-in-place services
journalctl -b -f -u release-image.service -u bootkube.service -u node-image-pull.service
When the bootstrap phase finishes, you will see messages like:
Bootstrap completed, server is going to reboot.
The system will reboot at Thu 2025-12-25 05:12:26 UTC!
Monitor Installation from Admin Client Machine
Once the OKD VM has booted from its hard disk and the FCOS installation + Ignition processing is complete, the OKD bootstrap process will start.
# On your Admin Client Machine, in the okd-sno-install directory
openshift-install wait-for bootstrap-complete --log-level=info
# This can take 20-40 minutes.
# Once bootstrap is complete:
openshift-install wait-for install-complete --log-level=info
# This can take another 30-60+ minutes.
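Once the install completes, you can verify CLI access with the generated kubeconfig (paths assume the okd-sno-install directory used above):
export KUBECONFIG="$PWD/auth/kubeconfig"
oc get nodes              # the single node should report Ready
oc get clusteroperators   # all operators should eventually be Available
Web Console Access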
After install-complete finishes:
- Navigate to: https://console-openshift-console.apps.okd4sno.okd.lan
- Login using:
- Username: kubeadmin
- Password: Found in okd-sno-install/auth/kubeadmin-password on your Admin Client Machine.
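Alternatively, log in from the CLI with the same credentials (a sketch; 6443 is the default API port, and the TLS flag may be needed for the cluster's self-signed certificates):
oc login https://api.okd4sno.okd.lan:6443 -u kubeadmin \
  -p "$(cat auth/kubeadmin-password)" --insecure-skip-tls-verify=true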
DNS
For reference, here are BIND-style forward DNS definitions for a multi-node OKD cluster (note the different domain, yanboyang.com, and subnet, 192.168.88.0/24):
; OKD
haproxy IN A 192.168.88.8
helper IN A 192.168.88.8
helper.okd IN A 192.168.88.8
api.okd IN A 192.168.88.8
api-int.okd IN A 192.168.88.8
*.apps.okd IN A 192.168.88.8
bootstrap.okd IN A 192.168.88.12
master0.okd IN A 192.168.88.9
master1.okd IN A 192.168.88.10
master2.okd IN A 192.168.88.11
Reverse DNS
; okd
8 IN PTR haproxy.yanboyang.com.
8 IN PTR helper.yanboyang.com.
8 IN PTR helper.okd.yanboyang.com.
8 IN PTR api.okd.yanboyang.com.
8 IN PTR api-int.okd.yanboyang.com.
12 IN PTR bootstrap.okd.yanboyang.com.
9 IN PTR master0.okd.yanboyang.com.
10 IN PTR master1.okd.yanboyang.com.
11 IN PTR master2.okd.yanboyang.com.
Reference List
- https://github.com/gardart/okd-proxmox-scripts
- https://github.com/pvelati/okd-proxmox-scripts
- https://github.com/pvelati/ansible-okd-proxmox
- https://www.pivert.org/deploy-openshift-okd-on-proxmox-ve-or-bare-metal-tutorial/
- https://andrearaponi.it/devops/deploy-okd-on-proxmox/
- https://docs.okd.io/latest/installing/installing_platform_agnostic/installing-platform-agnostic.html