Lab Environment

This section describes how to deploy your own lab environment.

If you are a Red Hatter, you can order a lab environment on the Red Hat Demo Platform. You just need to order the lab named 5G RAN Deployment on OpenShift.

Lab Requirements

A RHEL 8.x box with access to the Internet. This lab relies on KVM, so the proper virtualization packages need to be installed. Using a bare-metal host is highly recommended. Our lab environment has the following specs:

  • 64 CPUs (with or without hyperthreading)

  • 200 GiB of memory.

  • 1 TiB of storage.

IMPORTANT: These instructions have been tested on RHEL 8.7; we cannot guarantee that other operating systems (even RHEL-based ones) will work. We will not provide support outside of RHEL 8.
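
As a quick sanity check, you can confirm that the host exposes hardware virtualization and roughly matches the sizing above; these are only illustrative checks (a non-zero count from the first command means the VT-x/AMD-V extensions are available):

grep -c -E 'vmx|svm' /proc/cpuinfo
nproc
free -g
lsblk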

These are the steps to install the required packages on a RHEL 8.7 server:

yum -y install libvirt libvirt-daemon-driver-qemu qemu-kvm
usermod -aG qemu,libvirt $(id -un)
newgrp libvirt
systemctl enable --now libvirtd
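
Before moving on, you can optionally verify that libvirt is active and reachable:

systemctl is-active libvirtd
virsh list --all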

Lab Deployment

All the steps in the sections below must be run as the root user on the hypervisor host.

Install kcli

We use kcli for several tasks, such as managing VMs and deploying the first OCP cluster. Additional kcli documentation can be found at https://kcli.readthedocs.io

Unless specified otherwise, the commands below must be executed from the hypervisor host as root.

dnf -y copr enable karmab/kcli
dnf -y install kcli bash-completion vim jq tar git ipcalc
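
As an optional check, kcli should now be able to talk to the local libvirt daemon; on a fresh hypervisor the following returns an empty table:

kcli list vm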

Install oc/kubectl CLIs

kcli download oc -P version=stable -P tag='4.12'
kcli download kubectl -P version=stable -P tag='4.12'
mv kubectl oc /usr/bin/
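
You can optionally verify the client binaries:

oc version --client
kubectl version --client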

Configure Disconnected Network

kcli create network -c 192.168.125.0/24 --nodhcp --domain 5g-deployment.lab 5gdeploymentlab
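
If you want to double-check that the network was created, either of the following should list 5gdeploymentlab:

kcli list network
virsh net-list --all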

Configure Local DNS/DHCP Server

dnf -y install dnsmasq policycoreutils-python-utils
mkdir -p /opt/dnsmasq/include.d/
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.12/lab-materials/lab-env-data/dnsmasq/dnsmasq.conf -o /opt/dnsmasq/dnsmasq.conf
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.12/lab-materials/lab-env-data/dnsmasq/upstream-resolv.conf -o /opt/dnsmasq/upstream-resolv.conf
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.12/lab-materials/lab-env-data/dnsmasq/hub.ipv4 -o /opt/dnsmasq/include.d/hub.ipv4
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.12/lab-materials/lab-env-data/dnsmasq/sno1.ipv4 -o /opt/dnsmasq/include.d/sno1.ipv4
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.12/lab-materials/lab-env-data/dnsmasq/sno2.ipv4 -o /opt/dnsmasq/include.d/sno2.ipv4
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.12/lab-materials/lab-env-data/dnsmasq/infrastructure-host.ipv4 -o /opt/dnsmasq/include.d/infrastructure-host.ipv4
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.12/lab-materials/lab-env-data/dnsmasq/dnsmasq-virt.service -o /etc/systemd/system/dnsmasq-virt.service
touch /opt/dnsmasq/hosts.leases
semanage fcontext -a -t dnsmasq_lease_t /opt/dnsmasq/hosts.leases
restorecon /opt/dnsmasq/hosts.leases
sed -i "s/UPSTREAM_DNS/1.1.1.1/" /opt/dnsmasq/upstream-resolv.conf
systemctl daemon-reload
systemctl enable --now dnsmasq-virt
systemctl mask dnsmasq
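
Optionally, confirm that the service came up cleanly:

systemctl is-active dnsmasq-virt
journalctl -u dnsmasq-virt --no-pager -n 20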

Configure Local DNS as Primary Server

The default upstream DNS is set to 1.1.1.1 in /opt/dnsmasq/upstream-resolv.conf. In some environments the hypervisor may not be able to reach it; in that case, change it to a DNS server that can resolve public hostnames and restart the dnsmasq-virt service.

curl -L https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.12/lab-materials/lab-env-data/hypervisor/forcedns -o /etc/NetworkManager/dispatcher.d/forcedns
chmod +x /etc/NetworkManager/dispatcher.d/forcedns
systemctl restart NetworkManager
/etc/NetworkManager/dispatcher.d/forcedns
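
At this point the hypervisor should resolve lab hostnames through the local dnsmasq. As an optional check:

cat /etc/resolv.conf
getent hosts infra.5g-deployment.lab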

Disable Firewall

You can create the required firewall rules instead if you prefer, but for the sake of simplicity we are disabling the firewall.

systemctl disable firewalld iptables
systemctl stop firewalld iptables
iptables -F

Configure Webcache

curl -L https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.12/lab-materials/lab-env-data/webcache/podman-webcache.service -o /etc/systemd/system/podman-webcache.service
mkdir -p /opt/webcache/data
curl -L https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.12/4.12.3/rhcos-4.12.3-x86_64-live-rootfs.x86_64.img -o /opt/webcache/data/rhcos-4.12.3-x86_64-live-rootfs.x86_64.img
curl -L https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.12/4.12.3/rhcos-4.12.3-x86_64-live.x86_64.iso -o /opt/webcache/data/rhcos-4.12.3-x86_64-live.x86_64.iso
systemctl daemon-reload
systemctl enable podman-webcache --now
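
Optionally, confirm that the webcache service is up and both RHCOS artifacts were downloaded:

systemctl is-active podman-webcache
ls -lh /opt/webcache/data/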

Install Sushy Tools

curl -L https://gist.githubusercontent.com/mvazquezc/0acb9e716c329abb9a184f1bcceed591/raw/4ae558082a3289d5d46be7d745bc2474e834e238/deploy-sushy-tools.sh -o /tmp/deploy-sushy-tools.sh
chmod +x /tmp/deploy-sushy-tools.sh
/tmp/deploy-sushy-tools.sh
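
The script deploys the sushy-tools Redfish emulator as a container; the container name and listening port are whatever the script configures. As an optional check, make sure it shows up as running:

podman ps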

Configure Disconnected Registry

dnf -y install podman httpd-tools
REGISTRY_NAME=infra.5g-deployment.lab
mkdir -p /opt/registry/{auth,certs,data,conf}
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.12/lab-materials/lab-env-data/registry/registry-key.pem -o /opt/registry/certs/registry-key.pem
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.12/lab-materials/lab-env-data/registry/registry-cert.pem -o /opt/registry/certs/registry-cert.pem
htpasswd -bBc /opt/registry/auth/htpasswd admin r3dh4t1!
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.12/lab-materials/lab-env-data/registry/config.yml -o /opt/registry/conf/config.yml
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.12/lab-materials/lab-env-data/registry/podman-registry.service -o /etc/systemd/system/podman-registry.service
systemctl daemon-reload
systemctl enable podman-registry --now
cp /opt/registry/certs/registry-cert.pem /etc/pki/ca-trust/source/anchors/
update-ca-trust
podman login --authfile auth.json -u admin infra.5g-deployment.lab:8443 -p r3dh4t1!
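
The registry should now answer API requests. For example, listing the catalog (empty on a fresh deployment):

curl -u admin:r3dh4t1! https://infra.5g-deployment.lab:8443/v2/_catalog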

Configure Git Server

mkdir -p /opt/gitea/
chown -R 1000:1000 /opt/gitea/
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.12/lab-materials/lab-env-data/gitea/podman-gitea.service -o /etc/systemd/system/podman-gitea.service
systemctl daemon-reload
systemctl enable podman-gitea --now
podman exec --user 1000 gitea /bin/sh -c 'gitea admin user create --username student --password student --email student@5g-deployment.lab --must-change-password=false --admin'
curl -u 'student:student' -H 'Content-Type: application/json' -X POST --data '{"service":"2","clone_addr":"https://github.com/RHsyseng/5g-ran-deployments-on-ocp-lab.git","uid":1,"repo_name":"5g-ran-deployments-on-ocp-lab"}' http://infra.5g-deployment.lab:3000/api/v1/repos/migrate

The last two commands might fail on the first attempt because the Gitea container image is still being pulled and started. If that is the case, retry them after a few seconds.
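
Assuming the migration succeeded and the repository was created under the student user, you can verify it through the Gitea API:

curl -s -u 'student:student' http://infra.5g-deployment.lab:3000/api/v1/repos/student/5g-ran-deployments-on-ocp-lab | jq .name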

Configure NTP Server

dnf install chrony -y
cat <<EOF > /etc/chrony.conf
server time.cloudflare.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
bindcmdaddress ::
allow 192.168.125.0/24
EOF
systemctl enable chronyd --now
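
Optionally, verify that chrony is synchronizing against the configured server:

chronyc sources
chronyc tracking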

Configure Access to Cluster Apps

In order to access the hub cluster, we will deploy an HAProxy instance listening on the public interface of the hypervisor host.

dnf install haproxy -y
semanage port -a -t http_port_t -p tcp 6443
curl -L https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.12/lab-materials/lab-env-data/haproxy/haproxy.cfg -o /etc/haproxy/haproxy.cfg
systemctl enable haproxy --now
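
Optionally, check that HAProxy is listening on port 6443 for the API plus whatever ingress ports the downloaded haproxy.cfg defines:

ss -tlnp | grep haproxy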

After that, you need to add the following entry to your /etc/hosts file, replacing <HYPERVISOR_REACHABLE_IP> with an IP address of the hypervisor that is reachable from your workstation:

<HYPERVISOR_REACHABLE_IP> infra.5g-deployment.lab api.hub.5g-deployment.lab multicloud-console.apps.hub.5g-deployment.lab console-openshift-console.apps.hub.5g-deployment.lab oauth-openshift.apps.hub.5g-deployment.lab openshift-gitops-server-openshift-gitops.apps.hub.5g-deployment.lab api.sno1.5g-deployment.lab api.sno2.5g-deployment.lab

Create OpenShift Nodes VMs

Before running the following commands, make sure you have generated an SSH key pair in the default location (~/.ssh/). That SSH key will allow you to connect to the VMs you are about to create:

kcli create pool -p /var/lib/libvirt/images default
kcli create vm -P start=False -P uefi_legacy=true -P plan=hub -P memory=48000 -P numcpus=16 -P disks=[200,200] -P nets=['{"name": "5gdeploymentlab", "mac": "aa:aa:aa:aa:01:01"}'] -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0101 -P name=hub-master0
kcli create vm -P start=False -P uefi_legacy=true -P plan=hub -P memory=48000 -P numcpus=16 -P disks=[200,200] -P nets=['{"name": "5gdeploymentlab", "mac": "aa:aa:aa:aa:01:02"}'] -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0102 -P name=hub-master1
kcli create vm -P start=False -P uefi_legacy=true -P plan=hub -P memory=48000 -P numcpus=16 -P disks=[200,200] -P nets=['{"name": "5gdeploymentlab", "mac": "aa:aa:aa:aa:01:03"}'] -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0103 -P name=hub-master2
kcli create vm -P start=False -P uefi_legacy=true -P plan=hub -P memory=24000 -P numcpus=12 -P disks=[200,200] -P nets=['{"name": "5gdeploymentlab", "mac": "aa:aa:aa:aa:02:01"}'] -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0201 -P name=sno1
kcli create vm -P start=False -P uefi_legacy=true -P plan=hub -P memory=24000 -P numcpus=12 -P disks=[200,200] -P nets=['{"name": "5gdeploymentlab", "mac": "aa:aa:aa:aa:03:01"}'] -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0301 -P name=sno2
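
The five VMs are created in a stopped state; you can list them with:

kcli list vm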

If you need or want to connect to any of the VMs, you can do so by executing:

kcli ssh <VM_name>

Deploy OpenShift Hub Cluster

This step requires a valid OpenShift pull secret placed in /root/openshift_pull.json. Note that you can replace the admin and developer passwords shown below with any other value.

git clone https://github.com/karmab/kcli-openshift4-baremetal.git
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.12/lab-materials/lab-env-data/hub-cluster/hub.yml -o /root/kcli-openshift4-baremetal/hub.yml
cp /root/openshift_pull.json /root/kcli-openshift4-baremetal/openshift_pull.json
cd /root/kcli-openshift4-baremetal/
sed -i "s/CHANGE_ADMIN_PWD/admin/" hub.yml
sed -i "s/CHANGE_DEV_PWD/developer/" hub.yml
kcli create plan --pf hub.yml --force

This will take around one hour to complete. You can follow the progress by running kcli console -s.

If the installation fails for any reason, you will need to delete all the VMs that were created and run the same procedure again. First, remove the plans, which will also remove all their VMs:

# kcli list plan
+-------------+-----------------------------------------------+
|     Plan    |                      Vms                      |
+-------------+-----------------------------------------------+
| hub-cluster |                 hub-installer                 |
|     hub     | hub-master0,hub-master1,hub-master2,sno1,sno2 |
+-------------+-----------------------------------------------+

# kcli delete plan hub-cluster hub -y
hub-installer deleted on local!
Plan hub-cluster deleted!
hub-master0 deleted on local!
hub-master1 deleted on local!
hub-master2 deleted on local!
sno1 deleted on local!
sno2 deleted on local!
Plan hub deleted!

And then create the VMs again as explained in the previous section, Create OpenShift Nodes VMs, and re-run the deployment steps in this section.

Configure OpenShift Hub Cluster

cd ~
kcli ssh hub-installer -- "sudo cp /root/ocp/auth/kubeconfig /tmp/kubeconfig && sudo chmod 644 /tmp/kubeconfig"
kcli scp hub-installer:/tmp/kubeconfig ~/hub-kubeconfig
export KUBECONFIG=~/hub-kubeconfig
oc apply -f https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.12/lab-materials/lab-env-data/hub-cluster/lvmcluster.yaml
oc -n openshift-storage wait lvmcluster odf-lvmcluster --for=jsonpath='{.status.state}'=Ready --timeout=900s
oc patch storageclass lvms-vg1 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
curl https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.12/lab-materials/lab-env-data/hub-cluster/argocd-patch.json -o /tmp/argopatch.json
oc patch argocd openshift-gitops -n openshift-gitops --type=merge --patch-file /tmp/argopatch.json
oc wait --for=condition=Ready pod -lapp.kubernetes.io/name=openshift-gitops-repo-server -n openshift-gitops
oc -n openshift-gitops adm policy add-cluster-role-to-user cluster-admin -z openshift-gitops-argocd-application-controller
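
As an optional check, lvms-vg1 should now be flagged as the default storage class and the GitOps pods should be running:

oc get storageclass
oc -n openshift-gitops get pods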

Deploy SNO1 Cluster (without ZTP)

An SNO cluster is deployed outside the ZTP workflow so that students can import it later and see how that process works.

Once the cluster is deployed, you can check the installation status and gather the kubeconfig as follows:

oc -n sno1 get agentclusterinstall,agent
NAME                                                    CLUSTER   STATE
agentclusterinstall.extensions.hive.openshift.io/sno1   sno1      adding-hosts

NAME                                                                    CLUSTER   APPROVED   ROLE     STAGE
agent.agent-install.openshift.io/aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0201   sno1      true       master   Done

oc extract secret/sno1-admin-kubeconfig --to=- -n sno1 > /root/sno1kubeconfig
oc --kubeconfig /root/sno1kubeconfig get nodes,clusterversion

NAME                      STATUS   ROLES                         AGE     VERSION
node/openshift-master-0   Ready    control-plane,master,worker   46m     v1.25.4+a34b9e9

NAME                                         VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
clusterversion.config.openshift.io/version   4.12.3    True        False         22m     Cluster version is 4.12.3