Lab Environment

This section describes how to deploy your own lab environment.

If you are a Red Hatter, you can order a lab environment on the Red Hat Demo Platform. You just need to order the lab named 5G RAN Deployment on OpenShift.

Lab Requirements

A RHEL 9.X box with access to the Internet. This lab relies on KVM, so the proper virtualization packages must already be installed. It is highly recommended to use a bare-metal host. Our lab environment has the following specs:

  • 64 cores in total, with or without hyperthreading enabled.

  • 200 GiB of memory.

  • 1 TiB of storage.

The /opt directory requires a minimum of 60 GiB of storage for the disconnected registry installation, and the /var directory requires a minimum of 70 GiB for the lab installation. Size the filesystems containing these directories accordingly, as the default partition layout of a RHEL 9.X install does not provide enough storage on the root filesystem for the lab installation.
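As a quick sanity check, you can verify the free space available on the filesystems backing these directories (mount points may differ on your host):

df -h /opt /var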
These instructions have been tested on RHEL 9.3; we cannot guarantee that other operating systems (even RHEL-based ones) will work. We do not provide support outside of RHEL 9.

These are the steps to install the required packages on a RHEL 9.3 server:

dnf -y install libvirt libvirt-daemon-driver-qemu qemu-kvm
usermod -aG qemu,libvirt $(id -un)
newgrp libvirt
systemctl enable libvirtd --now
Verify that the libvirtd systemd service is running successfully by executing systemctl status libvirtd.
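For a scriptable check, systemctl is-active prints active when the daemon is up:

systemctl is-active libvirtd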

Lab Deployment

All the steps in the sections below must be run as the root user on the hypervisor host.

Install kcli

We use kcli for several tasks, such as managing VMs and deploying the first OCP cluster. Additional kcli documentation can be found at https://kcli.readthedocs.io

The commands below must be executed on the hypervisor host as root unless specified otherwise.
dnf -y copr enable karmab/kcli
dnf -y install kcli bash-completion vim jq tar git ipcalc pip
pip install pyopenssl
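To confirm that kcli was installed correctly, you can print its version as a quick smoke test:

kcli version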

Install oc/kubectl CLIs

kcli download oc -P version=stable -P tag='4.14'
kcli download kubectl -P version=stable -P tag='4.14'
mv kubectl oc /usr/bin/
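Verify that both clients are installed and on your PATH:

oc version --client
kubectl version --client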

Configure Disconnected Network

kcli create network -c 192.168.125.0/24 -P dns=false -P dhcp=false --domain 5g-deployment.lab 5gdeploymentlab
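You can confirm that the network was created with virsh; the 5gdeploymentlab network should be listed as active:

virsh net-list --all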

Configure Local DNS/DHCP Server

If you’re using macOS and getting errors while running the sed -i commands, make sure you are using gsed (brew install gnu-sed).
dnf -y install dnsmasq policycoreutils-python-utils
mkdir -p /opt/dnsmasq/include.d/
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.14/lab-materials/lab-env-data/dnsmasq/dnsmasq.conf -o /opt/dnsmasq/dnsmasq.conf
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.14/lab-materials/lab-env-data/dnsmasq/upstream-resolv.conf -o /opt/dnsmasq/upstream-resolv.conf
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.14/lab-materials/lab-env-data/dnsmasq/hub.ipv4 -o /opt/dnsmasq/include.d/hub.ipv4
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.14/lab-materials/lab-env-data/dnsmasq/sno1.ipv4 -o /opt/dnsmasq/include.d/sno1.ipv4
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.14/lab-materials/lab-env-data/dnsmasq/sno2.ipv4 -o /opt/dnsmasq/include.d/sno2.ipv4
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.14/lab-materials/lab-env-data/dnsmasq/infrastructure-host.ipv4 -o /opt/dnsmasq/include.d/infrastructure-host.ipv4
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.14/lab-materials/lab-env-data/dnsmasq/dnsmasq-virt.service -o /etc/systemd/system/dnsmasq-virt.service
touch /opt/dnsmasq/hosts.leases
semanage fcontext -a -t dnsmasq_lease_t /opt/dnsmasq/hosts.leases
restorecon /opt/dnsmasq/hosts.leases
sed -i "s/UPSTREAM_DNS/1.1.1.1/" /opt/dnsmasq/upstream-resolv.conf
systemctl daemon-reload
systemctl enable dnsmasq-virt
systemctl mask dnsmasq

Before starting the dnsmasq-virt service, we must set the proper SELinux contexts on the files that have been downloaded.

semanage fcontext -a -t dnsmasq_etc_t "/opt/dnsmasq/include.d(/.*)?"
semanage fcontext -a -t dnsmasq_lease_t "/opt/dnsmasq/(.*)?\.leases"
semanage fcontext -a -t dnsmasq_etc_t "/opt/dnsmasq/(.*)?\.conf"
semanage fcontext -a -t dnsmasq_etc_t "/opt/dnsmasq"
restorecon -rv /opt/dnsmasq/

Now, you can start the dnsmasq-virt systemd service.

systemctl restart dnsmasq-virt.service

Verify that the dnsmasq-virt service has now started successfully.
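If you have dig available (part of the bind-utils package), you can also test name resolution directly against the lab bridge. Assuming the downloaded include files define the infra host, this should return an address on the 192.168.125.0/24 network:

dig +short infra.5g-deployment.lab @192.168.125.1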

Configure Local DNS as Primary Server

The default upstream DNS server is set to 1.1.1.1 in /opt/dnsmasq/upstream-resolv.conf. In some environments the hypervisor may not be able to reach it; in that case, change it to a different DNS server that can resolve public hostnames. Once changed, remember to restart the dnsmasq-virt service.

curl -L https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.14/lab-materials/lab-env-data/hypervisor/forcedns -o /etc/NetworkManager/dispatcher.d/forcedns
chmod +x /etc/NetworkManager/dispatcher.d/forcedns
systemctl restart NetworkManager
/etc/NetworkManager/dispatcher.d/forcedns
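Assuming the forcedns dispatcher script rewrites the resolver configuration to point at the local dnsmasq, as its name suggests, you can confirm the change took effect:

cat /etc/resolv.conf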

Disable Firewall

For the sake of simplicity we disable the firewall; alternatively, you can create the required firewall rules yourself.

systemctl disable firewalld iptables
systemctl stop firewalld iptables
iptables -F
systemctl restart libvirtd
The iptables service probably does not exist on your server, so disabling it will fail. In that case, just continue with the rest of the commands.

Configure Webcache

The webcache is an Apache httpd container that serves the RHCOS live ISO and rootfs images locally. These images are required for provisioning the OCP clusters.

curl -L https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.14/lab-materials/lab-env-data/webcache/podman-webcache.service -o /etc/systemd/system/podman-webcache.service
mkdir -p /opt/webcache/data
curl -L https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.14/4.14.0/rhcos-4.14.0-x86_64-live-rootfs.x86_64.img -o /opt/webcache/data/rhcos-4.14.0-x86_64-live-rootfs.x86_64.img
curl -L https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.14/4.14.0/rhcos-4.14.0-x86_64-live.x86_64.iso -o /opt/webcache/data/rhcos-4.14.0-x86_64-live.x86_64.iso
systemctl daemon-reload
systemctl enable podman-webcache --now

Verify that the webcache container has started successfully by executing podman ps. If it is not running, check the status of the podman-webcache systemd unit.
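If the container is not running, the unit's journal usually shows why, for example that the image is still being pulled:

journalctl -u podman-webcache --no-pager -n 20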

Install Ksushy Tool

pip3 install cherrypy
kcli create sushy-service --ssl --port 9000
systemctl enable ksushy --now
A message like The unit files have no installation config (WantedBy, RequiredBy, Also, Alias settings in the [Install] section, and DefaultInstance for template units). may appear in the terminal after creating or starting the service. It is just a warning; check that the ksushy service has started successfully.

Verify that the ksushy service is running. Furthermore, you can check that TCP port 9000 is being used by a Python application (ksushy) by executing netstat -lntp | grep 9000.
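Note that netstat is part of net-tools, which may not be present on a RHEL 9 install; ss from iproute provides the same information:

ss -lntp | grep ':9000'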

Configure Disconnected Registry

dnf -y install podman httpd-tools
REGISTRY_NAME=infra.5g-deployment.lab
mkdir -p /opt/registry/{auth,certs,data,conf}
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.14/lab-materials/lab-env-data/registry/registry-key.pem -o /opt/registry/certs/registry-key.pem
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.14/lab-materials/lab-env-data/registry/registry-cert.pem -o /opt/registry/certs/registry-cert.pem
htpasswd -bBc /opt/registry/auth/htpasswd admin r3dh4t1!
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.14/lab-materials/lab-env-data/registry/config.yml -o /opt/registry/conf/config.yml
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.14/lab-materials/lab-env-data/registry/podman-registry.service -o /etc/systemd/system/podman-registry.service
systemctl daemon-reload
systemctl enable podman-registry --now
cp /opt/registry/certs/registry-cert.pem /etc/pki/ca-trust/source/anchors/
update-ca-trust
sleep 10
podman login --authfile auth.json -u admin infra.5g-deployment.lab:8443 -p r3dh4t1!
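As an additional check, you can query the registry's v2 catalog endpoint with the same credentials. The registry certificate was added to the system trust store above, so no extra TLS flags are needed:

curl -u admin:'r3dh4t1!' https://infra.5g-deployment.lab:8443/v2/_catalog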

Configure Git Server

mkdir -p /opt/gitea/
chown -R 1000:1000 /opt/gitea/
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.14/lab-materials/lab-env-data/gitea/podman-gitea.service -o /etc/systemd/system/podman-gitea.service
systemctl daemon-reload
systemctl enable podman-gitea --now
sleep 20
podman exec --user 1000 gitea /bin/sh -c 'gitea admin user create --username student --password student --email student@5g-deployment.lab --must-change-password=false --admin'
curl -u 'student:student' -H 'Content-Type: application/json' -X POST --data '{"service":"2","clone_addr":"https://github.com/RHsyseng/5g-ran-deployments-on-ocp-lab.git","uid":1,"repo_name":"5g-ran-deployments-on-ocp-lab"}' http://infra.5g-deployment.lab:3000/api/v1/repos/migrate
curl -u 'student:student' -H 'Content-Type: application/json' -X POST --data '{"service":"2","clone_addr":"https://github.com/RHsyseng/5g-ran-lab-aap-integration-tools.git","uid":1,"repo_name":"aap-integration-tools"}' http://infra.5g-deployment.lab:3000/api/v1/repos/migrate
The last few commands may not work on the first attempt, since the Gitea container image is still being pulled and started. If that’s the case, retry them after a couple of seconds.
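You can verify that both repositories were migrated by querying Gitea's repository search API (jq was installed earlier):

curl -su 'student:student' http://infra.5g-deployment.lab:3000/api/v1/repos/search | jq '.data[].full_name'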

Configure NTP Server

dnf install chrony -y
cat <<EOF > /etc/chrony.conf
server time.cloudflare.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
bindcmdaddress ::
allow 192.168.125.0/24
EOF
systemctl enable chronyd --now
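Verify that chrony is running and has synchronized with the upstream server:

chronyc sources
chronyc tracking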

Configure Access to Cluster Apps

In order to access the hub cluster, we will deploy an HAProxy instance listening on the public interface of the hypervisor host.

dnf install haproxy -y
semanage port -a -t http_port_t -p tcp 6443
curl -L https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.14/lab-materials/lab-env-data/haproxy/haproxy.cfg -o /etc/haproxy/haproxy.cfg
systemctl enable haproxy --now
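Verify that HAProxy started and is listening. Port 6443 for the API is covered by the semanage rule above; the remaining ports depend on the downloaded haproxy.cfg:

ss -lntp | grep haproxy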

After that, you need to add an entry to your laptop’s local /etc/hosts file. This entry will let you connect to the different services exposed by the lab host. Notice that:

  • HYPERVISOR_REACHABLE_IP is the IP address of the lab server you are configuring. It must be an IP address reachable from your laptop, usually the one you use to connect to the lab server via SSH.

For example, your lab server should now have interfaces (podman0, virbr0, and 5gdeploymentlab) similar to the ones on our lab host:

ip -o a
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
1: lo    inet6 ::1/128 scope host \       valid_lft forever preferred_lft forever
2: ens1f0    inet 10.19.32.199/26 brd 10.19.32.255 scope global dynamic noprefixroute ens1f0\       valid_lft 13481sec preferred_lft 13481sec
2: ens1f0    inet6 2620:52:0:1343::8d/128 scope global dynamic noprefixroute \       valid_lft 13481sec preferred_lft 13481sec
2: ens1f0    inet6 2620:52:0:1343:e643:4bff:febd:9046/64 scope global dynamic noprefixroute \       valid_lft 2591777sec preferred_lft 604577sec
2: ens1f0    inet6 fe80::e643:4bff:febd:9046/64 scope link noprefixroute \       valid_lft forever preferred_lft forever
6: virbr0    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0\       valid_lft forever preferred_lft forever
7: 5gdeploymentlab    inet 192.168.125.1/24 brd 192.168.125.255 scope global 5gdeploymentlab\       valid_lft forever preferred_lft forever
8: podman0    inet 10.88.0.1/16 brd 10.88.255.255 scope global podman0\       valid_lft forever preferred_lft forever
8: podman0    inet6 fe80::b85e:c8ff:feb9:e105/64 scope link \       valid_lft forever preferred_lft forever
9: veth0    inet6 fe80::5828:77ff:fe5d:869f/64 scope link \       valid_lft forever preferred_lft forever

Then, obtain the IP you are connecting to via SSH; in this example it is 10.19.32.199. Finally, append the following entry to your laptop’s local /etc/hosts, replacing <HYPERVISOR_REACHABLE_IP> with that IP:

<HYPERVISOR_REACHABLE_IP> infra.5g-deployment.lab api.hub.5g-deployment.lab multicloud-console.apps.hub.5g-deployment.lab console-openshift-console.apps.hub.5g-deployment.lab oauth-openshift.apps.hub.5g-deployment.lab openshift-gitops-server-openshift-gitops.apps.hub.5g-deployment.lab assisted-service-multicluster-engine.apps.hub.5g-deployment.lab automation-hub-aap.apps.hub.5g-deployment.lab automation-aap.apps.hub.5g-deployment.lab api.sno1.5g-deployment.lab api.sno2.5g-deployment.lab

For example:

10.19.32.199 infra.5g-deployment.lab api.hub.5g-deployment.lab multicloud-console.apps.hub.5g-deployment.lab console-openshift-console.apps.hub.5g-deployment.lab oauth-openshift.apps.hub.5g-deployment.lab openshift-gitops-server-openshift-gitops.apps.hub.5g-deployment.lab assisted-service-multicluster-engine.apps.hub.5g-deployment.lab automation-hub-aap.apps.hub.5g-deployment.lab automation-aap.apps.hub.5g-deployment.lab api.sno1.5g-deployment.lab api.sno2.5g-deployment.lab

Create SNO Nodes VMs

Before running the following commands, make sure you have generated an SSH key pair in the default location ~/.ssh/.

ssh-keygen -t rsa -b 2048

That SSH key will allow you to connect to the VMs you are about to create:

kcli create pool -p /var/lib/libvirt/images default
kcli create vm -P start=False -P uefi_legacy=true -P plan=hub -P memory=24000 -P numcpus=12 -P disks=[200,200] -P nets=['{"name": "5gdeploymentlab", "mac": "aa:aa:aa:aa:02:01"}'] -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0201 -P name=sno1
kcli create vm -P start=False -P uefi_legacy=true -P plan=hub -P memory=24000 -P numcpus=12 -P disks=[200,200] -P nets=['{"name": "5gdeploymentlab", "mac": "aa:aa:aa:aa:03:01"}'] -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0301 -P name=sno2

If you need or want to connect to any of the VMs, once they are started, you can do so by just executing:

kcli ssh <VM_name>

Deploy OpenShift Hub Cluster

This step requires a valid OpenShift pull secret placed in /root/openshift_pull.json. Notice that you can replace the admin or developer passwords shown below with any others.
If you’re using macOS and getting errors while running the sed -i commands, make sure you are using gnu-sed by executing brew install gnu-sed.
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.14/lab-materials/lab-env-data/hub-cluster/hub.yml -o /root/hub.yml
sed -i "s/CHANGE_ADMIN_PWD/admin/" hub.yml
sed -i "s/CHANGE_DEV_PWD/developer/" hub.yml
cd /root/
kcli create cluster openshift --pf hub.yml

This will take around 40 minutes to complete.
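While the deployment runs, you can follow the cluster bring-up from another terminal once the API is reachable, using the kubeconfig that kcli generates (the same path used in the next section):

export KUBECONFIG=~/.kcli/clusters/hub/auth/kubeconfig
oc get clusterversion
oc get nodes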

If the installation fails for whatever reason, you will need to delete all the VMs that were created and run the same procedure again. First, remove the plan, which will remove all of its VMs:

# kcli list plan
+------+--------------------------------------------------------+
| Plan |                          Vms                           |
+------+--------------------------------------------------------+
| hub  | hub-ctlplane-0,hub-ctlplane-1,hub-ctlplane-2,sno1,sno2 |
+------+--------------------------------------------------------+

# kcli delete plan hub -y
hub-ctlplane-0 deleted on local!
hub-ctlplane-1 deleted on local!
hub-ctlplane-2 deleted on local!
sno1 deleted on local!
sno2 deleted on local!
Plan hub deleted!

Then create the SNO VMs again as explained in the Create SNO Nodes VMs section, and re-run the deployment described in this section, Deploy OpenShift Hub Cluster.

Configure OpenShift Hub Cluster

export KUBECONFIG=~/.kcli/clusters/hub/auth/kubeconfig
oc -n openshift-storage wait lvmcluster lvmcluster --for=jsonpath='{.status.state}'=Ready --timeout=900s
oc patch storageclass lvms-vg1 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
curl https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.14/lab-materials/lab-env-data/hub-cluster/argocd-patch.json -o /tmp/argopatch.json
oc patch argocd openshift-gitops -n openshift-gitops --type=merge --patch-file /tmp/argopatch.json
oc wait --for=condition=Ready pod -lapp.kubernetes.io/name=openshift-gitops-repo-server -n openshift-gitops
oc -n openshift-gitops adm policy add-cluster-role-to-user cluster-admin -z openshift-gitops-argocd-application-controller
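You can confirm that lvms-vg1 is now the default storage class; it should be marked as (default) in the output:

oc get storageclass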

Deploy SNO1 Cluster (without ZTP)

An SNO cluster is deployed outside the ZTP workflow so that students can import it and see how that workflow works.

Once the cluster is deployed:

oc -n sno1 get agentclusterinstall,agent
NAME                                                    CLUSTER   STATE
agentclusterinstall.extensions.hive.openshift.io/sno1   sno1      adding-hosts

NAME                                                                    CLUSTER   APPROVED   ROLE     STAGE
agent.agent-install.openshift.io/aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0201   sno1      true       master   Done

The kubeconfig can be gathered as follows:

oc extract secret/sno1-admin-kubeconfig --to=- -n sno1 > /root/sno1kubeconfig

Now, with the proper credentials you can check the status of the SNO1 cluster:

oc --kubeconfig /root/sno1kubeconfig get nodes,clusterversion

NAME                      STATUS   ROLES                         AGE     VERSION
node/openshift-master-0   Ready    control-plane,master,worker   94m     v1.27.6+f67aeb3

NAME                                         VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
clusterversion.config.openshift.io/version   4.14.0    True        False         75m     Cluster version is 4.14.0

Configure Ansible Automation Platform

This step requires a valid Ansible Automation Platform manifest. You can read the official docs to learn how to get one.

Before proceeding, you need to wait for AAP to be fully deployed on the hub cluster.

This configuration requires Ansible to be installed on the hypervisor host, along with some Ansible collections.

dnf install ansible-core python3.11-pip -y
python3.11 -m pip install kubernetes
ansible-galaxy collection install awx.awx
ansible-galaxy collection install infra.controller_configuration
ansible-galaxy collection install kubernetes.core
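You can confirm that the collections are available to Ansible:

ansible-galaxy collection list | grep -E 'awx.awx|infra.controller_configuration|kubernetes.core'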

Next, let’s download a playbook that will configure the automation controller for us:

Change the aap_manifest_file_path variable to match the path where you stored the manifest on the hypervisor host, and change the strong_student_password variable to set a password for the AAP student user.
curl -L https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.14/lab-materials/lab-env-data/aap2/configure-aap.yaml -o /root/configure-aap.yaml
ansible-playbook /root/configure-aap.yaml -e strong_student_password=yourstrongstudentpassword -e aap_manifest_file_path=/path/to/your/manifest -e ansible_python_interpreter=/usr/bin/python3.11

Once the playbook finishes, you should have access to the AAP Controller at https://automation-aap.apps.hub.5g-deployment.lab with the student user and the password you configured.