Lab Environment
This section describes how to deploy your own lab environment.
If you are a Red Hatter, you can order a lab environment on the Red Hat Demo Platform. You just need to order the lab named 5G RAN Deployment on OpenShift.
Lab Requirements
A RHEL 9.x box with access to the Internet. This lab relies on KVM, so you need the proper virtualization packages already installed. A bare-metal host is highly recommended. Our lab environment has the following specs:
- 64 cores in total, with or without hyperthreading enabled.
- 200 GiB of memory.
- 1 TiB of storage.
The /opt directory requires a minimum of 60 GiB of storage for the disconnected registry installation, and the /var directory requires a minimum of 70 GiB for the lab installation. Size the filesystems containing these directories accordingly, as the default partition layout of a RHEL 9.x install does not provide enough storage on the root filesystem for the lab installation.
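You can check the space available on the filesystems backing these directories with df:
df -h /opt /var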
These instructions have been tested on RHEL 9.4; we cannot guarantee that other operating systems (even RHEL-based ones) will work. We do not provide support outside RHEL 9.
These are the steps to install the required packages on a RHEL 9.4 server:
dnf -y install libvirt libvirt-daemon-driver-qemu qemu-kvm
usermod -aG qemu,libvirt $(id -un)
newgrp libvirt
systemctl enable libvirtd --now
Verify that the libvirtd systemd service is running successfully by executing systemctl status libvirtd.
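For example, to check both the full unit status and just the active state:
systemctl status libvirtd --no-pager
systemctl is-active libvirtd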
Lab Deployment
All the steps in the sections below must be run as the root user on the hypervisor host.
Install kcli
We use kcli for several tasks, such as managing VMs and deploying the first OCP cluster. Additional kcli documentation can be found at https://kcli.readthedocs.io
The commands below must be executed on the hypervisor host as root unless specified otherwise.
dnf -y install bash-completion vim jq tar git ipcalc pip https://github.com/RHsyseng/5g-ran-deployments-on-ocp-lab/releases/download/4.18/kcli-99.0.0.git.202503261511.dccfb3f-0.el9.x86_64.rpm
pip install pyopenssl
Install oc/kubectl CLIs
kcli download oc -P version=stable -P tag='4.18'
kcli download kubectl -P version=latest -P tag='4.18'
mv kubectl oc /usr/bin/
Configure Disconnected Networks
kcli create network -c 192.168.125.0/24 -P dns=false -P dhcp=false --domain 5g-deployment.lab 5gdeploymentlab
kcli create network -c 192.168.100.0/24 -P dhcp=false -P dns=false --domain sriov-network.lab -i sriov-network
Configure Local DNS/DHCP Server
If you are using macOS and get errors while running sed -i commands, make sure you are using gsed: brew install gnu-sed.
dnf -y install dnsmasq policycoreutils-python-utils
mkdir -p /opt/dnsmasq/include.d/
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/lab-env-data/dnsmasq/dnsmasq.conf -o /opt/dnsmasq/dnsmasq.conf
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/lab-env-data/dnsmasq/upstream-resolv.conf -o /opt/dnsmasq/upstream-resolv.conf
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/lab-env-data/dnsmasq/hub.ipv4 -o /opt/dnsmasq/include.d/hub.ipv4
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/lab-env-data/dnsmasq/sno1.ipv4 -o /opt/dnsmasq/include.d/sno1.ipv4
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/lab-env-data/dnsmasq/sno2.ipv4 -o /opt/dnsmasq/include.d/sno2.ipv4
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/lab-env-data/dnsmasq/infrastructure-host.ipv4 -o /opt/dnsmasq/include.d/infrastructure-host.ipv4
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/lab-env-data/dnsmasq/dnsmasq-virt.service -o /etc/systemd/system/dnsmasq-virt.service
touch /opt/dnsmasq/hosts.leases
semanage fcontext -a -t dnsmasq_lease_t /opt/dnsmasq/hosts.leases
restorecon /opt/dnsmasq/hosts.leases
sed -i "s/UPSTREAM_DNS/1.1.1.1/" /opt/dnsmasq/upstream-resolv.conf
systemctl daemon-reload
systemctl enable dnsmasq-virt
systemctl mask dnsmasq
Before starting the dnsmasq-virt service, we must set the proper SELinux contexts on the files that were downloaded.
semanage fcontext -a -t dnsmasq_etc_t "/opt/dnsmasq/include.d(/.*)?"
semanage fcontext -a -t dnsmasq_lease_t "/opt/dnsmasq/(.*)?\.leases"
semanage fcontext -a -t dnsmasq_etc_t "/opt/dnsmasq/(.*)?\.conf"
semanage fcontext -a -t dnsmasq_etc_t "/opt/dnsmasq"
restorecon -rv /opt/dnsmasq/
Now, you can start the dnsmasq-virt systemd service.
systemctl restart dnsmasq-virt.service
Verify that the dnsmasq-virt service has started successfully.
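For example (dig requires the bind-utils package; 192.168.125.1 is assumed to be the hypervisor's address on the 5gdeploymentlab network):
systemctl status dnsmasq-virt --no-pager
# Resolve a lab hostname against the local dnsmasq instance:
dig +short infra.5g-deployment.lab @192.168.125.1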
Configure Local DNS as Primary Server
The default upstream DNS is set to 1.1.1.1 in /opt/dnsmasq/upstream-resolv.conf. In some environments the hypervisor may not be able to reach it; in that case, you must change it to a different DNS server that can resolve public hostnames. Once changed, remember to restart the dnsmasq-virt service.
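For example, to switch to a hypothetical local resolver at 10.0.0.53 (replace with one reachable from your hypervisor):
sed -i "s/1.1.1.1/10.0.0.53/" /opt/dnsmasq/upstream-resolv.conf
systemctl restart dnsmasq-virt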
curl -L https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/lab-env-data/hypervisor/forcedns -o /etc/NetworkManager/dispatcher.d/forcedns
chmod +x /etc/NetworkManager/dispatcher.d/forcedns
systemctl restart NetworkManager
/etc/NetworkManager/dispatcher.d/forcedns
Disable Firewall
You can create the required firewall rules instead if you prefer, but for the sake of simplicity we disable the firewall.
systemctl disable firewalld
systemctl stop firewalld
iptables -F
systemctl restart libvirtd
The iptables.service unit probably does not exist on your server, so disabling it fails. In that case, just continue with the rest of the commands.
Configure Webcache
The webcache is basically an Apache httpd container that serves the RHCOS live ISO and rootfs images locally. These images are required for provisioning OCP clusters.
curl -L https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/lab-env-data/webcache/podman-webcache.service -o /etc/systemd/system/podman-webcache.service
mkdir -p /opt/webcache/data
curl -L https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.18/4.18.1/rhcos-4.18.1-x86_64-live-rootfs.x86_64.img -o /opt/webcache/data/rhcos-4.18.1-x86_64-live-rootfs.x86_64.img
curl -L https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.18/4.18.1/rhcos-4.18.1-x86_64-live.x86_64.iso -o /opt/webcache/data/rhcos-4.18.1-x86_64-live.x86_64.iso
systemctl daemon-reload
systemctl enable podman-webcache --now
Verify that the webcache container has started successfully by executing podman ps. If it is not running, check the status of the podman-webcache systemd unit.
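You can also confirm that both images were fully downloaded:
podman ps
ls -lh /opt/webcache/data/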
Configure S3 Storage
We need a few S3 buckets. They are required for updating SNOs using the Image Based Upgrade (IBU).
curl -L https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/lab-env-data/minio/podman-minio.service -o /etc/systemd/system/podman-minio.service
mkdir -p /opt/minio/s3-volume
systemctl daemon-reload
systemctl enable podman-minio --now
Verify that the minio container has started successfully by executing podman ps. If it is not running, check the status of the podman-minio systemd unit.
Now we configure the buckets:
curl -L https://dl.min.io/client/mc/release/linux-amd64/mc -o /usr/local/bin/mc
chmod +x /usr/local/bin/mc
mc alias set minio http://192.168.125.1:9002 admin admin1234
mc mb minio/sno-abi
mc mb minio/sno-ibi
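Verify that both buckets were created by listing them through the alias:
mc ls minio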
Install Ksushy Tool
pip3 install cherrypy
kcli create sushy-service --ssl --port 9000
systemctl enable ksushy --now
Verify that the ksushy service is running. Furthermore, you can check that TCP port 9000 is being used by a Python application (ksushy) by executing ss -tupln | grep 9000.
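For example:
systemctl status ksushy --no-pager
ss -tupln | grep 9000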
Configure Disconnected Registry
dnf -y install podman httpd-tools
REGISTRY_NAME=infra.5g-deployment.lab
mkdir -p /opt/registry/{auth,certs,data,conf}
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/lab-env-data/registry/registry-key.pem -o /opt/registry/certs/registry-key.pem
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/lab-env-data/registry/registry-cert.pem -o /opt/registry/certs/registry-cert.pem
htpasswd -bBc /opt/registry/auth/htpasswd admin r3dh4t1!
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/lab-env-data/registry/config.yml -o /opt/registry/conf/config.yml
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/lab-env-data/registry/podman-registry.service -o /etc/systemd/system/podman-registry.service
systemctl daemon-reload
systemctl enable podman-registry --now
cp /opt/registry/certs/registry-cert.pem /etc/pki/ca-trust/source/anchors/
update-ca-trust
sleep 10
podman login --authfile auth.json -u admin infra.5g-deployment.lab:8443 -p r3dh4t1!
Link to additional documentation on configuring a disconnected registry: https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/disconnected_environments/mirroring-in-disconnected-environments#installing-mirroring-creating-registry
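A quick sanity check against the registry's v2 API, using the credentials configured above, should return the (currently empty) repository catalog:
curl -u admin:r3dh4t1! https://infra.5g-deployment.lab:8443/v2/_catalog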
Configure Git Server
mkdir -p /opt/gitea/
chown -R 1000:1000 /opt/gitea/
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/lab-env-data/gitea/podman-gitea.service -o /etc/systemd/system/podman-gitea.service
systemctl daemon-reload
systemctl enable podman-gitea --now
sleep 20
podman exec --user 1000 gitea /bin/sh -c 'gitea admin user create --username student --password student --email student@5g-deployment.lab --must-change-password=false --admin'
curl -u 'student:student' -H 'Content-Type: application/json' -X POST --data '{"service":"2","clone_addr":"https://github.com/RHsyseng/5g-ran-deployments-on-ocp-lab.git","uid":1,"repo_name":"5g-ran-deployments-on-ocp-lab"}' http://infra.5g-deployment.lab:3000/api/v1/repos/migrate
The last few commands may not work on the first try, since the Gitea container image is still being pulled and started. If that is the case, retry them after a few seconds.
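Assuming the migration succeeded, the repository is visible through the Gitea API (jq was installed earlier):
curl -s -u 'student:student' http://infra.5g-deployment.lab:3000/api/v1/repos/student/5g-ran-deployments-on-ocp-lab | jq .name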
Configure NTP Server
dnf install chrony -y
cat <<EOF > /etc/chrony.conf
server time.cloudflare.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
bindcmdaddress ::
allow 192.168.125.0/24
EOF
systemctl enable chronyd --now
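Verify that chronyd is running and synchronized with its upstream server:
chronyc sources
chronyc tracking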
Configure Showroom App
Showroom is a web app consisting of several services that expose the lab content, a terminal, and a browser running on the hypervisor host. That way, users can consume the lab from their browser without extra requirements.
First we need to create a user that students will use to connect to the hypervisor terminal; we recommend an unprivileged user.
useradd lab-user -G libvirt
passwd lab-user
Make sure the new user can use rootless podman:
The command below may fail with ERRO[0000] running `/usr/bin/newuidmap 66372 0 1000 1 1 100000 65536: newuidmap: write to uid_map failed: Operation not permitted`.
su - lab-user
podman version
If the above command failed, you need to either configure rootless Podman or run Podman as root.
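A minimal sketch of enabling rootless Podman for lab-user, assuming the 100000-165535 subordinate ID range is unused on your host (run as root):
usermod --add-subuids 100000-165535 --add-subgids 100000-165535 lab-user
# Refresh Podman's per-user state after changing the ID mappings:
su - lab-user -c 'podman system migrate'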
Next, make sure lab-user has the correct git config:
su - lab-user
git config --global user.email "student@5g-deployment.lab"
git config --global user.name "Student"
Now that lab-user is configured, make sure you continue to run the next commands as root.
Generate lab content on the hypervisor host:
HYPERVISOR_PUBLIC_HOSTNAME=<PUT_HERE_PUBLIC_HOSTNAME_OR_IP>
mkdir -p /opt/showroom
git clone -b lab-4.18 https://github.com/RHsyseng/5g-ran-deployments-on-ocp-lab.git /opt/showroom/lab-repo/
podman run --rm -v /opt/showroom/lab-repo:/antora:z quay.io/rhsysdeseng/showroom:antora-v3.0.0 site.yml
mkdir -p /opt/showroom/lab-content
cp -pr /opt/showroom/lab-repo/gh-pages/* /opt/showroom/lab-content/
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/lab-env-data/showroom/webfiles/index.html -o /opt/showroom/lab-content/index.html
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/lab-env-data/showroom/webfiles/split.css -o /opt/showroom/lab-content/split.css
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/lab-env-data/showroom/webfiles/tabs.css -o /opt/showroom/lab-content/tabs.css
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/lab-env-data/showroom/webfiles/tabs.js -o /opt/showroom/lab-content/tabs.js
sed -i "s/<CHANGEME_LAB_VERSION>/4.18/" /opt/showroom/lab-content/index.html
sed -i "s/<CHANGEME_HYPERVISOR_PUBLIC_HOSTNAME>/$HYPERVISOR_PUBLIC_HOSTNAME/" /opt/showroom/lab-content/index.html
Run the Apache instance that exposes the lab content:
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/lab-env-data/showroom/podman-showroom-apache.service -o /etc/systemd/system/podman-showroom-apache.service
systemctl daemon-reload
systemctl enable podman-showroom-apache --now
Run the web terminal:
LAB_USER_PWD=<PUT_HERE_LAB_USER_PASSWORD>
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/lab-env-data/showroom/podman-showroom-wetty.service -o /etc/systemd/system/podman-showroom-wetty.service
sed -i "s/<CHANGEME_LAB_USER>/lab-user/" /etc/systemd/system/podman-showroom-wetty.service
sed -i "s/<CHANGEME_LAB_USER_PWD>/$LAB_USER_PWD/" /etc/systemd/system/podman-showroom-wetty.service
systemctl daemon-reload
systemctl enable podman-showroom-wetty --now
Run the Firefox browser:
HTPASS_PASSWORD=<PUT_HERE_A_PASSWORD>
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/lab-env-data/showroom/podman-showroom-firefox.service -o /etc/systemd/system/podman-showroom-firefox.service
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/lab-env-data/showroom/firefox-config.tar.gz -o /tmp/firefox-config.tar.gz
tar xfz /tmp/firefox-config.tar.gz -C /opt/showroom/
rm -f /tmp/firefox-config.tar.gz
sed -i "s/<CHANGEME_LAB_USER>/lab-user/" /etc/systemd/system/podman-showroom-firefox.service
sed -i "s/<CHANGEME_LAB_USER_PWD>/$HTPASS_PASSWORD/" /etc/systemd/system/podman-showroom-firefox.service
systemctl daemon-reload
systemctl enable podman-showroom-firefox --now
Run the reverse proxy:
HTPASS_USER=$(echo $HTPASS_PASSWORD | htpasswd -i -n lab-user)
HYPERVISOR_PUBLIC_HOSTNAME=<PUT_HERE_PUBLIC_HOSTNAME_OR_IP>
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/lab-env-data/showroom/podman-showroom-traefik.service -o /etc/systemd/system/podman-showroom-traefik.service
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/lab-env-data/showroom/traefik-config.tar.gz -o /tmp/traefik-config.tar.gz
mkdir -p /opt/showroom/traefik
tar xfz /tmp/traefik-config.tar.gz -C /opt/showroom/traefik/
rm -f /tmp/traefik-config.tar.gz
sed -i "s|<CHANGEME_HTPASS_USER>|$HTPASS_USER|" /opt/showroom/traefik/config/traefik/dynamic_config.yml
sed -i "s/<CHANGEME_HYPERVISOR_PUBLIC_HOSTNAME>/$HYPERVISOR_PUBLIC_HOSTNAME/" /opt/showroom/traefik/config/traefik/dynamic_config.yml
systemctl daemon-reload
systemctl enable podman-showroom-traefik --now
At this point we can access the Showroom app on our hypervisor. This is the URL students will use, together with the credentials we configured. Example URL: https://hypervisor.example.com/content/.
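You can verify that the reverse proxy answers with the basic-auth credentials configured earlier (assuming a self-signed certificate, hence -k):
curl -k -I -u "lab-user:${HTPASS_PASSWORD}" "https://${HYPERVISOR_PUBLIC_HOSTNAME}/content/"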
Create SNO Node VMs
Before running the following commands, make sure you have generated an SSH key pair in the default location ~/.ssh/.
ssh-keygen -t rsa -b 2048
That SSH key will allow you to connect to the VMs you are about to create:
kcli create pool -p /var/lib/libvirt/images default
kcli create vm -P start=False -P uefi_legacy=true -P plan=hub -P memory=24000 -P numcpus=12 -P disks=[400] -P nets=['{"name": "5gdeploymentlab", "mac": "aa:aa:aa:aa:02:01"}','{"name":"sriov-network","type":"igb","vfio":"true","noconf":"true","numa":"0"}','{"name":"sriov-network","type":"igb","vfio":"true","noconf":"true","numa":"1"}'] -P name=sno-seed
kcli create vm -P start=False -P uefi_legacy=true -P plan=hub -P memory=24000 -P numcpus=12 -P disks=[400] -P nets=['{"name": "5gdeploymentlab", "mac": "aa:aa:aa:aa:03:01"}','{"name":"sriov-network","type":"igb","vfio":"true","noconf":"true","numa":"0"}','{"name":"sriov-network","type":"igb","vfio":"true","noconf":"true","numa":"1"}'] -P name=sno-abi
kcli create vm -P start=False -P uefi_legacy=true -P plan=hub -P memory=24000 -P numcpus=12 -P disks=[300,100] -P nets=['{"name": "5gdeploymentlab", "mac": "aa:aa:aa:aa:04:01"}','{"name":"sriov-network","type":"igb","vfio":"true","noconf":"true","numa":"0"}','{"name":"sriov-network","type":"igb","vfio":"true","noconf":"true","numa":"1"}'] -P name=sno-ibi
If you need or want to connect to any of the VMs once they are started, you can do so by executing:
kcli ssh <VM_name>
Deploy OpenShift Hub Cluster
This step requires a valid OpenShift pull secret placed in /root/openshift_pull.json. Note that you can replace the admin or developer passwords shown below with any others.
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/lab-env-data/hub-cluster/hub.yml -o /root/hub.yml
sed -i "s/CHANGE_ADMIN_PWD/admin/" hub.yml
sed -i "s/CHANGE_DEV_PWD/developer/" hub.yml
cd /root/
kcli create cluster openshift --pf hub.yml
This will take around 40 minutes to complete.
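While the deployment runs, you can monitor progress from another terminal:
kcli list vm
# Once the API is up, the hub kubeconfig is available under /root/.kcli/clusters/hub/:
KUBECONFIG=/root/.kcli/clusters/hub/auth/kubeconfig oc get clusteroperators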
If the installation fails for whatever reason, you will need to delete all the VMs that were created and execute the same procedure again. First remove the plan, which will remove all the VMs:
kcli delete plan hub -y
hub-ctlplane-0 deleted on local!
hub-ctlplane-1 deleted on local!
hub-ctlplane-2 deleted on local!
sno-abi deleted on local!
sno-ibi deleted on local!
sno-seed deleted on local!
Plan hub deleted!
Then create the VMs again as explained in the Create SNO Node VMs section, and re-run the deployment described in Deploy OpenShift Hub Cluster.
Configure OpenShift Hub Cluster
export KUBECONFIG=~/.kcli/clusters/hub/auth/kubeconfig
oc -n openshift-storage wait lvmcluster lvmcluster --for=jsonpath='{.status.state}'=Ready --timeout=900s
oc patch storageclass lvms-vg1 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
curl https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/lab-env-data/hub-cluster/argocd-patch.json -o /tmp/argopatch.json
oc patch argocd openshift-gitops -n openshift-gitops --type=merge --patch-file /tmp/argopatch.json
oc wait --for=condition=Ready pod -lapp.kubernetes.io/name=openshift-gitops-repo-server -n openshift-gitops
oc -n openshift-gitops adm policy add-cluster-role-to-user cluster-admin -z openshift-gitops-argocd-application-controller
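Verify that the GitOps components are up after the patch:
oc get pods -n openshift-gitops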
Deploy Seed Cluster (without ZTP)
An SNO is deployed outside the ZTP workflow; this seed cluster will be used as a reference to deploy image-based SNOs.
curl -L http://infra.5g-deployment.lab:3000/student/5g-ran-deployments-on-ocp-lab/raw/branch/lab-4.18/lab-materials/lab-env-data/hypervisor/ssh-key -o /root/.ssh/snokey
chmod 400 /root/.ssh/snokey
oc apply -f https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/lab-env-data/hub-cluster/sno1-argoapp.yaml
The cluster is considered deployed once its state is adding-hosts. Check it as follows:
oc -n sno-seed get agentclusterinstall,agent
NAME CLUSTER STATE
agentclusterinstall.extensions.hive.openshift.io/sno-seed sno-seed adding-hosts
NAME CLUSTER APPROVED ROLE STAGE
agent.agent-install.openshift.io/3d97ebf7-870c-438e-a253-61db15afa2ed sno-seed true master Done
The kubeconfig can be gathered as follows:
oc -n sno-seed extract secret/sno-seed-admin-kubeconfig --to=- > ~/seed-cluster-kubeconfig
Now, with the proper credentials, you can check the status of the SNO seed cluster:
oc --kubeconfig ~/seed-cluster-kubeconfig get nodes,clusterversion
NAME STATUS ROLES AGE VERSION
node/openshift-master-0 Ready control-plane,master,worker 94m v1.31.6
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
clusterversion.config.openshift.io/version 4.18.3 True False 75m Cluster version is 4.18.3
Configure Seed Cluster
export KUBECONFIG=~/seed-cluster-kubeconfig
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/sno-config/02_oadp_deployment.yaml -o /tmp/02_oadp_deployment.yaml
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/sno-config/02_lca_deployment.yaml -o /tmp/02_lca_deployment.yaml
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/sno-config/02_sriov_deployment.yaml -o /tmp/02_sriov_deployment.yaml
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/sno-config/02_lvms_deployment.yaml -o /tmp/02_lvms_deployment.yaml
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/sno-config/03_sriovoperatorconfig.yaml -o /tmp/03_sriovoperatorconfig.yaml
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/sno-config/03_sriov-nw-du-netdev.yaml -o /tmp/03_sriov-nw-du-netdev.yaml
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/sno-config/03_sriov-nw-du-vfio.yaml -o /tmp/03_sriov-nw-du-vfio.yaml
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/sno-config/04_performance_profile.yaml -o /tmp/04_performance_profile.yaml
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/lab-4.18/lab-materials/sno-config/05_TunedPerformancePatch.yaml -o /tmp/05_TunedPerformancePatch.yaml
oc apply -f /tmp/02_oadp_deployment.yaml -f /tmp/02_lca_deployment.yaml -f /tmp/02_lvms_deployment.yaml -f /tmp/02_sriov_deployment.yaml
sleep 30
oc -n openshift-adp wait clusterserviceversion -l operators.coreos.com/redhat-oadp-operator.openshift-adp --for=jsonpath='{.status.phase}'=Succeeded --timeout=900s
oc -n openshift-lifecycle-agent wait clusterserviceversion -l operators.coreos.com/lifecycle-agent.openshift-lifecycle-agent --for=jsonpath='{.status.phase}'=Succeeded --timeout=900s
oc -n openshift-storage wait clusterserviceversion -l operators.coreos.com/lvms-operator.openshift-storage --for=jsonpath='{.status.phase}'=Succeeded --timeout=900s
oc -n openshift-sriov-network-operator wait clusterserviceversion -l operators.coreos.com/sriov-network-operator.openshift-sriov-network-operator --for=jsonpath='{.status.phase}'=Succeeded --timeout=900s
oc apply -f /tmp/03_sriovoperatorconfig.yaml -f /tmp/03_sriov-nw-du-netdev.yaml -f /tmp/03_sriov-nw-du-vfio.yaml
sleep 30
oc wait node openshift-master-0 --for=jsonpath='{.status.allocatable.openshift\.io/virt-enp4s0}'=2 --timeout=900s
oc wait node openshift-master-0 --for=jsonpath='{.status.allocatable.openshift\.io/virt-enp5s0}'=2 --timeout=900s
oc apply -f /tmp/04_performance_profile.yaml -f /tmp/05_TunedPerformancePatch.yaml
When the PerformanceProfile is applied, the SNO seed cluster reboots. Wait until the cluster is back up, then wait for the PerformanceProfile to report the Available condition:
oc wait --for='jsonpath={.status.conditions[?(@.type=="Available")].status}=True' performanceprofile openshift-node-performance-profile
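If the wait times out because the node is still rebooting, first wait for the node itself to become Ready and then retry:
oc wait node openshift-master-0 --for=condition=Ready --timeout=1800s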