Lab Setup

This section describes how to deploy your own lab environment.

If you are a Red Hatter, you can order a lab environment on the Red Hat Demo Platform. You just need to order the lab named Hosted Control Planes on Baremetal.

Lab Requirements

A RHEL 8.x box with access to the Internet. This lab relies on KVM, so you need the proper virtualization packages already installed (a quick hardware check is shown after the list below). Using a bare-metal host is highly recommended. Our lab environment has the following specs:

  • 64 CPUs (with or without hyperthreading)

  • 200 GiB of memory

  • 1 TiB of storage
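
KVM requires hardware virtualization extensions (vmx on Intel, svm on AMD). As a quick sanity check before installing anything (a minimal sketch, not part of the official lab procedure), you can verify that the CPU exposes them:

# Should print a non-zero count on a KVM-capable host
grep -cE 'vmx|svm' /proc/cpuinfo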

These instructions have been tested on RHEL 8.7; we cannot guarantee that other operating systems (even RHEL-based ones) will work. We will not provide support outside of RHEL 8.

These are the steps to install the required packages on a RHEL 8.7 server:

yum -y install libvirt libvirt-daemon-driver-qemu qemu-kvm
usermod -aG qemu,libvirt $(id -un)
newgrp libvirt
systemctl enable --now libvirtd
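
Optionally, you can confirm that the libvirt daemon came up correctly before continuing (a quick check, assuming virsh was pulled in by the packages above):

# Both commands should succeed; the VM list will be empty at this point
systemctl is-active libvirtd
virsh list --all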

Lab Deployment

All the steps in the sections below must be run as the root user on the hypervisor host.

Install kcli

We use kcli for several tasks, such as managing VMs and deploying the first OCP cluster. Additional kcli documentation can be found at https://kcli.readthedocs.io.

Unless stated otherwise, the commands below must be executed on the hypervisor host as root.
dnf -y copr enable karmab/kcli
dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
dnf -y install kcli bash-completion vim jq tar git python3-cherrypy
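
Once installed, an optional smoke test is to list the VMs kcli can see; on a fresh host the list will be empty:

# Should return an empty table rather than an error
kcli list vm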

Install oc/kubectl CLIs

kcli download oc -P version=stable -P tag='4.13'
kcli download kubectl -P version=stable -P tag='4.13'
mv kubectl oc /usr/bin/
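
You can verify that both clients are in the PATH and report the expected version:

# Client-only version checks; no cluster connection is required yet
oc version --client
kubectl version --client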

Configure Lab Network

kcli create network -c 192.168.125.0/24 -P dhcp=false -P dns=false --domain hypershift.lab hypershiftlab
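
DHCP and DNS are disabled on this network on purpose: the dnsmasq instance configured in the next section provides both services. You can optionally confirm the network was created:

# The hypershiftlab network should be listed with the 192.168.125.0/24 CIDR
kcli list network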

Configure Local DNS/DHCP Server

dnf -y install dnsmasq policycoreutils-python-utils
mkdir -p /opt/dnsmasq/include.d/
curl -sL https://raw.githubusercontent.com/RHsyseng/hypershift-baremetal-lab/lab-4.13/lab-materials/lab-env-data/dnsmasq/dnsmasq.conf -o /opt/dnsmasq/dnsmasq.conf
curl -sL https://raw.githubusercontent.com/RHsyseng/hypershift-baremetal-lab/lab-4.13/lab-materials/lab-env-data/dnsmasq/upstream-resolv.conf -o /opt/dnsmasq/upstream-resolv.conf
curl -sL https://raw.githubusercontent.com/RHsyseng/hypershift-baremetal-lab/lab-4.13/lab-materials/lab-env-data/dnsmasq/management.ipv4 -o /opt/dnsmasq/include.d/management.ipv4
curl -sL https://raw.githubusercontent.com/RHsyseng/hypershift-baremetal-lab/lab-4.13/lab-materials/lab-env-data/dnsmasq/hosted.ipv4 -o /opt/dnsmasq/include.d/hosted.ipv4
curl -sL https://raw.githubusercontent.com/RHsyseng/hypershift-baremetal-lab/lab-4.13/lab-materials/lab-env-data/dnsmasq/infrastructure-host.ipv4 -o /opt/dnsmasq/include.d/infrastructure-host.ipv4
curl -sL https://raw.githubusercontent.com/RHsyseng/hypershift-baremetal-lab/lab-4.13/lab-materials/lab-env-data/dnsmasq/dnsmasq-virt.service -o /etc/systemd/system/dnsmasq-virt.service
touch /opt/dnsmasq/hosts.leases
semanage fcontext -a -t dnsmasq_lease_t /opt/dnsmasq/hosts.leases
restorecon /opt/dnsmasq/hosts.leases
sed -i "s/UPSTREAM_DNS/1.1.1.1/" /opt/dnsmasq/upstream-resolv.conf
systemctl daemon-reload
systemctl enable --now dnsmasq-virt
systemctl mask dnsmasq
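
At this point the local dnsmasq should be serving the lab records. A quick optional check (a sketch; it assumes dig from bind-utils is available, and the record name and listen address come from this lab's dnsmasq configuration files):

# Query the local dnsmasq directly; it should answer from the lab zone
dig +short api.management.hypershift.lab @192.168.125.1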

Configure Local DNS as Primary Server

The default upstream DNS server is set to 1.1.1.1 in /opt/dnsmasq/upstream-resolv.conf. In some environments the hypervisor may not be able to reach it; in that case, you must change it to a DNS server that can resolve public hostnames. Once changed, remember to restart the dnsmasq-virt service.
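
For example, to switch the upstream resolver to a hypothetical corporate DNS server at 10.0.0.2:

# 10.0.0.2 is a placeholder; use a resolver reachable from the hypervisor
sed -i "s/1.1.1.1/10.0.0.2/" /opt/dnsmasq/upstream-resolv.conf
systemctl restart dnsmasq-virt

The following commands configure the local dnsmasq as the primary resolver for the hypervisor itself: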

curl -L https://raw.githubusercontent.com/RHsyseng/hypershift-baremetal-lab/lab-4.13/lab-materials/lab-env-data/hypervisor/forcedns -o /etc/NetworkManager/dispatcher.d/forcedns
chmod +x /etc/NetworkManager/dispatcher.d/forcedns
systemctl restart NetworkManager
/etc/NetworkManager/dispatcher.d/forcedns
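
After the dispatcher script runs, the system resolver should point at the local dnsmasq; you can verify it (in this setup the expected nameserver is the lab bridge address):

# Should list the local dnsmasq address as the first nameserver
cat /etc/resolv.conf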

Disable Firewall

You can create the required firewall rules instead if you prefer, but for the sake of simplicity we disable the firewall.

systemctl disable firewalld iptables
systemctl stop firewalld iptables
iptables -F

Install KSushy Tool

kcli create sushy-service --ssl --port 9000 --bootonce
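
ksushy exposes a Redfish-compatible virtual BMC for the lab VMs, which lets the bare-metal provisioning flow power them on and off as if they were physical servers. An optional check (a sketch, assuming the service answers on port 9000 as configured above):

# Standard Redfish entry point; jq was installed earlier
curl -ks https://127.0.0.1:9000/redfish/v1 | jq .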

Configure NTP Server

dnf install chrony -y
cat <<EOF > /etc/chrony.conf
server time.cloudflare.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
bindcmdaddress ::
allow 192.168.125.0/24
EOF
systemctl enable chronyd --now
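
Optionally, verify that chrony is synchronizing against the configured server:

# The time.cloudflare.com source should appear with a reachable status
chronyc sources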

Configure Access to Cluster Apps

In order to access the management (hub) cluster, we will deploy an HAProxy instance listening on the public interface of the hypervisor host.

dnf install haproxy -y
semanage port -a -t http_port_t -p tcp 6443
curl -L https://raw.githubusercontent.com/RHsyseng/hypershift-baremetal-lab/lab-4.13/lab-materials/lab-env-data/haproxy/haproxy.cfg -o /etc/haproxy/haproxy.cfg
systemctl enable haproxy --now
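
Optionally, confirm that HAProxy is listening. The exact frontends come from the downloaded haproxy.cfg, but with this lab's configuration you should at least see the API port 6443 (labeled above via semanage):

# Show listening sockets owned by haproxy
ss -tlnp | grep haproxy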

After that you need to add the following entries to your /etc/hosts file:

<HYPERVISOR_REACHABLE_IP> infra.hypershift.lab api.management.hypershift.lab console-openshift-console.apps.management.hypershift.lab oauth-openshift.apps.management.hypershift.lab api.hosted.hypershift.lab console-openshift-console.apps.hosted.hypershift.lab oauth-hosted-hosted.apps.management.hypershift.lab

Create Hosted Cluster Node VMs

Before running the following commands, make sure you have generated an SSH key pair in the default location (~/.ssh/); that key will allow you to connect to the VMs you are about to create.
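
If you do not have a key pair yet, a minimal non-interactive generation looks like this:

# Generate an RSA key pair without a passphrase, only if none exists yet
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa

With the key in place, create the default storage pool and the VMs: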

kcli create pool -p /var/lib/libvirt/images default
kcli create vm -P start=False -P uefi_legacy=true -P plan=lab -P memory=24000 -P numcpus=12 -P disks=[200,200] -P nets=['{"name": "hypershiftlab", "mac": "aa:aa:aa:aa:02:01"}'] -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0201 -P name=hosted-worker0
kcli create vm -P start=False -P uefi_legacy=true -P plan=lab -P memory=24000 -P numcpus=12 -P disks=[200,200] -P nets=['{"name": "hypershiftlab", "mac": "aa:aa:aa:aa:02:02"}'] -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0202 -P name=hosted-worker1
kcli create vm -P start=False -P uefi_legacy=true -P plan=lab -P memory=24000 -P numcpus=12 -P disks=[200,200] -P nets=['{"name": "hypershiftlab", "mac": "aa:aa:aa:aa:02:03"}'] -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0203 -P name=hosted-worker2

If you need to connect to any of the VMs, you can do so by executing:

kcli ssh <VM_name>

Deploy OpenShift Management Cluster

This step requires a valid OpenShift pull secret placed at /root/openshift_pull.json. Note that you can replace the admin and developer passwords shown below with any others.
curl -sL https://raw.githubusercontent.com/RHsyseng/hypershift-baremetal-lab/lab-4.13/lab-materials/lab-env-data/management-cluster/management.yml -o /root/management.yml
sed -i "s/CHANGE_ADMIN_PWD/admin/" management.yml
sed -i "s/CHANGE_DEV_PWD/developer/" management.yml
kcli create cluster openshift --pf management.yml --force

This will take around 30-45 minutes to complete; you can follow the progress by running kcli console -s.

If the installation fails for whatever reason, you will need to delete all the VMs that were created and run the same procedure again. First, remove the plans, which in turn removes all of their VMs:

# kcli list plan
+------------+-------------------------------------------------------------------+
|    Plan    |                                Vms                                |
+------------+-------------------------------------------------------------------+
|    lab     |            hosted-worker0,hosted-worker1,hosted-worker2           |
| management | management-ctlplane-0,management-ctlplane-1,management-ctlplane-2 |
+------------+-------------------------------------------------------------------+

# kcli delete plan management lab -y
management-installer deleted on local!
Plan management deleted!
hosted-worker0 deleted on local!
hosted-worker1 deleted on local!
hosted-worker2 deleted on local!
management-ctlplane-0 deleted on local!
management-ctlplane-1 deleted on local!
management-ctlplane-2 deleted on local!
Plan lab deleted!

Then create the VMs again as explained in the section Create Hosted Cluster Node VMs, and redeploy the cluster as described above in Deploy OpenShift Management Cluster.

Check OpenShift Management Cluster

export KUBECONFIG=~/.kcli/clusters/management/auth/kubeconfig
oc get nodes
NAME                                   STATUS   ROLES                         AGE   VERSION
management-ctlplane-0.hypershift.lab   Ready    control-plane,master,worker   39m   v1.26.3+b404935
management-ctlplane-1.hypershift.lab   Ready    control-plane,master,worker   39m   v1.26.3+b404935
management-ctlplane-2.hypershift.lab   Ready    control-plane,master,worker   39m   v1.26.3+b404935
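
As a final optional check, you can confirm the cluster version and that all cluster operators have finished rolling out:

# All operators should eventually report Available=True and not Degraded
oc get clusterversion
oc get clusteroperators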