Hosted Cluster Deployment

The Hosted Cluster can be created by working directly with the Hosted Control Planes API resources, such as HostedCluster and NodePool; in this case we will use the Web Console. You can find the required yaml files to run this same deployment here (you must change the pull secret content if you want to use them).
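
If you prefer the API-driven approach instead of the Web Console, the same deployment can be performed by applying those yaml files directly against the management cluster. A minimal sketch is shown below; the file names are illustrative, and you must put your own pull secret into the manifests first.

    # Apply the HostedCluster and NodePool manifests on the management cluster.
    # File names are illustrative; use the yaml files linked above.
    oc --kubeconfig ~/hypershift-lab/mgmt-kubeconfig apply -f hostedcluster.yaml
    oc --kubeconfig ~/hypershift-lab/mgmt-kubeconfig apply -f nodepool.yaml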

Creating a Hosted Cluster

  1. Access the OpenShift Console and login with the OpenShift admin credentials you got in the lab’s email.

  2. On the top bar, next to the Red Hat OpenShift logo, make sure All Clusters is selected. This will show us the MultiCloud console.

  3. Once you’re in, click on Infrastructure → Clusters. You will see a screen like the one below.

    MultiCloud Console Overview
  4. Click on Create cluster → Host inventory → Hosted control plane.

  5. You will get to a wizard that will guide you through the creation of the Hosted Cluster. Make sure you enter the following details.

    You must use OpenShift Version 4.13.0 even if you see a newer one.
    1. Infrastructure provider credential: Leave it empty

    2. Cluster name: hosted

    3. Cluster set: Leave it empty

    4. Base domain: hypershift.lab

    5. OpenShift Version: OpenShift 4.13.0

    6. Pull secret: Paste the output of the command below:

      oc --kubeconfig ~/hypershift-lab/mgmt-kubeconfig -n openshift-config \
          extract secret/pull-secret --to=- 2>/dev/null
      Hosted Cluster Wizard Screen 1
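
      Before pasting, you can optionally check that the extracted pull secret is valid JSON. This assumes the jq tool is available on the workstation host:

      # Requires jq on the workstation host (assumption).
      oc --kubeconfig ~/hypershift-lab/mgmt-kubeconfig -n openshift-config \
          extract secret/pull-secret --to=- 2>/dev/null | jq . > /dev/null \
          && echo "pull secret is valid JSON"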
  6. In the next screen enter the following details.

    1. Namespace: hardware-inventory

    2. Number of hosts: 2

      Hosted Cluster Wizard Screen 2
  7. The following screen shows the networking settings; these are the settings you should use.

    1. API server publishing strategy: LoadBalancer

    2. Use advanced networking: Unchecked

    3. Show proxy settings: Unchecked

    4. SSH public key: Paste the output of the command below:

      oc --kubeconfig ~/hypershift-lab/mgmt-kubeconfig -n hardware-inventory \
          get infraenv/hosted -o jsonpath='{.spec.sshAuthorizedKey}'
      Hosted Cluster Wizard Screen 3
  8. In the Automation screen leave everything empty and click Next.

  9. The final screen should look like this.

    Hosted Cluster Wizard Screen 4
  10. At this point you can go ahead and click Create.

  11. After clicking Create, you will be redirected to the hosted cluster view. If you want to double-check from the CLI that the resources were created, see the optional command below.
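
    To confirm from the CLI that the wizard created the resources on the management cluster, you can list them (in this lab both the namespace and the cluster are named hosted):

    oc --kubeconfig ~/hypershift-lab/mgmt-kubeconfig -n hosted get hostedcluster,nodepool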

Monitoring the Hosted Cluster Deployment

In this section we will learn how to monitor the Hosted Cluster deployment from the Web Console as well as from the CLI.

Monitoring the Hosted Cluster Deployment via the Web Console

  1. You should already be in the hosted cluster view from the previous step. If you’re not, you can get back to it by opening the OpenShift Console, making sure All Clusters is selected, and clicking on the cluster named hosted.

  2. You should see a screen like the one below where we see that most of the control plane components are ready.

    It can take up to 5 minutes for the conditions to look like those in the screenshot below.
    Hosted Cluster View 1
  3. In this same view, we can click on the NodePool to see the node deployment status. At this point we can see two bare metal hosts from the inventory (hosted-worker0 and hosted-worker2) being installed.

    The bare metal hosts selected may be different for your cluster.
    Hosted Cluster View 2
  4. After a few moments (around 10 minutes), we will see that the nodes have been installed.

    Hosted Cluster View 3
  5. Additionally, we can click on Nodes in the hosted cluster view and we will see that the two nodes have joined the Hosted Cluster.

    Hosted Cluster View 4
  6. At this point the Hosted Cluster is almost installed. In the next section we will see how to monitor the deployment from the CLI.

Monitoring the Hosted Cluster Deployment via the CLI

The commands below must be executed from the workstation host unless specified otherwise.
  1. Check the HostedCluster resource for the hosted cluster. You can see it reports that the hosted control plane is available, but the progress is Partial. This is expected, since the Hosted Cluster deployment is not finished yet.

    oc --kubeconfig ~/hypershift-lab/mgmt-kubeconfig -n hosted get \
        hostedcluster hosted
    NAME     VERSION   KUBECONFIG                PROGRESS   AVAILABLE   PROGRESSING   MESSAGE
    hosted             hosted-admin-kubeconfig   Partial    True        False         The hosted control plane is available
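
    To see the individual conditions behind that summary, you can optionally dump them with a jsonpath expression:

    # Print every HostedCluster condition as: TYPE  STATUS  MESSAGE
    oc --kubeconfig ~/hypershift-lab/mgmt-kubeconfig -n hosted get \
        hostedcluster hosted -o \
        jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}'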
  2. We can check the control plane pods, which run in the hosted-hosted namespace on the management cluster.

    oc --kubeconfig ~/hypershift-lab/mgmt-kubeconfig -n hosted-hosted get pods
    NAME                                                 READY   STATUS    RESTARTS   AGE
    capi-provider-dcb6ffdc8-vllvw                        1/1     Running   0          25m
    catalog-operator-57df8c49d4-848d2                    2/2     Running   0          23m
    certified-operators-catalog-67fb94bd67-7bprn         1/1     Running   0          23m
    cluster-api-5b57c6c54b-mwx2s                         1/1     Running   0          25m
    cluster-autoscaler-5d95c5b57d-zlpb5                  1/1     Running   0          24m
    cluster-image-registry-operator-64dc684b9-p7wzn      2/2     Running   0          23m
    cluster-network-operator-994b465c7-fp445             1/1     Running   0          23m
    cluster-node-tuning-operator-798b6fcc78-jb2m8        1/1     Running   0          23m
    cluster-policy-controller-6c7fbc7884-j7q9q           1/1     Running   0          23m
    cluster-storage-operator-688cc9b879-5blvb            1/1     Running   0          23m
    cluster-version-operator-56f58c6c94-5xtww            1/1     Running   0          23m
    community-operators-catalog-7cc759bcd5-hzjmq         1/1     Running   0          23m
    control-plane-operator-7df9688864-xcsc6              1/1     Running   0          25m
    csi-snapshot-controller-57667cc9bc-lnz62             1/1     Running   0          23m
    csi-snapshot-controller-operator-5bf7cc7649-lstlt    1/1     Running   0          23m
    csi-snapshot-webhook-8988984d9-vcfh2                 1/1     Running   0          23m
    dns-operator-d6fdbc7c5-f4dcg                         1/1     Running   0          23m
    etcd-0                                               2/2     Running   0          24m
    hosted-cluster-config-operator-84d969fd86-t9xbv      1/1     Running   0          23m
    ignition-server-545f9c6b7d-sncvn                     1/1     Running   0          24m
    ingress-operator-74bcd8b697-js8ng                    2/2     Running   0          23m
    konnectivity-agent-c9dc84589-zsfgs                   1/1     Running   0          24m
    konnectivity-server-86966d7fc8-ndm69                 1/1     Running   0          24m
    kube-apiserver-dd6c5456c-v8r4c                       3/3     Running   0          24m
    kube-controller-manager-cc9594b49-bf2gc              1/1     Running   0          11m
    kube-scheduler-57b7b5d76c-5d4mv                      1/1     Running   0          23m
    machine-approver-55f5ffb68-gqtsl                     1/1     Running   0          24m
    multus-admission-controller-5c7d4dc7c4-gk87m         2/2     Running   0          14m
    oauth-openshift-6b9c485bf9-prl42                     2/2     Running   0          23m
    olm-operator-cb9686dcc-5hmsp                         2/2     Running   0          23m
    openshift-apiserver-69c8f48954-xx9g9                 3/3     Running   0          11m
    openshift-controller-manager-77cc954585-8gw24        1/1     Running   0          23m
    openshift-oauth-apiserver-5759955b88-4zmdf           2/2     Running   0          23m
    openshift-route-controller-manager-9f9df9cd7-24tfz   1/1     Running   0          23m
    ovnkube-master-0                                     7/7     Running   0          14m
    packageserver-d789d6659-49xwb                        2/2     Running   0          23m
    redhat-marketplace-catalog-6dbfc579f6-m82x5          1/1     Running   0          23m
    redhat-operators-catalog-7bb4c764b9-ks8n6            1/1     Running   0          23m
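
    If any component looks unhealthy, you can narrow the listing to pods that are not in the Running phase with a field selector (with everything healthy, this should return no pods, or only completed ones):

    # List only control plane pods that are not currently Running.
    oc --kubeconfig ~/hypershift-lab/mgmt-kubeconfig -n hosted-hosted get pods \
        --field-selector=status.phase!=Running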
  3. The NodePool will tell us the state of the nodes joining the Hosted Cluster:

    oc --kubeconfig ~/hypershift-lab/mgmt-kubeconfig -n hosted get \
        nodepool nodepool-hosted-1
    At this point the nodes are still being deployed, as you can see in the output below.
    NAME                CLUSTER   DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION   UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
    nodepool-hosted-1   hosted    2               0               False         False        4.13.0                                       Scaling up MachineSet to 2 replicas (actual 0)
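
    Rather than re-running the command, you can also watch the NodePool until CURRENT NODES reaches the desired count:

    # Watch the NodePool; press Ctrl+C to stop once both nodes are reported.
    oc --kubeconfig ~/hypershift-lab/mgmt-kubeconfig -n hosted get \
        nodepool nodepool-hosted-1 -w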
  4. Since the nodes are being deployed, we can check the status of the agents:

    oc --kubeconfig ~/hypershift-lab/mgmt-kubeconfig -n hardware-inventory get agents
    At this point we can see two agents from the inventory rebooting as part of the installation process.
    NAME                                   CLUSTER   APPROVED   ROLE          STAGE
    aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0201             true       auto-assign
    aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0202   hosted    true       worker        Rebooting
    aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0203   hosted    true       worker        Rebooting
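
    You can watch the agents in the same way while they progress through the installation stages:

    # Watch the agents; the STAGE column advances as the installation proceeds.
    oc --kubeconfig ~/hypershift-lab/mgmt-kubeconfig -n hardware-inventory get agents -w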
  5. After a few moments (up to 10 minutes), the agents will be fully installed:

    oc --kubeconfig ~/hypershift-lab/mgmt-kubeconfig -n hardware-inventory get agents
    NAME                                   CLUSTER   APPROVED   ROLE          STAGE
    aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0201             true       auto-assign
    aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0202   hosted    true       worker        Done
    aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0203   hosted    true       worker        Done
  6. The NodePool will reflect that as well:

    oc --kubeconfig ~/hypershift-lab/mgmt-kubeconfig -n hosted get \
        nodepool nodepool-hosted-1
    NAME                CLUSTER   DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION   UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
    nodepool-hosted-1   hosted    2               2               False         False        4.13.0

At this point the Hosted Cluster deployment is not finished yet: we still need to fix Ingress for the cluster to be fully deployed. We will do that in the next section, where we will learn how to access the Hosted Cluster.