Crafting the Cluster Telco Related Infrastructure Operators Configs
In the previous section, we created the infrastructure as code (IaC) for our SNO2 cluster using the SiteConfig object. In this section, we are going to create the configuration as code (CaC) for our clusters by using multiple PolicyGenTemplate objects.
We described how PolicyGenTemplate works in detail here, so let’s jump directly to the creation of the different templates.
The configuration defined through these PolicyGenTemplate CRs is only a subset of what was described in Introduction to Telco Related Infrastructure Operators. This is done for clarity and is tailored to the lab environment. A full production environment supporting telco 5G vRAN workloads would include additional configuration not covered here, which is described in detail in the Telco-5G-RAN 4.12 - QE Validated Reference Configuration slide deck.
The commands below must be executed from the workstation host unless specified otherwise.
Crafting Common Policies
The common policies apply to every cluster in our infrastructure that matches our binding rules. These policies are often used to configure things such as CatalogSources, operator deployments, and other resources that are common to all our clusters.
These configs may vary from release to release, which is why we create a common-412.yaml file. We will likely have a common configuration profile for each release we deploy.
If you check the binding rules, you can see that we are targeting clusters labeled with common: "ocp412" and logicalGroup: "active". These labels were set in the SiteConfig definition in the previous section; you can verify them with the check shown after the template below.
- Create the common PolicyGenTemplate for 4.12 SNOs:

cat <<EOF > ~/5g-deployment-lab/ztp-repository/site-policies/fleet/active/common-412.yaml
---
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: "common"
  namespace: "ztp-policies"
spec:
  bindingRules:
    common: "ocp412"
    logicalGroup: "active"
  mcp: master
  remediationAction: inform
  sourceFiles:
    - fileName: OperatorHub.yaml
      policyName: config-policies
    - fileName: DefaultCatsrc.yaml
      metadata:
        name: redhat-operator-index
      spec:
        image: infra.5g-deployment.lab:8443/redhat/redhat-operator-index:v4.12-1687863175
      policyName: config-policies
    - fileName: ReduceMonitoringFootprint.yaml
      policyName: config-policies
    - fileName: StorageLVMOSubscriptionNS.yaml
      policyName: subscription-policies
    - fileName: StorageLVMOSubscriptionOperGroup.yaml
      policyName: subscription-policies
    - fileName: StorageLVMOSubscription.yaml
      spec:
        name: lvms-operator
        channel: stable-4.12
        source: redhat-operator-index
      policyName: subscription-policies
EOF
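If you want to confirm which clusters this common template will bind to, a quick check is to list the managed clusters that carry the labels referenced by the binding rules. This is only a sketch: it assumes you are logged in to the hub cluster from the workstation.

# Run against the hub cluster. Lists the managed clusters matching the
# common template binding rules; SNO2 should appear here once its
# SiteConfig labels are in place.
oc get managedclusters -l common=ocp412,logicalGroup=active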
Crafting Group Policies
The group policies apply to a group of clusters that typically have something in common: for example, they are all SNOs, or they have similar hardware (SR-IOV cards, number of CPUs, and so on).
The CNF team has prepared some common tuning configurations that should be applied on every SNO DU deployed. In this section, we will be crafting these configurations.
If you check the binding rules, you can see that we are targeting clusters labeled with group-du-sno: "" and logicalGroup: "active". These labels were set in the SiteConfig definition in the previous section.
- Create the group PolicyGenTemplate for SNOs:

cat <<EOF > ~/5g-deployment-lab/ztp-repository/site-policies/fleet/active/group-du-sno.yaml
---
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: "du-sno"
  namespace: "ztp-policies"
spec:
  bindingRules:
    group-du-sno: ""
    logicalGroup: "active"
  mcp: master
  remediationAction: inform
  sourceFiles:
    - fileName: ConsoleOperatorDisable.yaml
      policyName: group-policies
    - fileName: DisableSnoNetworkDiag.yaml
      policyName: group-policies
EOF
- Since we are deploying DUs, we need to run the validator policies crafted by the CNF team as well:

cat <<EOF > ~/5g-deployment-lab/ztp-repository/site-policies/fleet/active/group-du-sno-validator.yaml
---
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: "du-sno-validator"
  namespace: "ztp-policies"
spec:
  bindingRules:
    group-du-sno: ""
    logicalGroup: "active"
  bindingExcludedRules:
    ztp-done: ""
  mcp: "master"
  sourceFiles:
    - fileName: validatorCRs/informDuValidator.yaml
      remediationAction: inform
      policyName: "validation"
EOF
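A couple of hedged checks for this group configuration, assuming the ArgoCD policies application from the earlier sections has already synced the repository to the hub: the generated policies land in the ztp-policies namespace with names prefixed by the PolicyGenTemplate name, and once the validator inform policy reports compliant for a cluster, that cluster is labeled with ztp-done, which is exactly the label the bindingExcludedRules entry above keys on.

# On the hub: list the policies generated from the du-sno templates
# (names are prefixed with the PolicyGenTemplate name, e.g. du-sno-group-policies).
oc get policies -n ztp-policies | grep du-sno

# On the hub: check whether a cluster has already been marked as done;
# the validator policy is excluded for clusters carrying this label.
oc get managedcluster sno2 --show-labels | grep ztp-done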
Crafting Site Policies
Site policies apply to one or more specific clusters in our site. They usually configure settings that are very specific to a single cluster or to a small subset of clusters. For example, we could configure storage for a group of clusters if those clusters share the same hardware and disk configuration; otherwise, we need different bindings for different clusters.
We are going to create the PerformanceProfile and storage configurations for our SNO2 cluster.
- Create the sno2 PolicyGenTemplate for the SNO2 cluster in our 5glab site:

cat <<EOF > ~/5g-deployment-lab/ztp-repository/site-policies/sites/hub-1/site-sno2.yaml
---
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: "sno2"
  namespace: "ztp-policies"
spec:
  bindingRules:
    du-site: "sno2"
    logicalGroup: "active"
  mcp: master
  remediationAction: inform
  sourceFiles:
    - fileName: StorageLVMCluster.yaml
      spec:
        storage:
          deviceClasses:
            - name: vg1
              thinPoolConfig:
                name: thin-pool-1
                sizePercent: 90
                overprovisionRatio: 10
      policyName: "site-policies"
    - fileName: PerformanceProfile.yaml
      policyName: "site-policies"
      metadata:
        annotations:
          kubeletconfig.experimental: |
            {"topologyManagerScope": "pod", "systemReserved": {"memory": "3Gi"} }
      spec:
        cpu:
          isolated: '{{hub fromConfigMap "" "hub1-sites-data" (printf "%s-isolated-cpus" .ManagedClusterName) hub}}'
          reserved: '{{hub fromConfigMap "" "hub1-sites-data" (printf "%s-reserved-cpus" .ManagedClusterName) hub}}'
        hugepages:
          defaultHugepagesSize: 1G
          pages:
            - count: '{{hub fromConfigMap "" "hub1-sites-data" (printf "%s-hugepages-count" .ManagedClusterName)|toInt hub}}'
              size: '{{hub fromConfigMap "" "hub1-sites-data" (printf "%s-hugepages-size" .ManagedClusterName) hub}}'
        numa:
          topologyPolicy: single-numa-node
        realTimeKernel:
          enabled: false
        globallyDisableIrqLoadBalancing: false
    - fileName: TunedPerformancePatch.yaml
      policyName: "site-policies"
      spec:
        profile:
          - name: performance-patch
            data: |
              [main]
              summary=Configuration changes profile inherited from performance created tuned
              include=openshift-node-performance-openshift-node-performance-profile
              [bootloader]
              cmdline_crash=nohz_full={{hub fromConfigMap "" "hub1-sites-data" (printf "%s-isolated-cpus" .ManagedClusterName) hub}}
              [sysctl]
              kernel.timer_migration=1
              [scheduler]
              group.ice-ptp=0:f:10:*:ice-ptp.*
              [service]
              service.stalld=start,enable
              service.chronyd=stop,disable
EOF
- We’re using policy templating, so we need to create the ConfigMap with the templating values to be used:

cat <<EOF > ~/5g-deployment-lab/ztp-repository/site-policies/sites/hub-1/hub1-sites-data.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: hub1-sites-data
  namespace: ztp-policies
  annotations:
    argocd.argoproj.io/sync-options: Replace=true
data:
  sno2-isolated-cpus: "4-11"
  sno2-reserved-cpus: "0-3"
  sno2-hugepages-count: "4"
  sno2-hugepages-size: "1G"
EOF
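Two optional sanity checks for this site configuration, sketched under the assumption that the ArgoCD applications from the earlier sections are in place: first, read the templating ConfigMap back from the hub to confirm the values the fromConfigMap hub templates will resolve; second, once the site policies are eventually enforced on SNO2, inspect the rendered PerformanceProfile on the spoke. The PerformanceProfile name used below (openshift-node-performance-profile) is the default from the ZTP source CR; adjust it if yours differs.

# On the hub: print the per-cluster keys consumed by the hub templates
# in site-sno2.yaml (requires the policies application to have synced).
oc extract configmap/hub1-sites-data -n ztp-policies --to=-

# On the SNO2 cluster, after the site policies have been enforced:
# show the CPU partitioning and hugepages values resolved from the ConfigMap.
oc get performanceprofile openshift-node-performance-profile \
  -o jsonpath='{.spec.cpu}{"\n"}{.spec.hugepages}{"\n"}'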
Crafting Testing Policies
Testing policies before applying them to our production clusters is a must. To do that, we will create a set of testing policies (which will usually be very similar to the production ones), and these policies will target clusters labeled with logicalGroup: "testing". We won’t go over every file: if you check the files that will be created, they are the same as in active, also known as production, but with different names and binding rules.
- Create the common testing PolicyGenTemplate for 4.12 SNOs:

cat <<EOF > ~/5g-deployment-lab/ztp-repository/site-policies/fleet/testing/common-412.yaml
---
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: "common-test"
  namespace: "ztp-policies"
spec:
  bindingRules:
    common: "ocp412"
    logicalGroup: "testing"
  mcp: master
  remediationAction: inform
  sourceFiles:
    - fileName: OperatorHub.yaml
      policyName: config-policies
    - fileName: DefaultCatsrc.yaml
      metadata:
        name: redhat-operator-index
      spec:
        image: infra.5g-deployment.lab:8443/redhat/redhat-operator-index:v4.12
      policyName: config-policies
    - fileName: ReduceMonitoringFootprint.yaml
      policyName: config-policies
    - fileName: StorageLVMOSubscriptionNS.yaml
      policyName: subscription-policies
    - fileName: StorageLVMOSubscriptionOperGroup.yaml
      policyName: subscription-policies
    - fileName: StorageLVMOSubscription.yaml
      spec:
        name: lvms-operator
        channel: stable-4.12
        source: redhat-operator-index
      policyName: subscription-policies
EOF
- Create the group testing PolicyGenTemplates for SNOs:

cat <<EOF > ~/5g-deployment-lab/ztp-repository/site-policies/fleet/testing/group-du-sno.yaml
---
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: "du-sno-test"
  namespace: "ztp-policies"
spec:
  bindingRules:
    group-du-sno: ""
    logicalGroup: "testing"
  mcp: master
  remediationAction: inform
  sourceFiles:
    - fileName: ConsoleOperatorDisable.yaml
      policyName: group-policies
    - fileName: DisableSnoNetworkDiag.yaml
      policyName: group-policies
EOF

cat <<EOF > ~/5g-deployment-lab/ztp-repository/site-policies/fleet/testing/group-du-sno-validator.yaml
---
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: "du-sno-validator-test"
  namespace: "ztp-policies"
spec:
  bindingRules:
    group-du-sno: ""
    logicalGroup: "testing"
  bindingExcludedRules:
    ztp-done: ""
  mcp: "master"
  sourceFiles:
    - fileName: validatorCRs/informDuValidator.yaml
      remediationAction: inform
      policyName: "validation"
EOF
At this point, the testing policies are the same as in production (active). In a real environment, you would first apply these groups of testing policies to the clusters running in your test environment before promoting the changes to production (active).
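Since the testing templates are near-copies of the active ones, a quick way to see exactly what differs (the template names, the logicalGroup binding rule, and the catalog index image tag) is to diff the two fleet directories on the workstation. This simply assumes both sets of files were created as shown above.

# Compare the active and testing fleet templates; expect differences only in
# the template names, the logicalGroup binding rule, and the catalog image tag.
diff -u ~/5g-deployment-lab/ztp-repository/site-policies/fleet/active/common-412.yaml \
        ~/5g-deployment-lab/ztp-repository/site-policies/fleet/testing/common-412.yaml
diff -u ~/5g-deployment-lab/ztp-repository/site-policies/fleet/active/group-du-sno.yaml \
        ~/5g-deployment-lab/ztp-repository/site-policies/fleet/testing/group-du-sno.yaml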
Configure Kustomization for Policies
We need to create the required kustomization files as we did for SiteConfigs. In this case, policies also require a namespace where they will be created. Therefore, we will create the required namespace and the kustomization files.
- Policies need to live in a namespace, so let’s add it to the repo:

cat <<EOF > ~/5g-deployment-lab/ztp-repository/site-policies/policies-namespace.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: ztp-policies
  labels:
    name: ztp-policies
EOF
- Create the required Kustomization files.

You can see that we commented out the group-du-sno-validator.yaml files in our Kustomization files. Since this is a lab environment and we don’t have PTP/SR-IOV hardware, the validator policies would not be able to verify our SNOs as well-configured DUs. We kept the files here so you know how a real environment should be configured.

cat <<EOF > ~/5g-deployment-lab/ztp-repository/site-policies/kustomization.yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - fleet/
  - sites/
resources:
  - policies-namespace.yaml
EOF

cat <<EOF > ~/5g-deployment-lab/ztp-repository/site-policies/sites/kustomization.yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - hub-1/
EOF

cat <<EOF > ~/5g-deployment-lab/ztp-repository/site-policies/sites/hub-1/kustomization.yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
generators:
  - site-sno2.yaml
resources:
  - hub1-sites-data.yaml
EOF

cat <<EOF > ~/5g-deployment-lab/ztp-repository/site-policies/fleet/kustomization.yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - active/
  - testing/
EOF

cat <<EOF > ~/5g-deployment-lab/ztp-repository/site-policies/fleet/active/kustomization.yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
generators:
  - common-412.yaml
  - group-du-sno.yaml
# - group-du-sno-validator.yaml
EOF

cat <<EOF > ~/5g-deployment-lab/ztp-repository/site-policies/fleet/testing/kustomization.yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
generators:
  - common-412.yaml
  - group-du-sno.yaml
# - group-du-sno-validator.yaml
EOF
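Before pushing, it can be useful to double-check the layout of the site-policies tree against the entries referenced in the kustomization files. Note that a plain kustomize build on the workstation will not render the PolicyGenTemplate generators (that rendering is done by the ZTP plugin running in the hub’s ArgoCD instance), so this sketch only lists the files we just created:

# Lists every manifest under site-policies; compare the output against the
# generators/resources/bases entries above to catch path or file name typos.
find ~/5g-deployment-lab/ztp-repository/site-policies -name '*.yaml' | sort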
- At this point, we can push the changes to the repo and continue to the next section.

cd ~/5g-deployment-lab/ztp-repository
git add --all
git commit -m 'Added policies information'
git push origin main
cd ~
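Once the push completes, the ArgoCD application watching the policies path will pick up the changes on its next refresh. As a hedged check, the namespace and the presence of a policies application below assume the default GitOps ZTP setup from the earlier sections:

# Run against the hub cluster; shows the sync and health status of the
# ArgoCD applications, including the one tracking the site-policies directory.
oc get applications.argoproj.io -n openshift-gitops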