Install Using Advanced Cluster Management
SUMMARY Learn how to install CN2 using Advanced Cluster Management (ACM).
The procedures in this section show how you can install or import a CN2 cluster using ACM.
Before proceeding, you need to create an ACM hub cluster. See Create an ACM Hub Cluster for an example of how to create a hub cluster. The hub cluster provides the ACM functionality. It does not contain any CN2 components.
Install with User-Managed Networking Using ACM
Use this procedure to bring up an OpenShift cluster with user-managed networking using ACM. User-managed networking refers to a deployment where you explicitly provide an external load balancer for your installation.
Ensure you've set up the ACM hub cluster before you start this procedure.
- Copy the kubeconfig of the hub cluster to the default kubeconfig location (~/.kube/config) on the installation computer where you're running this procedure.
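  If the hub cluster's kubeconfig resides on another machine, you can copy it over SSH. This is a minimal sketch; the source host and path are hypothetical placeholders, not values from this procedure:

# Copy the hub cluster kubeconfig into the default location.
# "user@hub-host" and the source path are hypothetical placeholders.
mkdir -p ~/.kube
scp user@hub-host:/path/to/hub-kubeconfig ~/.kube/config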
- Prepare the deployment by setting up an SSH key and downloading the pull secret.
  - Create an SSH key that you'll use to access the nodes in your cluster.

ssh-keygen

    We're leaving the SSH key in its default location, ~/.ssh/id_rsa.pub.
  - Download the image pull secret from your Red Hat account onto your local computer. The pull secret allows your installation to access services and registries that serve container images for OpenShift components.
    You can download the pull secret file (pull-secret) from the https://console.redhat.com/openshift/downloads page.
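    If you'd like to sanity-check the download, you can verify that the file is valid JSON. An optional check, assuming the file is saved as pull-secret in your current directory and jq is installed:

# jq exits non-zero if the pull secret is not well-formed JSON.
jq empty pull-secret && echo "pull-secret looks valid"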
- Create the namespace that you'll use for the managed cluster configuration. We'll call the namespace mgmt-spoke1.

oc create ns mgmt-spoke1
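  If you want to confirm the namespace exists before continuing, you can list it:

# The namespace should be listed with status Active.
oc get ns mgmt-spoke1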
- Convert all the CN2 manifests that you plan to use to ConfigMaps.
  Here's an example of the ConfigMap for the 110-vroutermasters-cr.yaml manifest. The ConfigMap structure contains the contents of 110-vroutermasters-cr.yaml in the data section. For convenience, we provide a script (in the steps below) that performs the manifest-to-ConfigMap conversions.

kind: ConfigMap
apiVersion: v1
metadata:
  name: 110-vroutermasters-cr-yaml
  namespace: mgmt-spoke1
data:
  110-vroutermasters-cr.yaml : |
    apiVersion: dataplane.juniper.net/v1alpha1
    kind: Vrouter
    metadata:
      name: contrail-vrouter-masters
      namespace: contrail
    spec:
      agent:
        default:
          xmppAuthEnable: false
          httpServerPort: 18085
          sandesh:
            introspectSslEnable: false
      common:
        containers:
        - image: enterprise-hub.jnpr.net/contrail-container-prod/contrail-vrouter-agent:<release>
          name: contrail-vrouter-agent
        - image: enterprise-hub.jnpr.net/contrail-container-prod/contrail-init:<release>
          name: contrail-watcher
        - image: enterprise-hub.jnpr.net/contrail-container-prod/contrail-telemetry-exporter:<release>
          name: contrail-vrouter-telemetry-exporter
        initContainers:
        - image: enterprise-hub.jnpr.net/contrail-container-prod/contrail-init:<release>
          name: contrail-init
        - image: enterprise-hub.jnpr.net/contrail-container-prod/contrail-cni-init:<release>
          name: contrail-cni-init
  - First, create a template file that provides the ConfigMap structure. We'll call this file TEMPLATE.

kind: ConfigMap
apiVersion: v1
metadata:
  name: CHANGEME-yaml
  namespace: mgmt-spoke1
data:
  CHANGEME.yaml : |
  - Create the following bash script that steps through each CN2 manifest and generates a ConfigMap based on the above template.
    Note: This script modifies the original CN2 manifests slightly. We recommend you make a copy of the original manifests before proceeding.
    The script below steps through all the YAML files in the SRC_DIR directory and creates a corresponding ConfigMap for each manifest by applying the TEMPLATE and appending the contents of the original YAML file. The script places the resulting ConfigMap files into the DST_DIR directory.
    We'll call the script convert-manifests.sh. Modify the SRC_DIR and DST_DIR variables in the script as needed. Ensure that the SRC_DIR directory only contains the CN2 manifests that you want to use. See Manifests for a description of the CN2 manifests that we provide.

#!/bin/bash
SRC_DIR="/home/cn2/manifests"
DST_DIR="/home/cn2/tmp/config-maps"
CWD=$PWD
cd $SRC_DIR
for i in `ls *.yaml`
do
  echo "processing $i"
  # Derive the ConfigMap name from the filename: strip the .yaml
  # extension and replace underscores with hyphens.
  j="${i//.yaml/}"
  j="${j//_/-}"
  # Indent the manifest contents so they nest under the data key.
  sed -i -e 's/^/    /' $i
  # Fill in the template with the manifest name, then append the
  # indented manifest contents.
  cat $CWD/TEMPLATE | sed "s#CHANGEME#${j}#g" > $DST_DIR/$i
  cat $i >> $DST_DIR/$i
done
cd $CWD

    Make the script executable.

chmod +x convert-manifests.sh
  - Run the script.

./convert-manifests.sh
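    Optionally, you can verify that the generated ConfigMaps parse cleanly before applying them. A sketch, assuming the DST_DIR value from the example script:

# Client-side dry run: parses each generated ConfigMap without creating it.
for f in /home/cn2/tmp/config-maps/*.yaml
do
  oc apply --dry-run=client -f $f
done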
- Apply the ConfigMaps.
  Run the following commands from the DST_DIR directory, which is /home/cn2/tmp/config-maps in our example.

for i in *.yaml
do
  oc create -f $i
done
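  You can confirm that the ConfigMaps were created:

# Each converted manifest should appear as a ConfigMap in the namespace.
oc get configmaps -n mgmt-spoke1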
- Create and apply the manifest for the managed cluster.
  - Create the manifest. We'll call this file mgmt-spoke1.yaml.
    Here's an example of a manifest that has 3 control plane nodes and 2 worker nodes, and subnets consistent with other examples in this document:
    - machine network cidr - 172.16.0.0/24
    - cluster network cidr - 10.128.0.0/14
    - service network cidr - 172.31.0.0/16

apiVersion: v1
kind: Secret
metadata:
  name: assisted-deployment-pull-secret
  namespace: mgmt-spoke1
stringData:
  .dockerconfigjson: '<pull-secret>'
type: kubernetes.io/dockerconfigjson
---
apiVersion: extensions.hive.openshift.io/v1beta1
kind: AgentClusterInstall
metadata:
  name: mgmt-spoke1
  namespace: mgmt-spoke1
  annotations:
    agent-install.openshift.io/install-config-overrides: |
      { "networking": { "networkType": "Contrail" } }
spec:
  manifestsConfigMapRefs:
  - name: 050-dpdk-machineconfigpool-yaml
  - name: 051-worker-vfio-pci-yaml
  - name: 052-kargs-1g-hugepages-yaml
  - name: 099-disable-offload-master-yaml
  - name: 099-disable-offload-worker-yaml
  - name: 100-certificaterequests.cert-manager.io-yaml
  - name: 100-certificates.cert-manager.io-yaml
  - name: 100-challenges.acme.cert-manager.io-yaml
  - name: 100-clusterissuers.cert-manager.io-yaml
  - name: 100-configplane.juniper.net-apiservers-yaml
  - name: 100-configplane.juniper.net-apiserverstatuses-yaml
  - name: 100-configplane.juniper.net-contrailcertificatemanagers-yaml
  - name: 100-configplane.juniper.net-contrailcertificatemanagerstatuses-yaml
  - name: 100-configplane.juniper.net-controllers-yaml
  - name: 100-configplane.juniper.net-controllerstatuses-yaml
  - name: 100-configplane.juniper.net-kubemanagers-yaml
  - name: 100-configplane.juniper.net-kubemanagerstatuses-yaml
  - name: 100-contrailstatus.juniper.net-contrailstatusmonitors-yaml
  - name: 100-contrailstatus.juniper.net-contrailstatusmonitorstatuses-yaml
  - name: 100-controlplane.juniper.net-controls-yaml
  - name: 100-controlplane.juniper.net-controlstatuses-yaml
  - name: 100-dataplane.juniper.net-vrouters-yaml
  - name: 100-dataplane.juniper.net-vrouterstatuses-yaml
  - name: 100-datastore.juniper.net-etcds-yaml
  - name: 100-datastore.juniper.net-etcdstatuses-yaml
  - name: 100-issuers.cert-manager.io-yaml
  - name: 100-k8s.cni.cncf.io-network-attachment-definitions-yaml
  - name: 100-orders.acme.cert-manager.io-yaml
  - name: 100-plugins.juniper.net-apstraplugins-yaml
  - name: 100-plugins.juniper.net-apstrapluginstatuses-yaml
  - name: 101-contrail-deploy-namespace-yaml
  - name: 101-contrail-namespace-yaml
  - name: 101-contrail-system-namespace-yaml
  - name: 101-namespace-cert-manager-yaml
  - name: 102-clusterrole-cert-manager-cainjector-yaml
  - name: 102-clusterrole-cert-manager-controller-approve-cert-manager-io-yaml
  - name: 102-clusterrole-cert-manager-controller-certificates-yaml
  - name: 102-clusterrole-cert-manager-controller-certificatesigningrequests-yaml
  - name: 102-clusterrole-cert-manager-controller-challenges-yaml
  - name: 102-clusterrole-cert-manager-controller-clusterissuers-yaml
  - name: 102-clusterrole-cert-manager-controller-ingress-shim-yaml
  - name: 102-clusterrole-cert-manager-controller-issuers-yaml
  - name: 102-clusterrole-cert-manager-controller-orders-yaml
  - name: 102-clusterrole-cert-manager-edit-yaml
  - name: 102-clusterrole-cert-manager-view-yaml
  - name: 102-clusterrole-cert-manager-webhook-subjectaccessreviews-yaml
  - name: 102-clusterrolebinding-cert-manager-cainjector-yaml
  - name: 102-clusterrolebinding-cert-manager-controller-approve-cert-manager-io-yaml
  - name: 102-clusterrolebinding-cert-manager-controller-certificates-yaml
  - name: 102-clusterrolebinding-cert-manager-controller-certificatesigningrequests-yaml
  - name: 102-clusterrolebinding-cert-manager-controller-challenges-yaml
  - name: 102-clusterrolebinding-cert-manager-controller-clusterissuers-yaml
  - name: 102-clusterrolebinding-cert-manager-controller-ingress-shim-yaml
  - name: 102-clusterrolebinding-cert-manager-controller-issuers-yaml
  - name: 102-clusterrolebinding-cert-manager-controller-orders-yaml
  - name: 102-clusterrolebinding-cert-manager-webhook-subjectaccessreviews-yaml
  - name: 102-cn2-clusterrolebind-yaml
  - name: 102-cn2-cluterrole-yaml
  - name: 102-configmap-cert-manager-webhook-yaml
  - name: 102-contrail-deploy-serviceaccount-yaml
  - name: 102-contrail-serviceaccount-yaml
  - name: 102-contrail-system-serviceaccount-yaml
  - name: 102-mutatingwebhookconfiguration-cert-manager-webhook-yaml
  - name: 102-role-cert-manager-cainjector-leaderelection-yaml
  - name: 102-role-cert-manager-leaderelection-yaml
  - name: 102-role-cert-manager-webhook-dynamic-serving-yaml
  - name: 102-rolebinding-cert-manager-cainjector-leaderelection-yaml
  - name: 102-rolebinding-cert-manager-leaderelection-yaml
  - name: 102-rolebinding-cert-manager-webhook-dynamic-serving-yaml
  - name: 102-service-cert-manager-webhook-yaml
  - name: 102-service-cert-manager-yaml
  - name: 102-serviceaccount-cert-manager-cainjector-yaml
  - name: 102-serviceaccount-cert-manager-webhook-yaml
  - name: 102-serviceaccount-cert-manager-yaml
  - name: 102-validatingwebhookconfiguration-cert-manager-webhook-yaml
  - name: 103-contrail-clusterrole-yaml
  - name: 103-contrail-deploy-clusterrole-yaml
  - name: 103-contrail-system-clusterrole-yaml
  - name: 103-deployment-cert-manager-cainjector-yaml
  - name: 103-deployment-cert-manager-webhook-yaml
  - name: 103-deployment-cert-manager-yaml
  - name: 104-contrail-clusterrolebinding-yaml
  - name: 104-contrail-deploy-clusterrolebinding-yaml
  - name: 104-contrail-system-clusterrolebinding-yaml
  - name: 104-contrail-system-configmap-yaml
  - name: 105-contrail-operator-yaml
  - name: 106-apiserver-cr-yaml
  - name: 106-contrailcertificatemanager-cr-yaml
  - name: 106-etcd-cr-yaml
  - name: 107-controller-cr-yaml
  - name: 108-kubemanager-cr-yaml
  - name: 109-controlnode-cr-yaml
  - name: 110-vroutermasters-cr-yaml
  - name: 111-vrouternodes-cr-yaml
  - name: 112-vrouterdpdknodes-cr-yaml
  - name: 113-contrailstatusmonitor-cr-yaml
  clusterDeploymentRef:
    name: mgmt-spoke1
  imageSetRef:
    name: openshift-v4.12
  networking:
    machineNetwork:
    - cidr: 172.16.0.0/24
    clusterNetwork:
    - cidr: 10.128.0.0/14
      hostPrefix: 23
    serviceNetwork:
    - 172.31.0.0/16
    userManagedNetworking: true
  provisionRequirements:
    controlPlaneAgents: 3
    workerAgents: 2
  sshPublicKey: '<cluster_ssh_key>'
---
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: mgmt-spoke1
  namespace: mgmt-spoke1
spec:
  baseDomain: contrail.juniper.net
  clusterName: mgmt-spoke1
  controlPlaneConfig:
    servingCertificates: {}
  installed: false
  clusterInstallRef:
    group: extensions.hive.openshift.io
    kind: AgentClusterInstall
    name: mgmt-spoke1
    version: v1beta1
  platform:
    agentBareMetal:
      agentSelector:
        matchLabels:
          cluster-name: mgmt-spoke1
  pullSecretRef:
    name: assisted-deployment-pull-secret
---
apiVersion: agent.open-cluster-management.io/v1
kind: KlusterletAddonConfig
metadata:
  name: mgmt-spoke1
  namespace: mgmt-spoke1
spec:
  clusterName: mgmt-spoke1
  clusterNamespace: mgmt-spoke1
  clusterLabels:
    cloud: auto-detect
    vendor: auto-detect
  applicationManager:
    enabled: false
  certPolicyController:
    enabled: false
  iamPolicyController:
    enabled: false
  policyController:
    enabled: false
  searchCollector:
    enabled: false
---
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: mgmt-spoke1
  namespace: mgmt-spoke1
spec:
  hubAcceptsClient: true
---
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  name: mgmt-spoke1
  namespace: mgmt-spoke1
spec:
  clusterRef:
    name: mgmt-spoke1
    namespace: mgmt-spoke1
  sshAuthorizedKey: '<cluster_ssh_key>'
  agentLabelSelector:
    matchLabels:
      cluster-name: mgmt-spoke1
  ignitionConfigOverride: '<ignition-config>'
  pullSecretRef:
    name: assisted-deployment-pull-secret
    where:
    - <pull-secret> is the contents of the pull-secret file you downloaded from Red Hat earlier
    - <cluster_ssh_key> is the contents of the ~/.ssh/id_rsa.pub file you created earlier
    - <ignition-config> is the string below:

{"ignition_config_override": "{\"ignition\":{\"version\":\"3.1.0\"},\"systemd\":{\"units\":[{\"name\":\"ca-patch.service\",\"enabled\":true,\"contents\":\"[Service]\\nType=oneshot\\nExecStart=/usr/local/bin/ca-patch.sh\\n\\n[Install]\\nWantedBy=multi-user.target\"}]},\"storage\":{\"files\":[{\"path\":\"/usr/local/bin/ca-patch.sh\",\"mode\":720,\"contents\":{\"source\":\"data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKc3VjY2Vzcz0wCnVudGlsIFsgJHN1Y2Nlc3MgLWd0IDEgXTsgZG8KICB0bXA9JChta3RlbXApCiAgY2F0IDw8RU9GPiR7dG1wfSB8fCB0cnVlCmRhdGE6CiAgcmVxdWVzdGhlYWRlci1jbGllbnQtY2EtZmlsZTogfAokKHdoaWxlIElGUz0gcmVhZCAtYSBsaW5lOyBkbyBlY2hvICIgICAgJGxpbmUiOyBkb25lIDwgPChjYXQgL2V0Yy9rdWJlcm5ldGVzL2Jvb3RzdHJhcC1zZWNyZXRzL2FnZ3JlZ2F0b3ItY2EuY3J0KSkKRU9GCiAgS1VCRUNPTkZJRz0vZXRjL2t1YmVybmV0ZXMvYm9vdHN0cmFwLXNlY3JldHMva3ViZWNvbmZpZyBrdWJlY3RsIC1uIGt1YmUtc3lzdGVtIHBhdGNoIGNvbmZpZ21hcCBleHRlbnNpb24tYXBpc2VydmVyLWF1dGhlbnRpY2F0aW9uIC0tcGF0Y2gtZmlsZSAke3RtcH0KICBpZiBbWyAkPyAtZXEgMCBdXTsgdGhlbgoJcm0gJHt0bXB9CglzdWNjZXNzPTIKICBmaQogIHJtICR7dG1wfQogIHNsZWVwIDYwCmRvbmUK\"}}]},\"kernelArguments\":{\"shouldExist\":[\"ipv6.disable=1\"]}}"}

      This string contains an encoded script that configures the extended API server with the proper certificate.
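      If you want to review the encoded ca-patch.sh script before using the string, you can decode it. A sketch, assuming you've saved the <ignition-config> string to a file named ignition-config.json (a hypothetical name) and have jq installed:

# Unwrap the nested JSON, strip the data-URL prefix, and decode the script.
jq -r '.ignition_config_override' ignition-config.json \
  | jq -r '.storage.files[0].contents.source' \
  | sed 's#^data:text/plain;charset=utf-8;base64,##' \
  | base64 -d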
  - Apply the manifest.

oc apply -f mgmt-spoke1.yaml
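    You can confirm that the hub registered the new resources. ManagedCluster is cluster-scoped; the others live in the mgmt-spoke1 namespace:

# List the resources created by the manifest.
oc get managedcluster mgmt-spoke1
oc get clusterdeployment,agentclusterinstall,infraenv -n mgmt-spoke1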
- Get the download URL for the ISO image for the managed cluster nodes.

oc get infraenv -n mgmt-spoke1 mgmt-spoke1 -o jsonpath='{.status.isoDownloadURL}'
- Download the ISO image.

wget "<download_url>"
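  You can also combine the two steps into a single command:

# Fetch the ISO download URL and pass it directly to wget.
wget "$(oc get infraenv -n mgmt-spoke1 mgmt-spoke1 -o jsonpath='{.status.isoDownloadURL}')"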
- Boot up the cluster nodes with the downloaded ISO image.
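  As the nodes boot from the discovery ISO, they should register with the hub as Agent resources. You can watch for them with:

# Each booted node should appear as an Agent in the cluster namespace.
oc get agents -n mgmt-spoke1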
- Use the following commands to monitor the progress of the installation.

oc get agentclusterinstall -n mgmt-spoke1 mgmt-spoke1 -o jsonpath='{.status.conditions[-1].message}'

oc get agentclusterinstall -n mgmt-spoke1 mgmt-spoke1 -o jsonpath='{.status.debugInfo.stateInfo}'

  You can also monitor the installation on the ACM UI.
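  If you prefer to poll from the command line instead of rerunning the commands manually, a simple loop like the following works:

# Print the cluster's install-state summary every 30 seconds.
while true
do
  oc get agentclusterinstall -n mgmt-spoke1 mgmt-spoke1 -o jsonpath='{.status.debugInfo.stateInfo}'
  echo
  sleep 30
done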
Import an Existing CN2 Cluster to ACM
See Import an Existing Cluster to ACM for an example of how you can import a cluster.
Ensure you've set up the ACM hub cluster before you start this procedure.