Install CN2 Pipelines

Release: CN2 23.3
29-Jun-23

SUMMARY This section guides you through installing CN2 Pipelines.

Download CN2 Pipelines

Download the CN2 Pipelines files so that you can update them with the required tokens before installation.

To download the CN2 Pipelines tar file:

  1. Download the CN2 Pipelines Deployer files from Juniper Networks Downloads.
  2. Untar the downloaded files on the management server, as shown in the example below.
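
For example, to copy the archive to the management server and extract it (the archive name here is illustrative; use the actual filename from the download site):

scp contrail-pipelines-x.x.x.tgz user@<management-server>:
ssh user@<management-server>
tar -xzf contrail-pipelines-x.x.x.tgz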

Install the CN2 Pipelines Helm Chart

The CN2 Pipelines Helm chart is used to install and configure the CN2 Pipelines management cluster.

To install the CN2 Pipelines Helm chart on your management cluster:

  1. In your downloaded CN2 Pipelines Deployer files, locate the values.yaml in the folder contrail-pipelines-x.x.x/values.
  2. Input the chart values. For parameter descriptions, see Explanation of values.yaml.

    Example CN2 Pipelines values.yaml for the management cluster:

    ####################################################################
    #                 Common Configuration (global vars)               #
    ####################################################################
    global:
      docker_image_repo: docker.io  # Global Docker registry for non-Juniper images
      registry: enterprise-hub.juniper.net/contrail-container-prod/ # Global image registry to pull Juniper artifacts
      imagePullSecret: <base64 imagePullSecret> # Image pull secret for an authenticated registry; keep this commented out for an unauthenticated registry
      deployment_type: 'k8s' # 'k8s' for a CN2 Kubernetes cluster or 'openshift' for a CN2 OpenShift cluster
      managementServer: <managementServer> # CN2 Pipelines management server IP
    
      gitServer:
        access_token: <access_token> # eg: eTE1Y0p1Mlo4TGhiWFpfLTFSVEg= 
        gitlabBaseURL: <gitlabBaseURL> # eg: https://cnf-gitlab.net
        project: <project> # eg: devops/cn2/cn2-pipelines
        folderName: <folderName> # eg: cn2networkconfig
        branch: <branch> # eg: master
    
      cn2ClusterDetails:
        name: <cluster name> # CN2 cluster name
        server: <kubeAPI IP>   # CN2 cluster kubeapi server (should be reachable from management server)
        kubeconfig: cn2-cluster-kubeconfig # CN2 kubeconfig name, leave as default
        mountpath: /opt/cn2_workflows # CN2 test profile folder location
    
    workflow-objects:
      ssl_enabled: True # True if the CN2 cluster is deployed with SSL enabled; otherwise False
      ## Enable the OCP keys below only when deployment_type is 'openshift' ##
      #ocp_api_host_ip: <ocp_api_host_ip> # eg: '192.167.19.57'
      #ocp_api_host_name: <ocp_api_host_name> # eg: 'api.ocp-ss-571.net'
    
  3. Run the following command to install the CN2 Pipelines Helm chart with the release name cn2-pipeline:
    helm install cn2-pipeline . --timeout=20m
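
The trailing dot installs the chart from the current directory, so run the command from the chart root. A minimal sketch, assuming the folder extracted in the download step is the chart root:

cd contrail-pipelines-x.x.x
helm install cn2-pipeline . --timeout=20m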

Verify the CN2 Pipelines Helm Chart Installation

To verify the CN2 Pipelines Helm chart installation, run the following commands:

  1. List the Helm release in the current namespace.
    helm ls

    Output:

    NAME        	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART         	APP VERSION
    cn2-pipeline	default  	1       	2023-10-16 11:44:29.380155158 +0000 UTC	deployed	cn2-pipeline-1	23.2.0 
  2. Display all pods in all namespaces.
    kubectl get pods -A

    Output:

    NAMESPACE     NAME                                               READY   STATUS    RESTARTS        AGE
    argo-events   controller-manager-844d44-vf6r8                    1/1     Running   0               2m23s
    argo-events   eventbus-default-stan-0                            2/2     Running   0               2m18s
    argo-events   eventbus-default-stan-1                            2/2     Running   0               2m6s
    argo-events   eventbus-default-stan-2                            2/2     Running   0               2m4s
    argo-events   events-webhook-64dc49f456-p6rmw                    1/1     Running   0               2m23s
    argo-events   gitlab-eventsource-qhnz8-74c4c785dc-ggmpr          1/1     Running   2 (118s ago)    2m17s
    argo-events   gitlab-sensor-xc5s6-74c65564b8-m5cld               1/1     Running   3 (115s ago)    2m17s
    argo          argo-server-65566599f8-tv99s                       1/1     Running   0               2m23s
    argo          workflow-controller-77c44779bf-9b42k               1/1     Running   0               2m23s
    argocd        argocd-application-controller-0                    1/1     Running   0               2m23s
    argocd        argocd-dex-server-76d5bc7dc6-r5rnw                 1/1     Running   1 (2m15s ago)   2m23s
    argocd        argocd-notifications-controller-5ff9495c68-8z58l   1/1     Running   0               2m23s
    argocd        argocd-redis-857ddfd67b-2lfd2                      1/1     Running   0               2m23s
    argocd        argocd-repo-server-6dcd4856d4-hjv95                1/1     Running   0               2m23s
    argocd        argocd-server-7cf45b4594-cntd5                     1/1     Running   0               2m23s
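
You can also confirm that the GUI services were created. The NodePort values should match the ports used in Argo Log In below (the namespaces follow the pod listing above):

kubectl get svc -n argo
kubectl get svc -n argocd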

Argo CD and Helm Configuration

This topic lists the Argo components and configurations that are automated as part of the CN2 Pipelines Helm chart install.

  • Argo CD External Service—Creates a Kubernetes service of type NodePort or LoadBalancer. This external service provides access to the Argo CD API server and the Argo CD GUI.

  • Register Git Repository with CN2 Configurations—Configures repository credentials and connects your Git repository to Argo CD. Argo CD watches your Git repository and pulls configuration changes from it. This Git repository should contain only Kubernetes resources; Argo CD does not understand other types of YAML or other files.

  • Register Kubernetes Clusters—Registers a Kubernetes cluster to Argo CD. This process configures Argo CD to provision the Kubernetes resources in any Kubernetes cluster. Multiple Kubernetes clusters can be configured in Argo CD.

  • Create an Argo CD Application—Creates an application using the Argo CD GUI. Any application created in Argo CD must be associated with a Git repository and one Kubernetes cluster, as sketched after this list.
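
Although the Helm chart automates these steps, a minimal sketch of the resulting Application resource can help when you inspect or customize it. The values mirror the values.yaml examples above and are illustrative:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cn2-config                # illustrative application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://cnf-gitlab.net/devops/cn2/cn2-pipelines.git  # gitlabBaseURL + project from values.yaml
    targetRevision: master        # branch from values.yaml
    path: cn2networkconfig        # folderName from values.yaml
  destination:
    server: https://<kubeAPI IP>:6443  # the registered CN2 cluster
    namespace: default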

Argo Log In

After installing the CN2 Pipelines Helm chart, you have access to the Argo Workflow GUI and the Argo CD GUI.

Access Argo Workflow GUI

To access the Argo Workflow GUI, you need connectivity from the management cluster to the GUI through the NodePort service. The Argo Workflow GUI is accessed using the management server IP address and port 30550.

  1. Access the Argo Workflow GUI from your browser.

    https://<management-api-ip>:30550
  2. On the management node, run the following command to retrieve the login token.

    kubectl -n argo exec $(kubectl get pod -n argo -l 'app=argo-server' -o jsonpath='{.items[0].metadata.name}') -- argo auth token
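
    Paste the returned token into the token field on the Argo Workflow GUI login page to sign in.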

Access Argo CD GUI

To access the Argo CD GUI, you need connectivity from the management cluster to the GUI through the NodePort service. The Argo CD GUI is accessed using the management server IP address and port 30551.

  1. Access the Argo CD GUI from your browser.

    https://<management-api-ip>:30551
  2. On the management node, run the following command to retrieve the initial admin password. The username is admin.

    kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
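
    If you also use the argocd CLI (an assumption; the CLI is not installed by the chart), you can log in with the retrieved password. The --insecure flag accounts for the self-signed certificate that the NodePort service presents by default:

    argocd login <management-api-ip>:30551 --username admin --password <password> --insecure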

CN2 and Workflows

Argo Workflows is an open source container-native workflow engine for orchestrating parallel jobs on Kubernetes. You can define workflows where each step in the workflow is a container. You can also model multi-step workflows as a sequence of tasks or capture dependencies between tasks using a directed acyclic graph (DAG).

Why Workflows Are Needed

Workflows are used to invoke and run CN2 test cases after CN2 resources are provisioned by the GitOps engine. These workflows qualify the CN2 application configurations and generate test results for the configuration that is being deployed.

How Workflows Work and How CN2 Uses Workflows

Workflows are triggered whenever a CN2 resource is provisioned by the GitOps engine. Each CN2 resource, or group of CN2 resources, is mapped to a specific workflow test DAG. After these test suites complete successfully, the CN2 configurations are qualified for promotion from the staging or test environments to production environments.

CN2 Pipelines Service

The pipeline service listens for notifications from Argo Events about changes to Kubernetes resources. It exposes a service that Argo Events triggers with the data for the CN2 configuration that you applied. The CN2 Pipelines service is responsible for identifying which test workflow to trigger for that type of CN2 configuration. Because workflows are selected dynamically based on the objects in the notification, the listener service invokes a different workflow depending on the CN2 configuration that is applied.
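
To see the Argo Events objects that feed the pipeline service (the gitlab-eventsource and gitlab-sensor pods in the verification output above), you can list them directly:

kubectl -n argo-events get eventsources,sensors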

CN2 Pipelines Configurations

This topic shows examples for the CN2 Pipelines configurations.

Pipeline Configuration

The pipeline configuration is used by the pipeline engine and includes:

  • Pipeline commit threshold

  • Config map: cn2pipeline-configs

  • Namespace: argo-events

Example CN2 Pipelines configuration:

apiVersion: v1 
data: 
  testcase_trigger_threshold: "10" 
kind: ConfigMap 
metadata: 
  labels: 
    app.kubernetes.io/managed-by: Helm 
  name: cn2pipeline-configs 
  namespace: argo-events
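
To adjust the commit threshold after installation, you can patch the live config map. A sketch; whether the pipeline service picks up the change without a restart is an assumption to verify in your environment:

kubectl -n argo-events patch configmap cn2pipeline-configs --type merge -p '{"data":{"testcase_trigger_threshold":"5"}}'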

Test Workflow Template Parameter Configuration

All workflow template inputs are stored as configuration maps, which the pipeline service selects dynamically during execution.

Workflow to Kind Map

This mapping configuration maps workflow templates to CN2 resource kinds. Only one template is selected for execution, and the first matching entry has priority. An asterisk (*) in kind: ['*'] indicates that the template matches every kind, giving it higher priority than any other kind match and overriding every other mapping.

The workflow template to CN2 resource kind mapping includes:

  • Config map: cn2tmpl-to-kind-map

  • Namespace: argo-events

Following is an example configuration for the workflow template to CN2 resource kind mapping. Note the asterisk (*) in the kind: ['*'] entry of the kindmap.

kind: ConfigMap
apiVersion: v1
metadata:
  name: cn2tmpl-to-kind-map
  namespace: argo-events
data:
  kindmap: |
    - workflow: it-cloud
      kind: ['*']
    - workflow: custom-cnf-sample-test
      kind: ['namespace']
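
To inspect the mapping that is currently active on the management cluster, read the config map directly:

kubectl -n argo-events get configmap cn2tmpl-to-kind-map -o yaml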

Create Custom Workflows for the CN2 Pipelines

You can create custom workflow tests to validate your container network functions (CNFs).

To create a custom workflow, you can use the example custom test workflow templates provided with the CN2 Pipelines files. Every workflow has a set of input parameters, volume mounts, container creation steps, and so on. To understand workflow template creation, see Argo Workflows.

The following example custom test workflow templates are provided:

  • Input parameters to workflow

  • Mount volumes

  • Create Kubernetes resource using workflow (Template names: create-cnf-tf and create-cnf-service-tf)

  • Embedded code in workflow (Template name: test-access-tf)

  • Pull external code and execute within a container (Template name: test-service-tf)

To automate the inputs to the workflow during the pipeline run, a workflow parameter configuration map is created that holds the inputs for the workflow. The configuration map must have the same name as the workflow template.

In the following example, the template name is custom-cnf-sample-test, so a configuration map is created automatically with the same name. As part of the pipeline run, the pipeline service looks up the configuration map that matches the template name and gets the inputs, which are then automatically added to the workflow when the pipeline triggers it. A sketch of such a configuration map follows.
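
A minimal sketch of such a parameter configuration map, assuming one data key per workflow parameter (the exact schema the pipeline service expects is an assumption; the secret and directory values reuse the examples from the workflow template comments, and the image is hypothetical):

apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-cnf-sample-test   # must match the workflow template name
  namespace: argo-events
data:
  image: docker.io/python:3.9          # hypothetical test image
  kubeconfig_secret: kubeconfig-989348 # example value from the template comments
  report_dir: /root/SolutionReports    # example value from the template comments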

To trigger the custom workflow, you must also add or update its entry in the cn2tmpl-to-kind-map configuration map described earlier:

    - workflow: custom-cnf-sample-test 
      kind: ['namespace']

The following is an example workflow configuration for the template custom-cnf-sample-test:

apiVersion: argoproj.io/v1alpha1 
kind: WorkflowTemplate                  # new type of k8s spec 
metadata: 
  name: custom-cnf-sample-test     # name of the workflow spec 
  namespace: argo-events 
spec: 
  serviceAccountName: operate-workflow-sa 
  entrypoint: cnf-test-workflow       # invoke the workflows template 
  hostNetwork: true 
  arguments: 
    parameters: 
      - name: image               # the path to a test docker image 
        value: not_provided 
      - name: kubeconfig_secret   # eg: kubeconfig-989348 
        value: not_provided 
      - name: report_dir          # eg: /root/SolutionReports 
        value: not_provided 
  volumes: 
    - name: kubeconfig 
      secret: 
        secretName: "{{ `{{workflow.parameters.kubeconfig_secret}}` }}" 
    - name: reportdir 
      hostPath: 
        path: "{{ `{{workflow.parameters.report_dir}}` }}" 
  templates: 
  - name: create-cnf-tf 
    resource: 
      action: apply 
      #successCondition: status.succeeded > 0 
      #failureCondition: status.failed > 3 
      manifest: | 
        apiVersion: v1 
        kind: Pod 
        metadata: 
          name: webapp-cnf 
          namespace: argo-events 
          labels: 
            app.kubernetes.io/name: proxy 
        spec: 
          containers: 
          - name: nginx 
            image: {{ .Values.global.docker_image_repo }}/nginx:stable 
            ports: 
              - containerPort: 80 
                name: http-web-svc 
  - name: create-cnf-service-tf 
    resource: 
      action: apply 
      #successCondition: status.succeeded > 0 
      #failureCondition: status.failed > 3 
      manifest: | 
        apiVersion: v1 
        kind: Service 
        metadata: 
          name: webapp-service 
          namespace: argo-events 
        spec: 
          selector: 
            app.kubernetes.io/name: proxy 
          ports: 
          - name: webapp-http 
            protocol: TCP 
            port: 80 
            targetPort: http-web-svc 
  - name: test-access-tf 
    script: 
      image: "{{ `{{workflow.parameters.image}}` }}" 
      command: [python] 
      source: | 
        import time 
        print('--Test access to CNF--') 
        # The URL below is illustrative; this sample simulates the access check 
        url = 'webapp-service.argo-events.svc.cluster.local' 
        retry_max = 3 
        retry_cnt = 0 
        while retry_cnt < retry_max: 
          print('Response status code: {}'.format('200')) 
          time.sleep(1) 
          retry_cnt += 1 
          print('Monitoring access count: {}'.format(retry_cnt)) 
        print('Completed') 
  - name: test-service-tf 
    inputs: 
      artifacts: 
      - name: pyrunner 
        path: /usr/local/src/cn2_py_runner.py 
        mode: 0755 
        http: 
          url: https://raw.githubusercontent.com/roshpr/argotest/main/cn2-experiments/cn2_py_runner.py 
    script: 
      image: "{{ `{{workflow.parameters.image}}` }}" 
      command: [python] 
      args: ["/usr/local/src/cn2_py_runner.py", "4"] 
  - name: cnf-test-workflow 
    dag: 
      tasks: 
      - name: create-cnf 
        template: create-cnf-tf 
      - name: create-cnf-service 
        template: create-cnf-service-tf 
      - name: test-connectivity 
        template: test-access-tf 
        dependencies: [create-cnf-service] 
      - name: test-load 
        template: test-service-tf 
        dependencies: [create-cnf-service] 
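
To exercise this template outside the pipeline, you can submit it directly with the argo CLI. A sketch, assuming the CLI is installed on the management server; the image value is hypothetical, and the secret and report directory reuse the examples from the template comments:

argo submit -n argo-events --from workflowtemplate/custom-cnf-sample-test \
  -p image=docker.io/python:3.9 \
  -p kubeconfig_secret=kubeconfig-989348 \
  -p report_dir=/root/SolutionReports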