CN2 Pipeline Solution Test Architecture and Design
Overview
The Solution Test Automation Framework (STAF) is a common platform developed for automating and maintaining solution use cases that mimic real-world production scenarios.
- STAF can granularly simulate and control user personas, actions, and timing at scale, exposing the software to real-world scenarios with long-running traffic.
- The STAF architecture can be extended to allow the customer to plug in GitOps artifacts and create custom test workflows.
- STAF is implemented in Python on the Pytest test framework.
Use Case
STAF emulates Day 0, Day 1, and Day-to-Day operations in a customer environment. Use case tests are performed as a set of test workflows per user persona. Each user persona has its own operational scope:
- Operator—Performs global operations, such as cluster setup and maintenance, CN2 deployment, and so on.
- Architect—Performs tenant-related operations, such as onboarding, teardown, and so on.
- Site Reliability Engineer (SRE)—Performs operations in the scope of a single tenant only.
Currently, STAF supports IT cloud webservice and telco use cases.
Workflows
Workflows for a given tenant are executed sequentially only. Workflows for several tenants can be executed in parallel, with the exception of Operator tests.
The Day 0 operation, CN2 deployment, is currently independent of test execution. The rest of the workflows are executed as Solution Sanity Tests. In Pytest, each workflow is represented by a test suite.
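For illustration only, the following is a minimal sketch of how a workflow could be organized as a Pytest test suite. The fixture, class, and test names are hypothetical, not STAF's actual API.

import pytest

# Hypothetical tenant fixture; STAF's real fixtures are internal to the framework.
@pytest.fixture(scope="module")
def tenant():
    return {"name": "tenant-1", "namespace": "tenant-1-ns"}

# One workflow maps to one test suite; tests within a tenant's suite run sequentially.
class TestWebServiceWorkflow:
    def test_onboard_tenant(self, tenant):
        assert tenant["namespace"].endswith("-ns")

    def test_deploy_three_tier_app(self, tenant):
        # Day 1: create the frontend, middleware, and backend tiers here.
        ...

    def test_teardown_tenant(self, tenant):
        # Day-to-Day: remove the tenant's objects here.
        ...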
For test descriptions, see CN2 Pipeline Test Case Descriptions.
Profile
A workflow is executed against a use case instance described in a profile YAML file. The profile YAML describes parameters for namespaces, application layers, network policies, service types, and so on.
The profile file is mounted into the test container from outside, which gives you flexibility in your choice of scale parameters. For CN2 Pipeline Release 23.1, you can update the total number of pods only.
The complete set of profiles can be accessed from the downloaded CN2 Pipeline tar file, in the folder charts/workflow-objects/templates.
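Because a profile is plain YAML, it can be inspected with any YAML parser. The following minimal sketch (the file path is an assumption) reads the per-tier pod counts from the Isolated LoadBalancer profile shown later in this topic.

import yaml

# Path is illustrative; the profile is mounted into the test container from outside.
with open("/mnt/profiles/IsolatedLoadBalancerProfile.yml") as f:
    profile = yaml.safe_load(f)

web = profile["isl-lb-profile"]["WebService"]
# For CN2 Pipeline Release 23.1, pod counts are the only scale parameter you can change.
for tier in ("frontend", "middleware", "backend"):
    print(tier, web[tier]["n_pods"])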
The following sections show example profiles:
- Isolated LoadBalancer Profile
- Isolated NodePort Profile
- Multi-Namespace Contour Ingress LoadBalancer Profile
- Multi-Namespace Isolated LoadBalancer Profile
- Non-Isolated Nginx Ingress LoadBalancer Profile
Isolated LoadBalancer Profile
The IsolatedLoadBalancerProfile.yml file configures a three-tier webservice profile:
- Frontend pods are launched using a deployment with a replica count of two (2). These frontend pods are accessed from outside the cluster through the LoadBalancer service.
- Middleware pods are launched using a deployment with a replica count of two (2), and an allowed address pair (AAP) is configured on both pods. These pods are accessible through the ClusterIP service from the frontend pods.
- Backend pods are deployed with a replica count of two (2). Backend pods are accessible from the middleware pods through the ClusterIP service.
- Policies are created to allow only specific ports for each tier (see the sketch after the profile).
IsolatedLoadBalancerProfile.yml
isl-lb-profile:
  WebService:
    isolated_namespace: True
    count: 1
    frontend:
      external_network: custom
      n_pods: 2
      services:
        - service_type: LoadBalancer
          ports:
            - service_port: 21
              target_port: 21
              protocol: TCP
    middleware:
      n_pods: 2
      aap: active-standby
      services:
        - service_type: ClusterIP
          ports:
            - service_port: 80
              target_port: 80
              protocol: TCP
    backend:
      n_pods: 2
      services:
        - service_type: ClusterIP
          ports:
            - service_port: 3306
              target_port: 3306
              protocol: UDP
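The per-tier port restrictions in this profile could, for example, be expressed as Kubernetes NetworkPolicy objects. The following minimal sketch uses the Kubernetes Python client; the namespace, labels, and policy name are assumptions for illustration, not STAF's actual objects.

from kubernetes import client, config

config.load_kube_config()  # uses the kubeconfig referenced by the environment file

# Hypothetical policy: middleware accepts TCP/80 from frontend pods only.
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-frontend-to-middleware"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"tier": "middleware"}),
        policy_types=["Ingress"],
        ingress=[client.V1NetworkPolicyIngressRule(
            _from=[client.V1NetworkPolicyPeer(
                pod_selector=client.V1LabelSelector(match_labels={"tier": "frontend"})
            )],
            ports=[client.V1NetworkPolicyPort(protocol="TCP", port=80)],
        )],
    ),
)
client.NetworkingV1Api().create_namespaced_network_policy(
    namespace="isl-lb-tenant", body=policy  # namespace is illustrative
)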
Isolated NodePort Profile
The IsolatedNodePortProfile.yml file configures a three-tier webservice profile:
- Frontend pods are launched using a deployment with a replica count of two (2), with pod anti-affinity and an HTTP liveness probe enabled. These frontend pods are accessed from outside the cluster through the HAProxy NodePort ingress service.
- Middleware pods are launched using a deployment with a replica count of two (2), with a command liveness probe enabled. These pods are accessible through the ClusterIP service from the frontend pods.
- Backend pods are deployed with a replica count of two (2). Backend pods are accessible from the middleware pods through the ClusterIP service.
- Policies are created to allow only specific ports for each tier. Isolated namespace is enabled in this profile. (A sketch of the probe and anti-affinity settings follows the profile.)
isl-np-web-profile-w-haproxy-ingress:
  WebService:
    count: 1
    isolated_namespace: True
    frontend:
      n_pods: 2
      anti_affinity: true
      liveness_probe: HTTP
      ingress: haproxy_nodeport
      services:
        - service_type: NodePort
          ports:
            - service_port: 443
              target_port: 443
              protocol: TCP
            - service_port: 80
              target_port: 80
              protocol: TCP
    middleware:
      n_pods: 2
      liveness_probe: command
      services:
        - service_type: ClusterIP
          ports:
            - service_port: 80
              target_port: 80
              protocol: TCP
    backend:
      n_pods: 2
      services:
        - service_type: ClusterIP
          ports:
            - service_port: 3306
              target_port: 3306
              protocol: UDP
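The liveness_probe and anti_affinity switches in this profile correspond to standard Kubernetes pod settings. The following minimal sketch shows the kind of probe and anti-affinity objects they plausibly map to, using the Kubernetes Python client; the probe path, command, and labels are assumptions.

from kubernetes import client

# HTTP liveness probe, as for the frontend tier (the path is an assumption).
http_probe = client.V1Probe(
    http_get=client.V1HTTPGetAction(path="/", port=80),
    initial_delay_seconds=5,
    period_seconds=10,
)

# Command liveness probe, as for the middleware tier (the command is an assumption).
command_probe = client.V1Probe(
    _exec=client.V1ExecAction(command=["cat", "/tmp/healthy"]),
    period_seconds=10,
)

# Pod anti-affinity to spread frontend replicas across worker nodes.
# These objects would be embedded in the deployment's pod spec.
anti_affinity = client.V1Affinity(
    pod_anti_affinity=client.V1PodAntiAffinity(
        required_during_scheduling_ignored_during_execution=[
            client.V1PodAffinityTerm(
                label_selector=client.V1LabelSelector(match_labels={"tier": "frontend"}),
                topology_key="kubernetes.io/hostname",
            )
        ]
    )
)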
Multi-Namespace Contour Ingress LoadBalancer Profile
The MultiNamespaceContourIngressLB.yml file configures a three-tier webservice profile:
- Frontend pods are launched using a deployment with a replica count of two (2). These frontend pods are accessed from outside the cluster through the Contour ingress LoadBalancer service.
- Middleware pods are launched using a deployment with a replica count of two (2). These pods are accessible through the ClusterIP service from the frontend pods.
- Backend pods are deployed with a replica count of two (2). Backend pods are accessible from the middleware pods through the ClusterIP service.
- Policies are created to allow only specific ports for each tier. Isolated namespace, multiple namespaces, and fabric forwarding are enabled in this profile. (A sketch of a Contour-class Ingress follows the profile.)
multi-ns-contour-ingress-profile:
  WebService:
    isolated_namespace: True
    multiple_namespace: True
    fabric_forwarding: True
    count: 1
    frontend:
      n_pods: 2
      ingress: contour_loadbalancer
      services:
        - service_type: ClusterIP
          ports:
            - service_port: 6443
              target_port: 6443
              protocol: TCP
    middleware:
      n_pods: 2
      services:
        - service_type: ClusterIP
          ports:
            - service_port: 80
              target_port: 80
              protocol: TCP
    backend:
      n_pods: 2
      services:
        - service_type: ClusterIP
          ports:
            - service_port: 3306
              target_port: 3306
              protocol: UDP
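For reference, a frontend Ingress with the Contour ingress class could be created as in the following minimal sketch, again using the Kubernetes Python client. The object name, namespace, path, and backend service name are assumptions; the contour ingress class name matches the test environment files later in this topic.

from kubernetes import client, config

config.load_kube_config()

# Hypothetical Ingress routing "/" to the frontend ClusterIP service on 6443.
ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="frontend-ingress"),
    spec=client.V1IngressSpec(
        ingress_class_name="contour",
        rules=[client.V1IngressRule(
            http=client.V1HTTPIngressRuleValue(paths=[
                client.V1HTTPIngressPath(
                    path="/",
                    path_type="Prefix",
                    backend=client.V1IngressBackend(
                        service=client.V1IngressServiceBackend(
                            name="frontend-svc",  # illustrative service name
                            port=client.V1ServiceBackendPort(number=6443),
                        )
                    ),
                )
            ])
        )],
    ),
)
client.NetworkingV1Api().create_namespaced_ingress(
    namespace="frontend-ns", body=ingress  # namespace is illustrative
)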
Multi-Namespace Isolated LoadBalancer Profile
The MultiNamespaceIsolatedLB.yml profile configures a three-tier webservice profile:
- Frontend pods are launched using a deployment with a replica count of two (2). These frontend pods are accessed from outside the cluster through a LoadBalancer service.
- Middleware pods are launched using a deployment with a replica count of two (2), and an allowed address pair is configured on both pods. These middleware pods are accessible through the ClusterIP service from the frontend pods.
- Backend pods are deployed with a replica count of two (2). Backend pods are accessible from the middleware pods through the ClusterIP service.
- Policies are created to allow only specific ports for each tier. Isolated namespace is enabled in this profile, in addition to multiple namespaces for the frontend, middleware, and backend deployments.
multi-ns-lb-profile:
  WebService:
    isolated_namespace: True
    multiple_namespace: True
    count: 1
    frontend:
      n_pods: 2
      services:
        - service_type: LoadBalancer
          ports:
            - service_port: 443
              target_port: 443
              protocol: TCP
            - service_port: 6443
              target_port: 6443
              protocol: TCP
    middleware:
      n_pods: 2
      aap: active-standby
      services:
        - service_type: ClusterIP
          ports:
            - service_port: 80
              target_port: 80
              protocol: TCP
    backend:
      n_pods: 2
      services:
        - service_type: ClusterIP
          ports:
            - service_port: 3306
              target_port: 3306
              protocol: UDP
Non-Isolated Nginx Ingress LoadBalancer Profile
The NonIsolatedNginxIngressLB.yml profile configures a three-tier webservice profile:
- Frontend pods are launched using a deployment with a replica count of two (2). These frontend pods are accessed from outside the cluster through an NGINX ingress LoadBalancer service.
- Middleware pods are launched using a deployment with a replica count of two (2). These middleware pods are accessible through the ClusterIP service from the frontend pods.
- Backend pods are launched as two (2) standalone pods rather than through a deployment. Backend pods are accessible from the middleware pods through the ClusterIP service.
- Policies are created to allow only specific ports for each tier. Namespace isolation is disabled in this profile.
non-isl-nginx-ingress-lb-profile:
  WebService:
    isolated_namespace: False
    count: 1
    frontend:
      ingress: nginx_loadbalancer
      n_pods: 2
      liveness_probe: HTTP
      services:
        - service_type: ClusterIP
          ports:
            - service_port: 80
              target_port: 80
              protocol: TCP
    middleware:
      n_pods: 2
      services:
        - service_type: ClusterIP
          ports:
            - service_port: 443
              target_port: 443
              protocol: TCP
    backend:
      n_pods: 2
      is_deployment: False
      liveness_probe: command
      services:
        - service_type: ClusterIP
          ports:
            - service_port: 3306
              target_port: 3306
              protocol: UDP
Test Environment Configuration
To configure the test environment, you deploy a YAML file that contains parameters describing the test execution environment. This topic shows example YAML files for both Kubernetes and OpenShift test environment configurations.
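Because the environment file is plain YAML, it can be inspected with any YAML parser. This minimal sketch (the file path is an assumption) prints the cluster kubeconfig path and the compute nodes, using the key names from the examples below.

import yaml

# Path is illustrative; key names follow the example environment files below.
with open("test_env.yml") as f:
    env = yaml.safe_load(f)

cluster = env["k8s_clusters"]["central"]
print("kubeconfig:", cluster["authentication"]["kubeconfig_file"])
for name, node in env["computes"].items():
    print(name, "mgmt:", node["mgmt_ip"], "data:", node["data_ip"])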
Kubernetes Environment
Following are example YAML files used to deploy and configure a Kubernetes test environment.
- CN2 in Kubernetes Environment — No vMX and No VM
- CN2 in Kubernetes Environment — Standard Test Setup
CN2 in Kubernetes Environment — No vMX and No VM
The following is an example YAML for configuring a Kubernetes test environment that does not have a Juniper Networks® MX Series 3D Universal Edge Router (vMX) and does not have an external virtual machine (VM) setup in the test environment.
k8s_clusters:
  central:
    authentication:
      type: cn2_client_cert
      kubeconfig_file: /root/.kube/config
    metadata:
      kubemanager: contrail-k8s-kubemanager
      multus: False
    ingress:
      haproxy_nodeport:
        ingress_class_name: haproxy-nodeport
        namespace: haproxy-controller
        service_name: haproxy-ingress
        service_port: 80
      nginx_loadbalancer:
        ingress_class_name: nginx-loadbalancer
        namespace: ingress-nginx
        service_name: ingress-nginx-controller
        service_port: 80
      contour_loadbalancer:
        ingress_class_name: contour
        namespace: projectcontour
        service_name: envoy
        service_port: 80
computes:
  cn2-test-node-1:
    username: username_is_here
    password: password_is_here
    mgmt_ip: ip_is_here
    data_ip: ip_is_here
  cn2-test-node-2:
    username: username_is_here
    password: password_is_here
    mgmt_ip: ip_is_here
    data_ip: ip_is_here
  cn2-test-node-3:
    username: username_is_here
    password: password_is_here
    mgmt_ip: ip_is_here
    data_ip: ip_is_here
  cn2-test-node-4:
    username: username_is_here
    password: password_is_here
    mgmt_ip: ip_is_here
    data_ip: ip_is_here
  cn2-test-node-5:
    username: username_is_here
    password: password_is_here
    mgmt_ip: ip_is_here
    data_ip: ip_is_here
  cn2-test-node-6:
    username: username_is_here
    password: password_is_here
    mgmt_ip: ip_is_here
    data_ip: ip_is_here
test_config:
  deployment:
    type: k8s
  traffic_image: enterprise-hub.jnpr.net/contrail-container-internal/ubuntu-traffic:v2.0.1
  registry:
    type: authenticated
CN2 in Kubernetes Environment — Standard Test Setup
The following example YAML shows a standard test setup for configuring a Kubernetes test environment.
k8s_clusters:
  central:
    authentication:
      type: cn2_client_cert
      kubeconfig_file: /root/.kube/config
    metadata:
      kubemanager: contrail-k8s-kubemanager
      multus: False
      dc_gw:
        - vmx-1
      rr:
        - vmx-1
      public_endpoint:
        - cn2-sanity-public-endpoint-1
    ingress:
      haproxy_nodeport:
        ingress_class_name: haproxy-nodeport
        namespace: haproxy-controller
        service_name: haproxy-ingress
        service_port: 80
      nginx_loadbalancer:
        ingress_class_name: nginx-loadbalancer
        namespace: ingress-nginx
        service_name: ingress-nginx-controller
        service_port: 80
      contour_loadbalancer:
        ingress_class_name: contour
        namespace: projectcontour
        service_name: envoy
        service_port: 80
devices:
  vmx-1:
    mgmt_ip: ip_is_here
    type: mx
    user_name: username_is_here
    password: password_is_here
    labels:
      - rr
      - gw
    attributes:
      public_interface: ge-0/0/1
      loopback: 10.1.1.1
      public_vrf: sanity-default-public-network
      config_group_name: DCGW
      bgp_group_name: Sanity-Cluster
      public_rt: "target:6000:99"
computes:
  cn2-sanity-public-endpoint-1:
    mgmt_ip: ip_is_here
    attributes:
      public_address: ip_is_here
      interfaces:
        ens192:
          peer: vmx-1
  cn2-test-node-1:
    username: username_is_here
    password: password_is_here
    mgmt_ip: ip_is_here
    data_ip: ip_is_here
  cn2-test-node-2:
    username: username_is_here
    password: password_is_here
    mgmt_ip: ip_is_here
    data_ip: ip_is_here
  cn2-test-node-3:
    username: username_is_here
    password: password_is_here
    mgmt_ip: ip_is_here
    data_ip: ip_is_here
  cn2-test-node-4:
    username: username_is_here
    password: password_is_here
    mgmt_ip: ip_is_here
    data_ip: ip_is_here
  cn2-test-node-5:
    username: username_is_here
    password: password_is_here
    mgmt_ip: ip_is_here
    data_ip: ip_is_here
  cn2-test-node-6:
    username: username_is_here
    password: password_is_here
    mgmt_ip: ip_is_here
    data_ip: ip_is_here
test_config:
  deployment:
    type: k8s
  traffic_image: enterprise-hub.jnpr.net/contrail-container-internal/ubuntu-traffic:v2.0.1
  registry:
    type: authenticated
OpenShift Environment
Following are example YAML files used to deploy and configure a Red Hat OpenShift test environment.
- CN2 in OpenShift Environment — No vMX and No VM Setup
- CN2 in OpenShift Environment — Standard Test Setup
CN2 in OpenShift Environment — No vMX and No VM Setup
The following is an example YAML for configuring an OpenShift test environment that does not have a Juniper Networks® MX Series 3D Universal Edge Router (vMX) and does not have an external virtual machine (VM) setup in the test environment.
k8s_clusters:
  central:
    authentication:
      type: cn2_client_cert
      kubeconfig_file: /root/.kube/config
    metadata:
      kubemanager: contrail-k8s-kubemanager
      multus: True
    ingress:
      haproxy_nodeport:
        ingress_class_name: haproxy-nodeport
        namespace: haproxy-controller
        service_name: haproxy-ingress
        service_port: 80
      nginx_loadbalancer:
        ingress_class_name: nginx
        namespace: nginx-ingress
        service_name: nginxingress-sample-nginx-ingress
        service_port: 80
      contour_loadbalancer:
        ingress_class_name: contour
        namespace: projectcontour
        service_name: envoy
        service_port: 80
test_config:
  deployment:
    type: openshift
  traffic_image: enterprise-hub.jnpr.net/contrail-container-internal/ubuntu-traffic:v2.0.1
  registry:
    type: authenticated
ocp_infra:
  ocp_api_host:
    ip: ip_is_here
    name: api.ocp.domain_is_here
  ocp_native_apps_host:
    ip: ip_is_here
computes:
  master1:
    username: username_is_here
    password: password_is_here
    mgmt_ip: ip_is_here
    data_ip: ip_is_here
  master2:
    username: username_is_here
    password: password_is_here
    mgmt_ip: ip_is_here
    data_ip: ip_is_here
  master3:
    username: username_is_here
    password: password_is_here
    mgmt_ip: ip_is_here
    data_ip: ip_is_here
  worker1:
    username: username_is_here
    password: password_is_here
    mgmt_ip: ip_is_here
    data_ip: ip_is_here
  worker2:
    username: username_is_here
    password: password_is_here
    mgmt_ip: ip_is_here
    data_ip: ip_is_here
CN2 in OpenShift Environment — Standard Test Setup
The following example YAML shows a standard test setup for configuring an OpenShift test environment.
k8s_clusters:
  central:
    authentication:
      type: cn2_client_cert
      kubeconfig_file: /root/.kube/config
    metadata:
      kubemanager: contrail-k8s-kubemanager
      multus: True
      dc_gw:
        - vmx1
      rr:
        - vmx1
      public_endpoint:
        - cn2-sanity-public-endpoint-1
    ingress:
      haproxy_nodeport:
        ingress_class_name: haproxy-nodeport
        namespace: haproxy-controller
        service_name: haproxy-ingress
        service_port: 80
      nginx_loadbalancer:
        ingress_class_name: nginx
        namespace: nginx-ingress
        service_name: nginxingress-sample-nginx-ingress
        service_port: 80
      contour_loadbalancer:
        ingress_class_name: contour
        namespace: projectcontour
        service_name: envoy
        service_port: 80
devices:
  vmx1:
    mgmt_ip: ip_is_here
    type: mx
    user_name: user_name_is_here
    password: password_is_here
    labels:
      - rr
      - gw
    attributes:
      public_interface: ge-0/0/1
      loopback: 10.1.1.1
      public_vrf: sanity-default-public-network
      config_group_name: DCGW
      bgp_group_name: Sanity-Cluster
      public_rt: "target:64512:1000"
test_config:
  deployment:
    type: openshift
  traffic_image: enterprise-hub.jnpr.net/contrail-container-internal/ubuntu-traffic:v2.0.1
  registry:
    type: authenticated
ocp_infra:
  ocp_api_host:
    ip: ip_is_here
    name: api.ocp.domain_is_here
  ocp_native_apps_host:
    ip: ip_is_here
computes:
  master1:
    username: username_is_here
    password: password_is_here
    mgmt_ip: ip_is_here
    data_ip: ip_is_here
  master2:
    username: username_is_here
    password: password_is_here
    mgmt_ip: ip_is_here
    data_ip: ip_is_here
  master3:
    username: username_is_here
    password: password_is_here
    mgmt_ip: ip_is_here
    data_ip: ip_is_here
  worker1:
    username: username_is_here
    password: password_is_here
    mgmt_ip: ip_is_here
    data_ip: ip_is_here
  worker2:
    username: username_is_here
    password: password_is_here
    mgmt_ip: ip_is_here
    data_ip: ip_is_here
  cn2-sanity-public-endpoint-1:
    mgmt_ip: ip_is_here
    username: username_is_here
    password: password_is_here
    attributes:
      public_address: ip_is_here
      interfaces:
        ens160:
          peer: vmx1
Kubeconfig File
The kubeconfig file data is used for authentication. The kubeconfig file is stored as a secret on the Argo host Kubernetes cluster; a sketch of creating this secret follows the list below.
Required Data:
- Server—The secret key should point to either the server IP address or the host name.
  - For Kubernetes setups, point to the master node IP address:
    server: https://xx.xx.xx.xx:6443
  - For OpenShift setups, point to the OpenShift Container Platform API server host name:
    server: https://api.ocp.xxxx.com:6443
- Client certificate—The Kubernetes client certificate.
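The following is a minimal sketch of storing a kubeconfig file as an Opaque secret with the Kubernetes Python client, assuming the current context points at the Argo host cluster; the secret name, key, and namespace are illustrative.

import base64
from kubernetes import client, config

config.load_kube_config()  # assumes the current context is the Argo host cluster

# Base64-encode the kubeconfig contents, as required for the secret's data field.
with open("/root/.kube/config", "rb") as f:
    kubeconfig_b64 = base64.b64encode(f.read()).decode()

secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="cn2-test-kubeconfig"),  # name is illustrative
    type="Opaque",
    data={"kubeconfig": kubeconfig_b64},
)
client.CoreV1Api().create_namespaced_secret(namespace="argo", body=secret)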
Logging and Reporting
Two types of log files are created during each test run:
- Pytest session log file—One per session
- Test suite log file—One per test suite
The default log file size is 50 MB. Log file rotation is supported.
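For reference, the following sketch shows how size-based rotation at 50 MB is typically configured with Python's standard logging module; the file name and logger name are illustrative, not STAF internals.

import logging
from logging.handlers import RotatingFileHandler

# Rotate the suite log when it reaches 50 MB, keeping a few rotated files.
handler = RotatingFileHandler(
    "test_suite.log", maxBytes=50 * 1024 * 1024, backupCount=5
)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(name)s %(message)s"))

logger = logging.getLogger("staf.suite")  # logger name is illustrative
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info("suite log initialized")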