
CN2 Pipeline Test Case Descriptions

SUMMARY This section lists the solution test cases and describes what each test case performs.

Architect Onboard

test_create_namespaces: all profiles

  1. For each profile in the use case profiles, create a webservice instance.

  2. Inside the webservice instance, create all the required namespaces (see the sketch after this list).
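
The namespace setup in Step 2 can be pictured with a minimal sketch using the official Kubernetes Python client. This is not the pipeline's actual code; the profile and namespace names below are placeholders.

  from kubernetes import client, config

  config.load_kube_config()
  core = client.CoreV1Api()

  # Placeholder mapping of use case profiles to the namespaces they need.
  PROFILE_NAMESPACES = {
      "profile-a": ["frontend", "middleware", "backend"],
  }

  for profile, namespaces in PROFILE_NAMESPACES.items():
      for ns in namespaces:
          body = client.V1Namespace(
              metadata=client.V1ObjectMeta(
                  name=f"{profile}-{ns}",
                  labels={"profile": profile, "name": ns},
              )
          )
          core.create_namespace(body)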

Architect Execute

test_validate_link_local_service

  1. Update the GlobalVrouterConfig (GVC) with the localhost link-local service (LLS).

  2. Update the GVC with multiple services, one of which has multiple fabric IP addresses.

  3. Delete the newly added LLS.

  4. Restore the original LLS set (see the sketch after this list).
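
As an illustration of the GVC updates in Steps 1 and 2, the sketch below patches a GlobalVrouterConfig custom resource with the Kubernetes Python client. The API group, version, plural, object name, and the link-local services field layout are assumptions and must be checked against the CRDs installed in the cluster.

  from kubernetes import client, config

  config.load_kube_config()
  crd = client.CustomObjectsApi()

  # ASSUMPTION: field names below are illustrative and may differ in a
  # given CN2 release.
  patch = {
      "spec": {
          "linkLocalServices": [
              {
                  "name": "metadata-lls",
                  "serviceIP": "169.254.169.254",
                  "servicePort": 80,
                  # One LLS entry mapped to multiple fabric IP addresses.
                  "fabricIPs": ["10.1.1.1", "10.1.1.2"],
                  "fabricPort": 8775,
              }
          ]
      }
  }

  crd.patch_cluster_custom_object(
      group="core.contrail.juniper.net",      # assumed API group
      version="v1alpha1",                     # assumed version
      plural="globalvrouterconfigs",          # assumed plural
      name="default-global-vrouter-config",   # assumed object name
      body=patch,
  )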

test_validate_mesh_virtual_network_router

  1. Remove all imports from all virtual network routers (VNRs). Routes should not be advertised.

  2. Add each VNR to the imports of every other VNR, creating a mesh across all three layers (see the sketch after this list).

  3. Update the NetworkPolicy so that the backend can talk to the frontend.

  4. Cascade the VNRs by removing the direct import between the frontend and backend VNRs.

  5. Negative test of updating the VNR type from mesh to hub.

  6. Update the virtual network (VN) label selector of middleware service VNR.

  7. Remove the added VN label selector.

  8. Update the VNR label of middleware service VNR.

  9. Reset the VNRs as specified by the profile.
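
To make the mesh wiring concrete, the sketch below creates one mesh-type VirtualNetworkRouter with the Kubernetes Python client. The CN2 API group/version and the exact spec field names (type, virtualNetworkSelector, import) are assumptions inferred from the steps above; labels and namespaces are placeholders.

  from kubernetes import client, config

  config.load_kube_config()
  crd = client.CustomObjectsApi()

  mesh_vnr = {
      "apiVersion": "core.contrail.juniper.net/v1alpha1",  # assumed group/version
      "kind": "VirtualNetworkRouter",
      "metadata": {
          "name": "frontend-mesh-vnr",
          "namespace": "frontend",
          "labels": {"vnr": "frontend"},
      },
      "spec": {
          "type": "mesh",
          # Which VNs this VNR attaches to (placeholder label).
          "virtualNetworkSelector": {"matchLabels": {"vn": "frontend"}},
          # Which other VNRs to import routes from (assumed field layout).
          "import": {
              "virtualNetworkRouters": [
                  {"virtualNetworkRouterSelector": {"matchLabels": {"vnr": "middleware"}}},
                  {"virtualNetworkRouterSelector": {"matchLabels": {"vnr": "backend"}}},
              ]
          },
      },
  }

  crd.create_namespaced_custom_object(
      group="core.contrail.juniper.net", version="v1alpha1",
      namespace="frontend", plural="virtualnetworkrouters", body=mesh_vnr,
  )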

test_update_namespace_label

  1. Update the namespace label from name=ns1 to name=ns2 (see the sketch after this list).

  2. Update the corresponding VirtualNetworkRouter objects.

  3. Update namespaceSelector in the backend NetworkPolicy rule.

  4. Reset the Namespace label, VNR selectors, and NetworkPolicy selector.
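
A minimal sketch of Steps 1 and 3, assuming placeholder namespace, policy, and label names: relabel the namespace, then point the backend NetworkPolicy's namespaceSelector at the new label.

  from kubernetes import client, config

  config.load_kube_config()
  core = client.CoreV1Api()
  net = client.NetworkingV1Api()

  # Step 1: change the namespace label from name=ns1 to name=ns2.
  core.patch_namespace("ns1-namespace", {"metadata": {"labels": {"name": "ns2"}}})

  # Step 3: update the backend NetworkPolicy so its ingress rule selects
  # the namespace by the new label. (The patch replaces the ingress list,
  # so every desired rule must be restated.)
  policy_patch = {
      "spec": {
          "ingress": [
              {"from": [{"namespaceSelector": {"matchLabels": {"name": "ns2"}}}]}
          ]
      }
  }
  net.patch_namespaced_network_policy("backend-policy", "backend", policy_patch)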

test_validate_hub_spoke_virtual_network_router

  1. Convert middleware to a hub VNR. Convert frontend and backend to spoke VNRs.

  2. Create new mesh VNRs and interconnect them all.

  3. Delete the middleware hub VNR and recreate it later in the test.

  4. Add a dummy VN label to the backend spoke VNR (negative test).

  5. Revert to the proper VN label.

  6. Update the backend VNR to a hub VNR (negative test).

  7. Update the backend VNR to a mesh VNR (negative test).

  8. Delete the newly created mesh VNR.

  9. Create a new hub VNR for middleware pod network without any import statements.

  10. Remove middleware pod network label from original hub VNR.

  11. Update the duplicate hub VNR with the import of the backend spoke VNR.

  12. Update backend spoke VNR with new middleware pod hub VNR.

  13. Change metadata label on new middleware VNR.

  14. Update the import statements of the backend spoke VNR.

  15. Create a custom VN matching the backend VN label.

  16. Update the custom VN label to a dummy value so that traffic fails.

  17. Remove the labels from backend VNR so all the VNs in the namespace are selected.

  18. Reset the configurations to the defined baseline profile.

test_update_forwarding_mode_on_namespace

  1. Set the forwarding mode annotation to false (neither ip-fabric nor fabric-snat).

  2. Set forwarding mode to ip-fabric.

  3. Set forwarding mode to fabric-snat.

  4. Reset the forwarding mode to its original value (see the sketch after this list).
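
The forwarding mode is driven by a namespace annotation. The sketch below shows the patching pattern with the Kubernetes Python client; the annotation key used here is a hypothetical placeholder, so check the CN2 documentation for the exact key your release expects.

  from kubernetes import client, config

  config.load_kube_config()
  core = client.CoreV1Api()

  # ASSUMPTION: placeholder annotation key, not necessarily what CN2 uses.
  FORWARDING_MODE_ANNOTATION = "example.contrail.juniper.net/forwarding-mode"

  def set_forwarding_mode(namespace, mode):
      # mode is one of "ip-fabric", "fabric-snat", or the cluster default.
      patch = {"metadata": {"annotations": {FORWARDING_MODE_ANNOTATION: mode}}}
      core.patch_namespace(namespace, patch)

  set_forwarding_mode("middleware", "ip-fabric")    # Step 2
  set_forwarding_mode("middleware", "fabric-snat")  # Step 3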

test_update_fabric_forwarding_on_external_vn

  1. Enable fabric forwarding on external VN.

  2. Reset fabric forwarding on external VN.

Architect Teardown

test_teardown_namespaces: all profiles

  1. For each profile in the use case profiles, tear down the namespaces.

SRE Onboard

test_onboard_services: all profiles

  1. For each profile in the use case profiles, create the number of instances specified by the count.

  2. Set up the deployment, services, and traffic generator for each instance.

SRE Execute

test_modify_liveness_probe

  1. Validate HTTP liveness probe failure.

  2. Validate exec liveness probe failure (see the sketch after this list).
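
Both probe styles exercised here are standard Kubernetes container settings. The sketch below creates a placeholder pod with an HTTP liveness probe and an exec liveness probe; the test induces failures by pointing the probes at a path or command that does not exist.

  from kubernetes import client, config

  config.load_kube_config()
  core = client.CoreV1Api()

  pod = {
      "apiVersion": "v1",
      "kind": "Pod",
      "metadata": {"name": "probe-demo", "namespace": "middleware"},
      "spec": {
          "containers": [{
              "name": "web",
              "image": "nginx:stable",
              "ports": [{"containerPort": 80}],
              "livenessProbe": {  # HTTP probe (Step 1)
                  "httpGet": {"path": "/healthz", "port": 80},
                  "periodSeconds": 5,
                  "failureThreshold": 3,
              },
          }, {
              "name": "sidecar",
              "image": "busybox:stable",
              "command": ["sleep", "3600"],
              "livenessProbe": {  # exec probe (Step 2)
                  "exec": {"command": ["cat", "/tmp/healthy"]},
                  "periodSeconds": 5,
              },
          }],
      },
  }
  core.create_namespaced_pod(namespace="middleware", body=pod)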

test_update_cluster_ip_service

  1. Create new middleware pods as replicas of the existing ones.

  2. Update the selector in the NetworkPolicy and Service.

  3. Update NetworkPolicy with additional ports.

  4. Update the target port from 9091 to 9092.

  5. Update the service port from 9091 to 9090.

  6. Update the protocol of the ClusterIP service from TCP to UDP.

  7. Remove the newly added service UDP/9090/9092.

  8. Map multiple services to the same podSelector label.

  9. Delete the newly created set of middleware pods and reset the ClusterIP selector (see the sketch after this list).
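
Steps 4 through 6 are plain Service patches. A minimal sketch, with placeholder service and namespace names and the port numbers from the steps:

  from kubernetes import client, config

  config.load_kube_config()
  core = client.CoreV1Api()

  def patch_service_port(name, namespace, port, target_port, protocol="TCP"):
      # The patch replaces spec.ports as a whole, so all desired ports
      # must be listed here.
      patch = {
          "spec": {
              "ports": [{
                  "name": "app",
                  "port": port,
                  "targetPort": target_port,
                  "protocol": protocol,
              }]
          }
      }
      core.patch_namespaced_service(name, namespace, patch)

  patch_service_port("middleware-svc", "middleware", 9091, 9092)         # Step 4
  patch_service_port("middleware-svc", "middleware", 9090, 9092)         # Step 5
  patch_service_port("middleware-svc", "middleware", 9090, 9092, "UDP")  # Step 6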

test_update_nodeport_service

  1. Create new frontend pods as replicas of the existing ones.

  2. Update selector with new labels.

  3. Update NetworkPolicy with additional ports.

  4. Update the target port from 9091 to 9092.

  5. Update the service port from 9091 to 9090.

  6. Update the session affinity to ClientIP so that traffic sticks to a specific pod.

  7. Update the external traffic policy to Local.

  8. Remove the newly added service TCP/9090/9092.

  9. Map multiple services to the same podSelector label.

  10. Update NodePort port from x to x+1 and validate the traffic.

  11. Delete the newly created set of frontend pods and reset the ClusterIP selector (see the sketch after this list).
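
Steps 6, 7, and 10 map onto ordinary Service fields. A minimal sketch, with placeholder names and an illustrative node port:

  from kubernetes import client, config

  config.load_kube_config()
  core = client.CoreV1Api()

  patch = {
      "spec": {
          "sessionAffinity": "ClientIP",       # Step 6: stick a client to one pod
          "externalTrafficPolicy": "Local",    # Step 7: only local endpoints serve traffic
          "ports": [{
              "name": "app",
              "port": 9090,
              "targetPort": 9092,
              "nodePort": 30091,               # Step 10: move the node port (x to x+1)
          }],
      }
  }
  core.patch_namespaced_service("frontend-svc", "frontend", patch)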

test_update_service_type

  1. Update service type from LoadBalancer to NodePort.

  2. Reset the service type back to LoadBalancer.

  3. Update the service type from ClusterIP to NodePort.

  4. Reset the service type back to ClusterIP.

test_update_ingress_network_policy

  1. Change the NetworkPolicy ingress rules to match on the namespace label instead of the pod label.

  2. Delete all the ingress rules. Frontend to middleware traffic is dropped (deny all).

  3. Update the ingress rules to a list of empty dictionaries (allow all traffic).

  4. Add back the ingress PolicyType and ingress rules.

  5. Add an ingress rule with ipBlock CIDR information (replace the podSelector with an ipBlock alone, /16).

  6. Create a new NetworkPolicy with an ipBlock CIDR and an except rule.

  7. Update the rule to except a specific IP address (deny traffic from one of the frontend pods only).

  8. Update the rule so that it has all three filters: namespaceSelector, podSelector, and ipBlock.

  9. Reset the rules to those specified by the profile (see the sketch after this list).
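
To anchor the ipBlock and except wording, the sketch below builds one ingress NetworkPolicy whose single rule carries an ipBlock peer with an except entry plus a peer that combines namespaceSelector and podSelector. Names, labels, ports, and CIDRs are placeholders.

  from kubernetes import client, config

  config.load_kube_config()
  net = client.NetworkingV1Api()

  policy = {
      "apiVersion": "networking.k8s.io/v1",
      "kind": "NetworkPolicy",
      "metadata": {"name": "middleware-ingress", "namespace": "middleware"},
      "spec": {
          "podSelector": {"matchLabels": {"app": "middleware"}},
          "policyTypes": ["Ingress"],
          "ingress": [{
              "from": [
                  # Peer 1: CIDR-based, excepting one frontend pod's address.
                  {"ipBlock": {"cidr": "10.0.0.0/16",
                               "except": ["10.0.1.15/32"]}},
                  # Peer 2: namespace label AND pod label in the same peer.
                  {"namespaceSelector": {"matchLabels": {"name": "frontend"}},
                   "podSelector": {"matchLabels": {"app": "frontend"}}},
              ],
              "ports": [{"protocol": "TCP", "port": 9091}],
          }],
      },
  }
  net.create_namespaced_network_policy(namespace="middleware", body=policy)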

test_update_egress_network_policy

  1. Change the NetworkPolicy egress rules to match on the namespace label instead of the ipBlock.

  2. Delete all the egress rules. (Middleware to backend traffic is dropped, deny all.)

  3. Update the egress rules to a list of empty dictionaries. (Allow all traffic.)

  4. Add back the egress PolicyType and egress rules.

  5. Update the rule to except a specific IP address. (Deny traffic to the backend service.)

  6. Create a new NetworkPolicy for the same podSelector that allows the excepted address.

  7. Update the rule so that it has all three filters: namespaceSelector, podSelector, and ipBlock.

  8. Reset the rules to those specified by the profile.

test_update_network_policy_policy_types

  1. Modify policyTypes to include Ingress alone. (Deny all outgoing.)

  2. Modify policyTypes to include Egress alone. (Deny all incoming.)

  3. Modify policyTypes to include none of the policy types. (Deny all incoming and outgoing.)

  4. Reset the rules to those specified by the profile (see the sketch after this list).
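
A minimal sketch of the policyTypes updates, with placeholder policy and namespace names; each call swaps the list contents.

  from kubernetes import client, config

  config.load_kube_config()
  net = client.NetworkingV1Api()

  def set_policy_types(name, namespace, types):
      # Patch only spec.policyTypes; the patch replaces the list as a whole.
      net.patch_namespaced_network_policy(
          name, namespace, {"spec": {"policyTypes": types}}
      )

  set_policy_types("middleware-policy", "middleware", ["Ingress"])            # Step 1
  set_policy_types("middleware-policy", "middleware", ["Egress"])             # Step 2
  set_policy_types("middleware-policy", "middleware", [])                     # Step 3 (the API may re-infer types from the rules present)
  set_policy_types("middleware-policy", "middleware", ["Ingress", "Egress"])  # Step 4 (reset)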

test_update_loadbalancer_service_general_properties

  1. Create new frontend pods as replicas of the existing ones.

  2. Update selector with new labels.

  3. Update NetworkPolicy with additional ports.

  4. Update the target port from 9091 to 9092.

  5. Update the service port from 9091 to 9090.

  6. Update the session affinity to ClientIP so that traffic sticks to a specific pod.

  7. Update the external traffic policy to Local.

  8. Remove the newly added service TCP/9090/9092.

  9. Map multiple services to the same podSelector label.

  10. Delete the newly created set of frontend pods and reset selectors to previously existing labels as a test case teardown.

test_validate_allowed_address_pair_failover

  1. Trigger Virtual Router Redundancy Protocol (VRRP) master switchover.

  2. Configure Allowed Address Pairs (AAP) mode as active-active.

  3. Reset AAP mode to active-backup.

  4. Reset VRRP master status.

test_validate_allowed_address_pair_update

  1. Update AAP IP from x to y.

  2. Update AAP IP to have multiple addresses.

test_update_lb_service_static_public_vn

  1. Create a VN (new-public-vn) under the service namespace with a custom public route target (RT1) assigned. Also, configure the Juniper Networks® MX Series 5G Universal Routing Platform (MX) with a routing instance and route targets that match the same VN properties.

  2. Create a VN (new-public-vn) under the default namespace with a custom public route target (RT2) assigned, and configure the MX accordingly.

  3. Update the namespace annotations ExternalNetwork to default new-public-vn.

  4. Create a new LoadBalancer service with the ExternalNetwork annotation set to new-public-vn.

  5. Update ExternalIP on the LoadBalancer service (both IPv4 and IPv6).

  6. Delete the LoadBalancer service and validate.

  7. Create a service without any annotation specified and validate traffic.

  8. Exhaust the IPv4 addresses on the public subnet by creating dummy LoadBalancer services.

  9. Create one more service and check that it is in the pending state because the IP address pool is exhausted.

  10. Delete one of the dummy LoadBalancer services.

  11. Validate the service that was created in Step 9.

  12. Update the namespace annotation external-virtual-network to new-public-vn (without a namespace prefix).

  13. Create a service without any annotation specified and validate traffic.

  14. Reset the namespace annotations to original value.

  15. Delete all services.

test_update_ingress_service

  1. Create an additional service for the frontend.

  2. Update the service referenced by the Ingress backend from the old service to the new service.

  3. Specify multiple service paths in the Ingress specification (see the sketch after this list).

  4. Remove one of the paths.

  5. Delete the ingress service.

  6. Recreate the ingress service.
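
For the multi-path step, a minimal Ingress sketch with two path rules backed by two placeholder frontend services:

  from kubernetes import client, config

  config.load_kube_config()
  net = client.NetworkingV1Api()

  ingress = {
      "apiVersion": "networking.k8s.io/v1",
      "kind": "Ingress",
      "metadata": {"name": "frontend-ingress", "namespace": "frontend"},
      "spec": {
          "rules": [{
              "host": "frontend.example.com",
              "http": {
                  "paths": [
                      {"path": "/v1", "pathType": "Prefix",
                       "backend": {"service": {"name": "frontend-svc",
                                               "port": {"number": 9091}}}},
                      {"path": "/v2", "pathType": "Prefix",
                       "backend": {"service": {"name": "frontend-svc-new",
                                               "port": {"number": 9091}}}},
                  ]
              },
          }],
      },
  }
  net.create_namespaced_ingress(namespace="frontend", body=ingress)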

test_update_label_of_public_network

  1. Update the custom public network label from local==public-test to local==unselect-public-vn. Existing services should not be affected.

  2. Create a new LoadBalancer service with the service.contrail.juniper.net/externalNetworkSelector=custom-external-in-service-namespace annotation set. The new service moves to the pending state (see the sketch after this list).

  3. Reset the label to local==public-test. The new service created in Step 2 gets a public IP address and becomes accessible from the Internet endpoint.

  4. Delete the new service.
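
For Step 2, the annotation named above sits in the Service metadata. A minimal sketch with placeholder service, namespace, selector, and ports; only the annotation key and value come from the step itself.

  from kubernetes import client, config

  config.load_kube_config()
  core = client.CoreV1Api()

  service = {
      "apiVersion": "v1",
      "kind": "Service",
      "metadata": {
          "name": "frontend-public",
          "namespace": "frontend",
          "annotations": {
              "service.contrail.juniper.net/externalNetworkSelector":
                  "custom-external-in-service-namespace",
          },
      },
      "spec": {
          "type": "LoadBalancer",
          "selector": {"app": "frontend"},
          "ports": [{"port": 80, "targetPort": 9091, "protocol": "TCP"}],
      },
  }
  core.create_namespaced_service(namespace="frontend", body=service)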

test_update_label_of_pods

  1. Update pod label, corresponding service, and NetworkPolicy selectors.

  2. Reset pod label, corresponding service, and NetworkPolicy selectors.

SRE Teardown

test_teardown_services

  1. Delete SRE objects created during the SRE onboard phase.