Use Juniper BNG CUPS with Multiple Geographical Redundancy

10-Jan-25

In a disaggregated BNG model, the control plane provides services to many user planes and their associated subscribers. The control plane’s role in the disaggregated model enables new use cases, services, and levels of network redundancy. Those use cases, in turn, require new levels of redundancy from the control plane itself. Moving the control plane into a cloud backed by a Kubernetes cluster enables these redundancies.

Kubernetes brings scalability, operational efficiency, and reliability to the solution. The modularity of a Kubernetes cloud enables cluster architectures with unparalleled redundancy. But even the most redundant cluster architecture is susceptible to events, such as natural disasters or cyberattacks, that target a specific location or geography. A multiple geographic, multiple cluster setup mitigates these susceptibilities.

Figure 1 shows an example of a multiple geographic, multiple cluster setup.

Figure 1: Multiple Geographies with Multiple Cluster Setup

In a multiple geographic, multiple cluster setup, the management cluster maintains a separate context for running multiple cluster scheduling and monitoring functions and is connected to both workload clusters. A policy engine drives the multiple cluster context and informs the scheduler how to distribute the application across the workload clusters. Typically, applications that use the multiple cluster setup for multiple geographical redundancy have policy rules that distribute the parts of the application involved in state replication to both workload clusters. Specific additional parts are distributed to only one workload cluster, which is chosen as the primary workload cluster.
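
The guide does not name the policy engine that drives the multiple cluster context. As an illustration only, the following sketch expresses the distribution pattern using a PropagationPolicy from the open-source Karmada project, one example of such an engine; the Deployment name state-cache and the cluster names geo-east and geo-west are hypothetical.

```yaml
# Hypothetical Karmada-style policy: place the state-replication
# component on BOTH workload clusters.
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: state-cache-both-geos
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: state-cache               # hypothetical component name
  placement:
    clusterAffinity:
      clusterNames:
        - geo-east                    # workload cluster 1
        - geo-west                    # workload cluster 2
    replicaScheduling:
      replicaSchedulingType: Duplicated   # run a full copy in each cluster
```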

The workload clusters accept work from the management cluster through the Kubernetes REST API. The workload clusters are standard Kubernetes clusters. A secure L3 tunnel is maintained between the workload clusters; the tunnel carries application state and general communication between the two clusters. As a standard Kubernetes cluster, a workload cluster monitors pods and deployments and performs scheduling tasks for the worker nodes in the cluster, maintaining the deployed application components. A workload cluster does not require the presence of the management cluster to maintain its application workloads. Once applications are deployed, maintaining them is the workload cluster’s responsibility.
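
Once a component has been propagated, nothing in its manifest depends on the management cluster; the workload cluster’s own controllers maintain it. A minimal sketch of such a Deployment, with a placeholder image name:

```yaml
# Minimal sketch of a propagated component. The workload cluster's own
# deployment controller keeps these replicas running even if the
# management cluster becomes unreachable.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: state-cache
spec:
  replicas: 2
  selector:
    matchLabels:
      app: state-cache
  template:
    metadata:
      labels:
        app: state-cache
    spec:
      containers:
        - name: state-cache
          image: registry.example.com/bng/state-cache:1.0   # placeholder
```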

If the management cluster detects that a workload cluster has failed, or that an application component cannot be satisfactorily scheduled on a workload cluster, the management cluster can drive a switchover event. The policies defined for the application control the switchover action. In a switchover event, any application components that were deployed on the failed workload cluster are redeployed to the other workload cluster.
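
Continuing the hypothetical Karmada illustration, a policy-driven switchover might be expressed with spread constraints (run in exactly one of the two clusters) plus a failover stanza; the component name and timing values are illustrative, not taken from the BNG CUPS Controller Helm charts.

```yaml
# Hypothetical switchover policy: schedule the component into exactly
# one of the two workload clusters, and reschedule it to the surviving
# cluster when the policy engine declares a failure.
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: control-plane-switchover
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: bng-control-plane         # hypothetical component name
  placement:
    clusterAffinity:
      clusterNames: [geo-east, geo-west]
    spreadConstraints:
      - spreadByField: cluster        # the spread unit is the cluster...
        maxGroups: 1                  # ...but place into exactly one
        minGroups: 1
  failover:
    application:
      decisionConditions:
        tolerationSeconds: 120        # tolerate the failure briefly
      purgeMode: Graciously           # remove the old copy after recovery
```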

The BNG CUPS Controller can be deployed in a multiple geographic, multiple cluster environment. The BNG CUPS Controller’s Helm charts include propagation policy rules that instruct the management cluster’s multiple cluster context to deploy an instance of the state cache microservice on both workload clusters. The two state cache instances communicate over the secure tunnel to mirror the subscriber state between the two geographies. The control plane instance is deployed on only one workload cluster. The control plane instance mirrors its state to its local state cache instance, which replicates the state to its peer in the other workload cluster.

Figure 2 shows a BNG CUPS Controller in a multiple geographic, multiple cluster setup.

Figure 2: BNG CUPS Controller in a Multiple Geographic and Multiple Cluster Setup

If the workload cluster where the control plane instance is deployed fails, the management cluster reschedules the control plane instance on the other workload cluster. When the control plane instance initializes on the second workload cluster, it recovers its configuration from a replicated configuration cache (not shown). The control plane instance also recovers its subscriber state from the local state cache instance, just as it would on any microservice restart. Because the local state cache received replicated state information from the previous workload cluster, all stable state is recovered. Once the state has been recovered, the control plane instance re-establishes its associations with the BNG User Planes. The BNG User Plane association logic detects that the new association originates from the same BNG CUPS Controller, but in a different geography.

The BNG CUPS Controller also deploys a microservice, called the observer, on the management cluster. The observer runs in the regular context of the management cluster and watches control plane instance scheduling events in the multiple cluster context associated with the management cluster. In switchover situations, where the control plane instance might exist temporarily in both workload clusters, the observer allows the BNG CUPS Controller to resolve any ambiguity over where the control plane instance should be running.
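
The guide does not describe how the observer is packaged. As a sketch only, assume the observer is an ordinary Deployment in the management cluster’s regular context that reaches the multiple cluster context through a mounted kubeconfig; the image, namespace, mount path, and secret name are hypothetical.

```yaml
# Hypothetical observer Deployment in the management cluster's regular
# context. It watches control plane scheduling events in the multiple
# cluster context through the mounted kubeconfig.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bng-observer
  namespace: bng-cups                 # hypothetical namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bng-observer
  template:
    metadata:
      labels:
        app: bng-observer
    spec:
      containers:
        - name: observer
          image: registry.example.com/bng/observer:1.0   # placeholder
          volumeMounts:
            - name: multicluster-kubeconfig
              mountPath: /etc/multicluster               # hypothetical path
      volumes:
        - name: multicluster-kubeconfig
          secret:
            secretName: multicluster-kubeconfig          # hypothetical secret
```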
