
Running Third-Party Applications in Containers

08-Feb-24

To run your own applications on Junos OS Evolved, you can deploy them in a Docker container. The container runs on Junos OS Evolved, and the applications run in the container, which keeps them isolated from the host OS. Containers are installed in a separate partition mounted at /var/extensions, and they persist across reboots and software upgrades.

Note:

Docker containers are not integrated into Junos OS Evolved; they are created and managed entirely through Linux by using Docker commands. For more information on Docker containers and commands, see the official Docker documentation: https://docs.docker.com/get-started/

Containers have default limits for the resources that they can use from the system:

  • Storage – The size of the /var/extensions partition is platform-driven: 8 GB or 30% of the total size of /var, whichever is smaller.

  • Memory – Containers have no physical memory limit by default. You can change this limit.

  • CPU – Containers have no CPU limit by default. You can change this limit.

Note:

You can modify the resource limits on containers if necessary. See Modifying Resource Limits for Containers.
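The storage default above follows a simple rule: the smaller of 8 GB and 30% of /var. A quick arithmetic sketch with an assumed /var size (the 20 GiB figure is an illustration, not taken from any device):

```shell
# Illustrative only: the /var/extensions sizing rule is min(8 GB, 30% of /var).
var_size_mib=20480                             # assume /var is 20 GiB
thirty_percent=$(( var_size_mib * 30 / 100 ))  # 30% of /var = 6144 MiB
cap_mib=8192                                   # the 8 GB cap, in MiB
if [ "$thirty_percent" -lt "$cap_mib" ]; then
  ext_size_mib=$thirty_percent                 # 30% is smaller, so it wins
else
  ext_size_mib=$cap_mib
fi
echo "/var/extensions would be ${ext_size_mib} MiB"
```

On this hypothetical 20 GiB /var, 30% (6144 MiB) is below the 8 GB cap, so the smaller value is used.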

Deploying a Docker Container

To deploy a Docker container:

  1. Start the Docker service bound to a VRF (for example, vrf0). All containers managed by this Docker service instance are bound to this Linux VRF.
    [vrf:vrf0] user@host_RE0:~# systemctl start docker@vrf0 
  2. Set the Docker socket for the client by configuring the following environment variable:
    [vrf:vrf0] user@host_RE0:~# export DOCKER_HOST=unix:///run/docker-vrf0.sock
  3. Import the image.
    Note:

    The URL for the import command needs to be changed for different containers.

    [vrf:vrf0] user@host_RE0:~# docker import http://198.0.2.2/lxc-images/images/pyez_new/2.1.9/amd64/default/20190225_19:53/rootfs.tar.xz 
  4. Make sure the image is downloaded, and get the image ID.
    [vrf:vrf0] user@host_RE0:~# docker image ls
    REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
    pyez                latest              738c70533604        59 seconds ago      491MB
    
  5. Create a container using the image ID and enter a bash session in that container.
    [vrf:vrf0] user@host_RE0:~# docker run -it --name pyez1 --network=host 738c70533604 bash
  6. Create a container with Packet IO and Netlink capability using the image ID, and enter a bash session in that container.
    [vrf:vrf0] user@host_RE0:~# docker run --rm -it --network=host --ipc=host --cap-add=NET_ADMIN --mount source=jnet,destination=/usr/evo --device=/dev/jtd0 -v /dev/mcgrp:/dev/mcgrp -v /dev/shm:/dev/shm --env-file=/run/docker-vrf0/jnet.env --dns ::1 738c70533604 bash
    Note:

    Docker containers are daemonized by default unless you use the -it argument.
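Steps 5 and 6 use -it to open an interactive session; per the note above, a container started without it runs daemonized. A hedged sketch of both modes, reusing the image ID from step 4 (the container name pyez2 and the sleep infinity keep-alive command are illustrative assumptions):

```shell
# Interactive: -it attaches a terminal; the container stops when bash exits
docker run -it --name pyez1 --network=host 738c70533604 bash

# Daemonized: without -it the container runs in the background.
# An imported rootfs has no default command, so one must be supplied.
docker run -d --name pyez2 --network=host 738c70533604 sleep infinity

# Reattach to the background container later
docker exec -it pyez2 bash
```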

Managing a Docker Container

Docker containers are managed through the standard Docker Linux workflow. Use the docker ps, ps, or top Linux commands to show which Docker containers are running, and use Docker commands to manage the containers. For more information on Docker commands, see: https://docs.docker.com/engine/reference/commandline/cli/
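For example, the standard Docker lifecycle commands apply unchanged (pyez1 is the container name used in the deployment procedure above):

```shell
docker ps -a          # list containers, including stopped ones
docker stop pyez1     # stop the container
docker start pyez1    # start it again
docker rm -f pyez1    # remove the container
docker image rm pyez  # remove the imported image
```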

Note:

Junos OS Evolved high availability features are not supported for custom applications in Docker containers. If an application has high availability functionality, you should run the application on each Routing Engine to ensure it can sync itself. Such an application needs the business logic required to manage itself and communicate with all of its instances.

Selecting a VRF for a Docker Container

Containers inherit the virtual routing and forwarding (VRF) instance from the Docker daemon. To run containers in a distinct VRF, a Docker daemon instance must be started in the corresponding VRF. The docker@<vrf>.service instance starts a daemon in the corresponding VRF. If the VRF is unspecified, it defaults to vrf0.

The docker.service runs in vrf:none by default.

The Docker daemon for a specific VRF listens on a corresponding socket located at /run/docker-<vrf>.sock.

This is the VRF as seen in Linux, not the Junos OS Evolved VRF. You can use the evo_vrf_name utility (available starting in Junos OS Evolved Release 24.1) to find the Linux VRF that corresponds to a Junos OS Evolved VRF.

The Docker client is associated with the VRF-specific Docker daemon by using the following arguments:

--env-file /run/docker-<vrf>/jnet.env
--host unix:///run/docker-<vrf>.sock or export DOCKER_HOST=unix:///run/docker-<vrf>.sock

For example, to run a container in vrf0 enter the following Docker command and arguments:

[vrf:none] user@host:~# docker -H unix:///run/docker-vrf0.sock run --rm -it --network=host --ipc=host --cap-add=NET_ADMIN --mount source=jnet,destination=/usr/evo --device=/dev/jtd0 -v /dev/mcgrp:/dev/mcgrp -v /dev/shm:/dev/shm --env-file=/run/docker-vrf0/jnet.env --dns ::1 debian:stretch ip link
1002: et-01000000000: <BROADCAST,MULTICAST,UP> mtu 1514 state UP qlen 1
    link/ether ac:a:a:18:01:ff brd ff:ff:ff:ff:ff:ff
1001: mgmt-0-00-0000: <BROADCAST,MULTICAST,UP> mtu 1500 state UP qlen 1
    link/ether 50:60:a:e:08:bd brd ff:ff:ff:ff:ff:ff
1000: lo0_0: <LOOPBACK,UP> mtu 65536 state UP qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Note:

A container can only be associated to a single VRF.

Modifying Resource Limits for Containers

The default resource limits for containers are controlled through a file located at /etc/extensions/platform_attributes. You will see the following text upon opening this file:

## Edit to change upper cap of total resource limits for all containers.
## applies only to containers and does not apply to container runtimes.
## memory.memsw.limit_in_bytes = EXTENSIONS_MEMORY_MAX_MIB + EXTENSIONS_MEMORY_SWAP_MAX_MIB:-0
## check current defaults, after starting extensions-cglimits.service
## $ /usr/libexec/extensions/extensions-cglimits get
## please start extensions-cglimits.service to apply changes here

## device size limit will be ignored once extensionsfs device is created
#EXTENSIONS_FS_DEVICE_SIZE_MIB=
#EXTENSIONS_CPU_QUOTA_PERCENTAGE=
#EXTENSIONS_MEMORY_MAX_MIB=
#EXTENSIONS_MEMORY_SWAP_MAX_MIB=

To change the resource limits for containers, add values to the EXTENSIONS entries at the bottom of the file:

  • EXTENSIONS_FS_DEVICE_SIZE_MIB= controls the maximum storage space that containers can use. Enter the value in mebibytes (MiB). The default is 8 GB or 30% of the total size of /var, whichever is smaller.

  • EXTENSIONS_CPU_QUOTA_PERCENTAGE= controls the maximum CPU usage that containers can have. Enter the value as a percentage of total CPU usage.

  • EXTENSIONS_MEMORY_MAX_MIB= controls the maximum amount of physical memory that containers can use. Enter the value in mebibytes (MiB).
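The comments in platform_attributes itself note that extensions-cglimits.service applies the values. A sketch of the edit-apply-verify loop (the 25% quota is only an example value, not a recommendation):

```shell
# Set an example CPU cap in the attributes file
# (a sketch; in practice, edit the existing EXTENSIONS_ entry in place)
echo 'EXTENSIONS_CPU_QUOTA_PERCENTAGE=25' >> /etc/extensions/platform_attributes

# Restart the service so the new limits take effect
systemctl restart extensions-cglimits.service

# Confirm the limits now in force
/usr/libexec/extensions/extensions-cglimits get
```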

CAUTION:

Before modifying the resource limits for containers, be aware of the CPU and memory requirements for the scale you need to support in your configuration. Exercise caution when increasing resource limits for containers so that they do not strain your system.
