Running Third-Party Applications in Containers
To run your own applications on Junos OS Evolved, you can deploy them in Docker containers. The containers run on Junos OS Evolved, and the applications run inside them, isolated from the host OS. Containers are installed in a separate partition mounted at /var/extensions and persist across reboots and software upgrades.
Docker containers are not integrated into Junos OS Evolved; they are created and managed entirely through Linux by using Docker commands. For more information on Docker containers and commands, see the official Docker documentation: https://docs.docker.com/get-started/
Containers have default limits for the resources that they can use from the system:
- Storage – The size of the /var/extensions partition is platform driven: 8 GB or 30% of the total size of /var, whichever is smaller.
- Memory – Containers have no default physical memory limit. This can be changed.
- CPU – Containers have no default CPU limit. This can be changed.
You can modify the resource limits on containers if necessary. See Modifying Resource Limits for Containers.
Deploying a Docker Container
To deploy a Docker container, use the standard Docker workflow from the Junos OS Evolved Linux shell.
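A minimal sketch of a typical deployment, assuming an application image archive has already been copied to the device (the file path, image name, and tag are placeholders, not Juniper-specific values):

user@host:~# docker load -i /var/tmp/myapp-image.tar.gz
user@host:~# docker run -d --name myapp --restart=always myapp:latest

The docker load command imports the image into the local image store (kept in the /var/extensions partition described above), and docker run starts the container detached; --restart=always restarts it automatically after a reboot, as described later in this topic.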
Managing a Docker Container
Docker containers are managed through the standard Docker Linux workflow. Use the docker ps, ps, or top Linux commands to show which Docker containers are running, and use Docker commands to manage the containers. For more information on Docker commands, see https://docs.docker.com/engine/reference/commandline/cli/.
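For example, the usual Docker lifecycle commands work as they do on any Linux host (the container name myapp is a placeholder):

user@host:~# docker ps
user@host:~# docker stop myapp
user@host:~# docker start myapp
user@host:~# docker rm myapp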
Junos OS Evolved high availability features are not supported for custom applications in Docker containers. If an application has high availability functionality, you should run the application on each Routing Engine (RE) to ensure it can sync itself. Such an application needs its own business logic to manage itself and communicate with all of its instances.
Enabling Netlink or PacketIO in a Container
You need to provide additional arguments to Docker commands if your container requires extra capabilities like Netlink or PacketIO. On certain releases, you also need to enable the nlsd service to enable Netlink functionality, as described at the end of this list. The following example shows how to activate Netlink or PacketIO capabilities for a container by adding arguments to a Docker command; a consolidated docker run sketch follows the list:
Create a read-only named persistent volume upon starting Docker services. Mounting the jnet volume mounts the libraries required for PacketIO and Netlink functionality over WAN/data ports:
--mount source=jnet,destination=/usr/evo
Share the host’s network and IPC namespaces with the container. Containers requiring PacketIO and Netlink functionality over WAN/data ports must run in the host network and IPC namespaces:
--network=host --ipc=host
Automatically start the container upon system reboot:
--restart=always
Enable the net admin capability, which is required by the Netlink and PacketIO libraries:
--cap-add=NET_ADMIN
Enable the environment variables required for Netlink and PacketIO over WAN/data ports:
--env-file=/run/docker/jnet.env
Mount the jtd0 device from the host to the container to help with PacketIO:
--device=/dev/jtd0
Mount the host’s /dev/shm directory to the container for Netlink and PacketIO over WAN/data ports:
-v /dev/shm:/dev/shm
If multicast group management is required by the container application, mount the /dev/mcgrp directory from the host to the container:
-v /dev/mcgrp:/dev/mcgrp
Starting in Junos OS Evolved Release 24.1R1, containers in the host network namespace that need DNS resolution must pass the --dns ::1 option to the docker run command. This is not required for Junos OS Evolved Release 23.4 and earlier:
--dns ::1
If your container requires Netlink-related processing, you also need to enable the Netlink asynchronous API (nlsd) process in Junos OS Evolved with the following CLI configuration:
[edit]
user@host# set system processes nlsd enable
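Putting these arguments together, the following is a sketch of a docker run command for a container that needs Netlink and PacketIO over WAN/data ports. The ubuntu:22.04 image and the bash command are placeholders, and the env-file path can vary by release and VRF, as described later in this topic:

user@host:~# docker run --rm -it --network=host --ipc=host --cap-add=NET_ADMIN --mount source=jnet,destination=/usr/evo --device=/dev/jtd0 -v /dev/shm:/dev/shm -v /dev/mcgrp:/dev/mcgrp --env-file=/run/docker/jnet.env --dns ::1 ubuntu:22.04 bash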
Native Linux or container-based applications that require PacketIO and Netlink functionality should be dynamically linked. We recommend using Ubuntu-based Docker containers, because they are the only containers officially qualified by Juniper Networks. Ubuntu-based containers should use a glibc that is compatible with the glibc of the base Junos OS Evolved system.
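As a quick compatibility check, you can compare the glibc versions reported on the host and inside the container, assuming the ldd utility is available in both environments:

user@host:~# ldd --version
root@container:/# ldd --version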
Selecting a VRF for a Docker Container
Containers inherit the virtual routing and forwarding (VRF) instance from the Docker daemon. To run containers in a distinct VRF, a Docker daemon instance needs to be started in the corresponding VRF. The docker@vrf.service instance allows for starting a daemon in the corresponding VRF. If the VRF is unspecified, the VRF defaults to vrf0. The docker.service runs in vrf:none by default.
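For example, assuming the systemd template-unit naming implied by docker@vrf.service (the instance name docker@vrf0.service is an assumption, not confirmed by this topic), you could inspect the daemon for vrf0 with:

[vrf:none] user@host:~# systemctl status docker@vrf0.service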
The Docker daemon for a specific VRF listens on a corresponding socket located at /run/docker-vrf.sock, where vrf is the Linux VRF name (for example, /run/docker-vrf0.sock for vrf0).
This is the VRF as seen on Linux, not the Junos OS Evolved VRF. The evo_vrf_name utility (available starting in Junos OS Evolved Release 24.1) can be used to find the Linux VRF that corresponds to a Junos OS Evolved VRF.
The Docker client is associated with the VRF-specific Docker daemon by using the following arguments, where vrf is the Linux VRF name:
--env-file /run/docker-vrf/jnet.env --host unix:///run/docker-vrf.sock
or:
export DOCKER_HOST=unix:///run/docker-vrf.sock
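For example, to point the Docker client at the vrf0 daemon for the rest of a shell session (the socket name follows the convention above):

[vrf:none] user@host:~# export DOCKER_HOST=unix:///run/docker-vrf0.sock
[vrf:none] user@host:~# docker ps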
For example, to run a container in vrf0, enter the following Docker command and arguments:
[vrf:none] user@host:~# docker -H unix:///run/docker-vrf0.sock run --rm -it --network=host --ipc=host --cap-add=NET_ADMIN --mount source=jnet,destination=/usr/evo --device=/dev/jtd0 -v /dev/mcgrp:/dev/mcgrp -v /dev/shm:/dev/shm --env-file=/run/docker-vrf0/jnet.env --dns ::1 debian:stretch ip link
1002: et-01000000000: <BROADCAST,MULTICAST,UP> mtu 1514 state UP qlen 1
    link/ether ac:a:a:18:01:ff brd ff:ff:ff:ff:ff:ff
1001: mgmt-0-00-0000: <BROADCAST,MULTICAST,UP> mtu 1500 state UP qlen 1
    link/ether 50:60:a:e:08:bd brd ff:ff:ff:ff:ff:ff
1000: lo0_0: <LOOPBACK,UP> mtu 65536 state UP qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
A container can be associated with only a single VRF.
Modifying Resource Limits for Containers
The default resource limits for containers are controlled through a file located at /etc/extensions/platform_attributes. You will see the following text upon opening this file:
## Edit to change upper cap of total resource limits for all containers.
## applies only to containers and does not apply to container runtimes.
## memory.memsw.limit_in_bytes = EXTENSIONS_MEMORY_MAX_MIB + EXTENSIONS_MEMORY_SWAP_MAX_MIB:-0
## check current defaults, after starting extensions-cglimits.service
## $ /usr/libexec/extensions/extensions-cglimits get
## please start extensions-cglimits.service to apply changes here
## device size limit will be ignored once extensionsfs device is created
#EXTENSIONS_FS_DEVICE_SIZE_MIB=
#EXTENSIONS_CPU_QUOTA_PERCENTAGE=
#EXTENSIONS_MEMORY_MAX_MIB=
#EXTENSIONS_MEMORY_SWAP_MAX_MIB=
To change the resource limits for containers, add values to the EXTENSIONS entries at the bottom of the file:
- EXTENSIONS_FS_DEVICE_SIZE_MIB= controls the maximum storage space that containers can use. Enter the value in mebibytes (MiB). The default value is 8 GB or 30% of the total size of /var, whichever is smaller.
- EXTENSIONS_CPU_QUOTA_PERCENTAGE= controls the maximum CPU usage that containers can have. Enter a value as a percentage of CPU usage.
- EXTENSIONS_MEMORY_MAX_MIB= controls the maximum amount of physical memory that containers can use. Enter the value in mebibytes (MiB).
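For example, to cap all containers at 16 GiB of storage, 25% CPU, and 2 GiB of physical memory (these values are purely illustrative), uncomment and set the entries as follows:

EXTENSIONS_FS_DEVICE_SIZE_MIB=16384
EXTENSIONS_CPU_QUOTA_PERCENTAGE=25
EXTENSIONS_MEMORY_MAX_MIB=2048

Then start extensions-cglimits.service to apply the changes, and verify the resulting limits with /usr/libexec/extensions/extensions-cglimits get, as noted in the file comments.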
Before modifying the resource limits for containers, be aware of the CPU and memory requirements for the scale that you need to support in your configuration. Exercise caution when increasing resource limits for containers, to prevent them from straining your system.