Installing vMX on KVM
Read this topic to understand how to install the virtual MX router in the KVM environment.
Preparing the Ubuntu Host to Install vMX
To prepare the Ubuntu host system for installing vMX (starting in Junos OS Release 15.1F6):
- Meet the minimum software and OS requirements described in Minimum Hardware and Software Requirements.
- See the Upgrading the Kernel and Upgrading to libvirt 1.2.19 sections below.
- If you are using Intel XL710 PCI-Express family cards, make sure you update the drivers. See Updating Drivers for the X710 NIC.
- Enable Intel VT-d in BIOS. (We recommend that you verify the process with the vendor because different systems have different methods to enable VT-d.) Refer to the procedure to enable VT-d available on the Intel website.
- Disable KSM by setting KSM_ENABLED=0 in /etc/default/qemu-kvm.
- Disable APIC virtualization by editing the /etc/modprobe.d/qemu-system-x86.conf file and adding enable_apicv=0 to the line containing options kvm_intel:
  options kvm_intel nested=1 enable_apicv=0
- Restart the host to disable KSM and APIC virtualization.
- If you are using SR-IOV, you must perform this step.
  Note: You must remove any previous installation with an external bridge in /etc/network/interfaces and revert to using the original management interface. Make sure that the ifconfig -a command does not show external bridges before you proceed with the installation. To determine whether an external bridge is displayed, use the ifconfig command to see the management interface. To confirm that this interface is used for an external bridge group, use the brctl show command to see whether the management interface is listed as an external bridge.
  Enable SR-IOV capability by turning on intel_iommu=on in the /etc/default/grub file:
  GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
  Append the intel_iommu=on string to any existing text for the GRUB_CMDLINE_LINUX_DEFAULT parameter. Run the update-grub command followed by the reboot command.
- For optimal performance, we recommend that you configure the size of Huge Pages to be 1G on the host and make sure that the NUMA node for the VFP has at least 16 1G Huge Pages. To configure the size of Huge Pages, add the following line in /etc/default/grub:
  GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=number-of-huge-pages"
  The number of Huge Pages must be at least 16 times the number of NUMA sockets (that is, 16 GB of 1G pages for each socket).
- Run the modprobe kvm-intel command before you install vMX. (You can verify these host settings with the sketch that follows this list.)
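Before installing vMX, you can confirm that the host reflects these settings. The following is a minimal verification sketch; the sysfs and procfs paths are standard on Linux KVM hosts, and the expected values assume the configuration described above (for example, a host with two NUMA sockets needs at least 32 1G Huge Pages).

# KSM should be disabled (expect 0)
cat /sys/kernel/mm/ksm/run
# APIC virtualization should be disabled for kvm_intel (expect N or 0)
cat /sys/module/kvm_intel/parameters/enable_apicv
# For SR-IOV only: the IOMMU must have been enabled at boot
grep intel_iommu=on /proc/cmdline
# Huge Pages: with 1G pages and two NUMA sockets, expect HugePages_Total of at least 32
grep -i hugepages /proc/meminfo
# The kvm_intel module should be loaded
lsmod | grep kvm_intel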
Starting in Junos OS Release 18.2, Ubuntu 16.04.5 LTS and Linux 4.4.0-62-generic are supported.
To meet the minimum software and OS requirements, you might need to perform these tasks:
Upgrading the Kernel
Upgrading the Linux kernel is not required on Ubuntu 16.04.
If your Ubuntu 14.04.1 LTS installation already runs Linux 3.19.0-80-generic, you can skip this step. Ubuntu 14.04 otherwise comes with a lower kernel version (Linux 3.13.0-24-generic) than the recommended version (Linux 3.19.0-80-generic).
To upgrade the kernel:
- Determine your version of the kernel.
  uname -a
  Linux rbu-node-33 3.19.0-80-generic #57-Ubuntu SMP Tue Jul 15 03:51:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
- If your version differs from the recommended version shown above, run the following commands:
  apt-get install linux-firmware
  apt-get install linux-image-3.19.0-80-generic
  apt-get install linux-image-extra-3.19.0-80-generic
  apt-get install linux-headers-3.19.0-80-generic
- Restart the system.
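If you script the upgrade, you can make it conditional on the running kernel. This is only a sketch; it assumes that the 3.19.0-80-generic packages are available from your configured APT repositories.

#!/bin/bash
# Install the recommended kernel only if the host is not already running it (sketch)
recommended="3.19.0-80-generic"
if [ "$(uname -r)" != "$recommended" ]; then
    apt-get install linux-firmware
    apt-get install "linux-image-$recommended" "linux-image-extra-$recommended" "linux-headers-$recommended"
    echo "Installed kernel $recommended; restart the system to boot into it."
fi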
Upgrading to libvirt 1.2.19
Ubuntu 16.04.5 supports libvirt version 1.3.1. Upgrading libvirt on Ubuntu 16.04 is not required.
Ubuntu 14.04 supports libvirt 1.2.2 (which works for VFP lite mode). If you are using the VFP performance mode or deploying multiple vMX instances using the VFP lite mode, you must upgrade to libvirt 1.2.19.
To upgrade libvirt:
- Make sure that you install all the packages listed in Minimum Hardware and Software Requirements.
- Navigate to the /tmp directory using the cd /tmp command.
- Get the libvirt-1.2.19 source code by using the wget http://libvirt.org/sources/libvirt-1.2.19.tar.gz command.
- Uncompress and untar the file using the tar xzvf libvirt-1.2.19.tar.gz command.
- Navigate to the libvirt-1.2.19 directory using the cd libvirt-1.2.19 command.
- Stop libvirtd with the service libvirt-bin stop command.
- Run the ./configure --prefix=/usr --localstatedir=/ --with-numactl command.
- Run the make command. The system displays the code compilation log.
- Run the make install command.
- Make sure that the libvirtd daemon is running. (Use the service libvirt-bin start command to start it again. If it does not start, use the /usr/sbin/libvirtd -d command.)
  root@vmx-server:~# ps aux | grep libvirtd
  root 1509 0.0 0.0 372564 16452 ? Sl 10:25 0:00 /usr/sbin/libvirtd -d
- Verify that the versions of libvirtd and virsh are 1.2.19.
  root@vmx-server:~# /usr/sbin/libvirtd --version
  libvirtd (libvirt) 1.2.19
  root@vmx-server:~# /usr/bin/virsh --version
  1.2.19
If you cannot deploy vMX after upgrading libvirt, bring down the virbr0 bridge with the ifconfig virbr0 down command and delete the bridge with the brctl delbr virbr0 command.
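The version check and the virbr0 workaround can be run together as a short check. This sketch simply combines the commands above; run the bridge cleanup only if deployment fails.

# Confirm the upgraded libvirt version (both should report 1.2.19)
/usr/sbin/libvirtd --version
/usr/bin/virsh --version
# Only if vMX deployment fails after the upgrade: remove the default virbr0 bridge
ifconfig virbr0 down
brctl delbr virbr0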
Updating Drivers for the X710 NIC
If you are using Intel XL710 PCI-Express family NICs, make sure you update the drivers before you install vMX.
To update the drivers:
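The exact driver package and steps depend on your NIC and Junos OS release. As a rough sketch, an out-of-tree Intel i40e driver build typically follows this pattern; the tarball name is an illustrative placeholder, not a specific required version.

# Build and load an updated i40e driver from Intel source (illustrative placeholder names)
tar xzf i40e-x.y.z.tar.gz
cd i40e-x.y.z/src
make
make install
rmmod i40e
modprobe i40e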
Install the Other Required Packages
apt-get install python-pip
apt-get install python-netifaces
pip install pyyaml
Preparing the Red Hat Enterprise Linux Host to Install vMX
To prepare the host system running Red Hat Enterprise Linux for installing vMX, perform the task for your version:
- Preparing the Red Hat Enterprise Linux 7.3 Host to Install vMX
- Preparing the Red Hat Enterprise Linux 7.2 Host to Install vMX
Preparing the Red Hat Enterprise Linux 7.3 Host to Install vMX
To prepare the host system running Red Hat Enterprise Linux 7.3 for installing vMX:
You can now install vMX.
When you install vMX with the sh vmx.sh -lv --install command, you might see a kernel version mismatch warning. You can ignore this warning.
Preparing the Red Hat Enterprise Linux 7.2 Host to Install vMX
To prepare the host system running Red Hat Enterprise Linux 7.2 for installing vMX:
You can now install vMX.
When you install vMX with the sh vmx.sh -lv --install command, you might see a kernel version mismatch warning. You can ignore this warning.
Preparing the CentOS Host to Install vMX
To prepare the host system running CentOS for installing vMX:
You can now install vMX.
When you install vMX with the sh vmx.sh -lv --install command, you might see a kernel version mismatch warning. You can ignore this warning.
Installing vMX for Different Use Cases
Installing vMX is different for specific use cases. The following tables list the sample configuration requirements for some vMX use cases.
| Use Case | Minimum vCPUs | Minimum Memory | NIC Device Type |
|---|---|---|---|
| Lab simulation (up to 100 Mbps performance) | 4: 1 for VCP, 3 for VFP | 5 GB: 1 GB for VCP, 4 GB for VFP | virtio |
| Low-bandwidth applications (up to 3 Gbps performance) | 10: 1 for VCP, 9 for VFP | 20 GB: 4 GB for VCP, 16 GB for VFP | virtio |
| High-bandwidth applications or performance testing (3 Gbps and beyond) | 10: 1 for VCP, 9 for VFP | 20 GB: 4 GB for VCP, 16 GB for VFP | SR-IOV |
| Dual virtual Routing Engines (Note: When deploying on separate hosts, you must set up a connection between the hosts for the VCPs to communicate with each other.) | Double the number of VCP resources for your use case (consumed when deploying both VCP instances) | Double the number of VCP resources for your use case (consumed when deploying both VCP instances) | virtio or SR-IOV |
| Use Case | Minimum vCPUs | Minimum Memory | NIC Device Type |
|---|---|---|---|
| Lab simulation (up to 100 Mbps performance) | 4: 1 for VCP, 3 for VFP | 5 GB: 1 GB for VCP, 4 GB for VFP | virtio |
| Low-bandwidth applications (up to 3 Gbps performance) | 8: 1 for VCP, 7 for VFP | 16 GB: 4 GB for VCP, 12 GB for VFP | virtio |
| High-bandwidth applications or performance testing (3 Gbps and beyond) | 8: 1 for VCP, 7 for VFP | 16 GB: 4 GB for VCP, 12 GB for VFP | SR-IOV |
| Dual virtual Routing Engines (Note: When deploying on separate hosts, you must set up a connection between the hosts for the VCPs to communicate with each other.) | Double the number of VCP resources for your use case (consumed when deploying both VCP instances) | Double the number of VCP resources for your use case (consumed when deploying both VCP instances) | virtio or SR-IOV |
| Use Case | Minimum vCPUs | Minimum Memory | NIC Device Type |
|---|---|---|---|
| Lab simulation (up to 100 Mbps performance) | 4: 1 for VCP, 3 for VFP | 5 GB: 1 GB for VCP, 4 GB for VFP | virtio |
| Low-bandwidth applications (up to 3 Gbps performance) | 8: 1 for VCP, 7 for VFP | 16 GB: 4 GB for VCP, 12 GB for VFP | virtio |
| High-bandwidth applications or performance testing (3 Gbps and beyond) | 8: 1 for VCP, 7 for VFP | 16 GB: 4 GB for VCP, 12 GB for VFP | SR-IOV |
| Use Case | Minimum vCPUs | Minimum Memory | NIC Device Type |
|---|---|---|---|
| Lab simulation (up to 100 Mbps performance) | 4: 1 for VCP, 3 for VFP | 10 GB: 2 GB for VCP, 8 GB for VFP | virtio |
| Low-bandwidth applications (up to 3 Gbps performance) | 4: 1 for VCP, 3 for VFP | 10 GB: 2 GB for VCP, 8 GB for VFP | virtio or SR-IOV |
| High-bandwidth applications or performance testing (3 Gbps and beyond, with a minimum of two 10Gb Ethernet ports; up to 80 Gbps of raw performance) | 8: 1 for VCP, 7 for VFP | 16 GB: 4 GB for VCP, 12 GB for VFP | SR-IOV |
| Use Case | Minimum vCPUs | Minimum Memory | NIC Device Type |
|---|---|---|---|
| Lab simulation (up to 100 Mbps performance) | 4: 1 for VCP, 3 for VFP | 8 GB: 2 GB for VCP, 6 GB for VFP | virtio |
| Low-bandwidth applications (up to 3 Gbps performance) | 4: 1 for VCP, 3 for VFP | 8 GB: 2 GB for VCP, 6 GB for VFP | virtio or SR-IOV |
| High-bandwidth applications or performance testing (3 Gbps and beyond, with a minimum of two 10Gb Ethernet ports; up to 80 Gbps of raw performance) | 5: 1 for VCP, 4 for VFP | 8 GB: 2 GB for VCP, 6 GB for VFP | SR-IOV |
From Junos OS Release 18.4R1 (Ubuntu host) and Junos OS Release 19.1R1 (Red Hat host), you can set the use_native_drivers value to true in the vMX configuration file to use the latest unmodified drivers for your network interface cards for vMX installations.
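For example, the setting is a single boolean line in the vMX startup configuration file. The placement shown here is illustrative only; check the configuration reference for your release for the exact location.

use_native_drivers : true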
To install vMX for a particular use case, perform one of the following tasks:
- Installing vMX for Lab Simulation
- Installing vMX for Low-Bandwidth Applications
- Installing vMX for High-Bandwidth Applications
- Installing vMX with Dual Routing Engines
- Installing vMX with Mixed WAN Interfaces
Installing vMX for Lab Simulation
Starting in Junos OS Release 14.1, the use case for lab simulation uses the virtio NIC.
To install vMX for the lab simulation (less than 100 Mbps) application use case:
Here is a sample vMX startup configuration file using the virtio device type for lab simulation:
---
#Configuration on the host side - management interface, VM images etc.
HOST:
    identifier                : vmx1   # Maximum 4 characters
    host-management-interface : eth0
    routing-engine-image      : "/home/vmx/vmxlite/images/junos-vmx-x86-64.qcow2"
    routing-engine-hdd        : "/home/vmx/vmxlite/images/vmxhdd.img"
    forwarding-engine-image   : "/home/vmx/vmxlite/images/vFPC.img"
---
#External bridge configuration
BRIDGES:
    - type : external
      name : br-ext   # Max 10 characters
---
#vRE VM parameters
CONTROL_PLANE:
    vcpus       : 1
    memory-mb   : 1024
    console_port: 8601
    interfaces  :
      - type    : static
        ipaddr  : 10.102.144.94
        macaddr : "0A:00:DD:C0:DE:0E"
---
#vPFE VM parameters
FORWARDING_PLANE:
    memory-mb   : 4096
    vcpus       : 3
    console_port: 8602
    device-type : virtio
    interfaces  :
      - type    : static
        ipaddr  : 10.102.144.98
        macaddr : "0A:00:DD:C0:DE:10"
---
#Interfaces
JUNOS_DEVICES:
    - interface   : ge-0/0/0
      mac-address : "02:06:0A:0E:FF:F0"
      description : "ge-0/0/0 interface"
    - interface   : ge-0/0/1
      mac-address : "02:06:0A:0E:FF:F1"
      description : "ge-0/0/1 interface"
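With a startup configuration file like this in place, you launch the installation with the vmx.sh orchestration script. The invocation below is a sketch: it assumes the vMX package was extracted to /home/vmx/vmxlite (matching the image paths above) and that the file above is saved as the default config/vmx.conf.

cd /home/vmx/vmxlite
# vmx.sh reads config/vmx.conf by default
sh vmx.sh -lv --install
# A kernel version mismatch warning during installation can be ignored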
Installing vMX for Low-Bandwidth Applications
Starting in Junos OS Release 14.1, the use case for low-bandwidth applications uses virtio or SR-IOV NICs.
To install vMX for the low-bandwidth (up to 3 Gbps) application use case:
Here is a sample vMX startup configuration file using the virtio device type for low-bandwidth applications:
---
#Configuration on the host side - management interface, VM images etc.
HOST:
    identifier                : vmx1   # Maximum 4 characters
    host-management-interface : eth0
    routing-engine-image      : "/home/vmx/vmx/images/junos-vmx-x86-64.qcow2"
    routing-engine-hdd        : "/home/vmx/vmx/images/vmxhdd.img"
    forwarding-engine-image   : "/home/vmx/vmx/images/vFPC.img"
---
#External bridge configuration
BRIDGES:
    - type : external
      name : br-ext   # Max 10 characters
---
#vRE VM parameters
CONTROL_PLANE:
    vcpus       : 1
    memory-mb   : 4096
    console_port: 8601
    interfaces  :
      - type    : static
        ipaddr  : 10.102.144.94
        macaddr : "0A:00:DD:C0:DE:0E"
---
#vPFE VM parameters
FORWARDING_PLANE:
    memory-mb   : 16384
    vcpus       : 9
    console_port: 8602
    device-type : virtio
    interfaces  :
      - type    : static
        ipaddr  : 10.102.144.98
        macaddr : "0A:00:DD:C0:DE:10"
---
#Interfaces
JUNOS_DEVICES:
    - interface   : ge-0/0/0
      mac-address : "02:06:0A:0E:FF:F0"
      description : "ge-0/0/0 interface"
    - interface   : ge-0/0/1
      mac-address : "02:06:0A:0E:FF:F1"
      description : "ge-0/0/1 interface"
Installing vMX for High-Bandwidth Applications
Starting in Junos OS Release 14.1, the use case for high-bandwidth applications uses the SR-IOV NICs.
To install vMX for the high-bandwidth (above 3 Gbps) application use case:
Here is a sample vMX startup configuration file using the SR-IOV device type:
---
#Configuration on the host side - management interface, VM images etc.
HOST:
    identifier                : vmx1   # Maximum 4 characters
    host-management-interface : eth0
    routing-engine-image      : "/home/vmx/images/junos-vmx-x86-64.qcow2"
    routing-engine-hdd        : "/home/vmx/images/vmxhdd.img"
    forwarding-engine-image   : "/home/vmx/images/vFPC.img"
---
#External bridge configuration
BRIDGES:
    - type : external
      name : br-ext   # Max 10 characters
---
#VCP VM parameters
CONTROL_PLANE:
    vcpus       : 1
    memory-mb   : 4096
    console_port: 8601
    interfaces  :
      - type    : static
        ipaddr  : 10.102.144.94
        macaddr : "0A:00:DD:C0:DE:0E"
---
#VFP VM parameters
FORWARDING_PLANE:
    memory-mb   : 16384
    vcpus       : 9
    console_port: 8602
    device-type : sriov
    interfaces  :
      - type    : static
        ipaddr  : 10.102.144.98
        macaddr : "0A:00:DD:C0:DE:10"
---
#Interfaces
JUNOS_DEVICES:
    - interface        : ge-0/0/0
      port-speed-mbps  : 10000
      nic              : eth1
      mtu              : 2000
      virtual-function : 0
      mac-address      : "02:06:0A:0E:FF:F0"
      description      : "ge-0/0/0 connects to eth1"
    - interface        : ge-0/0/1
      port-speed-mbps  : 10000
      nic              : eth2
      mtu              : 2000
      virtual-function : 0
      mac-address      : "02:06:0A:0E:FF:F1"
      description      : "ge-0/0/1 connects to eth2"
For more information, see Example: Enabling SR-IOV on vMX Instances on KVM.
Installing vMX with Dual Routing Engines
You can set up redundant Routing Engines on the vMX server by creating the primary Routing Engine (re0) and backup Routing Engine (re1) in the CONTROL_PLANE section of the vMX startup configuration file (default file is config/vmx.conf).
When deploying the Routing Engines on separate hosts, you must set up a connection between the hosts for the VCPs to communicate with each other.
Starting in Junos OS Release 18.1, to install vMX for the dual Routing Engines use case:
Here is a sample vMX startup configuration file that is deploying the first Routing Engine instance on this host:
---
#Configuration on the host side - management interface, VM images etc.
HOST:
    identifier                : vmx1   # Maximum 4 characters
    host-management-interface : eth0
    routing-engine-image      : "/home/vmx/images/junos-vmx-x86-64.qcow2"
    routing-engine-hdd        : "/home/vmx/images/vmxhdd.img"
    forwarding-engine-image   : "/home/vmx/images/vFPC.img"
---
#External bridge configuration
BRIDGES:
    - type : external
      name : br-ext   # Max 10 characters
---
#VCP VM parameters
CONTROL_PLANE:
    vcpus        : 1
    memory-mb    : 4096
    console_port : 8601
    deploy       : 0
    interfaces   :
      - type         : static
        instance     : 0
        ipaddr       : 10.102.144.94
        macaddr      : "0A:00:DD:C0:DE:0E"
      - type         : static
        instance     : 1
        console_port : 8612
        ipaddr       : 10.102.144.95
        macaddr      : "0A:00:DD:C0:DE:0F"
---
#VFP VM parameters
FORWARDING_PLANE:
    memory-mb   : 12288
    vcpus       : 10
    console_port: 8602
    device-type : sriov
    interfaces  :
      - type    : static
        ipaddr  : 10.102.144.98
        macaddr : "0A:00:DD:C0:DE:10"
---
#Interfaces
JUNOS_DEVICES:
    - interface        : ge-0/0/0
      port-speed-mbps  : 10000
      nic              : eth1
      mtu              : 2000
      virtual-function : 0
      mac-address      : "02:06:0A:0E:FF:F0"
      description      : "ge-0/0/0 connects to eth1"
    - interface        : ge-0/0/1
      port-speed-mbps  : 10000
      nic              : eth2
      mtu              : 2000
      virtual-function : 0
      mac-address      : "02:06:0A:0E:FF:F1"
      description      : "ge-0/0/1 connects to eth2"
Installing vMX with Mixed WAN Interfaces
Starting in Junos OS Release 17.2, the use case for mixed WAN interfaces uses the virtio and SR-IOV interfaces. Sample configuration requirements are the same as for using SR-IOV device type.
To install vMX with mixed interfaces:
Here is a sample vMX startup configuration file using mixed interfaces:
---
#Configuration on the host side - management interface, VM images etc.
HOST:
    identifier                : vmx1   # Maximum 4 characters
    host-management-interface : eth0
    routing-engine-image      : "/home/vmx/images/junos-vmx-x86-64.qcow2"
    routing-engine-hdd        : "/home/vmx/images/vmxhdd.img"
    forwarding-engine-image   : "/home/vmx/images/vFPC.img"
---
#External bridge configuration
BRIDGES:
    - type : external
      name : br-ext   # Max 10 characters
---
#VCP VM parameters
CONTROL_PLANE:
    vcpus       : 1
    memory-mb   : 4096
    console_port: 8601
    interfaces  :
      - type    : static
        ipaddr  : 10.102.144.94
        macaddr : "0A:00:DD:C0:DE:0E"
---
#VFP VM parameters
FORWARDING_PLANE:
    memory-mb   : 12288
    vcpus       : 10
    console_port: 8602
    device-type : mixed
    interfaces  :
      - type    : static
        ipaddr  : 10.102.144.98
        macaddr : "0A:00:DD:C0:DE:10"
---
#Interfaces
JUNOS_DEVICES:
    - interface   : ge-0/0/0
      type        : virtio
      mac-address : "02:06:0A:0E:FF:F0"
      description : "ge-0/0/0 interface"
    - interface        : ge-0/0/1
      type             : sriov
      port-speed-mbps  : 10000
      nic              : eth2
      mtu              : 2000
      virtual-function : 0
      mac-address      : "02:06:0A:0E:FF:F1"
      description      : "ge-0/0/1 connects to eth2"