Installing vMX on KVM

21-Sep-21

Read this topic to understand how to install the virtual MX router in the KVM environment.

Preparing the Ubuntu Host to Install vMX

To prepare the Ubuntu host system for installing vMX (starting in Junos OS Release 15.1F6):

  1. Meet the minimum software and OS requirements described in Minimum Hardware and Software Requirements.

  2. Upgrade the kernel and libvirt if required. See the Upgrading the Kernel and Upgrading to libvirt 1.2.19 sections below.

  3. If you are using Intel XL710 PCI-Express family cards, make sure you update the drivers. See Updating Drivers for the X710 NIC.

  4. Enable Intel VT-d in BIOS. (We recommend that you verify the process with the vendor because different systems have different methods to enable VT-d.)

    Refer to the procedure to enable VT-d available on the Intel Website.

  5. Disable KSM by setting KSM_ENABLED=0 in /etc/default/qemu-kvm.
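
    For example, one minimal way to apply this setting (a sketch, assuming the default Ubuntu /etc/default/qemu-kvm file already contains a KSM_ENABLED line):

    sed -i 's/^KSM_ENABLED=.*/KSM_ENABLED=0/' /etc/default/qemu-kvm   # set KSM_ENABLED=0
    grep KSM_ENABLED /etc/default/qemu-kvm                            # confirm the change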

  6. Disable APIC virtualization by editing the /etc/modprobe.d/qemu-system-x86.conf file and adding enable_apicv=0 to the line containing options kvm_intel.

    options kvm_intel nested=1 enable_apicv=0

  7. Restart the host to disable KSM and APIC virtualization.

  8. If you are using SR-IOV, you must perform this step.

    Note:

    You must remove any previous installation with an external bridge in /etc/network/interfaces and revert to using the original management interface. Make sure that the ifconfig -a command does not show external bridges before you proceed with the installation.

    To determine whether an external bridge is displayed, use the ifconfig command to see the management interface. To confirm that this interface is used for an external bridge group, use the brctl show command to see whether the management interface is listed as an external bridge.
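
    For example, assuming a leftover external bridge named br-ext (the name used in the sample configuration files later in this topic):

    brctl show              # check whether the management interface is listed under a bridge
    ifconfig br-ext down    # bring the leftover external bridge down
    brctl delbr br-ext      # delete the bridge, then restore the original management interface in /etc/network/interfaces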

    Enable SR-IOV capability by adding intel_iommu=on in the /etc/default/grub file.

    GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"

    Append the intel_iommu=on string to any existing text for the GRUB_CMDLINE_LINUX_DEFAULT parameter.

    Run the update-grub command followed by the reboot command.
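
    That is, after saving the file:

    update-grub
    reboot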

  9. For optimal performance, we recommend that you configure the size of Huge Pages to be 1G on the host and make sure that the NUMA node for the VFP has at least sixteen 1G Huge Pages. To configure the size of Huge Pages, add the following line in /etc/default/grub:

    GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=number-of-huge-pages"

    The number of Huge Pages must be at least (16G * number-of-numa-sockets). For example, a host with two NUMA sockets needs at least 32 1G Huge Pages (hugepages=32).

  10. Run the modprobe kvm-intel command before you install vMX.
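
    That is:

    modprobe kvm-intel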

Note:

Starting in Junos OS Release 18.2, Ubuntu 16.04.5 LTS and Linux 4.4.0-62-generic are supported.

To meet the minimum software and OS requirements, you might need to perform these tasks:

Upgrading the Kernel

Note:

Upgrading the Linux kernel is not required on Ubuntu 16.04.

Note:

If your Ubuntu 14.04.1 LTS installation already runs the 3.19.0-80-generic kernel, you can skip this procedure. By default, Ubuntu 14.04 comes with a lower kernel version (Linux 3.13.0-24-generic) than the recommended version (Linux 3.19.0-80-generic), so you must upgrade it.

To upgrade the kernel:

  1. Determine your version of the kernel.

    uname -a
    Linux rbu-node-33 3.19.0-80-generic #57-Ubuntu SMP Tue Jul 15 03:51:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
  2. If your kernel version is not 3.19.0-80-generic, run the following commands:

    apt-get install linux-firmware
    apt-get install linux-image-3.19.0-80-generic
    apt-get install linux-image-extra-3.19.0-80-generic
    apt-get install linux-headers-3.19.0-80-generic
    
  3. Restart the system.

Upgrading to libvirt 1.2.19

Note:

Ubuntu 16.04.5 ships with libvirt 1.3.1. Upgrading libvirt on Ubuntu 16.04 is not required.

Ubuntu 14.04 supports libvirt 1.2.2 (which works for VFP lite mode). If you are using the VFP performance mode or deploying multiple vMX instances using the VFP lite mode, you must upgrade to libvirt 1.2.19.

To upgrade libvirt:

  1. Make sure that you install all the packages listed in Minimum Hardware and Software Requirements.

  2. Navigate to the /tmp directory using the cd /tmp command.

  3. Get the libvirt-1.2.19 source code by using the command wget http://libvirt.org/sources/libvirt-1.2.19.tar.gz.

  4. Uncompress and untar the file using the tar xzvf libvirt-1.2.19.tar.gz command.

  5. Navigate to the libvirt-1.2.19 directory using the cd libvirt-1.2.19 command.

  6. Stop libvirtd with the service libvirt-bin stop command.

  7. Run the ./configure --prefix=/usr --localstatedir=/ --with-numactl command.

  8. Run the make command.

  9. Run the make install command.

  10. Make sure that the libvirtd daemon is running. (Use the service libvirt-bin start command to start it again. If it does not start, use the /usr/sbin/libvirtd -d command.)

    root@vmx-server:~# ps aux | grep libvirtd
    root      1509  0.0  0.0 372564 16452 ?        Sl   10:25   0:00 /usr/sbin/libvirtd -d
    
  11. Verify that the versions of libvirtd and virsh are 1.2.19.

    root@vmx-server:~# /usr/sbin/libvirtd --version
    libvirtd (libvirt) 1.2.19
    root@vmx-server:~#  /usr/bin/virsh --version
    1.2.19
    root@vmx-server:~#

    During the make and make install steps, the system displays the code compilation log.
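
For reference, steps 2 through 9 correspond to the following command sequence:

cd /tmp
wget http://libvirt.org/sources/libvirt-1.2.19.tar.gz
tar xzvf libvirt-1.2.19.tar.gz
cd libvirt-1.2.19
service libvirt-bin stop
./configure --prefix=/usr --localstatedir=/ --with-numactl
make
make install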

Note:

If you cannot deploy vMX after upgrading libvirt, bring down the virbr0 bridge with the ifconfig virbr0 down command and delete the bridge with the brctl delbr virbr0 command.
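
That is:

ifconfig virbr0 down
brctl delbr virbr0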

Updating Drivers for the X710 NIC

If you are using Intel XL710 PCI-Express family NICs, make sure you update the drivers before you install vMX.

To update the drivers:

  1. Download the vMX software package as root and uncompress the package.
    tar xzvf package-name
    
  2. Install the i40e driver from the installation directory.
    cd drivers/i40e-1.3.46/src
    make install
    
  3. Install the latest i40evf driver from Intel.

    For example, the following commands download and install Version 1.4.15:

    cd /tmp
    wget https://downloadmirror.intel.com/26003/eng/i40evf-1.4.15.tar.gz
    tar zxvf i40evf-1.4.15.tar.gz
    cd i40evf-1.4.15/src
    make install
    
  4. Update initrd with the drivers.
    update-initramfs -u -k $(uname -r)
    
  5. Activate the new driver.
    rmmod i40e
    modprobe i40e
    

Install the Other Required Packages

Use the following commands to install the python-netifaces and pyyaml packages on Ubuntu.
apt-get install python-pip 
apt-get install python-netifaces
pip install pyyaml

Preparing the Red Hat Enterprise Linux Host to Install vMX

To prepare the host system running Red Hat Enterprise Linux for installing vMX, perform the task for your version:

Preparing the Red Hat Enterprise Linux 7.3 Host to Install vMX

To prepare the host system running Red Hat Enterprise Linux 7.3 for installing vMX:

  1. Meet the minimum software and OS requirements described in Minimum Hardware and Software Requirements.
  2. Enable hyperthreading and VT-d in BIOS.

    If you are using SR-IOV, enable SR-IOV in BIOS.

    We recommend that you verify the process with the vendor because different systems have different methods to access and change BIOS settings.

  3. During the OS installation, select the Virtualization Host and Virtualization Platform software collections.

    If you did not select these software collections during the GUI installation, use the following commands to install them:

    yum groupinstall "virtualization host"
    yum groupinstall "virtualization platform"
    
  4. Register your host using your Red Hat account credentials. Enable the appropriate repositories.
    subscription-manager register --username username --password password --auto-attach
    subscription-manager repos --enable rhel-7-fast-datapath-htb-rpms
    subscription-manager repos --enable rhel-7-fast-datapath-rpms
    subscription-manager repos --enable rhel-7-server-extras-rpms
    subscription-manager repos --enable rhel-7-server-nfv-rpms
    subscription-manager repos --enable rhel-7-server-optional-rpms
    subscription-manager repos --enable rhel-7-server-rh-common-rpms
    subscription-manager repos --enable rhel-7-server-rhn-tools-beta-rpms
    subscription-manager repos --enable rhel-7-server-rpms
    subscription-manager repos --enable rhel-ha-for-rhel-7-server-rpms
    subscription-manager repos --enable rhel-server-rhscl-7-rpms
    

    To install the Extra Packages for Enterprise Linux 7 (epel) repository:

    yum -y install wget
    cd /tmp/
    wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
    yum -y install epel-release-latest-7.noarch.rpm
    
  5. Update currently installed packages.
    yum upgrade
    
  6. For optimal performance, we recommend that you configure the size of Huge Pages to be 1G on the host and make sure that the NUMA node for the VFP has at least sixteen 1G Huge Pages.

    To configure the size of Huge Pages, add the Huge Pages configuration and reboot:

    grubby --update-kernel=ALL --args="default_hugepagesz=huge-pages-size  hugepagesz=huge-pages-size hugepages=number-of-huge-pages" 
    grub2-install /dev/boot-device-name 
    reboot
    

    Use the mount | grep boot command to determine the boot device name.

    The number of Huge Pages must be at least (16G * number-of-numa-sockets).
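
    After the reboot, one way to confirm that the 1G Huge Pages were allocated (this check is not part of the original procedure):

    grep -i hugepages /proc/meminfo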

  7. Install the required packages.
    yum install python27-python-pip python27-python-devel numactl-libs libpciaccess-devel parted-devel yajl-devel libxml2-devel glib2-devel libnl-devel libxslt-devel libyaml-devel numactl-devel redhat-lsb kmod-ixgbe libvirt-daemon-kvm numactl telnet net-tools dosfstools
    
  8. (Optional) If you are using SR-IOV, you must install these packages and enable SR-IOV capability.
    yum install kernel-devel gcc
    grubby --args="intel_iommu=on" --update-kernel=ALL
    

    Reboot and log in again.

  9. Link the qemu-kvm binary to the qemu-system-x86_64 file.
    ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-system-x86_64
    
  10. Set up the path for the correct Python release and install the PyYAML library.
    PATH=/opt/rh/python27/root/usr/bin:$PATH
    export PATH
    pip install netifaces pyyaml
    
  11. If you have installed any Red Hat OpenStack libraries, you must change script/templates/red_{vPFE, vRE}-ref.xml to use <type arch='x86_64' machine='pc-0.13'>hvm</type> as the machine type.
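
    For example, a hypothetical one-liner that rewrites the machine type in script/templates/red_vPFE-ref.xml and script/templates/red_vRE-ref.xml, assuming each template already contains a machine attribute in its <type> element:

    sed -i "s/machine='[^']*'/machine='pc-0.13'/" script/templates/red_vPFE-ref.xml script/templates/red_vRE-ref.xml
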
  12. Disable KSM.
    systemctl disable ksm  
    systemctl disable ksmtuned 
    

    To verify that KSM is disabled, run the following command.

    cat /sys/kernel/mm/ksm/run
    0
    

    The value 0 in the output indicates that KSM is disabled.

  13. Disable APIC virtualization by editing the /etc/modprobe.d/kvm.conf file and adding enable_apicv=n to the line containing options kvm_intel.
    modprobe -r kvm_intel
    
    Edit /etc/modprobe.d/kvm.conf to add the following line:

    options kvm-intel enable_apicv=n
    

    You can also use enable_apicv=0.

    modprobe kvm-intel 
    

    Restart the host to disable KSM and APIC virtualization.

  14. Stop and disable Network Manager.
    systemctl disable NetworkManager
    systemctl stop NetworkManager
    

    If you cannot stop Network Manager, you can prevent resolv.conf from being overwritten with the chattr +i /etc/resolv.conf command.

  15. Ensure that the build directory is readable by the QEMU user.
    chmod -R o+r,o+x build-directory-pathname
    

    As an alternative, you can configure QEMU to run as the root user by setting the /etc/libvirt/qemu.conf file to user="root".
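
    If you choose that alternative, the relevant setting in /etc/libvirt/qemu.conf is a single line:

    # /etc/libvirt/qemu.conf
    user = "root"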

You can now install vMX.

Note:

When you install vMX with the sh vmx.sh -lv --install command, you might see a kernel version mismatch warning. You can ignore this warning.

Preparing the Red Hat Enterprise Linux 7.2 Host to Install vMX

To prepare the host system running Red Hat Enterprise Linux 7.2 for installing vMX:

  1. Meet the minimum software and OS requirements described in Minimum Hardware and Software Requirements.
  2. Enable hyperthreading and VT-d in BIOS.

    If you are using SR-IOV, enable SR-IOV in BIOS.

    We recommend that you verify the process with the vendor because different systems have different methods to access and change BIOS settings.

  3. During the OS installation, select the Virtualization Host and Virtualization Platform software collections.

    If you did not select these software collections during the GUI installation, use the following commands to install them:

    yum groupinstall "virtualization host"
    yum groupinstall "virtualization platform"
    
  4. Register your host using your Red Hat account credentials. Enable the appropriate repositories.
    subscription-manager register --username username --password password --auto-attach
    subscription-manager repos --enable rhel-server-rhscl-7-rpms
    subscription-manager repos --enable rhel-7-server-extras-rpms
    subscription-manager repos --enable rhel-7-server-rhn-tools-beta-rpms
    subscription-manager repos --enable rhel-7-server-optional-rpms
    
  5. Update currently installed packages.
    yum upgrade
    
  6. Install the required packages.
    yum install python27-python-pip python27-python-devel numactl-libs libpciaccess-devel parted-devel yajl-devel libxml2-devel glib2-devel libnl-devel libxslt-devel libyaml-devel numactl-devel redhat-lsb kmod-ixgbe libvirt-daemon-kvm numactl telnet net-tools dosfstools
    
  7. For optimal performance, we recommend that you configure the size of Huge Pages to be 1G on the host and make sure that the NUMA node for the VFP has at least sixteen 1G Huge Pages.

    To configure the size of Huge Pages, add the Huge Pages configuration and reboot:

    grubby --update-kernel=ALL --args="default_hugepagesz=huge-pages-size  hugepagesz=huge-pages-size hugepages=number-of-huge-pages" 
    grub2-install /dev/boot-device-name
    reboot
    

    Use the mount | grep boot command to determine the boot device name.

    The number of Huge Pages must be at least (16G * number-of-numa-sockets).

  8. (Optional) If you are using SR-IOV, you must install these packages and enable SR-IOV capability.
    yum install kernel-devel gcc
    grubby --args="intel_iommu=on" --update-kernel=ALL
    

    Reboot and log in again.

  9. Link the qemu-kvm binary to the qemu-system-x86_64 file.
    ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-system-x86_64
    
  10. Set up the path for the correct Python release and install the PyYAML library.
    PATH=/opt/rh/python27/root/usr/bin:$PATH
    export PATH
    pip install netifaces pyyaml
    
  11. If you have installed any Red Hat OpenStack libraries, you must change script/templates/red_{vPFE, vRE}-ref.xml to use <type arch='x86_64' machine='pc-0.13'>hvm</type> as the machine type.
  12. Disable KSM.
    systemctl disable ksm  
    systemctl disable ksmtuned 
    

    To verify that KSM is disabled, run the following command.

    cat /sys/kernel/mm/ksm/run
    0
    

    The value 0 in the output indicates that KSM is disabled.

  13. Disable APIC virtualization by editing the /etc/modprobe.d/kvm.conf file and adding enable_apicv=n to the line containing options kvm_intel.
    modprobe -r kvm_intel
    
    Edit /etc/modprobe.d/kvm.conf to add the following line:

    options kvm-intel enable_apicv=n
    

    You can also use enable_apicv=0.

    modprobe kvm-intel 
    

    Restart the host to disable KSM and APIC virtualization.

  14. Stop and disable Network Manager.
    systemctl disable NetworkManager
    systemctl stop NetworkManager
    

    If you cannot stop Network Manager, you can prevent resolv.conf from being overwritten with the chattr +i /etc/resolv.conf command.

  15. Ensure that the build directory is readable by the QEMU user.
    chmod -R o+r,o+x build-directory-pathname
    

    As an alternative, you can configure QEMU to run as the root user by setting the /etc/libvirt/qemu.conf file to user="root".

You can now install vMX.

Note:

When you install vMX with the sh vmx.sh -lv --install command, you might see a kernel version mismatch warning. You can ignore this warning.

Preparing the CentOS Host to Install vMX

To prepare the host system running CentOS for installing vMX:

  1. Meet the minimum software and OS requirements described in Minimum Hardware and Software Requirements.
  2. Enable hyperthreading and VT-d in BIOS.

    If you are using SR-IOV, enable SR-IOV in BIOS.

    We recommend that you verify the process with the vendor because different systems have different methods to access and change BIOS settings.

  3. During the OS installation, select the Virtualization Host and Virtualization Platform software collections.

    If you did not select these software collections during the GUI installation, use the following commands to install them:

    yum groupinstall "virtualization host"
    yum groupinstall "virtualization platform"
    
  4. Enable the appropriate repositories.
    yum install -y "http://elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm"
    yum install centos-release-scl
    
  5. Update currently installed packages.
    yum upgrade
    
  6. Install the required packages.
    yum install python27-python-pip python27-python-devel numactl-libs libpciaccess-devel parted-devel yajl-devel libxml2-devel glib2-devel libnl-devel libxslt-devel libyaml-devel numactl-devel redhat-lsb kmod-ixgbe libvirt-daemon-kvm numactl telnet net-tools
    
  7. (Optional) If you are using SR-IOV, you must install these packages and enable SR-IOV capability.
    yum install kernel-devel gcc
    grubby --args="intel_iommu=on" --update-kernel=ALL
    

    Reboot and log in again.

  8. Link the qemu-kvm binary to the qemu-system-x86_64 file.
    ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-system-x86_64
    
  9. Set up the path for the correct Python release and install the PyYAML library.
    PATH=/opt/rh/python27/root/usr/bin:$PATH
    export PATH
    pip install netifaces pyyaml
    
    Note:

    If the installation fails, use the following workaround:

    # yum install python27-python-pip
    # scl enable python27 bash
    # source scl_source enable python27
    # export LD_LIBRARY_PATH=/opt/rh/python27/root/usr/lib64
    # pip install --upgrade pip
    # pip install netifaces pyyaml
    
  10. Disable KSM.
    systemctl disable ksm  
    systemctl disable ksmtuned 
    

    To verify that KSM is disabled, run the following command.

    cat /sys/kernel/mm/ksm/run
    0
    

    The value 0 in the output indicates that KSM is disabled.

  11. Disable APIC virtualization by editing the /etc/modprobe.d/kvm.conf file and adding enable_apicv=0 to the line containing options kvm_intel.
    modprobe -r kvm_intel

    Edit /etc/modprobe.d/kvm.conf to add the following line:

    options kvm-intel enable_apicv=0

    modprobe kvm-intel

    Restart the host to disable KSM and APIC virtualization.

  12. Stop and disable Network Manager.
    systemctl disable NetworkManager
    systemctl stop NetworkManager
    

    If you cannot stop Network Manager, you can prevent resolv.conf from being overwritten with the chattr +i /etc/resolv.conf command.

  13. Ensure that the build directory is readable by the QEMU user.
    chmod -R o+r,o+x build-directory-pathname
    

    As an alternative, you can configure QEMU to run as the root user by setting the /etc/libvirt/qemu.conf file to user=root.

  14. Add this line to the end of the /etc/profile file.
    export PATH=/opt/rh/python27/root/usr/bin:$PATH
    

You can now install vMX.

Note:

When you install vMX with the sh vmx.sh -lv --install command, you might see a kernel version mismatch warning. You can ignore this warning.

Installing vMX for Different Use Cases

Installing vMX differs for specific use cases. The following tables list the sample configuration requirements for some vMX use cases.

Table 1: Sample Configurations for Use Cases (supported in Junos OS Release 18.3 to 18.4)

Use Case | Minimum vCPUs | Minimum Memory | NIC Device Type
Lab simulation (up to 100 Mbps performance) | 4 (1 for VCP, 3 for VFP) | 5 GB (1 GB for VCP, 4 GB for VFP) | virtio
Low-bandwidth applications (up to 3 Gbps performance) | 10 (1 for VCP, 9 for VFP) | 20 GB (4 GB for VCP, 16 GB for VFP) | virtio
High-bandwidth applications or performance testing (3 Gbps and beyond) | 10 (1 for VCP, 9 for VFP) | 20 GB (4 GB for VCP, 16 GB for VFP) | SR-IOV
Dual virtual Routing Engines (see the note below) | Double the VCP vCPUs for your particular use case; both VCP instances consume resources | Double the VCP memory for your particular use case; both VCP instances consume resources | virtio or SR-IOV

Note: When deploying the dual Routing Engines on separate hosts, you must set up a connection between the hosts for the VCPs to communicate with each other.

Table 2: Sample Configurations for Use Cases (supported in Junos OS Release 18.1 to 18.2)

Use Case | Minimum vCPUs | Minimum Memory | NIC Device Type
Lab simulation (up to 100 Mbps performance) | 4 (1 for VCP, 3 for VFP) | 5 GB (1 GB for VCP, 4 GB for VFP) | virtio
Low-bandwidth applications (up to 3 Gbps performance) | 8 (1 for VCP, 7 for VFP) | 16 GB (4 GB for VCP, 12 GB for VFP) | virtio
High-bandwidth applications or performance testing (3 Gbps and beyond) | 8 (1 for VCP, 7 for VFP) | 16 GB (4 GB for VCP, 12 GB for VFP) | SR-IOV
Dual virtual Routing Engines (see the note below) | Double the VCP vCPUs for your particular use case; both VCP instances consume resources | Double the VCP memory for your particular use case; both VCP instances consume resources | virtio or SR-IOV

Note: When deploying the dual Routing Engines on separate hosts, you must set up a connection between the hosts for the VCPs to communicate with each other.

Table 3: Sample Configurations for Use Cases (supported in Junos OS Release 17.4)

Use Case | Minimum vCPUs | Minimum Memory | NIC Device Type
Lab simulation (up to 100 Mbps performance) | 4 (1 for VCP, 3 for VFP) | 5 GB (1 GB for VCP, 4 GB for VFP) | virtio
Low-bandwidth applications (up to 3 Gbps performance) | 8 (1 for VCP, 7 for VFP) | 16 GB (4 GB for VCP, 12 GB for VFP) | virtio
High-bandwidth applications or performance testing (3 Gbps and beyond) | 8 (1 for VCP, 7 for VFP) | 16 GB (4 GB for VCP, 12 GB for VFP) | SR-IOV

Table 4: Sample Configurations for Use Cases (supported in Junos OS Release 15.1F6 to 17.3)

Use Case | Minimum vCPUs | Minimum Memory | NIC Device Type
Lab simulation (up to 100 Mbps performance) | 4 (1 for VCP, 3 for VFP) | 5 GB (1 GB for VCP, 4 GB for VFP) | virtio
Low-bandwidth applications (up to 3 Gbps performance) | 8 (1 for VCP, 7 for VFP) | 16 GB (4 GB for VCP, 12 GB for VFP) | virtio
High-bandwidth applications or performance testing (3 Gbps and beyond) | 8 (1 for VCP, 7 for VFP) | 16 GB (4 GB for VCP, 12 GB for VFP) | SR-IOV

Table 5: Sample Configurations for Use Cases (supported in Junos OS Release 15.1F3 to 15.1F4)

Use Case | Minimum vCPUs | Minimum Memory | NIC Device Type
Lab simulation (up to 100 Mbps performance) | 4 (1 for VCP, 3 for VFP) | 10 GB (2 GB for VCP, 8 GB for VFP) | virtio
Low-bandwidth applications (up to 3 Gbps performance) | 4 (1 for VCP, 3 for VFP) | 10 GB (2 GB for VCP, 8 GB for VFP) | virtio or SR-IOV
High-bandwidth applications or performance testing (3 Gbps and beyond, with a minimum of two 10Gb Ethernet ports; up to 80 Gbps of raw performance) | 8 (1 for VCP, 7 for VFP) | 16 GB (4 GB for VCP, 12 GB for VFP) | SR-IOV

Table 6: Sample Configurations for Use Cases (supported in Junos OS Release 14.1)

Use Case | Minimum vCPUs | Minimum Memory | NIC Device Type
Lab simulation (up to 100 Mbps performance) | 4 (1 for VCP, 3 for VFP) | 8 GB (2 GB for VCP, 6 GB for VFP) | virtio
Low-bandwidth applications (up to 3 Gbps performance) | 4 (1 for VCP, 3 for VFP) | 8 GB (2 GB for VCP, 6 GB for VFP) | virtio or SR-IOV
High-bandwidth applications or performance testing (3 Gbps and beyond, with a minimum of two 10Gb Ethernet ports; up to 80 Gbps of raw performance) | 5 (1 for VCP, 4 for VFP) | 8 GB (2 GB for VCP, 6 GB for VFP) | SR-IOV

Note:

Starting in Junos OS Release 18.4R1 (Ubuntu host) and Junos OS Release 19.1R1 (Red Hat host), you can set the use_native_drivers value to true in the vMX configuration file to use the latest unmodified drivers for your network interface cards.

To install vMX for a particular use case, perform one of the following tasks:

Installing vMX for Lab Simulation

Starting in Junos OS Release 14.1, the use case for lab simulation uses the virtio NIC.

To install vMX for the lab simulation (less than 100 Mbps) application use case:

  1. Download the vMX software package as root and uncompress the package.

    tar xzvf package-name

  2. Change directory to the location of the uncompressed vMX package.

    cd package-location

  3. Edit the config/vmx.conf text file with a text editor to configure a single vMX instance.

    Ensure the following parameter is set properly in the vMX configuration file:

    device-type : virtio

    See Specifying vMX Configuration File Parameters.

  4. Run the ./vmx.sh -lv --install script to deploy the vMX instance specified by the config/vmx.conf startup configuration file and provide verbose-level logging to a file. See Deploying and Managing vMX.
  5. From the VCP, enable lite mode for the VFP.
    user@vmx# set chassis fpc 0 lite-mode
    

Here is a sample vMX startup configuration file using the virtio device type for lab simulation:

--- 
#Configuration on the host side - management interface, VM images etc.
HOST:
    identifier                : vmx1   # Maximum 4 characters
    host-management-interface : eth0
    routing-engine-image      : "/home/vmx/vmxlite/images/junos-vmx-x86-64.qcow2"
    routing-engine-hdd        : "/home/vmx/vmxlite/images/vmxhdd.img"
    forwarding-engine-image   : "/home/vmx/vmxlite/images/vFPC.img"

---
#External bridge configuration
BRIDGES:
    - type  : external
      name  : br-ext                  # Max 10 characters

--- 
#vRE VM parameters
CONTROL_PLANE:
    vcpus       : 1
    memory-mb   : 1024 
    console_port: 8601

    interfaces  :
      - type      : static
        ipaddr    : 10.102.144.94 
        macaddr   : "0A:00:DD:C0:DE:0E"

--- 
#vPFE VM parameters
FORWARDING_PLANE:
    memory-mb   : 4096
    vcpus       : 3
    console_port: 8602
    device-type : virtio 

    interfaces  :
      - type      : static
        ipaddr    : 10.102.144.98
        macaddr   : "0A:00:DD:C0:DE:10"

--- 
#Interfaces
JUNOS_DEVICES:
   - interface            : ge-0/0/0
     mac-address          : "02:06:0A:0E:FF:F0"
     description          : "ge-0/0/0 interface"
   
   - interface            : ge-0/0/1
     mac-address          : "02:06:0A:0E:FF:F1"
     description          : "ge-0/0/1 interface"

Installing vMX for Low-Bandwidth Applications

Starting in Junos OS Release 14.1, the use case for low-bandwidth applications uses virtio or SR-IOV NICs.

To install vMX for the low-bandwidth (up to 3 Gbps) application use case:

  1. Download the vMX software package as root and uncompress the package.

    tar xzvf package-name

  2. Change directory to the location of the uncompressed vMX package.

    cd package-location

  3. Edit the config/vmx.conf text file with a text editor to configure a single vMX instance.

    Ensure the following parameter is set properly in the vMX configuration file:

    device-type: virtio or device-type: sriov

    See Specifying vMX Configuration File Parameters.

  4. Run the ./vmx.sh -lv --install script to deploy the vMX instance specified by the config/vmx.conf startup configuration file and provide verbose-level logging to a file. See Deploying and Managing vMX.
  5. From the VCP, enable performance mode for the VFP.
    user@vmx# set chassis fpc 0 performance-mode
    

Here is a sample vMX startup configuration file using the virtio device type for low-bandwidth applications:

--- 
#Configuration on the host side - management interface, VM images etc.
HOST:
    identifier                : vmx1   # Maximum 4 characters
    host-management-interface : eth0
    routing-engine-image      : "/home/vmx/vmx/images/junos-vmx-x86-64.qcow2"
    routing-engine-hdd        : "/home/vmx/vmx/images/vmxhdd.img"
    forwarding-engine-image   : "/home/vmx/vmx/images/vFPC.img"

---
#External bridge configuration
BRIDGES:
    - type  : external
      name  : br-ext                  # Max 10 characters

--- 
#vRE VM parameters
CONTROL_PLANE:
    vcpus       : 1
    memory-mb   : 4096 
    console_port: 8601

    interfaces  :
      - type      : static
        ipaddr    : 10.102.144.94 
        macaddr   : "0A:00:DD:C0:DE:0E"

--- 
#vPFE VM parameters
FORWARDING_PLANE:
    memory-mb   : 16384
    vcpus       : 9
    console_port: 8602
    device-type : virtio 

    interfaces  :
      - type      : static
        ipaddr    : 10.102.144.98
        macaddr   : "0A:00:DD:C0:DE:10"

--- 
#Interfaces
JUNOS_DEVICES:
   - interface            : ge-0/0/0
     mac-address          : "02:06:0A:0E:FF:F0"
     description          : "ge-0/0/0 interface"
   
   - interface            : ge-0/0/1
     mac-address          : "02:06:0A:0E:FF:F1"
     description          : "ge-0/0/1 interface"

Installing vMX for High-Bandwidth Applications

Starting in Junos OS Release 14.1, the use case for high-bandwidth applications uses the SR-IOV NICs.

To install vMX for the high-bandwidth (above 3 Gbps) application use case:

  1. Download the vMX software package as root and uncompress the package.

    tar xzvf package-name

  2. Change directory to the location of the uncompressed vMX package.

    cd package-location

  3. Edit the config/vmx.conf text file with a text editor to configure a single vMX instance.

    Ensure the following parameter is set properly in the vMX configuration file:

    device-type: sriov

    See Specifying vMX Configuration File Parameters.

  4. Run the ./vmx.sh -lv --install script to deploy the vMX instance specified by the config/vmx.conf startup configuration file and provide verbose-level logging to a file. See Deploying and Managing vMX.
  5. From the VCP, enable performance mode for the VFP.
    user@vmx# set chassis fpc 0 performance-mode
    

Here is a sample vMX startup configuration file using the SR-IOV device type:

--- 
#Configuration on the host side - management interface, VM images etc.
HOST:
    identifier                : vmx1   # Maximum 4 characters
    host-management-interface : eth0
    routing-engine-image      : "/home/vmx/images/junos-vmx-x86-64.qcow2"
    routing-engine-hdd        : "/home/vmx/images/vmxhdd.img"
    forwarding-engine-image   : "/home/vmx/images/vFPC.img"

---
#External bridge configuration
BRIDGES:
    - type  : external
      name  : br-ext                  # Max 10 characters

--- 
#VCP VM parameters
CONTROL_PLANE:
    vcpus       : 1
    memory-mb   : 4096 
    console_port: 8601

    interfaces  :
      - type      : static
        ipaddr    : 10.102.144.94 
        macaddr   : "0A:00:DD:C0:DE:0E"

--- 
#VFP VM parameters
FORWARDING_PLANE:
    memory-mb   : 16384
    vcpus       : 9
    console_port: 8602
    device-type : sriov
    
    interfaces  :
      - type      : static
        ipaddr    : 10.102.144.98
        macaddr   : "0A:00:DD:C0:DE:10"

--- 
#Interfaces
JUNOS_DEVICES:
   - interface            : ge-0/0/0
     port-speed-mbps      : 10000
     nic                  : eth1
     mtu                  : 2000             
     virtual-function     : 0
     mac-address          : "02:06:0A:0E:FF:F0"
     description          : "ge-0/0/0 connects to eth1"
   
   - interface            : ge-0/0/1
     port-speed-mbps      : 10000
     nic                  : eth2
     mtu                  : 2000             
     virtual-function     : 0
     mac-address          : "02:06:0A:0E:FF:F1"
     description          : "ge-0/0/1 connects to eth2"

For more information, see Example: Enabling SR-IOV on vMX Instances on KVM.

Installing vMX with Dual Routing Engines

You can set up redundant Routing Engines on the vMX server by creating the primary Routing Engine (re0) and backup Routing Engine (re1) in the CONTROL_PLANE section of the vMX startup configuration file (default file is config/vmx.conf).

Note:

When deploying the Routing Engines on separate hosts, you must set up a connection between the hosts for the VCPs to communicate with each other.

To install vMX for the dual Routing Engines use case (starting in Junos OS Release 18.1):

  1. Download the vMX software package as root and uncompress the package.

    tar xzvf package-name

  2. Change directory to the location of the uncompressed vMX package.

    cd package-location

  3. Edit the config/vmx.conf text file with a text editor to configure the vMX instance.

    The default CONTROL_PLANE section resembles the following with one interface entry:

    CONTROL_PLANE:
        vcpus       : 1
        memory-mb   : 2048 
        console_port: 8896
    
        interfaces  :
          - type      : static
            ipaddr    : 10.216.48.117 
            macaddr   : "0A:01:03:A1:A1:02"

    To set up the redundant Routing Engines:

    1. Navigate to CONTROL_PLANE and specify the proper number of vCPUs (vcpus) and amount of memory (memory-mb).
    2. Starting with Junos OS Release 18.1R1, add the deploy parameter to designate the Routing Engine instance deployed on this host. If you do not specify this parameter, all instances (0,1) are deployed on the host.

      When deploying the Routing Engines on separate hosts, you must set up a connection between the hosts for the VCPs to communicate with each other.

    3. Modify the interfaces entry to add instance : 0 after the type parameter to set up re0.

      Specify the ipaddr and macaddr parameters. This address is the management IP address for the VCP VM (fxp0).

    4. Add another entry, but specify instance : 1 to set up re1 and specify the console_port parameter for re1 after the instance : 1 parameter.

      Specify the ipaddr and macaddr parameters. This address is the management IP address for the VCP VM (fxp0).

    The revised CONTROL_PLANE section that deploys re0 on the host resembles the following example with two interface entries:

    CONTROL_PLANE:
        vcpus        : 1
        memory-mb    : 4096
        console_port : 8896
        deploy       : 0
    
        interfaces :
          - type     : static
            instance : 0
            ipaddr   : 10.216.48.117
            macaddr  : "0A:01:03:A1:A1:02"
    
          - type         : static
            instance     : 1
            console_port : 8897
            ipaddr       : 10.216.48.118
            macaddr      : "0A:01:03:A1:A1:06"

    See Specifying vMX Configuration File Parameters.

  4. Run the ./vmx.sh -lv --install script to deploy the vMX instance specified by the config/vmx.conf startup configuration file and provide verbose-level logging to a file. See Deploying and Managing vMX.
  5. From the VCP, enable performance mode for the VFP.
    user@vmx# set chassis fpc 0 performance-mode
    
  6. When deploying the Routing Engines on separate hosts, you must set up a connection between the hosts for the VCPs to communicate with each other.

    For example, to set up a connection (such as br-int-vmx1) between the two hosts over an interface (such as eth1), run the following command on both hosts:

    ifconfig eth1 up && brctl addif br-int-vmx1 eth1
    

Here is a sample vMX startup configuration file that is deploying the first Routing Engine instance on this host:

--- 
#Configuration on the host side - management interface, VM images etc.
HOST:
    identifier                : vmx1   # Maximum 4 characters
    host-management-interface : eth0
    routing-engine-image      : "/home/vmx/images/junos-vmx-x86-64.qcow2"
    routing-engine-hdd        : "/home/vmx/images/vmxhdd.img"
    forwarding-engine-image   : "/home/vmx/images/vFPC.img"

---
#External bridge configuration
BRIDGES:
    - type  : external
      name  : br-ext                  # Max 10 characters

--- 
#VCP VM parameters
CONTROL_PLANE:
    vcpus        : 1
    memory-mb    : 4096 
    console_port : 8601
    deploy       : 0

    interfaces  :
      - type      : static
        instance  : 0
        ipaddr    : 10.102.144.94 
        macaddr   : "0A:00:DD:C0:DE:0E"

      - type         : static
        instance     : 1
        console_port : 8612
        ipaddr       : 10.102.144.95 
        macaddr      : "0A:00:DD:C0:DE:0F"

--- 
#VFP VM parameters
FORWARDING_PLANE:
    memory-mb   : 12288
    vcpus       : 10
    console_port: 8602
    device-type : sriov
    
    interfaces  :
      - type      : static
        ipaddr    : 10.102.144.98
        macaddr   : "0A:00:DD:C0:DE:10"

--- 
#Interfaces
JUNOS_DEVICES:
   - interface            : ge-0/0/0
     port-speed-mbps      : 10000
     nic                  : eth1
     mtu                  : 2000             
     virtual-function     : 0
     mac-address          : "02:06:0A:0E:FF:F0"
     description          : "ge-0/0/0 connects to eth1"
   
   - interface            : ge-0/0/1
     port-speed-mbps      : 10000
     nic                  : eth2
     mtu                  : 2000             
     virtual-function     : 0
     mac-address          : "02:06:0A:0E:FF:F1"
     description          : "ge-0/0/1 connects to eth2"

Installing vMX with Mixed WAN Interfaces

Starting in Junos OS Release 17.2, the use case for mixed WAN interfaces uses the virtio and SR-IOV interfaces. Sample configuration requirements are the same as for using SR-IOV device type.

To install vMX with mixed interfaces:

  1. Download the vMX software package as root and uncompress the package.

    tar xzvf package-name

  2. Change directory to the location of the uncompressed vMX package.

    cd package-location

  3. Edit the config/vmx.conf text file with a text editor to configure a single vMX instance.

    Ensure the following parameter is set properly in the vMX configuration file:

    device-type: mixed

    When configuring the interfaces, make sure the virtio interfaces are specified before the SR-IOV interfaces. The type parameter specifies the interface type.

    See Specifying vMX Configuration File Parameters.

  4. Run the ./vmx.sh -lv --install script to deploy the vMX instance specified by the config/vmx.conf startup configuration file and provide verbose-level logging to a file. See Deploying and Managing vMX.
  5. From the VCP, enable performance mode for the VFP.
    user@vmx# set chassis fpc 0 performance-mode
    

Here is a sample vMX startup configuration file using mixed interfaces:

--- 
#Configuration on the host side - management interface, VM images etc.
HOST:
    identifier                : vmx1   # Maximum 4 characters
    host-management-interface : eth0
    routing-engine-image      : "/home/vmx/images/junos-vmx-x86-64.qcow2"
    routing-engine-hdd        : "/home/vmx/images/vmxhdd.img"
    forwarding-engine-image   : "/home/vmx/images/vFPC.img"

---
#External bridge configuration
BRIDGES:
    - type  : external
      name  : br-ext                  # Max 10 characters

--- 
#VCP VM parameters
CONTROL_PLANE:
    vcpus       : 1
    memory-mb   : 4096 
    console_port: 8601

    interfaces  :
      - type      : static
        ipaddr    : 10.102.144.94 
        macaddr   : "0A:00:DD:C0:DE:0E"

--- 
#VFP VM parameters
FORWARDING_PLANE:
    memory-mb   : 12288
    vcpus       : 10
    console_port: 8602
    device-type : mixed
    
    interfaces  :
      - type      : static
        ipaddr    : 10.102.144.98
        macaddr   : "0A:00:DD:C0:DE:10"

--- 
#Interfaces
JUNOS_DEVICES:
   - interface            : ge-0/0/0
     type                 : virtio
     mac-address          : "02:06:0A:0E:FF:F0"
     description          : "ge-0/0/0 interface"
   
   - interface            : ge-0/0/1
     type                 : sriov
     port-speed-mbps      : 10000
     nic                  : eth2
     mtu                  : 2000     
     virtual-function     : 0
     mac-address          : "02:06:0A:0E:FF:F1"
     description          : "ge-0/0/1 connects to eth2"