Installing vMX on OpenStack
Read this topic to understand how to install a vMX instance in the OpenStack environment.
Preparing the OpenStack Environment to Install vMX
Make sure the openstackrc file is sourced before you run any OpenStack commands.
To prepare the OpenStack environment to install vMX, perform these tasks:
Creating the neutron Networks
You must create the neutron networks used by vMX before you start the vMX instance. The public network is the neutron network used for the management (fxp0) network. The WAN network is the neutron network on which the WAN interface for vMX is added.
To display the neutron network names, use the neutron net-list command.
You must identify and create the type of networks you need in your OpenStack configuration.
You can use these commands as one way to create the public network:
neutron net-create network-name --shared --provider:physical_network network-name --provider:network_type flat --router:external
neutron subnet-create network-name address --name subnetwork-name --allocation-pool start=start-address,end=end-address --gateway=gateway-address
For example:
neutron net-create public --shared --provider:physical_network public_physnet --provider:network_type flat --router:external
neutron subnet-create public 10.92.13.128/25 --name public-subnet --allocation-pool start=10.92.13.230,end=10.92.13.253 --gateway=10.92.13.254
For virtio, you can use these commands as one way to create the WAN network:
neutron net-create network-name --router:external=True --provider:network_type vlan --provider:physical_network network-name --provider:segmentation_id segment-id
neutron subnet-create network-name address --name subnetwork-name --enable_dhcp=False --allocation-pool start=start-address,end=end-address --gateway=gateway-address
For example:
neutron net-create OSP_PROVIDER_1500 --router:external=True --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 1500
neutron subnet-create OSP_PROVIDER_1500 11.0.2.0/24 --name OSP_PROVIDER_1500_SUBNET --enable_dhcp=False --allocation-pool start=11.0.2.10,end=11.0.2.100 --gateway=11.0.2.254
For SR-IOV, you can use these commands as one way to create the WAN network:
neutron net-create network-name --router:external=True --provider:network_type vlan --provider:physical_network network-name
neutron subnet-create network-name address --name subnetwork-name --enable_dhcp=False --allocation-pool start=start-address,end=end-address --gateway=gateway-address
For example:
neutron net-create OSP_PROVIDER_SRIOV --router:external=True --provider:network_type vlan --provider:physical_network physnet2
neutron subnet-create OSP_PROVIDER_SRIOV 12.0.2.0/24 --name OSP_PROVIDER_SRIOV_SUBNET --enable_dhcp=False --allocation-pool start=12.0.2.10,end=12.0.2.100 --gateway=12.0.2.254
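The virtio and SR-IOV net-create commands above differ only in the --provider:segmentation_id option. As a minimal sketch (not part of the vMX package; all names are the example values from above), a small helper can assemble the command string from variables so you can review it before running it against the sourced OpenStack environment:

```shell
#!/bin/sh
# Sketch only: print the neutron net-create command for a WAN network.
# For a virtio network, pass a VLAN segmentation ID; for SR-IOV, omit it.
build_wan_net_cmd() {
    name=$1 physnet=$2 segid=$3
    cmd="neutron net-create $name --router:external=True"
    cmd="$cmd --provider:network_type vlan --provider:physical_network $physnet"
    [ -n "$segid" ] && cmd="$cmd --provider:segmentation_id $segid"
    echo "$cmd"
}

build_wan_net_cmd OSP_PROVIDER_1500 physnet1 1500   # virtio WAN network
build_wan_net_cmd OSP_PROVIDER_SRIOV physnet2       # SR-IOV WAN network
```

Review the printed command, then run it in a shell where the openstackrc file has been sourced.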
Preparing the Controller Node
- Preparing the Controller Node for vMX
- Configuring the Controller Node for virtio Interfaces
- Configuring the Controller Node for SR-IOV Interfaces
Preparing the Controller Node for vMX
To prepare the controller node:
Configuring the Controller Node for virtio Interfaces
To configure the virtio interfaces:
Configuring the Controller Node for SR-IOV Interfaces
If you have more than one SR-IOV interface, you need one dedicated physical 10-Gigabit Ethernet interface for each additional SR-IOV interface.
In SR-IOV mode, communication between the Routing Engine (RE) and the Packet Forwarding Engine is enabled using virtio interfaces on a VLAN-provider OVS network. Because of this, a given physical interface cannot be part of both virtio and SR-IOV networks.
To configure the SR-IOV interfaces:
Preparing the Compute Nodes
Preparing the Compute Node for vMX
You no longer need to configure the compute node to pass metadata to the vMX instances by including the config_drive_format=vfat parameter in the /etc/nova/nova.conf file.
To prepare the compute node:
Configuring the Compute Node for SR-IOV Interfaces
If you have more than one SR-IOV interface, you need one physical 10-Gigabit Ethernet NIC for each additional SR-IOV interface.
To configure the SR-IOV interfaces:
Installing vMX
After preparing the OpenStack environment, you must create nova flavors and glance images for the VCP and VFP VMs. Scripts create the flavors and images based on information provided in the startup configuration file.
Setting Up the vMX Configuration File
The parameters required to configure vMX are defined in the startup configuration file.
To set up the configuration file:
See Also
Specifying vMX Configuration File Parameters
The parameters required to configure vMX are defined in the startup configuration file (scripts/vmx.conf). The startup configuration file generates a file that is used to create flavors. To create new flavors with different vcpus or memory-mb parameters, you must change the corresponding re-flavor-name or pfe-flavor-name parameter before creating the new flavors.
To customize the configuration, perform these tasks:
Configuring the Host
To configure the host, navigate to HOST and specify the following parameters:
- virtualization-type—Mode of operation; must be openstack.
- compute—(Optional) Names of the compute nodes on which to run vMX instances, in a comma-separated list. If this parameter is specified, it must contain valid compute nodes, and vMX instances launched with the flavors run only on the specified compute nodes. If this parameter is not specified, the output of the nova hypervisor-list command provides the list of compute nodes on which to run vMX instances.
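As an illustration, a HOST section with these parameters might look like the following. The node names are placeholders, and the exact field layout should be verified against the sample vmx.conf shipped with your vMX package:

```conf
HOST:
    virtualization-type : openstack
    compute             : compute1,compute2
```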
Configuring the VCP VM
To configure the VCP VM, you must provide the flavor name.
We recommend unique values for the re-flavor-name parameter because OpenStack can create multiple entries with the same name.
To configure the VCP VM, navigate to CONTROL_PLANE and specify the following parameters:
- re-flavor-name—Name of the nova flavor.
- vcpus—Number of vCPUs for the VCP; minimum is 1. Note: If you change this value, you must change the re-flavor-name value before running the script to create flavors.
- memory-mb—Amount of memory for the VCP; minimum is 4 GB. Note: If you change this value, you must change the re-flavor-name value before running the script to create flavors.
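As an illustration, a CONTROL_PLANE section using the minimum values from above might look like the following. The flavor name is a placeholder, and the exact field layout should be verified against the sample vmx.conf shipped with your vMX package:

```conf
CONTROL_PLANE:
    re-flavor-name : re-flavor-vmx1
    vcpus          : 1
    memory-mb      : 4096
```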
Configuring the VFP VM
To configure the VFP VM, you must provide the flavor name. Based on your requirements, you might want to change the memory and number of vCPUs. See Minimum Hardware Requirements for minimum hardware requirements.
To configure the VFP VM, navigate to FORWARDING_PLANE and specify the following parameters:
- pfe-flavor-name—Name of the nova flavor.
- memory-mb—Amount of memory for the VFP; minimum is 12 GB (performance mode) or 4 GB (lite mode). Note: If you change this value, you must change the pfe-flavor-name value before running the script to create flavors.
- vcpus—Number of vCPUs for the VFP; minimum is 7 (performance mode) or 3 (lite mode). Note: If you specify fewer than 7 vCPUs, the VFP automatically switches to lite mode. Note: If you change this value, you must change the pfe-flavor-name value before running the script to create flavors.
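As an illustration, a FORWARDING_PLANE section using the performance-mode minimums from above might look like the following. The flavor name is a placeholder, and the exact field layout should be verified against the sample vmx.conf shipped with your vMX package:

```conf
FORWARDING_PLANE:
    pfe-flavor-name : pfe-flavor-vmx1
    memory-mb       : 12288
    vcpus           : 7
```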
Creating OpenStack Flavors
To create flavors for the VCP and VFP, you must execute the script on the vMX startup configuration file (vmx.conf).
To create OpenStack flavors:
Installing vMX Images for the VCP and VFP
To install the vMX OpenStack glance images for the VCP and VFP, you can execute the vmx_osp_images.sh script. The script adds the VCP image in qcow2 format and the VFP file in vmdk format.
To install the VCP and VFP images:
For example, this command installs the VCP image as re-test from the /var/tmp/junos-vmx-x86-64-17.1R1.8.qcow2 file and the VFP image as fpc-test from the /var/tmp/vFPC-20170117.img file.
sh vmx_osp_images.sh re-test /var/tmp/junos-vmx-x86-64-17.1R1.8.qcow2 fpc-test /var/tmp/vFPC-20170117.img
To view the glance images, use the glance image-list command.
Starting a vMX Instance
To start a vMX instance, perform these tasks:
Modifying Initial Junos OS Configuration
When you start the vMX instance, the Junos OS configuration file found in package-location/openstack/vmx-components/vms/vmx_baseline.conf is loaded. If you need to change this configuration, make the changes in this file before starting the vMX instance.
If you create your own vmx_baseline.conf file or move the file, make sure that the package-location/openstack/vmx-components/vms/re.yaml file references the correct path.
Launching the vMX Instance
To create and start the vMX instance:
You must shut down the vMX instance by using the request system halt command before you reboot the host server.