Proxmox Virtual Environment

As another option, you can consider building a lab in Proxmox VE. Internally, EVE-NG, native KVM with libvirtd on Ubuntu, and Proxmox VE use the same hypervisor: in all three environments, QEMU runs the VM. Each environment has its own CLI and GUI and is based on either the Debian or the Ubuntu Linux distribution.

Compared with EVE-NG and native KVM with libvirtd on Ubuntu, Proxmox VE offers these benefits:

  • Easy to build clusters of hypervisors, so your lab is not limited to a single bare-metal server (BMS).
  • Easy to attach shared storage such as Ceph.
  • Virtualizes networks across servers using the SDN option.
  • Provides a REST API to operate your systems.

Disadvantages of Proxmox VE compared with EVE-NG and native KVM with libvirtd on Ubuntu are:

  • You cannot build a UKSM kernel to deduplicate the RAM of multiple vJunos-switch instances. Hence, each vJunos-switch VM needs 5 GB of RAM.
  • Proxmox VE does not run compressed or backing qcow2 images; instead, the images are expanded to raw format on the selected storage. Hence, each vJunos-switch VM needs 32 GB of storage.

This document includes examples of creating vJunos-switch VMs on Proxmox VE using a single, locally configured Proxmox server and standard Linux bridges, which makes the setup easy to compare with the other two environments described earlier. Because you do not use the Proxmox GUI to create the VM, you must run the configuration changes locally: create the juniper.conf image, and apply the Linux bridge and VM interface changes after the VM is created on Proxmox VE. The CLI example also makes it easier to include the steps in a script that launches multiple vJunos-switch VMs.

Note:

For scale-out labs with multiple servers, we recommend using SDN with VXLAN as the network transport option instead of local Linux bridges.

Proxmox VE Preparations

After installing the hypervisor, create the networks to use for the vJunos-switch VMs and other devices in your lab. As in the earlier examples, use the Proxmox GUI to create standard Linux bridges, such as the three shown below, and ensure that they are activated.

[Figure: Linux bridges created in the Proxmox GUI]

Assign a name to each Linux bridge and, optionally, set the MTU to 9200. You can also change the MTU value with the script after you create the VM. Avoid populating or changing any of the other values.
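
For reference, a Linux bridge created this way typically appears in /etc/network/interfaces roughly like the following stanza. The bridge name vmbr1 and the MTU value are taken from the example; the exact output the GUI generates may differ slightly, so treat this as an illustration rather than something to edit by hand.

auto vmbr1
iface vmbr1 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        mtu 9200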

[Figure: Linux bridge creation dialog with the name and optional MTU set]

For all the remaining steps, SSH to the server and run the BASH commands locally. First, download the vJunos-switch qcow2 image to the server.

Download your free copy of the vJunos-switch VM image from https://support.juniper.net/support/downloads/?p=vjunos to a directory on the server, and then verify that the download completed.
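
A minimal sketch of this step is shown below. The download portal requires a Juniper account, so the image is typically fetched in a browser and then copied to the server; the file name vjunos-switch.qcow2, the host name proxmox-server, and the /root directory are placeholders for your actual values.

# Copy the downloaded image from your workstation to the Proxmox server (placeholder names).
scp vjunos-switch.qcow2 root@proxmox-server:/root/

# On the server: verify that the image is present and readable by the QEMU tooling.
ls -lh /root/vjunos-switch.qcow2
qemu-img info /root/vjunos-switch.qcow2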

Deploy a vJunos-switch VM on Proxmox VE

Note:

Avoid creating the initial vJunos-switch VM using the Proxmox GUI, as the GUI might add additional parameters that prevent the VM from working properly. Instead, create the initial VM through the CLI and set it as a template. Then, use this template to launch all further VMs from the GUI.

Using BASH, perform the next steps on the server locally (a command sketch for these steps follows the list):

  1. Define the parameters of the individual VM:
    1. The VM ID/number. In the example, it is 200.
    2. The storage that the VM image runs from. In the example, it is the local-lvm storage.
  2. Delete any existing VM with the same ID. This is useful if you made an error and want to retry.
  3. Create the new vJunos-switch VM with all required parameters to start it correctly later:
    1. Name of the VM. In the example, vswitch. You can change the name.
    2. RAM and CPU. Do not change.
    3. Special BIOS and CPU options that are required for this VM to come up correctly. Do not change the options.
    4. Boot order and serial screen. Do not change.
    5. The first network, net0, which is assigned to the fxp0 interface of the VM. Change it if required, but ensure that the network can provide a DHCP lease for the VM.
    6. Additional networks starting with net1, which becomes interface ge-0/0/0 of the vJunos-switch VM. Change these according to your lab design, using more interfaces and other Linux bridges. We recommend keeping the option firewall=0 on each of these interfaces so that you do not overcomplicate the internal design.
  4. Import the vJunos-switch qcow2 image into the selected storage option. You might need to change the vJunos-switch qcow2 image file location.
  5. Extract the location of the imported image into a BASH variable.
  6. Add the image location to the created VM to boot from.
  7. Create a default juniper.conf with your initial Junos OS configuration for this VM.
  8. Use the make-config.sh script to create an image that embeds your individual juniper.conf file.
  9. Import the Junos OS configuration image to the selected storage option.
  10. Extract the location of the imported configuration image into a BASH variable.
  11. Add the configuration image location to the created VM.
  12. Check and review the complete configuration of the VM.
  13. Optional: Use the VM as template for future launches of vJunos-switch:
    1. Define the current VM as a template.
    2. Select a new VMID for the clone.
    3. Create a clone VM to use it later.
    4. Change the interface assignments for the clone if required.
  14. Launch the VM or its clone.
  15. Review the Linux bridge assignment locally for the started VM.
  16. Review in the Proxmox GUI whether the VM has started, and then access the console.
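
The following is a minimal BASH sketch of list items 1 through 12, assuming VM ID 200, the local-lvm storage, three Linux bridges named vmbr0 through vmbr2, a vJunos-switch image at /root/vjunos-switch.qcow2, and 4 vCPUs; all of these are placeholders you can adjust. The special BIOS and CPU options that vJunos-switch requires (item 3.3) are intentionally omitted here, and the make-config.sh script is shipped with the vJunos-switch download; take both from the original command listing. Items 13 through 16 are sketched separately further below.

# Item 1: individual VM parameters (placeholders).
VMID=200
STORAGE=local-lvm
IMAGE=/root/vjunos-switch.qcow2

# Item 2: remove a leftover VM with the same ID, if any.
qm stop ${VMID} 2>/dev/null
qm destroy ${VMID} 2>/dev/null

# Item 3: create the VM shell with 5 GB RAM, a serial console, and the networks.
# Add the required vJunos-switch BIOS/CPU options here (omitted in this sketch).
qm create ${VMID} --name vswitch --memory 5120 --cores 4 \
  --serial0 socket --vga serial0 \
  --net0 virtio,bridge=vmbr0,firewall=0 \
  --net1 virtio,bridge=vmbr1,firewall=0 \
  --net2 virtio,bridge=vmbr2,firewall=0

# Items 4-6: import the qcow2 image, read back its location, and make it the boot disk.
qm importdisk ${VMID} ${IMAGE} ${STORAGE}
DISK=$(qm config ${VMID} | awk '/^unused0:/ {print $2}')
qm set ${VMID} --virtio0 ${DISK} --boot order=virtio0

# Items 7-8: create juniper.conf (see the next section), then build the config image
# with make-config.sh (check the script's usage text for the exact arguments).
./make-config.sh juniper.conf config.img

# Items 9-11: import the config image, read back its location, and attach it to the VM.
qm importdisk ${VMID} config.img ${STORAGE}
CFGDISK=$(qm config ${VMID} | awk '/^unused0:/ {print $2}')
qm set ${VMID} --virtio1 ${CFGDISK}

# Item 12: review the complete VM configuration.
qm config ${VMID}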

Review the chapter Default Junos OS Configuration for vJunos-switch. That chapter guides you through creating an individual Junos OS configuration for your vJunos-switch VM, which works the same way as in the other environments. It also shows how to add an adopt configuration, which lets each new vJunos-switch VM automatically appear in the Mist Cloud inventory. Here, instead of repeating those steps, use a minimal startup configuration that allows remote SSH access as root with the password ABC123 on the fxp0 interface.
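
As an illustration only (not taken from that chapter), such a minimal juniper.conf could look roughly like the listing below. Save it as juniper.conf on the server before running make-config.sh, and replace the placeholder with a real password hash for ABC123 (for example, one generated with openssl passwd -6).

system {
    host-name vswitch;
    root-authentication {
        encrypted-password "<hash-of-ABC123>";
    }
    services {
        ssh {
            root-login allow;
        }
    }
}
interfaces {
    fxp0 {
        unit 0 {
            family inet {
                dhcp;
            }
        }
    }
}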

At this point, you must have created an individual Junos OS startup configuration before continuing the process.

Now, all our preparations are complete. You can review the resulting VM configuration.

As the VM does not contain any credentials or other limiting factors, use it as a template before you launch it for the first time. This allows you to later launch multiple VMs as full clones or as linked clones of the image. Follow the steps below if you decide to proceed.
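
A minimal sketch of list item 13, assuming the original VM ID 200, a new clone ID 201, and the placeholder name vswitch-1:

# Item 13.1: turn VM 200 into a template.
qm template 200
# Items 13.2-13.3: create a clone with a new VMID to launch later.
qm clone 200 201 --name vswitch-1
# Item 13.4: re-map the clone's interfaces to other Linux bridges if required, for example:
qm set 201 --net1 virtio,bridge=vmbr2,firewall=0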

If you have decided not to use a template/clone yet, then start the first vJunos-switch VM for testing now.
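
A sketch of list items 14 and 15: start the VM (or its clone) and review the Linux bridge assignment locally. VM ID 200 is the example value.

qm start 200
# List the bridges and the tap interfaces now attached to them.
ip -br link show type bridge
bridge link show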

You can now review the VM console in the Proxmox GUI. Ensure that you use the correct console button so that you reach the Routing Engine and do not make any changes on the outer VM screen. The Routing Engine, which is where all the Junos OS configuration happens, has its own screen. See the figure below for the Console options to select.

[Figure: Proxmox GUI showing the console options to select]

Linux Bridge and VM Interface Post VM Creation Changes on Proxmox VE

Launching the vJunos-switch VM alone does not meet the needs of most labs. You must tweak the standard Linux bridges used in this example after every new VM launch. For a detailed explanation, see the chapter Linux Bridge and VM Interface Post VM Creation Changes; the details are not repeated here. EVE-NG manages these tweaks automatically.

Proxmox VE does not provide the VM interface details and their names through the local CLI. However, these details are available through the REST API that the GUI uses. Using the provided pvesh command, you can access the VM interface data and extract JSON-based information about the created VM interfaces. It is therefore straightforward to rebuild a new script, vm-bridge-update.sh, using the pvesh and jq commands and regular BASH programming, as shown below.

Copy and paste the script below into your editor as vm-bridge-update.sh, then save and close the file.
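
The script shipped with the original document is not reproduced here; the listing below is a minimal sketch of the approach under stated assumptions. It queries the first node returned by the API (suitable for a single server), loops over the VM's netX interfaces, derives the tap interface names (tap<VMID>i<N>), and applies only two illustrative tweaks: the MTU and the group_fwd_mask value 16384, which lets a Linux bridge forward LLDP frames. Take the complete set of tweaks from the chapter Linux Bridge and VM Interface Post VM Creation Changes.

#!/bin/bash
# vm-bridge-update.sh (sketch) -- update bridges and tap interfaces after a vJunos-switch VM starts.
# Usage: ./vm-bridge-update.sh <VMID>
VMID=${1:-200}

# Use the first node reported by the API; fine for a single Proxmox VE server.
NODE=$(pvesh get /nodes --output-format json | jq -r '.[0].node')

# Read the VM configuration as JSON and loop over its netX interfaces.
pvesh get /nodes/${NODE}/qemu/${VMID}/config --output-format json \
  | jq -r 'to_entries[] | select(.key | test("^net[0-9]+$")) | "\(.key) \(.value)"' \
  | while read -r NET VALUE; do
      IDX=${NET#net}
      TAP="tap${VMID}i${IDX}"
      BRIDGE=$(echo "${VALUE}" | grep -o 'bridge=[^,]*' | cut -d= -f2)
      echo "updating ${TAP} on bridge ${BRIDGE}"
      # Illustrative tweaks only -- replace with the full list from the referenced chapter.
      ip link set dev "${TAP}" mtu 9200
      echo 16384 > "/sys/class/net/${BRIDGE}/bridge/group_fwd_mask"
    done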

With the new script, you can now update the Linux bridges and interfaces of the VM after it is started. Selecting the first node from the API is suitable for a single Proxmox VE installation; if you have a cluster, you might need to change the script above.

As a first test of your Linux bridge enhancements, check for LLDP neighbor announcements from your vJunos-switch VM. With the juniper.conf instructions in place but without the tweak, you do not see the announcements using tcpdump. See the example below.
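
One way to look for these announcements on the Proxmox server, assuming the first revenue port of VM 200 is attached as tap200i1 (net1) and using the LLDP Ethertype 0x88cc:

# Capture LLDP frames arriving from the vJunos-switch VM on its tap interface.
tcpdump -evni tap200i1 ether proto 0x88cc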

To perform a final test, launch a second vJunos-switch connected 1:1 to the first VM. Then, establish a LAG with active LACP between the two VMs. The configuration for both virtual switches in the Mist Cloud GUI is shown below.

[Figure: LAG configuration for both virtual switches in the Mist Cloud GUI]

If you inspect the vJunos-switch console locally, you should see the LLDP neighbors and the established LACP links between the two switches. This step verifies that your lab works as expected.
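
On the vJunos-switch console, the Junos operational commands to check this are, for example:

show lldp neighbors
show lacp interfaces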