Install Packet Flow Accelerator Diagnostics Software

14-Dec-23

Packet Flow Accelerator Diagnostics Software Overview

You can use the Packet Flow Accelerator Diagnostics software to test the FPGA module in the QFX-PFA-4Q module installed on the QFX5100-24Q-AA switch, as well as the data paths between the FPGA module and the QFX5100-24Q-AA switch. The Packet Flow Accelerator Diagnostics software contains standard diagnostics, orchestration diagnostics, and Precision Time Protocol (PTP) and synchronization diagnostics. In addition to these tests, the software includes utilities that you can use to further diagnose issues on the QFX-PFA-4Q module. For information on how to install the QFX-PFA-4Q module, see Installing an Expansion Module in a QFX5100 Device.

To run the orchestration diagnostics, PTP and synchronization diagnostics, and utilities contained in the Packet Flow Accelerator Diagnostics software, you must have Junos OS Release 14.1X53-D27 or later with enhanced automation installed on your QFX5100 switch. For information on how to download and install Junos OS software, see Installing Software Packages on QFX Series Devices.
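
You can confirm the installed release from the Junos OS CLI before you proceed. The following is a quick check; the release string shown here is illustrative and will differ on your switch:

{master:0}
root> show version | match Junos:
Junos: 14.1X53-D27_vjunos.62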

The Packet Flow Accelerator Diagnostics software runs in a guest VM on the switch and requires that you configure guest VM options in the Junos OS CLI.

Verify That the QFX-PFA-4Q Expansion Module Is Installed

Before you install the Packet Flow Accelerator Diagnostics software, verify that the QFX-PFA-4Q module is installed.

From the CLI prompt, issue the show chassis hardware command.

{master:0}
root> show chassis hardware
Hardware inventory:
Item             Version  Part number  Serial number     Description
Chassis                                VX3715020024      QFX5100-24Q-AA
Pseudo CB 0     
Routing Engine 0          BUILTIN      BUILTIN           QFX Routing Engine
FPC 0            REV 02   650-057155   VX3715020024      QFX5100-24Q-AA
  CPU                     BUILTIN      BUILTIN           FPC CPU
  PIC 0                   BUILTIN      BUILTIN           24x 40G-QSFP-AA
    Xcvr 6       REV 01   740-032986   QD334902          QSFP+-40G-SR4
  PIC 1          REV 01   711-060247   VY3115060052      QFX-PFA-4Q
Power Supply 0   REV 03   740-041741   1GA24082731       JPSU-650W-AC-AFO
Power Supply 1   REV 03   740-041741   1GA24082726       JPSU-650W-AC-AFO
Fan Tray 0                                               QFX5100 Fan Tray 0, Front to Back Airflow - AFO
Fan Tray 1                                               QFX5100 Fan Tray 1, Front to Back Airflow - AFO
Fan Tray 2                                               QFX5100 Fan Tray 2, Front to Back Airflow - AFO
Fan Tray 3                                               QFX5100 Fan Tray 3, Front to Back Airflow - AFO
Fan Tray 4                                               QFX5100 Fan Tray 4, Front to Back Airflow - AFO

From the CLI output, you can see that the QFX-PFA-4Q module, which contains four 40-Gbps QSFP+ interfaces (4x40G QSFP+), is installed as PIC 1.

Download the Packet Flow Diagnostics Software

Note:

To access the download site, you must have a service contract with Juniper Networks and an access account. If you need help obtaining an account, complete the registration form at the Juniper Networks website https://www.juniper.net/registration/Register.jsp .

To download the Packet Flow Diagnostics software package from the Juniper Networks Support website:

  1. Using a Web browser, navigate to https://www.juniper.net/support .
  2. Click Download Software.
  3. In the Switching box, click Junos OS Platforms.
  4. In the QFX Series section, click the name of the platform for which you want to download software.
  5. Click the Software tab and select the release number from the Release drop-down list.
  6. In the Install Package section on the Software tab, select the Install Package for the release.

    A login screen appears.

  7. Enter your username and password, and press Enter.
  8. Read the End User License Agreement, click the I agree radio button, and then click Proceed.
  9. Save the pfadiag_vm-rXXXXX.img.gz file on your computer.
  10. Open or save the Packet Flow Diagnostics software package either to the local system in the /var/tmp directory or to a remote location. If you are saving the installation package to a remote system, make sure that you can access it using HTTP, TFTP, FTP, or SCP.

Copy the Packet Flow Diagnostics Software Package to the Switch

To copy the packet flow diagnostics software package to the switch:

Copy the packet flow diagnostics package to the switch using any file transfer protocol:

For example:

root% scp hostname:/pathname/pfadiag_vm-rXXXXX.img.gz /var/tmp
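
Alternatively, if you prefer to pull the package from the remote location onto the switch itself, the Junos OS file copy command can fetch it over FTP or HTTP. This is a sketch rather than the documented procedure; the user, hostname, and pathname values are placeholders:

{master:0}
root> file copy ftp://user@hostname/pathname/pfadiag_vm-rXXXXX.img.gz /var/tmp/
root> file list /var/tmp/ | match pfadiag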

Install the Packet Flow Diagnostics Software on the Switch

To install the packet flow diagnostics software package on the switch:

  1. Install the Packet Flow Diagnostics software on the switch.

    This might take a few minutes.

    If the Packet Flow Diagnostics software resides locally on the switch, issue the following command:

    {master:0}
    root> request system software add virtual-machine-package /var/tmp/pfadiag_vm-rXXXXX.img.gz
    
    Installing virtual-machine package..
    Copying virtual-machine package..
    Uncompressing virtual-machine package..
    Finished virtual-machine package installation.
    
  2. Issue the show version command to verify that the installation was successful.
    {master:0}
    root> show version
    fpc0:                                                                                                                                                                 
    --------------------------------------------------------------------------                                                                                            
    Hostname: switch                                                                                                                                                      
    Model: qfx5100-24q-aa                                                                                                                                                 
    Junos: 14.1X53-D27_vjunos.62                                                                                                                                          
    JUNOS Base OS Software Suite [14.1X53-D26_vjunos.62]                                                                                                                  
    JUNOS Base OS boot [14.1X53-D27_vjunos.62]                                                                                                                            
    JUNOS Crypto Software Suite [14.1X53-D27_vjunos.62]                                                                                                                   
    JUNOS Online Documentation [14.1X53-D27_vjunos.62]                                                                                                                    
    JUNOS Kernel Software Suite [14.1X53-D27_vjunos.62]                                                                                                                   
    JUNOS Packet Forwarding Engine Support (qfx-ex-x86-32) [14.1X53-D27_vjunos.62]                                                                                        
    JUNOS Routing Software Suite [14.1X53-D27_vjunos.62]                                                                                                                  
    JUNOS Enterprise Software Suite [14.1X53-D27_vjunos.62]                                                                                                               
    JUNOS py-base-i386 [14.1X53-D27_vjunos.62]                                                                                                                            
    JUNOS py-extensions-i386 [14.1X53-D27_vjunos.62]                                                                                                                      
    JUNOS Host Software [14.1X53-D27_vjunos.62]                                                                                                                           
    Junos for Automation Enhancement                                                                                                                                      
    JUNOS GUEST-VM Software [pfadiag_vm-rXXXXX-ve]                                                                                                                       
                                                                                                                                                                          
    {master:0} 
    

    The CLI output shows that the Packet Flow Accelerator Diagnostics software was installed.

Configure the Guest VM Options to Launch the Guest VM on the Host

To configure the guest VM options:

  1. Configure the following options for guest VM support in the Junos OS CLI at the [edit] hierarchy.
    • Compute cluster name

    • Compute node name

    • VM instance name

    • Dedicated management interface for guest VM

    • Third-party package name

    • Internal IP address of the guest VM

  2. Configure the name of the compute cluster and compute node.

    The name of the compute cluster must be default-cluster, and the name of the compute node must be default-node; otherwise, launching the guest VM fails.

    {master:0}
    root# set services app-engine compute-cluster default-cluster compute-node default-node hypervisor
  3. Configure the name of the VM instance and the name of the third-party application.
    {master:0}
    root# set services app-engine virtual-machines instance instance-name package package-name 
    Note:

    The package names in the show app-engine virtual-machine-package command and the show version command should match.

    {master:0}
    root# set services app-engine virtual-machines instance diagnostics package pfadiag_vm-rXXXXX-ve
  4. Associate the VM instance with the configured compute cluster and compute node.
    {master:0}
    root# set services app-engine virtual-machines instance instance-name compute-cluster name compute-node name
    {master:0}
    root# set services app-engine virtual-machines instance diagnostics compute-cluster default-cluster compute-node default-node
    Note:

    The name of the compute cluster must be default-cluster, and the name of the compute node must be default-node; otherwise, launching the guest VM fails.

  5. Configure the local management IP address.

    This IP address is used for the internal bridging interface. The host uses this IP address to check the availability of the guest VM.

    Note:

    Do not use 192.168.1.1 or 192.168.1.2 as the IP address because these addresses are used by the host OS and Junos OS, respectively.

    {master:0}
    root# set services app-engine virtual-machines instance instance-name local-management family inet address 192.168.1.X
    {master:0}
    root# set services app-engine virtual-machines instance diagnostics local-management family inet address 192.168.1.10
  6. Configure the management interface for the guest VM.

    This management interface is separate from the one used for Junos OS.

    {master:0}
    root# set services app-engine virtual-machines instance diagnostics management-interface em1
    Note:

    The management interface name must be either em0 or em1. If you do not configure a management interface, the commit fails.

    The new management interface is provisioned for the guest VM.

  7. Commit the configuration.
    {master:0}
    root# commit

    Here are the results of the configuration:

    services {
        app-engine {
            compute-cluster default-cluster {
                compute-node default-node {
                    hypervisor;
                }
            }
            virtual-machines {
                instance diagnostics {
                    package pfadiag_vm-rXXXXX-ve;
                    local-management {
                        family inet {
                            address 192.168.1.10;
                        }
                    }
                    compute-cluster default-cluster {
                        compute-node default-node;
                    }
                    management-interface em1;
                }
            }
        }
    }
    

Verify That the Guest VM Is Working

To verify that the guest VM is working:

Issue the following show commands to verify that everything is working correctly:
  • root> show app-engine status

    Compute cluster: default-cluster
      Compute Node: default-node, Online

    The status should be Online.

  • root> show app-engine virtual-machine instance

    VM name                  Compute cluster           VM status
    diagnostics               default-cluster           ACTIVE

    The VM status should be ACTIVE.

  • root> show app-engine virtual-machine package

    VM package: pfadiag_vm-rXXXXX-ve
    Compute cluster                   Package download status
    default-cluster                   DOWNLOADED   
    

Access the Guest VM

To access the guest VM:

  1. Log into the guest VM.
    • Specify the guest VM name using the request app-engine virtual-machine-shell guest-VM-name command. The maximum length for the guest VM name is 255 characters. Make sure you are logged in as root when you enter this command.

      root> request app-engine virtual-machine-shell diagnostics
    • Enter a valid username and password combination for the guest VM.

      Note:

      The first time you log in, the username is root. There is no password. After you log in, you will be prompted to create a password.

      For example:

      Maxeler Ikon Diagnostics VM r44702
      
      diagnostics login: root
      You are required to change your password immediately (root enforced)
      New password: 
      Retype new password:
      
  2. Issue the ifconfig -a command to see the name of the management interface used to access the guest VM from outside the network, the name of the interface used for internal communication, and the NIC ports used in the diagnostics VM.

    In this example, the heartbeat interface carries the IP address used for internal communication, the management interface is used for external communication, and the xe-0/0/40 and xe-0/0/41 interfaces are the NIC ports used in the diagnostics VM. The heartbeat interface is configured by default. Its IP address is the same as the local-management IP address you configured in the Junos OS CLI.

    You can associate one of the interfaces with the guest VM by issuing the set services app-engine virtual-machines instance name management-interface interface-name command. Use the same IP address as the one you configured with the set services app-engine virtual-machines instance diagnostics local-management family inet address 192.168.1.10 command. The MAC addresses associated with these interfaces are used for internal bridging.

    [root@ikondiag ~]# ifconfig -a
    heartbeat Link encap:Ethernet  HWaddr 52:54:00:5D:DB:01  
              inet addr:192.168.1.10  Bcast:0.0.0.0  Mask:255.255.255.0
              inet6 addr: fe80::5054:ff:fe5d:db01/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:282 errors:0 dropped:0 overruns:0 frame:0
              TX packets:266 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000 
              RX bytes:24955 (24.3 KiB)  TX bytes:24232 (23.6 KiB)
    
    lo        Link encap:Local Loopback  
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0 
              RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
    
    management Link encap:Ethernet  HWaddr 52:54:00:76:B3:C4  
              inet6 addr: fe80::5054:ff:fe76:b3c4/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:6 errors:0 dropped:0 overruns:0 frame:0
              TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000 
              RX bytes:438 (438.0 b)  TX bytes:1836 (1.7 KiB)
    
    xe-0-0-40 Link encap:Ethernet  HWaddr EA:8B:BB:75:56:FE  
              inet6 addr: fe80::e88b:bbff:fe75:56fe/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000 
              RX bytes:0 (0.0 b)  TX bytes:140 (140.0 b)
    
    xe-0-0-41 Link encap:Ethernet  HWaddr 3E:1A:00:94:ED:5B  
              inet6 addr: fe80::3c1a:ff:fe94:ed5b/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000 
              RX bytes:0 (0.0 b)  TX bytes:230 (230.0 b)
    
    

Verify That the FPGA Module Is Working

You can use the following utilities to verify that the FPGA module on the QFX-PFA-4Q module is working.

To verify that the FPGA module is working:

  1. Issue the lspci | grep "RAM memory" command at the guest VM prompt:
    [root@ikondiag ~]# lspci |grep "RAM memory"
    00:09.0 RAM memory: Juniper Networks Device 0078

    The output shows that Juniper Networks Device 0078 is detected.

  2. Issue the lspci | grep Co-processor command at the guest VM prompt:
    [root@ikondiag ~]# lspci |grep Co-processor
    00:0a.0 Co-processor: Maxeler Technologies Ltd. Device 0006

    The output shows that Maxeler Technologies Ltd. Device 0006 is detected.

  3. Issue the maxtop command at the guest VM prompt:
    Note:

    If there are errors in the command output, relaunch the guest VM.

    [root@ikondiag ~]# maxtop

    MaxTop Tool 2015.1
    Found 1 card(s) running MaxelerOS 2015.1
    Card 0: QFX-PFA-4Q (P/N: 241124) S/N: 96362301684266423 Mem: 24GB
    
    Load average: 0.00, 0.00, 0.00
    
    DFE  %BUSY  TEMP   MAXFILE        PID    USER       TIME      COMMAND         
     0   0.0%   -      2fcf249cc7...  -      -          -         -               
    

Validate the Connections Between QFX5100-24Q-AA Switch Network Ports and QFX-PFA-4Q Module Ports

You can use the ikon_eth_util --all-pass-through utility to validate the connections between the QFX5100-24Q-AA switch network ports and the QFX-PFA-4Q module ports.

In this example, the ikon_eth_util --all-pass-through utility validates the connections between the F-ports, A-ports, B-ports, and C-ports listed in Table 1.

Table 1: Validating Ports

F-Port: xe-0/0/10:2
This interface is one of the 10-Gigabit Ethernet ports on the QFX5100-24Q-AA switch. You can manage these ports through Junos OS.

A-Port: xe-0/0/32
This interface connects the PFE of the QFX5100-24Q-AA switch to the B-ports on the FPGA module on the QFX-PFA-4Q module.

B-Port: JDFE_XE32_10G
This interface is an internal 10-Gigabit Ethernet port on the FPGA module on the QFX-PFA-4Q module and connects to the A-ports on the PFE of the QFX5100-24Q-AA switch.

C-Port: JDFE_QSFP0_10G_PORT0 [External Port 0-0]
This interface is one of the front-facing 40-Gigabit Ethernet ports on the QFX-PFA-4Q module and connects to the guest VM running on the QFX5100-24Q-AA switch and to the F-ports on the QFX5100-24Q-AA switch.

To validate the connections between the QFX5100-24Q-AA switch network ports and the QFX-PFA-4Q module ports:

  1. Configure a VLAN and VLAN ID:
    [edit vlans]
    user@switch# set VLAN_TEST vlan-id 100
  2. Associate the F-port and A-port in this VLAN so that the FPGA and PFE can communicate:
    [edit interfaces]
    user@switch# set xe-0/0/10:2 unit 0 family ethernet-switching vlan members VLAN_TEST
    user@switch# set xe-0/0/32 unit 0 family ethernet-switching vlan members VLAN_TEST
  3. Commit the configuration:
    [edit]
    user@switch# commit synchronize
  4. Verify that the VLAN has been created.
    [edit]
    user@switch# run show vlans
        Routing instance        VLAN name             Tag          Interfaces
        default-switch          VLAN_TEST              100      
                                                                   xe-0/0/10:2.0*
                                                                   xe-0/0/32.0*
        default-switch          default               1        
                                                                    
    
  5. Issue the ikon_eth_util --all-pass-through command at the guest VM prompt:
    [root@ikondiag ~]# ikon_eth_util --all-pass-through
    Ikon Ethernet Pass Through Utility
    setting portConnect_JDFE_QSFP0_10G_PORT0_JDFE_XE32_10G to 1
    setting portConnect_JDFE_QSFP0_10G_PORT1_JDFE_XE33_10G to 1
    setting portConnect_JDFE_QSFP0_10G_PORT2_JDFE_XE34_10G to 1
    setting portConnect_JDFE_QSFP0_10G_PORT3_JDFE_XE35_10G to 1
    setting portConnect_JDFE_XE24_10G_JDFE_QSFP1_10G_PORT0 to 1
    setting portConnect_JDFE_XE25_10G_JDFE_QSFP1_10G_PORT1 to 1
    setting portConnect_JDFE_XE26_10G_JDFE_QSFP1_10G_PORT2 to 1
    setting portConnect_JDFE_XE27_10G_JDFE_QSFP1_10G_PORT3 to 1
    setting portConnect_JDFE_XE28_10G_JDFE_QSFP2_10G_PORT0 to 1
    setting portConnect_JDFE_XE29_10G_JDFE_QSFP2_10G_PORT1 to 1
    setting portConnect_JDFE_XE30_10G_JDFE_QSFP2_10G_PORT2 to 1
    setting portConnect_JDFE_XE31_10G_JDFE_QSFP2_10G_PORT3 to 1
    setting portConnect_JDFE_XE36_10G_JDFE_QSFP3_10G_PORT0 to 1
    setting portConnect_JDFE_XE37_10G_JDFE_QSFP3_10G_PORT1 to 1
    setting portConnect_JDFE_XE38_10G_JDFE_QSFP3_10G_PORT2 to 1
    setting portConnect_JDFE_XE39_10G_JDFE_QSFP3_10G_PORT3 to 1
    running press return key to exit
  6. Send traffic to xe-0/0/10:2 on the QFX5100-24Q-AA switch and receive traffic on the front panel port 0-0 on the QFX-PFA-4Q module.
  7. Send traffic to the front panel port 0-0 on the QFX-PFA-4Q module and receive traffic on xe-0/0/10:2 on the QFX5100-24Q-AA switch.
  8. Verify the statistics for the xe-0/0/10:2 and xe-0/0/32 interfaces by issuing the show interfaces xe-0/0/10:2 extensive and show interfaces xe-0/0/32 extensive commands (a sketch of these commands appears after this procedure).
  9. Verify the statistics for the JDFE_XE32_10G and JDFE_QSFP0_10G_PORT0 interfaces by issuing the maxnet link show commands at the guest VM prompt for the Packet Flow Accelerator Diagnostics software.

    [root@ikondiag ~]# maxnet link show JDFE_XE32_10G

    JDFE_XE32_10G:
             Link Up: true                  
         MAC address: 00:11:22:33:44:55     
          RX Enabled: true                  
           RX Frames: 1 ok    
                      0 error 
                      0 CRC error
                      0 invalid/errored
                      1 total 
          TX Enabled: true                  
           TX Frames: 0 ok    
                      0 error 
                      0 CRC error
                      0 invalid/errored
                      0 total 
    

    [root@ikondiag ~]# maxnet link show JDFE_QSFP0_10G_PORT0

    JDFE_QSFP0_10G_PORT0:
             Link Up: true                  
         MAC address: 00:11:22:33:44:55     
          RX Enabled: true                  
           RX Frames: 0 ok    
                      0 error 
                      0 CRC error
                      0 invalid/errored
                      0 total 
          TX Enabled: true                  
           TX Frames: 1 ok    
                      0 error 
                      0 CRC error
                      0 invalid/errored
                      1 total 
    
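
For step 8, the following is a sketch of the corresponding Junos OS checks; the match filter simply trims the extensive output to the packet counters:

{master:0}
user@switch> show interfaces xe-0/0/10:2 extensive | match packets
user@switch> show interfaces xe-0/0/32 extensive | match packets

Confirm that the counters for both interfaces increase as you send traffic in each direction.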

Uninstall the Guest VM

To remove the guest VM:

  1. Delete the configuration statements for the guest VM.

    For example, to remove the app-engine statement:

    root# delete services app-engine
  2. Commit the configuration.
    root# commit
  3. (Optional) Issue the show version command to learn the name of the Packet Flow Accelerator Diagnostics software package.
    {master:0}
    root> show version
    fpc0:                                                                                                                                                                 
    --------------------------------------------------------------------------                                                                                            
    Hostname: switch                                                                                                                                                      
    Model: qfx5100-24q-aa                                                                                                                                                 
    Junos: 14.1X53-D27_vjunos.62                                                                                                                                          
    JUNOS Base OS Software Suite [14.1X53-D27_vjunos.62]                                                                                                                  
    JUNOS Base OS boot [14.1X53-D27_vjunos.62]                                                                                                                            
    JUNOS Crypto Software Suite [14.1X53-D27_vjunos.62]                                                                                                                   
    JUNOS Online Documentation [14.1X53-D27_vjunos.62]                                                                                                                    
    JUNOS Kernel Software Suite [14.1X53-D27_vjunos.62]                                                                                                                   
    JUNOS Packet Forwarding Engine Support (qfx-ex-x86-32) [14.1X53-D26_vjunos.62]                                                                                        
    JUNOS Routing Software Suite [14.1X53-D27_vjunos.62]                                                                                                                  
    JUNOS Enterprise Software Suite [14.1X53-D27_vjunos.62]                                                                                                               
    JUNOS py-base-i386 [14.1X53-D27_vjunos.62]                                                                                                                            
    JUNOS py-extensions-i386 [14.1X53-D27_vjunos.62]                                                                                                                      
    JUNOS Host Software [14.1X53-D27_vjunos.62]                                                                                                                           
    Junos for Automation Enhancement                                                                                                                                      
    JUNOS GUEST-VM Software [pfadiag_vm-rXXXXX-ve]                                                                                                                       
                                                                                                                                                                          
    {master:0} 
    
  4. Issue the request system software delete virtual-machine-package <package-name> command to uninstall the Packet Flow Accelerator Diagnostics software.
    root> request system software delete virtual-machine-package pfadiag_vm-rXXXXX-ve
    fpc0:
    --------------------------------------------------------------------------
    Deleted virtual-machine package pfadiag_vm-rXXXXX-ve ...
    
    