Examples: Tests

This section assumes that Test Agents (as many as are required for the tests) have been created according to Creating and Deploying a New Test Agent.

Overview of Test Orchestration

Before you can create and run a test through the REST API, you must have a template defined in Control Center on which to base the test, as explained in the chapter Test and Monitor Templates. All parameters specified in that template as "Template input" then need to be assigned values when you create a test from it through the REST API.

Creating and Running a Test

The template we will use for our test in this example is an HTTP test template.

To inspect that template through the REST API, we retrieve a list of all test templates in the account:
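
A minimal sketch in Python using the requests library. The /rest base path, the accounts/.../test_templates/ endpoint, and the API-Token header are assumptions here; check them against the API reference for your Control Center version.

```python
import requests

BASE_URL = "https://<host IP>/rest"            # assumed REST API base path
HEADERS = {"API-Token": "<your API token>"}    # assumed authentication header

# List all test templates defined in the account
response = requests.get(
    f"{BASE_URL}/accounts/<account>/test_templates/",
    headers=HEADERS,
)
response.raise_for_status()  # abort on HTTP errors
print(response.json())
```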

If our HTTP template is the only test template defined, the response will look like this:
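
The sketch below is illustrative only: apart from the clients and url inputs discussed next, all field names and values are assumptions, and the exact structure depends on the Control Center version.

```json
[
    {
        "id": 1,
        "name": "HTTP test template",
        "description": "Fetch a URL from one or more clients",
        "inputs": {
            "clients": {"type": "interface_list", "required": true},
            "url": {"type": "url", "required": true}
        }
    }
]
```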

The HTTP test in this template has two mandatory inputs left to be defined at runtime: clients (list of Test Agent interfaces playing the role of clients) and url (the URL to fetch using HTTP). The parameter names are those defined as Variable name in Control Center. Here, they are simply lowercase versions of the Control Center display names ("clients" vs. "Clients", etc.).

If there are multiple templates, and you want to inspect just a single template with a known ID, you can do that as follows:
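
A sketch under the same endpoint and authentication assumptions as above; the template ID used here is hypothetical.

```python
import requests

BASE_URL = "https://<host IP>/rest"            # assumed REST API base path
HEADERS = {"API-Token": "<your API token>"}    # assumed authentication header

TEMPLATE_ID = 1  # hypothetical ID of the HTTP test template

# Retrieve a single test template by its ID
response = requests.get(
    f"{BASE_URL}/accounts/<account>/test_templates/{TEMPLATE_ID}/",
    headers=HEADERS,
)
response.raise_for_status()
print(response.json())
```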

We now create and run the HTTP test using the POST operation for tests.

Below is code supplying the required parameter settings for a test based on the HTTP test template. Depending on the structure of the template, the details here will of course vary. Another example with a slightly more complex test template is found in the section Example with a Different Test Template.
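
A sketch of the POST request, again under the endpoint and header assumptions above. The input_values key and the clients and url parameters come from the template; the test_template_id and name keys and the format of a client interface entry are assumptions.

```python
import requests

BASE_URL = "https://<host IP>/rest"            # assumed REST API base path
HEADERS = {"API-Token": "<your API token>"}    # assumed authentication header

test = {
    "test_template_id": 1,             # hypothetical template ID (assumed key)
    "name": "HTTP test via REST API",  # assumed key
    "input_values": {
        "clients": [
            {"test_agent": "ta1", "interface": "eth0"},  # assumed entry format
        ],
        "url": "https://www.example.com",
    },
}

# Create and run the test
response = requests.post(
    f"{BASE_URL}/accounts/<account>/tests/",
    headers=HEADERS,
    json=test,
)
response.raise_for_status()
print(response.json())
```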

The execution of the test is displayed in Control Center.

Control Center will also respond to the REST API command with the ID of the test. In this example, the test ID is 47:
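
A minimal illustration of the response body carrying the test ID; any further fields are version-dependent and omitted here.

```json
{
    "id": 47
}
```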

The test ID can also be found in the URL for the test in the Control Center web GUI. In this example, that URL is https://<host IP>/<account>/testing/47/.

Example with a Different Test Template

Here is another example of a test template: one for UDP, taking as input a server, a list of clients, and a UDP port number, as defined in the Paragon Active Assurance GUI.

To supply the inputs to this template, we can use the code below. Here, we have overridden the default value of port. If the default value (5000) is kept, the port section can be omitted from input_values.
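
A sketch under the same assumptions as the HTTP example: input_values and the server, clients, and port parameters come from the template, while the template ID and the interface entry format are assumptions.

```python
import requests

BASE_URL = "https://<host IP>/rest"            # assumed REST API base path
HEADERS = {"API-Token": "<your API token>"}    # assumed authentication header

test = {
    "test_template_id": 2,            # hypothetical UDP template ID (assumed key)
    "name": "UDP test via REST API",  # assumed key
    "input_values": {
        "server": {"test_agent": "ta1", "interface": "eth0"},  # assumed format
        "clients": [
            {"test_agent": "ta2", "interface": "eth0"},        # assumed format
        ],
        # Overriding the default port (5000); omit this key to keep the default.
        "port": 6000,
    },
}

response = requests.post(
    f"{BASE_URL}/accounts/<account>/tests/",
    headers=HEADERS,
    json=test,
)
response.raise_for_status()
print(response.json())
```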

Retrieving Test Results

You retrieve results for a test by pointing to the test ID. This also fetches the full configuration of the test.
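
A sketch under the same endpoint and authentication assumptions as before:

```python
import requests

BASE_URL = "https://<host IP>/rest"            # assumed REST API base path
HEADERS = {"API-Token": "<your API token>"}    # assumed authentication header

TEST_ID = 47  # the ID returned when the test was created

# Fetch the test: full configuration plus results
response = requests.get(
    f"{BASE_URL}/accounts/<account>/tests/{TEST_ID}/",
    headers=HEADERS,
)
response.raise_for_status()
results = response.json()
```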

The basic test results consist of a passed/failed outcome for each test step and for the test as a whole.

By default, for tests where this is relevant, the test results also include averaged metrics taken over the full duration of the test. The average metrics are found in tasks > streams > metrics_avg. You can turn these average metrics off by setting with_metrics_avg to false in the query string.

Optionally, this operation can return detailed (second-by-second) data metrics for each task performed by the test (again, for tests which produce such data). You turn on this feature by setting with_detailed_metrics to true. By default, this flag is set to false. The detailed data metrics are found under tasks > streams > metrics.

There is yet another setting, with_other_results, which, if set to true, causes additional test results (routes and reroute events) to be returned for the Path trace task type.
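
For example, to fetch detailed metrics while suppressing the averages, the flags named above can be passed in the query string (a sketch; endpoint path and header as assumed earlier, and string values are used so the server receives lowercase true/false):

```python
import requests

BASE_URL = "https://<host IP>/rest"            # assumed REST API base path
HEADERS = {"API-Token": "<your API token>"}    # assumed authentication header

TEST_ID = 47

params = {
    "with_metrics_avg": "false",      # turn off averaged metrics
    "with_detailed_metrics": "true",  # include second-by-second metrics
    "with_other_results": "true",     # Path trace: routes and reroute events
}
response = requests.get(
    f"{BASE_URL}/accounts/<account>/tests/{TEST_ID}/",
    headers=HEADERS,
    params=params,
)
response.raise_for_status()
```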

Example 1: TWAMP Test

A TWAMP test is an example of a test that produces metrics continuously.

Below is Python code for getting the results from a TWAMP test:
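
A minimal sketch, under the same endpoint and authentication assumptions as above; the TWAMP test ID is hypothetical.

```python
import json
import requests

BASE_URL = "https://<host IP>/rest"            # assumed REST API base path
HEADERS = {"API-Token": "<your API token>"}    # assumed authentication header

TWAMP_TEST_ID = 48  # hypothetical ID of the TWAMP test

response = requests.get(
    f"{BASE_URL}/accounts/<account>/tests/{TWAMP_TEST_ID}/",
    headers=HEADERS,
    params={"with_detailed_metrics": "true"},  # include per-second metrics
)
response.raise_for_status()
print(json.dumps(response.json(), indent=4))  # pretty-print the results
```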

The output will look something like this:
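
An illustrative skeleton only: the tasks > streams > metrics_avg / metrics nesting is as described above, but all other field names are assumptions and all values are placeholders.

```json
{
    "id": 48,
    "status": "passed",
    "tasks": [
        {
            "streams": [
                {
                    "metrics_avg": {"...": "averages over the full test"},
                    "metrics": [{"...": "one entry per second"}]
                }
            ]
        }
    ]
}
```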

Example 2: DSCP Remapping Test

A DSCP remapping test is one that does not produce continuous metrics, but rather a single set of results at the end. It cannot run concurrently with anything else. The format of the output for this test is indicated below. (The Python code for retrieving the test results is the same except for the test ID.)

Generating a PDF Report on a Test

You can generate a PDF report on a test directly from the REST API. The report has the same format as that generated from the Control Center GUI.

By default, the report covers the last 15 minutes of the test. You can specify a different time interval by including the start and end parameters in a query string at the end of the URL. The time is given in UTC (ISO 8601) as specified in IETF RFC 3339.

In addition, the following options can be included in the query string:

  • worst_num: How many measurement results to show for each task in the test, ranked by number of errored seconds with the worst on top. The scope of a measurement result is task-dependent; for HTTP, for example, it is the result obtained for one client. The default is 30.
  • graphs: Include graphs in the report.

Example:
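
A sketch combining the options above; the pdf_report path segment is an assumption, as are the base path and header, while start, end (RFC 3339, UTC), worst_num, and graphs are the parameters described in this section.

```python
import requests

BASE_URL = "https://<host IP>/rest"            # assumed REST API base path
HEADERS = {"API-Token": "<your API token>"}    # assumed authentication header

TEST_ID = 47

params = {
    "start": "2023-01-01T12:00:00Z",  # report interval start (RFC 3339, UTC)
    "end": "2023-01-01T12:15:00Z",    # report interval end
    "worst_num": 10,                  # show the 10 worst results per task
    "graphs": "true",                 # include graphs (assumed boolean flag)
}
response = requests.get(
    f"{BASE_URL}/accounts/<account>/tests/{TEST_ID}/pdf_report/",
    headers=HEADERS,
    params=params,
)
response.raise_for_status()

# Save the PDF report to disk
with open("test_report.pdf", "wb") as f:
    f.write(response.content)
```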