Extensible Telemetry Guide
Extensible Telemetry Overview
Install Apstra device drivers and telemetry collectors to collect additional telemetry that can be used in analytics probes. The device drivers enable Apstra to connect to a NOS and collect telemetry. Apstra ships with drivers for EOS, NX-OS, Ubuntu, and CentOS. To add a driver for an operating system not listed here, contact Juniper Support.
Telemetry collectors are Python modules that help collect extended telemetry. The following sections describe the pipeline for creating telemetry collectors and extending Apstra with new collectors. You need familiarity with Python to be able to develop collectors.
Set Up Development Environment
To get access to telemetry collectors (which are housed in the aos_developer_sdk repository) contact Juniper Support. Contribute any new collectors that you develop to the repository.
To keep your system environment intact, we recommend that you use a virtual environment to isolate the required Python packages (for development and testing). You can download the base development environment, aos_developer_sdk.run, from https://support.juniper.net/support/downloads/?p=apstra/. To load the environment, execute:
aos_developer_sdk$ bash aos_development_sdk.run
4d8bbfb90ba8: Loading layer [==================================================>]  217.6kB/217.6kB
7d54ea05a373: Loading layer [==================================================>]  4.096kB/4.096kB
e2e40f457231: Loading layer [==================================================>]  1.771MB/1.771MB
Loaded image: aos-developer-sdk:2.3.1-129

================================================================================
Loaded AOS Developer SDK Environment Container Image
aos-developer-sdk:2.3.1-129.

Container can be run by
docker run -it \
    -v <path to aos developer_sdk cloned repo>:/aos_developer_sdk \
    --name <container name> \
    aos-developer-sdk:2.3.1-129
================================================================================
This command loads the aos_developer_sdk Docker image. After the image load is complete, the command to start the environment is printed. Start the container environment as specified by the command. To install the dependencies, execute:
root@f2ece48bb2f1:/# cd /aos_developer_sdk/
root@f2ece48bb2f1:/aos_developer_sdk# make setup_env
...
The environment is now set up for developing and testing the collectors. Apstra SDK packages, such as device drivers and REST client, are also installed in the environment.
Write Collector
A collector is a class that derives from aos.sdk.system_agent.base_telemetry_collector.BaseTelemetryCollector. Override the collect method of the collector with the logic to:
Collect Data from Device
The device driver instance inside the collector provides methods to execute commands against the device. For example, most Apstra device drivers provide the methods get_json and get_text to execute commands and return the output.
The device drivers for the aos_developer_sdk environment are preinstalled. You can explore the methods available to collect data. For example:
>>> from aos.sdk.driver.eos import Device
>>> device = Device('172.20.180.10', 'admin', 'admin')
>>> device.open()
>>> pprint.pprint(device.get_json('show version'))
{u'architecture': u'i386',
 u'bootupTimestamp': 1548302664.0,
 u'hardwareRevision': u'',
 u'internalBuildId': u'68f3ae78-65cb-4ed3-8675-0ff2219bf118',
 u'internalVersion': u'4.20.10M-10040268.42010M',
 u'isIntlVersion': False,
 u'memFree': 3003648,
 u'memTotal': 4011060,
 u'modelName': u'vEOS',
 u'serialNumber': u'',
 u'systemMacAddress': u'52:54:00:ce:87:37',
 u'uptime': 62620.55,
 u'version': u'4.20.10M'}
>>> dir(device)
['AOS_VERSION_FILE', '__class__', '__delattr__', '__dict__', '__doc__', '__format__',
 '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__',
 '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__',
 '__weakref__', 'close', 'device_info', 'driver', 'execute', 'get_aos_server_ip',
 'get_aos_version_related_info', 'get_device_aos_version', 'get_device_aos_version_number',
 'get_device_info', 'get_json', 'get_text', 'ip_address', 'onbox', 'open', 'open_options',
 'password', 'probe', 'set_device_info', 'upload_file', 'username']
Parse Data
The collected data must be parsed and reformatted per the Apstra framework and the service schema identified above. Collectors with a generic storage schema use the following structure:
{ "items": [ { "identity": <key goes here>, "value": <value goes here>, }, { "identity": <key goes here>, "value": <value goes here>, }, ... ] }
Collectors with an IBA-based schema use the following structure:
[ { "key": <key goes here>, "value": <value goes here>, }, { "key": <key goes here>, "value": <value goes here>, }, ... ]
In the structures above, the data posted has multiple items. Each item has a key and a value. For example, to post interface specific information, there would be an identity/key-value pair for each interface you want to post to the framework.
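As a concrete illustration of the generic storage schema, the short sketch below builds the "items" payload for two hypothetical interfaces; the interface names and counter values are invented for this example.

import json

# Hypothetical per-interface counters already collected from a device.
counters = {
    'Ethernet1': {'rx': 1200, 'tx': 3400},
    'Ethernet2': {'rx': 560, 'tx': 780},
}

# One identity/value pair per interface, wrapped in the "items" list
# expected by the generic storage schema.
payload = json.dumps({
    'items': [
        {'identity': intf_name, 'value': json.dumps(stats)}
        for intf_name, stats in counters.items()
    ]
})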
If you use a third-party package to parse the data obtained from a device, list the Python package and its version in <aos_developer_sdk>/aosstdcollectors/requirements_<NOS>.txt.
The packages installed by the dependency must not conflict with the packages that the Apstra software uses. The Apstra-installed packages are listed in /etc/aos/python_dependency.txt in the development environment.
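For example, if an EOS collector needed a hypothetical parsing library, the requirements file could contain a single pinned entry like the following (package and version shown only as placeholders):

# <aos_developer_sdk>/aosstdcollectors/requirements_eos.txt
textfsm==1.1.3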
Post Data to Framework
When the data has been collected and parsed per the required schema, post it to the framework using the post_data method available in the collector. It accepts one argument: the data to be posted to the framework.
The folder aos_developer_sdk/aosstdcollectors/aosstdcollectors in the repository contains a folder for each NOS. Add your collector to the folder that matches the NOS, and name the file after the service name. For example, to write a collector for Cumulus, add the collector to aos_developer_sdk/aosstdcollectors/aosstdcollectors/cumulus; if the service name is interface_in_out_bytes, name the file interface_in_out_bytes.py. (Cumulus is no longer supported as of Apstra version 4.1.0; this example remains for illustrative purposes.)
In addition to defining the collector class, define the function collector_plugin in the collector file. The function takes one argument and returns the collector class that is implemented.
For example, a generic storage schema based collector looks like:
""" Service Name: interface_in_out_bytes Schema: Key: String, represents interface name. Value: Json String with two possible keys: rx: integer value, represents received bytes. tx: integer value, represents transmitted bytes. DOS: eos Data collected using command: 'show interfaces' Type of Collector: BaseTelemetryCollector Storage Schema Path: aos.sdk.telemetry.schemas.generic Application Schema: { 'type': 'object', 'properties': { 'identity': { 'type': 'string', }, 'value': { 'type': 'object', 'properties': { 'rx': { 'type': 'number', }, 'tx': { 'type': 'number', } }, 'required': ['rx', 'tx'], } } } """ import json from aos.sdk.system_agent.base_telemetry_collector import BaseTelemetryCollector # Inheriting from BaseTelemetryCollector class InterfaceRxTxCollector(BaseTelemetryCollector): # Overriding collect method def collect(self): # Obtaining the command output using the device instance. collected_data = self.device.get_json('show interfaces') # Data is in the format # "interfaces": { # "<interface_name>": { # .... # "interfaceCounters": { # .... # "inOctets": int # "outOctets": int # .... # } # } # ... # } # Parse the data as per the schema and structure required. parsed_data = json.dumps({ 'items': [ { 'identity': intf_name, 'value': json.dumps({ 'rx': intf_stats['interfaceCounters'].get('inOctets'), 'tx': intf_stats['interfaceCounters'].get('outOctets'), }) } for intf_name, intf_stats in collected_data['interfaces'].iteritems() if 'interfaceCounters' in intf_stats ] }) # Post the data to the framework self.post_data(parsed_data) # Define collector_plugin class to return the Collector def collector_plugin(_device): return InterfaceRxTxCollector
An IBA storage schema based collector looks like:
""" Service Name: iba_bgp Schema: Key: JSON String, specifies local IP and peer IP. Value: String. ‘1’ if state is established ‘2’ otherwise DOS: eos Data collected using command: 'show ip bgp summary vrf all' Storage Schema Path: aos.sdk.telemetry.schemas.iba_string_data Application Schema: { 'type': 'object', 'properties': { key: { 'type': 'object', 'properties': { 'local_ip': { 'type': 'string', }, 'peer_ip': { 'type': 'string', } }, 'required': ['local_ip', 'peer_ip'], }, 'value': { 'type': 'string', } } } """ from aos.sdk.system_agent.base_telemetry_collector import IBATelemetryCollector def parse_text_output(collected): result = [ {'key': {'local_ip': str(vrf_info['routerId']), 'peer_ip': str(peer_ip)}, 'value': str( 1 if session_info['peerState'] == 'Established' else 2)} for vrf_info in collected['vrfs'].itervalues() for peer_ip, session_info in vrf_info['peers'].iteritems()] return result # Inheriting from BaseTelemetryCollector class IbaBgpCollector(BaseTelemetryCollector): # Overriding collect method def collect(self): # Obtaining the command output using the device instance. collected_data = self.device.get_json('show ip bgp summary vrf all') # Parse the data as per the schema and structure required and # post to framework. self.post_data(parse_text_output(collected_data)) # Define collector_plugin class to return the Collector def collector_plugin(device): return IbaBgpCollector
Unit Test Collector
The folder aos_developer_sdk/aosstdcollectors/test in the repository contains folders based on the NOS. Add your test to the folder that matches the NOS. For example, a test for a Cumulus collector is added to aos_developer_sdk/aosstdcollectors/test/cumulus. We recommend that you name the unit test with the prefix test_.
The existing infrastructure implements a Pytest fixture, collector_factory, that is used to mock the device driver command response. The general flow for test development is as follows.
- Use the collector factory to get a collector instance and mocked Apstra framework. The collector factory takes the collector class that you have written as input.
- Mock the device response.
- Invoke collect method.
- Validate the data posted to the mocked Apstra framework.
For example, a test looks like:
import json

from aosstdcollectors.eos.interface_in_out_bytes import InterfaceRxTxCollector


# Test method with prefix 'test_'
def test_sanity(collector_factory):
    # Using collector factory to retrieve the collector instance and mocked
    # Apstra framework.
    collector, mock_framework = collector_factory(InterfaceRxTxCollector)

    command_response = {
        'interfaces': {
            'Ethernet1': {
                'interfaceCounters': {
                    'inOctets': 10,
                    'outOctets': 20,
                }
            },
            'Ethernet2': {
                'interfaceCounters': {
                    'inOctets': 30,
                    'outOctets': 40,
                }
            }
        }
    }

    # Set the device get_json method to retrieve the command response.
    collector.device.get_json.side_effect = lambda _: command_response

    # Invoke the collect method.
    collector.collect()

    expected_data = [
        {
            'identity': 'Ethernet1',
            'value': json.dumps({
                'rx': 10,
                'tx': 20,
            }),
        },
        {
            'identity': 'Ethernet2',
            'value': json.dumps({
                'rx': 30,
                'tx': 40,
            })
        }
    ]

    # Validate the data posted by the collector.
    data_posted_by_collector = json.loads(mock_framework.post_data.call_args[0][0])
    assert sorted(expected_data) == sorted(data_posted_by_collector["items"])
To run the test, execute:
root@1df9bf89aeaf:/aos_developer_sdk# make test
This command executes all the tests in the repository.
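To iterate on a single test during development, you can also invoke pytest directly inside the container. The test file path below is hypothetical, and having pytest on the PATH assumes that make setup_env installed the test dependencies:

root@1df9bf89aeaf:/aos_developer_sdk# pytest aosstdcollectors/test/eos/test_interface_in_out_bytes.py -v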
Package Collector
All the collectors are packaged based on the NOS. To generate all packages, execute make at aos_developer_sdk. You can find the built packages at aos_developer_sdk/dist. The packages built can be broadly classified as:
| Package | Description |
|---|---|
| Built-In Collector Packages | These packages have the prefix aosstdcollectors_builtin_. To collect telemetry from a device per the reference design, Apstra requires services as listed in the <deviceblah> section. Built-in collector packages contain collectors for these services. The packages are generated on a per-NOS basis. |
| Custom Collector Packages | These packages have the prefix aosstdcollectors_custom_ in their names. The packages are generated on a per-NOS basis. The package named aosstdcollectors_custom_<NOS>-0.1.0-py2-none-any.whl contains the developed collector. |
| Apstra SDK Device Driver Packages | These packages have the prefix apstra_devicedriver_. These packages are generated on a per-NOS basis. Packages are generated for NOSs that are not available by default in Apstra. |
Upload Packages
If the built-in collector packages and the Apstra SDK Device Driver for your Device Operating System (NOS) were not provided with the Apstra software, you must upload them to the Apstra server.
If you are using an offbox solution and your NOS is not EOS, you must upload the built-in collector package.
Upload the package containing your collector(s) and assign them to a Device System Agent or System Agent Profile.
Use Telemetry Collector
- Set up Telemetry Service Registry
- Start Collector
- Delete Collector
- Get Collected Data
- List Running Collector Services
Set up Telemetry Service Registry
The registry maps the service to its application schema and the storage schema path. You can manage the telemetry service registry with the REST endpoint /api/telemetry-service-registry. You can't enable the collector for a service without adding a registry entry for that particular service, and the registry entry for a service cannot be modified while the service is in use.
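As an orientation only, the sketch below registers a service against that endpoint with the Python requests library. The payload field names (service_name, storage_schema_path, application_schema), the AuthToken header, and the server URL are assumptions for illustration; confirm the exact request format against the Apstra API documentation for your release.

import json
import requests

APSTRA = 'https://aos-server'          # hypothetical Apstra server URL
HEADERS = {'AuthToken': '<token>'}     # assumed auth header; obtain a token first

# Assumed registry entry layout: service name, storage schema path, application schema.
entry = {
    'service_name': 'interface_in_out_bytes',
    'storage_schema_path': 'aos.sdk.telemetry.schemas.generic',
    'application_schema': {
        'type': 'object',
        'properties': {
            'identity': {'type': 'string'},
            'value': {'type': 'string'},
        },
    },
}

resp = requests.post(APSTRA + '/api/telemetry-service-registry',
                     headers=HEADERS, data=json.dumps(entry), verify=False)
resp.raise_for_status()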
When you execute make, all application schemas are packaged together into a tar file (json_schemas.tgz) in the dist folder. With apstra-cli, you have the option of importing all the schemas in the .tgz file.
Start Collector
To start a service, use the POST API /api/systems/<system_id>/services with the following three arguments:
| Argument | Description |
|---|---|
| Input_data | The data provided as input to the collector. Defaults to None. |
| Interval | Interval at which to run the service. Defaults to 120 seconds. |
| Name | Name of the service. |
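Continuing the earlier registry sketch (same assumed server URL, AuthToken header, and a placeholder system ID), starting the collector could look like the following; the lowercase JSON key names are derived from the argument table above and are an assumption to verify against the API reference.

import requests

APSTRA = 'https://aos-server'          # hypothetical Apstra server URL
HEADERS = {'AuthToken': '<token>'}     # assumed auth header
SYSTEM_ID = '505254001A3F'             # placeholder device system ID

# Assumed body keys mirroring the argument table above.
body = {
    'name': 'interface_in_out_bytes',  # service name
    'interval': 120,                   # seconds between collection runs
    'input_data': None,                # optional input to the collector
}

resp = requests.post(APSTRA + '/api/systems/' + SYSTEM_ID + '/services',
                     headers=HEADERS, json=body, verify=False)
resp.raise_for_status()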
You can also manage collectors via the apstra-cli utility.
Delete Collector
To delete a service, use the DELETE API /api/systems/<system_id>/services/<service_name>.
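Under the same assumptions as the start-collector sketch (server URL, AuthToken header, placeholder system ID and service name), stopping the service is a single DELETE request:

import requests

requests.delete('https://aos-server/api/systems/505254001A3F/services/interface_in_out_bytes',
                headers={'AuthToken': '<token>'}, verify=False)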
Get Collected Data
To retrieve collected data, use the GET API /api/systems/<system_id>/services/<service_name>/data. Only the data collected in the last iteration is saved. Data does not persist over an Apstra restart.
List Running Collector Services
To retrieve the list of services enabled on a device, use the GET API /api/systems/<system_id>/services.
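As a final sketch under the same assumptions (server URL, AuthToken header, placeholder system ID), the two GET endpoints can be combined to enumerate the services running on a device and pull the latest data for one of them; the shape of the responses is not documented here, so adapt the handling to what your Apstra version actually returns.

import requests

APSTRA = 'https://aos-server'          # hypothetical Apstra server URL
HEADERS = {'AuthToken': '<token>'}     # assumed auth header
SYSTEM_ID = '505254001A3F'             # placeholder device system ID

base = APSTRA + '/api/systems/' + SYSTEM_ID + '/services'

# List the collector services enabled on the device.
services = requests.get(base, headers=HEADERS, verify=False).json()
print(services)

# Fetch the most recent data posted by one service (name taken from earlier examples).
data = requests.get(base + '/interface_in_out_bytes/data',
                    headers=HEADERS, verify=False).json()
print(data)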