Install Juniper Alert Format Relay
SUMMARY To install Juniper Alert Format Relay, complete the pre-installation checklist and then perform all of the procedures in this topic.
Pre-Installation Checklist
- Gather the information that you'll need to complete the installation. See Information Needed for Installation.
- Ensure that you have three newly installed virtual machines (VMs) available for the installation. See Virtual Machine Specifications.
- For all three VMs, verify the following (spot-check commands appear after this checklist):
  - You can reach the VM through SSH.
  - The VM can reach:
  - NTP is configured, and the time is synchronized across all VMs.
  - DNS is configured, and DNS resolution is working.
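You can spot-check these items with commands like the following. This is a minimal sketch that assumes chrony as the NTP client; substitute your own tooling if it differs:
# From the deployer VM, confirm SSH reachability of each AFR VM
ssh root@<AFR-VM-IP> hostname
# On each VM, confirm that the clock is synchronized
chronyc tracking
# On each VM, confirm DNS resolution
nslookup example.com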
Prepare the VM Hosts
- Log in to the deployer VM and create the afr directory under /opt:
mkdir -p /opt/afr
- Copy the installation bundle TAR file afr-release-<RELEASE-TAG>.tar.gz to the deployer VM and extract the installation bundle under /opt/afr:
tar -xzvf afr-release-<RELEASE-TAG>.tar.gz -C /opt/afr
- In /opt/afr/, navigate to the afr-release-<RELEASE-TAG> directory:
cd /opt/afr/afr-release-<RELEASE-TAG>
In the remaining steps, continue to work from /opt/afr/afr-release-<RELEASE-TAG> as the current working directory.
- Create a cluster SSH key:
ssh-keygen -t rsa -b 4096 -f /root/.ssh/kube_cluster_key
chmod 600 /root/.ssh/kube_cluster_key
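The installer reaches the AFR VMs over SSH with this key, so the public key must be authorized on all three VMs. If your environment doesn't already distribute it, one way to do so (assuming password-based root login is still permitted on the VMs) is:
# Run once for each of the three AFR VMs
ssh-copy-id -i /root/.ssh/kube_cluster_key.pub root@<AFR-VM-IP>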
Set Up the Local Registry
- Add an afr-registry entry to /etc/hosts on all four nodes (the deployer VM and the three AFR VMs):
# On the deployer VM
echo "127.0.0.1 afr-registry" >> /etc/hosts
# On the AFR VMs
echo "<deployer-vm-ip> afr-registry" >> /etc/hosts
- Enter the necessary environment variables in the afr.env file.
afr.env
-------
RELEASE_TAG=v0.0.x
ANSIBLE_SSH_USER=root
ANSIBLE_SSH_KEY=/root/.ssh/kube_cluster_key
MIST_API_URL=
MIST_API_TOKEN=
SNMP_VERSION=<v2c|v3>
SNMP_COMMUNITY=
SNMP_USER_NAME=
SNMP_ENGINE_ID=
SNMP_AUTH_PASS=
SNMP_PRIV_PASS=
SNMP_ADDR=<SNMPManagerIP>:<Port>
SNMP_AUTH_PROTO=<MD5|SHA>
SNMP_PRIV_PROTO=<DES|AES>
SYSLOG_ADDR=tcp|udp://<SyslogServerIP>:<Port>
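For reference, here is a filled-in sketch for an SNMPv3 deployment with a UDP syslog target. All values are illustrative placeholders, not defaults; the API URL shown is the Juniper Mist Global 01 endpoint, so use your own cloud's API hostname:
RELEASE_TAG=v0.0.1
ANSIBLE_SSH_USER=root
ANSIBLE_SSH_KEY=/root/.ssh/kube_cluster_key
MIST_API_URL=https://api.mist.com
MIST_API_TOKEN=<your-mist-api-token>
SNMP_VERSION=v3
# SNMP_COMMUNITY applies only to v2c, so it is omitted for v3
SNMP_USER_NAME=afr-user
SNMP_ENGINE_ID=<engine-id>
SNMP_AUTH_PASS=<auth-passphrase>
SNMP_PRIV_PASS=<priv-passphrase>
SNMP_ADDR=192.0.2.10:162
SNMP_AUTH_PROTO=SHA
SNMP_PRIV_PROTO=AES
SYSLOG_ADDR=udp://192.0.2.20:514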
- Apply the environment variables:
export $(cat afr.env | xargs)
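To confirm that the variables are set in the current shell:
env | grep -E 'RELEASE_TAG|MIST_|SNMP_|SYSLOG_'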
- Run the setup-registry.sh script to bring up the local registry:
chmod +x setup-registry.sh
./setup-registry.sh
Note:
This script brings up the afr-registry, which serves as a container
registry, file server, and Helm repository. The script relies on
docker commands to bring up the container.
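Assuming the registry exposes the standard Docker Registry HTTP API on port 8443, a quick health check from the deployer VM is:
curl -k https://afr-registry:8443/v2/_catalog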
Update Configurations
- On the deployer VM, navigate to the inventory directory and update the hosts.yml and overrides.yml files.
Note:
You can create a copy of the provided sample and update the
details.
- In the hosts.yml file, update the VM IP addresses in the vars section.
# Sample Ansible hosts file
# Replace node names according to need.
---
all:
  hosts:
    node1:
      ansible_host: "{{ node1_ip }}"
      ip: "{{ node1_ip }}"
      access_ip: "{{ node1_ip }}"
      ansible_ssh_user: "{{ lookup('env', 'ANSIBLE_SSH_USER') }}"
      ansible_ssh_private_key_file: "{{ lookup('env', 'ANSIBLE_SSH_KEY') }}"
    node2:
      ansible_host: "{{ node2_ip }}"
      ip: "{{ node2_ip }}"
      access_ip: "{{ node2_ip }}"
      ansible_ssh_user: "{{ lookup('env', 'ANSIBLE_SSH_USER') }}"
      ansible_ssh_private_key_file: "{{ lookup('env', 'ANSIBLE_SSH_KEY') }}"
    node3:
      ansible_host: "{{ node3_ip }}"
      ip: "{{ node3_ip }}"
      access_ip: "{{ node3_ip }}"
      ansible_ssh_user: "{{ lookup('env', 'ANSIBLE_SSH_USER') }}"
      ansible_ssh_private_key_file: "{{ lookup('env', 'ANSIBLE_SSH_KEY') }}"
    localhost:
      ansible_connection: local
      ansible_python_interpreter: "/usr/local/bin/python3"
  children:
    ansible_controller:
      hosts:
        localhost:
    kubespray:
      hosts:
        localhost:
    kube_control_plane:
      hosts:
        node1:
        node2:
        node3:
    kube_node:
      hosts:
        node1:
        node2:
        node3:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    keepaliveds:
      children:
        k8s_cluster:
    calico_rr:
      hosts: {}
  vars:
    node1_ip: <AFR-VM1-IP>
    node2_ip: <AFR-VM2-IP>
    node3_ip: <AFR-VM3-IP>
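After editing, it can help to confirm that the file is still valid YAML; for example, assuming PyYAML is available on the deployer VM:
python3 -c 'import yaml; yaml.safe_load(open("inventory/hosts.yml")); print("hosts.yml parses OK")'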
- In the overrides.yml file, update the VIP address and the DNS server IP address. Update webhook_whitelist_range with the Juniper Mist source IPs. For the complete list of source IPs for webhooks, see the list of firewall ports and IP addresses to open in the Juniper Mist™ Management Guide.
Depending on whether you have a public CA certificate, follow the
appropriate action below.
- If you have a public CA certificate:
  - Set ingress_use_self_signed_certs to false in the overrides.yml file.
  - Place the certificate and the key file in the afr-deployer container (see the docker cp example after the sample overrides file).
  - Update the path for the certificate and key in the overrides.yml file.
ingress_use_self_signed_certs: false
ingress_tls_cert_path: "/tmp/tls.crt"
ingress_tls_key_path: "/tmp/tls.key"
- If you don't have a public CA certificate, set ingress_use_self_signed_certs to true in the overrides.yml file. With this configuration, a self-signed certificate is generated and applied automatically.
ingress_use_self_signed_certs: true
---
# Sample Ansible Overrides
## Alert Format Relay
# O F F L I N E
offline_setup: true
# Registry overrides
registry_host: "afr-registry:8443"
files_repo: "https://afr-registry:8443/files"
## Kubespray K8s installer vars
kubespray_container_name: kubespray
kubespray_config_dir: "/etc/{{ kubespray_container_name }}/inventory"
kubespray_log_dir: "/etc/{{ kubespray_container_name }}/logs"
# Kubernetes version
kube_version: v1.27.5
kube_log_level: 2
image_arch: amd64
etcd_version: v3.5.7
cni_version: v1.3.0
crictl_version: v1.27.1
calico_version: v3.25.2
helm_version: v3.12.3
containerd_version: 1.7.5
nerdctl_version: 1.5.0
# K8s networking
kube_service_addresses: "172.16.0.0/16"
kube_pods_subnet: "192.168.0.0/16"
# cluster_name: cluster.local
# container_manager: containerd
# kube_network_plugin: calico
# helm_enabled: true
# MetalLB
kube_cluster_vip: "<AFR-VIP>"
# DNS
upstream_dns_servers:
  - <DNS-Server>
# Ingress
ingress_use_self_signed_certs: true
# specify if CA cert has to be loaded
ingress_tls_cert_path: "/tmp/tls.crt"
ingress_tls_key_path: "/tmp/tls.key"
# Ingress LB service
ingress_service_name: ingress-nginx-svc
ingress_tls_secret_name: ingress-tls
ingress_controller_name: ingress-nginx-controller
ingress_controller_ns: ingress-nginx
## AFR Infra overrides
# Base Path for K8s local Persistent volumes storage
local_storage_base_dir: "/mnt/disks"
## Juniper Alert Format Relay (AFR) apps
## webhook_whitelist_range: 54.193.71.17, 54.215.237.20
webhook_whitelist_range: <MIST-Webhook-SourceIPs>
afr_overrides:
  webhook:
    app_container_name: webhook
    app_image_name: "{{ registry_host }}/webhook:{{ afr_release }}"
  transformer:
    app_container_name: transformer
    app_image_name: "{{ registry_host }}/template_engine:{{ afr_release }}"
  api-client:
    app_container_name: api-client
    app_image_name: "{{ registry_host }}/api_client:{{ afr_release }}"
  dispatcher-syslog:
    app_container_name: dispatcher-syslog
    app_image_name: "{{ registry_host }}/dispatcher:{{ afr_release }}"
  dispatcher-snmp:
    app_container_name: dispatcher-snmp
    app_image_name: "{{ registry_host }}/dispatcher:{{ afr_release }}"
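If you use a public CA certificate, one way to place the certificate and key in the afr-deployer container is docker cp, run after you bring the container up in the next section. The target paths here match the ingress_tls_cert_path and ingress_tls_key_path overrides above:
docker cp tls.crt afr-deployer:/tmp/tls.crt
docker cp tls.key afr-deployer:/tmp/tls.key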
Deploy the Installation Package
- Navigate to the installation bundle directory:
cd /opt/afr/afr-release-<RELEASE-TAG>
- Set up the deployer container and log in to the container shell:
# Bring up the deployer container
docker run \
-v "${PWD}/inventory:/deployer/ansible/inventory" \
-v "/root/.ssh/kube_cluster_key:/root/.ssh/kube_cluster_key" \
-v "/etc/kubespray:/etc/kubespray" \
-v "/var/run/docker.sock:/var/run/docker.sock" \
--env-file "${PWD}/afr.env" \
--add-host=afr-registry:host-gateway \
--name afr-deployer \
-d afr-registry:8443/deployer:${RELEASE_TAG} sleep infinity
# Shell into the container
docker exec -it afr-deployer bash
Note:
You can monitor the installation progress by opening a new session and tailing install.log:
tail -f /etc/kubespray/logs/install.log
- Deploy Kubernetes:
# Container Shell
(afr-deployer) cd /deployer/ansible
(afr-deployer)
ansible-playbook \
-i inventory/hosts.yml \
-e @inventory/overrides.yml \
playbooks/deploy-kubernetes.yml
# Post Deploy
(afr-deployer)
ansible-playbook \
-i inventory/hosts.yml \
-e @inventory/overrides.yml \
playbooks/post-deploy.yml
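To verify that the cluster formed, you can check node status from one of the AFR VMs (Kubespray installs kubectl on the control-plane nodes):
# All three nodes should report STATUS Ready
ssh -i /root/.ssh/kube_cluster_key root@<AFR-VM1-IP> kubectl get nodes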
- Deploy the infrastructure applications:
(afr-deployer) cd /deployer/ansible
# OLM
(afr-deployer)
ansible-playbook \
-i inventory/hosts.yml \
-e @inventory/overrides.yml \
playbooks/deploy-olm.yml
# Kafka
(afr-deployer)
ansible-playbook \
-i inventory/hosts.yml \
-e @inventory/overrides.yml \
playbooks/deploy-kafka.yml
# Redis
(afr-deployer)
ansible-playbook \
-i inventory/hosts.yml \
-e @inventory/overrides.yml \
playbooks/deploy-redis.yml
# Prometheus
(afr-deployer)
ansible-playbook \
-i inventory/hosts.yml \
-e @inventory/overrides.yml \
playbooks/deploy-prometheus.yml
# Grafana
(afr-deployer)
ansible-playbook \
-i inventory/hosts.yml \
-e @inventory/overrides.yml \
playbooks/deploy-grafana.yml
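The namespaces these playbooks create can vary by release, so the simplest sanity check after each playbook is the all-namespaces pod view from one of the AFR VMs:
ssh -i /root/.ssh/kube_cluster_key root@<AFR-VM1-IP> kubectl get pods -A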
- Deploy the Juniper Alert Format Relay applications:
(afr-deployer) cd /deployer/ansible
# Webhook
(afr-deployer)
ansible-playbook \
-i inventory/hosts.yml \
-e @inventory/overrides.yml \
playbooks/deploy-webhook.yml
# Transformer
(afr-deployer)
ansible-playbook \
-i inventory/hosts.yml \
-e @inventory/overrides.yml \
playbooks/deploy-transformer.yml
# Dispatcher
(afr-deployer)
ansible-playbook \
-i inventory/hosts.yml \
-e @inventory/overrides.yml \
playbooks/deploy-dispatcher.yml
# API Client
(afr-deployer)
ansible-playbook \
-i inventory/hosts.yml \
-e @inventory/overrides.yml \
playbooks/deploy-api-client.yml
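As a quick end-to-end check, you can send a request to the VIP. The exact webhook URL path depends on your configuration, so even an HTTP 404 from the NGINX ingress controller confirms that the VIP and ingress are serving:
curl -k https://<AFR-VIP>/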
- To complete the deployment process, get the MIB files from the following directory on the deployer VM, and load them onto your network monitoring tool so that it can process the SNMP traps:
/opt/afr/afr-release-<RELEASE-TAG>/mibs
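For example, to copy the MIB files from the deployer VM to the workstation where your monitoring tool runs:
scp -r root@<deployer-vm-ip>:/opt/afr/afr-release-<RELEASE-TAG>/mibs .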