Installing AppFormix for OpenStack in HA
HA Design Overview
AppFormix Platform can be deployed to multiple hosts for high availability. Platform services continue to communicate using an API proxy that listens on a virtual IP address. Only one host will have the virtual IP at a time, and so only one API proxy will be the “active” API proxy at a time.
The API proxy is implemented by HAProxy. HAProxy is configured to proxy each service in either active-passive or load-balanced active-active mode, depending on the service.
At most one host is assigned the virtual IP at any given time. This host is considered the "active" HAProxy. The virtual IP address is assigned to a host by keepalived, which uses the VRRP protocol for election.
Services are replicated in different modes of operation. In "active-passive" mode, HAProxy sends all requests to a single "active" instance of a service. If that service fails, HAProxy selects a new "active" instance from the other hosts and begins sending requests to it. In "active-active" mode, HAProxy load-balances requests across all hosts on which the service is operational.
AppFormix Platform can be deployed in a 3-node, 5-node, or 7-node configuration for high availability.
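The election and proxying described above can be pictured with a minimal configuration sketch. All values here (interface eth0, virtual_router_id 51, the virtual IP 203.0.113.100, port numbers, and backend addresses) are illustrative assumptions, not the configuration that the AppFormix installer actually generates.

```text
# keepalived.conf (sketch): VRRP election for the virtual IP
vrrp_instance appformix_vip {
    interface eth0            # the host's keepalived_vrrp_interface
    virtual_router_id 51      # example VRID; must match on all hosts
    priority 100              # highest-priority reachable host wins election
    virtual_ipaddress {
        203.0.113.100         # example virtual IP
    }
}

# haproxy.cfg (sketch): active-passive vs. active-active backends
backend svc_active_passive
    server host1 203.0.113.119:9000 check
    server host2 203.0.113.120:9000 check backup   # used only if host1 fails
    server host3 203.0.113.121:9000 check backup

backend svc_active_active
    balance roundrobin        # requests spread across all healthy hosts
    server host1 203.0.113.119:9001 check
    server host2 203.0.113.120:9001 check
    server host3 203.0.113.121:9001 check
```

The "backup" keyword is what makes a backend active-passive: HAProxy sends traffic to the backup servers only when every non-backup server fails its health check.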
Requirements
Each host on which AppFormix Platform is installed must meet the following requirements.
Hardware Requirements
CPU: 8 cores (virtual or physical)
Memory: 16 GB
Storage: 100 GB (recommended)
Software Requirements
docker 17.03.1-ce
docker-py 1.3.1
Ansible 1.9.6 or 2.3, and httplib2
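As a quick sanity check before installing, you can compare installed versions against these minimums. The helper below is a generic sketch (the ver_ge function and the example values are illustrative; they are not part of the AppFormix tooling):

```shell
#!/bin/sh
# ver_ge A B -- succeeds if version A >= version B (uses GNU sort -V)
ver_ge() {
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

# Example: check an installed Ansible version against the 1.9.6 minimum.
# In practice you would parse this from `ansible --version`.
installed="2.3.0"
required="1.9.6"
if ver_ge "$installed" "$required"; then
    echo "ansible $installed meets the $required minimum"
else
    echo "ansible $installed is too old (need >= $required)"
fi
```

The same helper works for the docker and docker-py versions listed above.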
Connectivity
One virtual IP address to be shared among all the Platform Hosts. This IP address must not be in use by any host before installation, and it must be reachable from all the Platform Hosts after installation.
Dashboard client (in browser) must have IP connectivity to the virtual IP.
IP addresses for each Platform Host for installation and for services running on these hosts to communicate.
A keepalived_vrrp_interface for each Platform Host, used for assigning the virtual IP address. Details on how to configure this interface are described in the sample_inventory section.
The installer node needs to download the following packages from https://www.juniper.net/support/downloads/?p=appformix#sw.
appformix-openstack-images-<version>.tar.gz
appformix-platform-images-<version>.tar.gz
appformix-dependencies-images-<version>.tar.gz
AppFormix Agent Supported Platforms
AppFormix Agent runs on a host to monitor resource consumption of the host itself and the virtual machines and containers executing on that host.
Ubuntu 14.04
Red Hat Enterprise Linux 7.1
Red Hat Enterprise Linux 6.5, 6.6
CentOS 7.1
CentOS 6.5, 6.6
Installing AppFormix for High Availability
To install AppFormix to multiple hosts for high availability:
Install Ansible on the installer node. Ansible will install docker and docker-py on the appformix_controller.
# sudo apt-get install python-pip python-dev build-essential libssl-dev libffi-dev
# sudo pip install ansible==1.9.6 markupsafe httplib2
For Ansible 2.3:
# sudo pip install ansible==2.3 markupsafe httplib2 cryptography==1.5
Install python and python-pip on all the Platform Hosts so that Ansible can run tasks on the appformix_controller nodes from the installer node.
# sudo apt-get install -y python python-pip
Install the python-pip package on the hosts where AppFormix Agent runs.
# apt-get install -y python-pip
To enable passwordless login to all Platform Hosts by Ansible, create an SSH public key on the node where Ansible playbooks are run and then copy the key to all the Platform Hosts.
# ssh-keygen -t rsa                                    #Creates keys
# ssh-copy-id -i ~/.ssh/id_rsa.pub <platform_host_1>   #Copies the key to each platform host
# ssh-copy-id -i ~/.ssh/id_rsa.pub <platform_host_2>
# ssh-copy-id -i ~/.ssh/id_rsa.pub <platform_host_3>
Use the sample_inventory file as a template to create a host file. Add the details of all the Platform Hosts and compute hosts.
# List all compute hosts that need to be monitored by AppFormix
[compute]
203.0.113.5
203.0.113.17

# AppFormix controller hosts
[appformix_controller]
203.0.113.119 keepalived_vrrp_interface=eth0
203.0.113.120 keepalived_vrrp_interface=eth0
203.0.113.121 keepalived_vrrp_interface=eth0
Note: In the case of a 5-node or 7-node deployment, list all the nodes under appformix_controller.
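To confirm the inventory contains the controller entries you expect, a small sketch like the following can extract the [appformix_controller] section. The awk logic and the /tmp/sample_inventory file name are illustrative only; point it at your own host file:

```shell
#!/bin/sh
# Write a sample inventory (same shape as the one above) for demonstration.
cat > /tmp/sample_inventory <<'EOF'
[compute]
203.0.113.5
203.0.113.17
[appformix_controller]
203.0.113.119 keepalived_vrrp_interface=eth0
203.0.113.120 keepalived_vrrp_interface=eth0
203.0.113.121 keepalived_vrrp_interface=eth0
EOF

# Print host and VRRP interface for each controller entry:
# track which section we are in, then print non-section, non-empty lines.
awk '/^\[/{in_ctl = ($0 == "[appformix_controller]")}
     in_ctl && !/^\[/ && NF {print $1, $2}' /tmp/sample_inventory
```

Each printed line should show one controller IP followed by its keepalived_vrrp_interface setting.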
At the top level of the distribution, create a directory named group_vars, and then create a file named all inside this directory.
# mkdir group_vars
# touch group_vars/all
Add the following entries to the newly created all file:
appformix_vip: <ip-address>
appformix_docker_images:
  - /path/to/appformix-platform-images-<version>.tar.gz
  - /path/to/appformix-dependencies-images-<version>.tar.gz
  - /path/to/appformix-openstack-images-<version>.tar.gz
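Before running the playbook, it can be worth verifying that each tarball listed under appformix_docker_images actually exists on disk, since a mistyped path otherwise fails partway through the install. A minimal sketch (all file names, paths, and the version string here are examples, not real release artifacts):

```shell
#!/bin/sh
# Create an example group_vars/all and check each listed image path.
mkdir -p /tmp/gv_demo
touch /tmp/gv_demo/appformix-platform-images-3.0.tar.gz   # stand-in tarball

cat > /tmp/gv_demo/all <<'EOF'
appformix_vip: 203.0.113.100
appformix_docker_images:
  - /tmp/gv_demo/appformix-platform-images-3.0.tar.gz
  - /tmp/gv_demo/appformix-missing-images-3.0.tar.gz
EOF

# Extract the "  - /path" list entries and test each one for existence.
grep '^  - ' /tmp/gv_demo/all | sed 's/^  - //' | while read -r path; do
    if [ -f "$path" ]; then
        echo "OK      $path"
    else
        echo "MISSING $path"
    fi
done
```

Any MISSING line should be corrected in group_vars/all before running Ansible.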
Copy and source the openrc file from the OpenStack controller node (/etc/contrail/openrc) to the AppFormix Controller to authenticate the adapter with admin privileges over the controller services.
root@installer_node:~# cat /etc/contrail/openrc
export OS_USERNAME=<admin user>
export OS_PASSWORD=<password>
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://<openstack-auth-URL>/v2.0/
export OS_NO_CACHE=1
root@installer_node:~# source /etc/contrail/openrc
Run Ansible with the created inventory file.
ansible-playbook -i inventory appformix_openstack.yml
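After the playbook completes, you can check which controller currently holds the virtual IP; the host with the VIP is the active HAProxy. The vip_role helper below is an illustrative sketch (substitute your actual appformix_vip value for the example address); it simply looks for the address among the host's configured IPv4 addresses:

```shell
#!/bin/sh
# Report whether this host currently holds the given virtual IP.
vip_role() {
    if ip -4 addr show | grep -qwF "$1"; then
        echo active
    else
        echo standby
    fi
}

# Example usage, run on each controller in turn:
vip_role 203.0.113.100
```

Exactly one of the controllers should report active; the others report standby until keepalived fails the VIP over.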
If you run the playbooks as the root user, this step can be skipped. A non-root user (for example, "ubuntu") needs access to the docker user group. The following command adds the user to the docker group.
sudo usermod -aG docker ubuntu
If an offline installation fails, the appformix *.tar.gz files must be removed from the /tmp/ folder on the appformix_controller node before retrying. This workaround is required as of version 2.11.1.