Secondary Collector Installation for Distributed Data Collection
When you install NorthStar Controller, a primary collector is installed for use by NETCONF and SNMP collection. You can improve the performance of collection tasks by also installing secondary collector workers to distribute the work. Each secondary collector worker starts a number of worker processes equal to the number of CPU cores plus one. You can create as many secondary collector servers as you need to help with collection tasks. The primary collector manages all of the workers automatically.
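Because the per-group worker count is derived from the core count, it can help to check the cores on the candidate server before installing. A minimal sketch using the standard Linux nproc utility (the variable name is illustrative; the formula is the one stated above):

# Each secondary collector worker group starts (CPU cores + 1) worker processes
CORES=$(nproc)
echo "CPU cores: ${CORES}"
echo "Worker processes per group: $(( CORES + 1 ))"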
Secondary collectors must be installed on a separate server from the NorthStar Controller. You cannot install secondary collectors on the same server as the NorthStar application.
To install secondary collectors, follow this procedure:
On the secondary collector server, run the following:
rpm -Uvh rpm-filename
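For example, if the secondary collector RPM in your NorthStar bundle were named northstar-secondary-collector-x.x.x.rpm (a hypothetical filename shown only for illustration; substitute the actual RPM filename from your distribution), the command would look like this:

# Hypothetical filename; replace with the RPM shipped in your NorthStar bundle
rpm -Uvh northstar-secondary-collector-x.x.x.rpm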
On the secondary collector server, run the collector.sh script:
[root@ns-sec-coll]# cd /opt/northstar/northstar_bundle_x.x.x/
[root@ns-sec-coll northstar]# ./collector.sh install
The script prompts you for the NorthStar application IP address, login, and password. If the NorthStar application is in HA mode, provide the virtual IP (VIP) address of the NorthStar application instead. The secondary collectors use this IP address to communicate with the primary collector:
Config file /opt/northstar/data/northstar.cfg does not exist
copying it from Northstar APP server, Please enter below info:
---------------------------------------------------------------------------
Please enter application server IP address or host name: 10.49.166.211
Please enter Admin Web UI username: admin
Please enter Admin Web UI password: <not displayed>
retrieving config file from application server...
Saving to /opt/northstar/data/northstar.cfg
Secondary collector installed....
collector: added process group
collector:worker1: stopped
collector:worker3: stopped
collector:worker2: stopped
collector:worker4: stopped
collector:worker1: started
collector:worker3: started
collector:worker2: started
collector:worker4: started
Run the following command to confirm the secondary collector (worker) processes are running:
[root@ns-sec-coll]# supervisorctl status
collector:worker1       RUNNING   pid 15574, uptime 0:01:28
collector:worker2       RUNNING   pid 15576, uptime 0:01:28
collector:worker3       RUNNING   pid 15575, uptime 0:01:28
collector:worker4       RUNNING   pid 15577, uptime 0:01:28
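If any worker shows a state other than RUNNING, you can restart the collector process group with standard supervisord commands and recheck the status. This assumes the workers are managed by the same supervisord instance shown above:

# Restart all processes in the collector group, then recheck their status
supervisorctl restart collector:*
supervisorctl status | grep collector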
Optionally, use the config_celery_workers.sh script to change the number of workers that are installed.
The collector.sh script installs a default number of workers, depending on the number of CPU cores on the server. After the initial installation, you can change the number of workers using the config_celery_workers.sh script. Table 1 shows the default number of worker groups installed, the total number of celery worker processes started, and the minimum amount of RAM required.
Table 1: Default Worker Groups and Processes by Number of CPU Cores

CPU Cores   Worker Groups Installed   Total Worker Processes           Minimum RAM Required
1-4         4                         8-20  ((CPUs + 1) x 4 = 20)      1 GB
5-8         2                         12-18 ((CPUs + 1) x 2 = 18)      1 GB
16          1                         17    ((CPUs + 1) x 1 = 17)      1 GB
32          1                         33    ((CPUs + 1) x 1 = 33)      2 GB
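The arithmetic in Table 1 is worker groups x (CPU cores + 1). For example, on an 8-core server the installer creates 2 worker groups, for a total of 2 x (8 + 1) = 18 worker processes. A minimal shell sketch of the same calculation (variable names are illustrative):

# Estimate total worker processes: worker groups x (CPU cores + 1)
CORES=$(nproc)          # for example, 8
WORKER_GROUPS=2         # default for 5-8 cores, per Table 1
echo "Total worker processes: $(( WORKER_GROUPS * (CORES + 1) ))"   # 18 for 8 cores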
To change the number of workers, run the config_celery_workers.sh script:
[root@pcs02-q-pod08 ~]# /opt/northstar/snmp-collector/scripts/config_celery_workers.sh <option>
Use the -w worker-groups option to add a specified number of worker groups. Because this installation is on a server dedicated to distributed data collection, you can increase the number of workers installed, up to the capacity of the server, to improve performance. The following example starts six worker groups:
/opt/northstar/snmp-collector/scripts/config_celery_workers.sh -w 6
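After changing the number of worker groups, you can confirm that the expected worker processes are running in the same way as after the initial installation (standard supervisord usage):

# Verify the collector worker processes after reconfiguration
supervisorctl status | grep collector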