Setting Up the Infrastructure (Contrail Networking Release 21.4 or Later)

04-Jul-24

SUMMARY Follow this topic to set up the infrastructure for a Contrail Networking deployment in a RHOSP 16 environment when you are using Contrail Networking Release 21.4 or later.

When to Use This Procedure

You should use this topic to set up the infrastructure for a Contrail Networking deployment in a RHOSP 16 environment when you are using Contrail Networking Release 21.4 or later.

This procedure shows you how to set up the infrastructure for the installation when the hosts are using Red Hat Virtualization (RHV). Contrail Networking was enhanced to operate with hosts using Red Hat Virtualization (RHV) in Release 21.4.

In Release 21.3 and earlier, this procedure is performed with hosts using Kernel-based Virtual Machine (KVM). See Setting Up the Infrastructure (Contrail Networking Release 21.3 or Earlier).

Understanding Red Hat Virtualization

This procedure shows an example of how to set up the infrastructure for a Contrail Networking deployment in a RHOSP 16 environment when the hosts are using Red Hat Virtualization (RHV).

RHV is an enterprise virtualization platform built on Red Hat Enterprise Linux and KVM. RHV is developed and fully supported by Red Hat.

The purpose of this topic is to illustrate one method of deploying Contrail Networking in a RHOSP 16 environment when the hosts are using RHV. Full documentation of the related RHV components is beyond the scope of this topic.

For additional information on RHV, see Product Documentation for Red Hat Virtualization from Red Hat.

For additional information on RHV installation, see the Installing Red Hat Virtualization as a self-hosted engine using the command line document from Red Hat.

Prepare the Red Hat Virtualization Manager Hosts

Prepare the Red Hat Virtualization Manager hosts using the instructions provided by Red Hat. See the Installing Red Hat Virtualization Hosts section of the Installing Hosts for Red Hat Virtualization chapter of the Installing Red Hat Virtualization as a self-hosted engine using the command line guide from Red Hat.

Deploy Hosts with Red Hat Enterprise Linux

The hosts must run Red Hat Enterprise Linux (RHEL) to support RHV.

This section provides an example of how to deploy RHEL 8.

Install and enable required software

This example shows how to obtain, install, and enable the software required to operate Red Hat Enterprise Linux 8.

# Register the node with a Red Hat subscription
# (for Satellite, check the Red Hat instructions)
sudo subscription-manager register \
  --username {username} \
  --password {password}

# Attach the pools that enable all required repos
# e.g.:
sudo subscription-manager attach \
  --pool {RHOSP16.2 pool ID} \
  --pool {Red Hat Virtualization Manager pool ID}

# Enable repos
sudo subscription-manager repos \
    --disable='*' \
    --enable=rhel-8-for-x86_64-baseos-rpms \
    --enable=rhel-8-for-x86_64-appstream-rpms \
    --enable=rhv-4-mgmt-agent-for-rhel-8-x86_64-rpms \
    --enable=fast-datapath-for-rhel-8-x86_64-rpms \
    --enable=advanced-virt-for-rhel-8-x86_64-rpms \
    --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms \
    --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms

# Remove cloud-init (if this is a virtual test setup deployed from a cloud image)
sudo dnf remove -y cloud-init || true

# Enable dnf modules and update system
# For Red Hat Virtualization Manager 4.4 use virt:av
# (for previous versions, check the Red Hat documentation)
sudo dnf module reset -y virt
sudo dnf module enable -y virt:av
sudo dnf distro-sync -y --nobest
sudo dnf upgrade -y --nobest

# Enable firewall
sudo dnf install -y firewalld
sudo systemctl enable --now firewalld

# Check current active zone
sudo firewall-cmd --get-active-zones
# example of zones:
#     public
#       interfaces:  eth0

# Add the virbr0 interface into the active zone for ovirtmgmt, e.g.
sudo firewall-cmd --zone=public --change-interface=virbr0 --permanent
sudo firewall-cmd --zone=public --add-forward --permanent
# Ensure the used interfaces are in one zone
sudo firewall-cmd --get-active-zones
# example of zones:
#     [stack@node-10-0-10-147 ~]$ sudo firewall-cmd --get-active-zones
#     public
#       interfaces:  eth0 virbr0

# Enable https and cockpit for RHVM web access and monitoring
sudo firewall-cmd --permanent \
  --add-service=https \
  --add-service=cockpit \
  --add-service nfs

sudo firewall-cmd --permanent \
  --add-port 2223/tcp \
  --add-port 5900-6923/tcp \
  --add-port 111/tcp --add-port 111/udp \
  --add-port 2049/tcp --add-port 2049/udp \
  --add-port 4045/tcp --add-port 4045/udp \
  --add-port 1110/tcp --add-port 1110/udp

# Reload to apply the permanent firewall rules
sudo firewall-cmd --reload

# Prepare NFS Storage
# adjust sysctl settings
cat << EOF | sudo tee /etc/sysctl.d/99-nfs-tf-rhv.conf
net.ipv4.tcp_mem=4096 65536 4194304
net.ipv4.tcp_rmem=4096 65536 4194304
net.ipv4.tcp_wmem=4096 65536 4194304
net.core.rmem_max=8388608
net.core.wmem_max=8388608
EOF
sudo sysctl --system
# install and enable NFS services
sudo dnf install -y nfs-utils
sudo systemctl enable --now nfs-server
sudo systemctl enable --now rpcbind
# prepare special user required by Red Hat Virtualization
getent group kvm || sudo groupadd kvm -g 36
getent passwd vdsm || sudo useradd vdsm -u 36 -g kvm
exports="/storage *(rw,all_squash,anonuid=36,anongid=36)\n"
for s in vmengine undercloud ipa overcloud ; do
  sudo mkdir -p /storage/$s
  exports+="/storage/$s *(rw,all_squash,anonuid=36,anongid=36)\n"
done
sudo chown -R 36:36 /storage
sudo chmod -R 0755 /storage
# add storage directory to exports
echo -e "$exports" | sudo tee /etc/exports
# restart NFS services
sudo systemctl restart rpcbind
sudo systemctl restart nfs-server
# check exports
sudo exportfs

# Reboot the system if a newer kernel is available in /lib/modules
latest_kv=$(ls -1 /lib/modules | sort -V | tail -n 1)
active_kv=$(uname -r)
if [[ "$latest_kv" != "$active_kv" ]] ; then
  echo "INFO: newer kernel version $latest_kv is available, active one is $active_kv"
  echo "Perform reboot..."
  sudo reboot
fi

Confirm the Domain Names

Before proceeding, ensure that the fully qualified domain names (FQDNs) can be resolved by DNS or by the /etc/hosts file on all nodes.

[stack@node-10-0-10-147 ~]$ cat /etc/hosts
# Red Hat Virtualization Manager VM
10.0.10.200  vmengine.dev.clouddomain          vmengine.dev          vmengine
# Red Hat Virtualization Hosts
10.0.10.147  node-10-0-10-147.dev.clouddomain  node-10-0-10-147.dev  node-10-0-10-147
10.0.10.148  node-10-0-10-148.dev.clouddomain  node-10-0-10-148.dev  node-10-0-10-148
10.0.10.149  node-10-0-10-149.dev.clouddomain  node-10-0-10-149.dev  node-10-0-10-149
10.0.10.150  node-10-0-10-150.dev.clouddomain  node-10-0-10-150.dev  node-10-0-10-150
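
You can optionally confirm resolution from any node. The FQDNs below match the sample /etc/hosts above; substitute your own values.

# Optional check: confirm the FQDNs resolve through DNS or /etc/hosts
getent hosts vmengine.dev.clouddomain
getent hosts node-10-0-10-147.dev.clouddomain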

Deploy Red Hat Virtualization Manager on the First Node

This section shows how to deploy Red Hat Virtualization Manager (RHVM).

Enable the Red Hat Virtualization Manager Appliance

To enable the Red Hat Virtualization Manager Appliance:

sudo dnf install -y \
  tmux \
  rhvm-appliance \
  ovirt-hosted-engine-setup
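
Optionally, confirm that the packages were installed before continuing:

# Optional check
rpm -q tmux rhvm-appliance ovirt-hosted-engine-setup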

Deploy the Self-Hosted Engine

To deploy the self-hosted engine:

# !!! During deployment you need to answer questions
sudo hosted-engine --deploy

# example of adding ansible vars into deploy command
#   sudo hosted-engine --deploy --ansible-extra-vars=he_ipv4_subnet_prefix=10.0.10
# example of an answer:
#   ...
#   Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]:
#   Please specify the nfs version you would like to use (auto, v3, v4, v4_0, v4_1, v4_2)[auto]:
#   Please specify the full shared storage connection path to use (example: host:/path): 10.0.10.147:/storage/vmengine
#   ...
Note:

Ensure all required interfaces are in one zone for IP Forwarding before proceeding with the NFS task during deployment.

sudo firewall-cmd --get-active-zones
# example of zones:
#     [stack@node-10-0-10-147 ~]$ sudo firewall-cmd --get-active-zones
#     public
#       interfaces: ovirtmgmt eth0 virbr0

Enable the virsh CLI to Use oVirt Authentication

To enable the virsh CLI to use oVirt authentication:

sudo ln -s /etc/ovirt-hosted-engine/virsh_auth.conf  /etc/libvirt/auth.conf
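
With the symlink in place, the virsh CLI should authenticate against the hosted engine. For example:

# Optional check: list the VMs known to libvirt on this host
sudo virsh list --all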

Enabling the Red Hat Virtualization Manager Repositories

To enable the RHVM repositories:

  1. Log in to RHVM:
    ssh root@vmengine
  2. Associate the Red Hat Virtualization Manager subscription and enable the repositories (an optional verification follows this procedure):
    sudo subscription-manager register --username {username} --password {password}
    # Attach the pools that enable all required repos
    # e.g.:
    sudo subscription-manager attach \
      --pool {RHOSP16.2 pool ID} \
      --pool {Red Hat Virtualization Manager pool ID}
    # Enable repos
    sudo subscription-manager repos \
        --disable='*' \
        --enable=rhel-8-for-x86_64-baseos-rpms \
        --enable=rhel-8-for-x86_64-appstream-rpms \
        --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \
        --enable=fast-datapath-for-rhel-8-x86_64-rpms \
        --enable=advanced-virt-for-rhel-8-x86_64-rpms \
        --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms \
        --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
    # Enable modules and sync
    sudo dnf module -y enable pki-deps
    sudo dnf module -y enable postgresql:12
    sudo dnf distro-sync -y --nobest
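
After enabling the repositories on the RHVM VM, you can optionally confirm which repositories are active:

# Optional check
sudo dnf repolist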

Deploy Nodes and Enable Networking

Follow the tasks in this section to deploy nodes and enable networking:

Prepare the Ansible env Files

To prepare the Ansible env files:

# Common variables
# !!! Adjust to your setup - especially undercloud_mgmt_ip and
#     ipa_mgmt_ip to allow SSH to these machines (e.g. choose IPs from the ovirtmgmt network)
cat << EOF > common-env.yaml
---
ovirt_hostname: vmengine.dev.clouddomain
ovirt_user: "admin@internal"
ovirt_password: "qwe123QWE"

datacenter_name: Default

# to access hypervisors
ssh_public_key: false
ssh_root_password: "qwe123QWE"

# gateway for VMs (undercloud and ipa)
mgmt_gateway: "10.0.10.1"
# dns to be set in ipa and initial dns for UC
# k8s nodes use ipa as dns
dns_server:  "10.0.10.1"

undercloud_name: "undercloud"
undercloud_mgmt_ip: "10.0.10.201"
undercloud_ctlplane_ip: "192.168.24.1"

ipa_name: "ipa"
ipa_mgmt_ip: "10.0.10.205"
ipa_ctlplane_ip: "192.168.24.5"

overcloud_domain: "dev.clouddomain"
EOF

# Hypervisor nodes
# !!! Adjust to your setup
# Important: ensure you use the correct node name for the already registered first hypervisor
# (it is registered by the RHVM deploy command hosted-engine --deploy)
cat << EOF > nodes.yaml
---
nodes:
  # !!! Adjust networks and power management options for your needs
  - name: node-10-0-10-147.dev.clouddomain
    ip: 10.0.10.147
    cluster: Default
    comment: 10.0.10.147
    networks:
      - name: ctlplane
        phy_dev: eth1
      - name: tenant
        phy_dev: eth2
    # provide power management if needed (for all nodes)
    # pm:
    #   address: 192.168.122.1
    #   port: 6230
    #   user: ipmi
    #   password: qwe123QWE
    #   type: ipmilan
    #   options:
    #     ipmilanplus: true
  - name: node-10-0-10-148.dev.clouddomain
    ip: 10.0.10.148
    cluster: node-10-0-10-148
    comment: 10.0.10.148
    networks:
      - name: ctlplane
        phy_dev: eth1
      - name: tenant
        phy_dev: eth2
  - name: node-10-0-10-149.dev.clouddomain
    ip: 10.0.10.149
    cluster: node-10-0-10-149
    comment: 10.0.10.149
    networks:
      - name: ctlplane
        phy_dev: eth1
      - name: tenant
        phy_dev: eth2
  - name: node-10-0-10-150.dev.clouddomain
    ip: 10.0.10.150
    cluster: node-10-0-10-150
    comment: 10.0.10.150
    networks:
      - name: ctlplane
        phy_dev: eth1
      - name: tenant
        phy_dev: eth2
# !!! Adjust storages according to your setup architecture
storage:
  - name: undercloud
    mountpoint: "/storage/undercloud"
    host: node-10-0-10-147.dev.clouddomain
    address: node-10-0-10-147.dev.clouddomain
  - name: ipa
    mountpoint: "/storage/ipa"
    host: node-10-0-10-147.dev.clouddomain
    address: node-10-0-10-147.dev.clouddomain
  - name: node-10-0-10-148-overcloud
    mountpoint: "/storage/overcloud"
    host: node-10-0-10-148.dev.clouddomain
    address: node-10-0-10-148.dev.clouddomain
  - name: node-10-0-10-149-overcloud
    mountpoint: "/storage/overcloud"
    host: node-10-0-10-149.dev.clouddomain
    address: node-10-0-10-149.dev.clouddomain
  - name: node-10-0-10-150-overcloud
    mountpoint: "/storage/overcloud"
    host: node-10-0-10-150.dev.clouddomain
    address: node-10-0-10-150.dev.clouddomain
EOF

# Playbook to register hypervisor nodes in RHVM, create storage pools and networks
# Adjust values to your setup!!!
cat << EOF > infra.yaml
- hosts: localhost
  tasks:
  - name: Get RHVM token
    ovirt_auth:
      url: "https://{{ ovirt_hostname }}/ovirt-engine/api"
      username: "{{ ovirt_user }}"
      password: "{{ ovirt_password }}"
      insecure: true
  - name: Create datacenter
    ovirt_datacenter:
      state: present
      auth: "{{ ovirt_auth }}"
      name: "{{ datacenter_name }}"
      local: false
  - name: Create clusters {{ item.name }}
    ovirt_cluster:
      state: present
      auth: "{{ ovirt_auth }}"
      name: "{{ item.cluster }}"
      data_center: "{{ datacenter_name }}"
      ksm: true
      ballooning: true
      memory_policy: server
    with_items:
       - "{{ nodes }}"
  - name: List host in datacenter
    ovirt_host_info:
      auth: "{{ ovirt_auth }}"
      pattern: "datacenter={{ datacenter_name }}"
    register: host_list
  - set_fact:
      hostnames: []
  - name: List hostname
    set_fact:
      hostnames: "{{ hostnames + [ item.name ] }}"
    with_items:
      - "{{ host_list['ovirt_hosts'] }}"
  - name: Register in RHVM
    ovirt_host:
      state: present
      auth: "{{ ovirt_auth }}"
      name: "{{ item.name }}"
      cluster: "{{ item.cluster }}"
      address: "{{ item.ip }}"
      comment: "{{ item.comment | default(item.ip) }}"
      power_management_enabled: "{{ item.power_management_enabled | default(false) }}"
      # not yet supported in RHEL - to avoid a reboot, create the node via the web UI
      # reboot_after_installation: "{{ item.reboot_after_installation | default(false) }}"
      reboot_after_upgrade: "{{ item.reboot_after_upgrade | default(false) }}"
      public_key: "{{ ssh_public_key }}"
      password: "{{ ssh_root_password }}"
    register: task_result
    until: not task_result.failed
    retries: 5
    delay: 10
    when: item.name not in hostnames
    with_items:
       - "{{ nodes }}"
  - name: Register Power Management for host
    ovirt_host_pm:
      state: present
      auth: "{{ ovirt_auth }}"
      name: "{{ item.name }}"
      address: "{{ item.pm.address }}"
      username: "{{ item.pm.user }}"
      password: "{{ item.pm.password }}"
      type: "{{ item.pm.type }}"
      options: "{{ item.pm.pm_options | default(omit) }}"
    when: item.pm is defined
    with_items:
       - "{{ nodes }}"
  - name: Create storage domains
    ovirt_storage_domain:
      state: present
      auth: "{{ ovirt_auth }}"
      data_center: "{{ datacenter_name }}"
      name: "{{ item.name }}"
      domain_function: "data"
      host: "{{ item.host }}"
      nfs:
        address: "{{ item.address | default(item.host) }}"
        path: "{{ item.mountpoint }}"
        version: "auto"
    register: task_result
    until: not task_result.failed
    retries: 5
    delay: 10
    with_items:
       - "{{ storage }}"
  - name: Create logical networks
    ovirt_network:
      state: present
      auth: "{{ ovirt_auth }}"
      data_center: "{{ datacenter_name }}"
      name: "{{ datacenter_name }}-{{ item.1.name }}"
      clusters:
      - name: "{{ item.0.cluster }}"
      vlan_tag: "{{ item.1.vlan | default(omit)}}"
      vm_network: true
    with_subelements:
      - "{{ nodes }}"
      - networks
  - name: Create host networks
    ovirt_host_network:
      state: present
      auth: "{{ ovirt_auth }}"
      networks:
      - name: "{{ datacenter_name }}-{{ item.1.name }}"
        boot_protocol: none
      name: "{{ item.0.name }}"
      interface: "{{ item.1.phy_dev }}"
    with_subelements:
      - "{{ nodes }}"
      - networks
  - name: Remove vNICs network_filter
    ovirt.ovirt.ovirt_vnic_profile:
      state: present
      auth: "{{ ovirt_auth }}"
      name: "{{ datacenter_name }}-{{ item.1.name }}"
      network: "{{ datacenter_name }}-{{ item.1.name }}"
      data_center: "{{ datacenter_name }}"
      network_filter: ""
    with_subelements:
      - "{{ nodes }}"
      - networks
  - name: Revoke SSO Token
    ovirt_auth:
      state: absent
      ovirt_auth: "{{ ovirt_auth }}"
EOF

Deploy Nodes and Networking

To deploy the nodes and enable networking:

ansible-playbook \
  --extra-vars="@common-env.yaml" \
  --extra-vars="@nodes.yaml" \
  infra.yaml

Check Hosts

If a host is in the Reboot status, go to the extended menu and select "Confirm Host has been rebooted".
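
You can also check host status from the command line. The following is a minimal sketch that reuses the common-env.yaml file created earlier and the ovirt_host_info module already used in infra.yaml; adjust it to your setup.

cat << EOF > check-hosts.yaml
- hosts: localhost
  tasks:
  - name: Get RHVM token
    ovirt_auth:
      url: "https://{{ ovirt_hostname }}/ovirt-engine/api"
      username: "{{ ovirt_user }}"
      password: "{{ ovirt_password }}"
      insecure: true
  - name: List hosts in datacenter
    ovirt_host_info:
      auth: "{{ ovirt_auth }}"
      pattern: "datacenter={{ datacenter_name }}"
    register: host_list
  - name: Show host status
    debug:
      msg: "{{ item.name }}: {{ item.status }}"
    with_items:
      - "{{ host_list['ovirt_hosts'] }}"
  - name: Revoke SSO Token
    ovirt_auth:
      state: absent
      ovirt_auth: "{{ ovirt_auth }}"
EOF

ansible-playbook --extra-vars="@common-env.yaml" check-hosts.yaml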

Prepare images

To prepare the images:

  1. Make a folder for the images:
    mkdir ~/images
  2. Download the RHEL 8.4 base image from Red Hat downloads (a Red Hat account is required) and move the file into the ~/images directory that you created in the previous step. An optional check of the downloaded image is shown below.
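
Before using the image, you can optionally inspect it. The filename below matches the examples later in this topic; adjust it if your image name differs.

# Optional check: inspect the downloaded base image
qemu-img info images/rhel-8.4-x86_64-kvm.qcow2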

Create Overcloud VMs

Follow the instructions in this section to create the overcloud VMs:

Prepare Images for the Kubernetes Cluster

If you are deploying the Contrail Control plane in a Kubernetes cluster, follow this example to prepare the images for the Contrail Controllers:

cd
cloud_image=images/rhel-8.4-x86_64-kvm.qcow2
root_password=contrail123
stack_password=contrail123
export LIBGUESTFS_BACKEND=direct
qemu-img create -f qcow2 images/overcloud.qcow2 100G
virt-resize --expand /dev/sda3 ${cloud_image} images/overcloud.qcow2
virt-customize  -a images/overcloud.qcow2 \
  --run-command 'xfs_growfs /' \
  --root-password password:${root_password} \
  --run-command 'useradd stack' \
  --password stack:password:${stack_password} \
  --run-command 'echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack' \
  --chmod 0440:/etc/sudoers.d/stack \
  --run-command 'sed -i "s/PasswordAuthentication no/PasswordAuthentication yes/g" /etc/ssh/sshd_config' \
  --run-command 'systemctl enable sshd' \
  --selinux-relabel
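
You can optionally verify the customization, for example by reading back the sudoers entry created above (virt-cat is part of the libguestfs tools):

# Optional check
virt-cat -a images/overcloud.qcow2 /etc/sudoers.d/stack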

Note that Kubernetes has to be deployed separately on the nodes. This can be done in a variety of ways. For information on performing this task using Kubespray, see this Kubespray page on Github.

Contrail Controllers can be deployed using the TF operator on top of Kubernetes. See the TF Operator Github page.

Prepare Overcloud VM Definitions

To prepare the overcloud VM definitions:

# Overcloud VMs definitions
# Adjust values to your setup!!!
# For deploying the Contrail Control plane in a Kubernetes cluster,
# remove the contrail controller nodes as they are not managed by RHOSP. They are created in the next steps.
cat << EOF > vms.yaml
---
vms:
  - name: controller-0
    disk_size_gb: 100
    memory_gb: 16
    cpu_cores: 4
    nics:
      - name: eth0
        interface: virtio
        profile_name: "{{ datacenter_name }}-ctlplane"
        mac_address: "52:54:00:16:54:d8"
      - name: eth1
        interface: virtio
        profile_name: "{{ datacenter_name }}-tenant"
    cluster: node-10-0-10-148
    storage: node-10-0-10-148-overcloud
  - name: contrail-controller-0
    disk_size_gb: 100
    memory_gb: 16
    cpu_cores: 4
    nics:
      - name: eth0
        interface: virtio
        profile_name: "{{ datacenter_name }}-ctlplane"
        mac_address: "52:54:00:d6:2b:03"
      - name: eth1
        interface: virtio
        profile_name: "{{ datacenter_name }}-tenant"
    cluster: node-10-0-10-148
    storage: node-10-0-10-148-overcloud
  - name: contrail-controller-1
    disk_size_gb: 100
    memory_gb: 16
    cpu_cores: 4
    nics:
      - name: eth0
        interface: virtio
        profile_name: "{{ datacenter_name }}-ctlplane"
        mac_address: "52:54:00:d6:2b:13"
      - name: eth1
        interface: virtio
        profile_name: "{{ datacenter_name }}-tenant"
    cluster: node-10-0-10-149
    storage: node-10-0-10-149-overcloud
  - name: contrail-controller-2
    disk_size_gb: 100
    memory_gb: 16
    cpu_cores: 4
    nics:
      - name: eth0
        interface: virtio
        profile_name: "{{ datacenter_name }}-ctlplane"
        mac_address: "52:54:00:d6:2b:23"
      - name: eth1
        interface: virtio
        profile_name: "{{ datacenter_name }}-tenant"
    cluster: node-10-0-10-150
    storage: node-10-0-10-150-overcloud
EOF

# Playbook for overcloud VMs
# !!! Adjust to your setup
cat << EOF > overcloud.yaml
- hosts: localhost
  tasks:
  - name: Get RHVM token
    ovirt_auth:
      url: "https://{{ ovirt_hostname }}/ovirt-engine/api"
      username: "{{ ovirt_user }}"
      password: "{{ ovirt_password }}"
      insecure: true
  - name: Create disks
    ovirt_disk:
      auth: "{{ ovirt_auth }}"
      name: "{{ item.name }}"
      interface: virtio
      size: "{{ item.disk_size_gb }}GiB"
      format: cow
      image_path: "{{ item.image | default(omit) }}"
      storage_domain: "{{ item.storage }}"
    register: task_result
    ignore_errors: yes
    until: not task_result.failed
    retries: 5
    delay: 10
    with_items:
      - "{{ vms }}"
  - name: Deploy VMs
    ovirt.ovirt.ovirt_vm:
      auth: "{{ ovirt_auth }}"
      state: "{{ item.state | default('present') }}"
      cluster: "{{ item.cluster }}"
      name: "{{ item.name }}"
      memory: "{{ item.memory_gb }}GiB"
      cpu_cores: "{{ item.cpu_cores }}"
      type: server
      high_availability: yes
      placement_policy: pinned
      operating_system: rhel_8x64
      disk_format: cow
      graphical_console:
        protocol:
          - spice
          - vnc
      serial_console: yes
      nics: "{{ item.nics | default(omit) }}"
      disks:
        - name: "{{ item.name }}"
          bootable: True
      storage_domain: "{{ item.storage }}"
      cloud_init: "{{ item.cloud_init | default(omit) }}"
      cloud_init_nics: "{{ item.cloud_init_nics | default(omit) }}"
    retries: 5
    delay: 2
    with_items:
      - "{{ vms }}"
  - name: Revoke SSO Token
    ovirt_auth:
      state: absent
      ovirt_auth: "{{ ovirt_auth }}"
EOF

ansible-playbook \
  --extra-vars="@common-env.yaml" \
  --extra-vars="@vms.yaml" \
  overcloud.yaml

Create Contrail Control Plane VMs for Kubernetes-based Deployments

Follow the instructions in this section in side-by-side deployments where the Contrail Control plane is deployed as a separate Kubernetes-based cluster.

Customize VM image for Kubernetes VMs

To customize the VM image for Kubernetes VMs:

cd
cloud_image=images/rhel-8.4-x86_64-kvm.qcow2
root_password=contrail123
stack_password=contrail123
export LIBGUESTFS_BACKEND=direct
qemu-img create -f qcow2 images/k8s.qcow2 100G
virt-resize --expand /dev/sda3 ${cloud_image} images/k8s.qcow2
virt-customize  -a images/k8s.qcow2 \
  --run-command 'xfs_growfs /' \
  --root-password password:${root_password} \
  --run-command 'useradd stack' \
  --password stack:password:${stack_password} \
  --run-command 'echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack' \
  --chmod 0440:/etc/sudoers.d/stack \
  --run-command 'sed -i "s/PasswordAuthentication no/PasswordAuthentication yes/g" /etc/ssh/sshd_config' \
  --run-command 'systemctl enable sshd' \
  --selinux-relabel

Define the Kubernetes VMs

To define the Kubernetes VMs:

# !!! Adjust to your setup (addresses in ctlplane, tenant and mgmt networks)
cat << EOF > k8s-vms.yaml
---
vms:
  - name: contrail-controller-0
    state: running
    disk_size_gb: 100
    memory_gb: 16
    cpu_cores: 4
    nics:
      - name: eth0
        interface: virtio
        profile_name: "{{ datacenter_name }}-ctlplane"
        mac_address: "52:54:00:16:54:d8"
      - name: eth1
        interface: virtio
        profile_name: "{{ datacenter_name }}-tenant"
      - name: eth2
        interface: virtio
        profile_name: "ovirtmgmt"
    cluster: node-10-0-10-148
    storage: node-10-0-10-148-overcloud
    image: "images/k8s.qcow2"
    cloud_init:
      # ctlplane network
      host_name: "contrail-controller-0.{{ overcloud_domain }}"
      dns_search: "{{ overcloud_domain }}"
      dns_servers: "{{ ipa_ctlplane_ip }}"
      nic_name: "eth0"
      nic_boot_protocol_v6: none
      nic_boot_protocol: static
      nic_ip_address: "192.168.24.7"
      nic_gateway: "{{ undercloud_ctlplane_ip }}"
      nic_netmask: "255.255.255.0"
    cloud_init_nics:
      # tenant network
      - nic_name: "eth1"
        nic_boot_protocol_v6: none
        nic_boot_protocol: static
        nic_ip_address: "10.0.0.201"
        nic_netmask: "255.255.255.0"
      # mgmt network
      - nic_name: "eth2"
        nic_boot_protocol_v6: none
        nic_boot_protocol: static
        nic_ip_address: "10.0.10.210"
        nic_netmask: "255.255.255.0"
  - name: contrail-controller-1
    state: running
    disk_size_gb: 100
    memory_gb: 16
    cpu_cores: 4
    nics:
      - name: eth0
        interface: virtio
        profile_name: "{{ datacenter_name }}-ctlplane"
        mac_address: "52:54:00:d6:2b:03"
      - name: eth1
        interface: virtio
        profile_name: "{{ datacenter_name }}-tenant"
      - name: eth2
        interface: virtio
        profile_name: "ovirtmgmt"
    cluster: node-10-0-10-149
    storage: node-10-0-10-149-overcloud
    image: "images/k8s.qcow2"
    cloud_init:
      host_name: "contrail-controller-1.{{ overcloud_domain }}"
      dns_search: "{{ overcloud_domain }}"
      dns_servers: "{{ ipa_ctlplane_ip }}"
      nic_name: "eth0"
      nic_boot_protocol_v6: none
      nic_boot_protocol: static
      nic_ip_address: "192.168.24.8"
      nic_gateway: "{{ undercloud_ctlplane_ip }}"
      nic_netmask: "255.255.255.0"
    cloud_init_nics:
      - nic_name: "eth1"
        nic_boot_protocol_v6: none
        nic_boot_protocol: static
        nic_ip_address: "10.0.0.202"
        nic_netmask: "255.255.255.0"
      # mgmt network
      - nic_name: "eth2"
        nic_boot_protocol_v6: none
        nic_boot_protocol: static
        nic_ip_address: "10.0.10.211"
        nic_netmask: "255.255.255.0"
  - name: contrail-controller-2
    state: running
    disk_size_gb: 100
    memory_gb: 16
    cpu_cores: 4
    nics:
      - name: eth0
        interface: virtio
        profile_name: "{{ datacenter_name }}-ctlplane"
        mac_address: "52:54:00:d6:2b:23"
      - name: eth1
        interface: virtio
        profile_name: "{{ datacenter_name }}-tenant"
      - name: eth2
        interface: virtio
        profile_name: "ovirtmgmt"
    cluster: node-10-0-10-150
    storage: node-10-0-10-150-overcloud
    image: "images/k8s.qcow2"
    cloud_init:
      host_name: "contrail-controller-1.{{ overcloud_domain }}"
      dns_search: "{{ overcloud_domain }}"
      dns_servers: "{{ ipa_ctlplane_ip }}"
      nic_name: "eth0"
      nic_boot_protocol_v6: none
      nic_boot_protocol: static
      nic_ip_address: "192.168.24.9"
      nic_gateway: "{{ undercloud_ctlplane_ip }}"
      nic_netmask: "255.255.255.0"
    cloud_init_nics:
      - nic_name: "eth1"
        nic_boot_protocol_v6: none
        nic_boot_protocol: static
        nic_ip_address: "10.0.0.203"
        nic_netmask: "255.255.255.0"EOF
      # mgmt network
      - nic_name: "eth2"
        nic_boot_protocol_v6: none
        nic_boot_protocol: static
        nic_ip_address: "10.0.10.212"
        nic_netmask: "255.255.255.0"
EOF

ansible-playbook \
  --extra-vars="@common-env.yaml" \
  --extra-vars="@k8s-vms.yaml" \
  overcloud.yaml

Configure VLANs for RHOSP Internal API networks

To SSH to the Kubernetes nodes and configure VLANs for the RHOSP Internal API networks:

# Example

# ssh to a node
ssh stack@192.168.24.7

# !!! Adjust to your setup and repeat for all Contrail Controller nodes
cat << EOF | sudo tee /etc/sysconfig/network-scripts/ifcfg-vlan710
ONBOOT=yes
BOOTPROTO=static
HOTPLUG=no
NM_CONTROLLED=no
PEERDNS=no
USERCTL=yes
VLAN=yes
DEVICE=vlan710
PHYSDEV=eth0
IPADDR=10.1.0.7
NETMASK=255.255.255.0
EOF
sudo ifup vlan710

# Do the same for the external VLAN if needed
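
# Optional check (example values from above): confirm the VLAN interface
# is up with the expected address
ip -d link show vlan710
ip addr show vlan710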

Create Undercloud VM

Follow the instructions in this section to create the undercloud VM:

Customize the image for Undercloud VM

To customize the image for the undercloud VM:

cd
cloud_image=images/rhel-8.4-x86_64-kvm.qcow2
undercloud_name=undercloud
domain_name=dev.clouddomain
root_password=contrail123
stack_password=contrail123
export LIBGUESTFS_BACKEND=direct
qemu-img create -f qcow2 images/${undercloud_name}.qcow2 100G
virt-resize --expand /dev/sda3 ${cloud_image} images/${undercloud_name}.qcow2
virt-customize  -a images/${undercloud_name}.qcow2 \
  --run-command 'xfs_growfs /' \
  --root-password password:${root_password} \
  --hostname ${undercloud_name}.${domain_name} \
  --run-command 'useradd stack' \
  --password stack:password:${stack_password} \
  --run-command 'echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack' \
  --chmod 0440:/etc/sudoers.d/stack \
  --run-command 'sed -i "s/PasswordAuthentication no/PasswordAuthentication yes/g" /etc/ssh/sshd_config' \
  --run-command 'systemctl enable sshd' \
  --selinux-relabel

Define Undercloud VM

To define the undercloud VM:

cat << EOF > undercloud.yaml
- hosts: localhost
  tasks:
  - set_fact:
      cluster: "Default"
      storage: "undercloud"
  - name: get RHVM token
    ovirt_auth:
      url: "https://{{ ovirt_hostname }}/ovirt-engine/api"
      username: "{{ ovirt_user }}"
      password: "{{ ovirt_password }}"
      insecure: true
  - name: create disks
    ovirt_disk:
      auth: "{{ ovirt_auth }}"
      name: "{{ undercloud_name }}"
      interface: virtio
      format: cow
      size: 100GiB
      image_path: "images/{{ undercloud_name }}.qcow2"
      storage_domain: "{{ storage }}"
    register: task_result
    ignore_errors: yes
    until: not task_result.failed
    retries: 5
    delay: 10
  - name: deploy vms
    ovirt.ovirt.ovirt_vm:
      auth: "{{ ovirt_auth }}"
      state: running
      cluster: "{{ cluster }}"
      name: "{{ undercloud_name }}"
      memory: 32GiB
      cpu_cores: 8
      type: server
      high_availability: yes
      placement_policy: pinned
      operating_system: rhel_8x64
      cloud_init:
        host_name: "{{ undercloud_name }}.{{ overcloud_domain }}"
        dns_search: "{{ overcloud_domain }}"
        dns_servers: "{{ dns_server | default(mgmt_gateway) }}"
        nic_name: "eth0"
        nic_boot_protocol_v6: none
        nic_boot_protocol: static
        nic_ip_address: "{{ undercloud_mgmt_ip }}"
        nic_gateway: "{{ mgmt_gateway }}"
        nic_netmask: "255.255.255.0"
      cloud_init_nics:
        - nic_name: "eth1"
          nic_boot_protocol_v6: none
          nic_boot_protocol: static
          nic_ip_address: "{{ undercloud_ctlplane_ip }}"
          nic_netmask: "255.255.255.0"
      disk_format: cow
      graphical_console:
        protocol:
          - spice
          - vnc
      serial_console: yes
      nics:
       - name: eth0
         interface: virtio
         profile_name: "ovirtmgmt"
       - name: eth1
         interface: virtio
         profile_name: "{{ datacenter_name }}-ctlplane"
      disks:
        - name: "{{ undercloud_name }}"
          bootable: true
      storage_domain: "{{ storage }}"
  - name: revoke SSO token
    ovirt_auth:
      state: absent
      ovirt_auth: "{{ ovirt_auth }}"
EOF

ansible-playbook --extra-vars="@common-env.yaml" undercloud.yaml
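
After the playbook completes and the VM boots, you can optionally confirm SSH access to the undercloud using the management IP from common-env.yaml (10.0.10.201 in this example):

# Optional check
ssh stack@10.0.10.201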

Create FreeIPA VM

Follow the instructions in this section to create the FreeIPA VM:

Customize the VM Image for the Red Hat IDM (FreeIPA) VM

Follow this example to customize the VM image for the Red Hat IDM (FreeIPA) VM.

This example is set up for a TLS Everywhere deployment.

cd
cloud_image=images/rhel-8.4-x86_64-kvm.qcow2
ipa_name=ipa
domain_name=dev.clouddomain
root_password=contrail123
qemu-img create -f qcow2 images/${ipa_name}.qcow2 100G
virt-resize --expand /dev/sda3 ${cloud_image} images/${ipa_name}.qcow2
virt-customize  -a images/${ipa_name}.qcow2 \
  --run-command 'xfs_growfs /' \
  --root-password password:${root_password} \
  --hostname ${ipa_name}.${domain_name} \
  --run-command 'sed -i "s/PasswordAuthentication no/PasswordAuthentication yes/g" /etc/ssh/sshd_config' \
  --run-command 'systemctl enable sshd' \
  --selinux-relabel

Enable the Red Hat IDM (FreeIPA) VM

To enable the Red Hat IDM (FreeIPA) VM:

cat << EOF > ipa.yaml
- hosts: localhost
  tasks:
  - set_fact:
      cluster: "Default"
      storage: "ipa"
  - name: get RHVM token
    ovirt_auth:
      url: "https://{{ ovirt_hostname }}/ovirt-engine/api"
      username: "{{ ovirt_user }}"
      password: "{{ ovirt_password }}"
      insecure: true
  - name: create disks
    ovirt_disk:
      auth: "{{ ovirt_auth }}"
      name: "{{ ipa_name }}"
      interface: virtio
      format: cow
      size: 100GiB
      image_path: "images/{{ ipa_name }}.qcow2"
      storage_domain: "{{ storage }}"
    register: task_result
    ignore_errors: yes
    until: not task_result.failed
    retries: 5
    delay: 10
  - name: deploy vms
    ovirt.ovirt.ovirt_vm:
      auth: "{{ ovirt_auth }}"
      state: running
      cluster: "{{ cluster }}"
      name: "{{ ipa_name }}"
      memory: 4GiB
      cpu_cores: 2
      type: server
      high_availability: yes
      placement_policy: pinned
      operating_system: rhel_8x64
      cloud_init:
        host_name: "{{ ipa_name }}.{{ overcloud_domain }}"
        dns_search: "{{ overcloud_domain }}"
        dns_servers: "{{ dns_server | default(mgmt_gateway) }}"
        nic_name: "eth0"
        nic_boot_protocol_v6: none
        nic_boot_protocol: static
        nic_ip_address: "{{ ipa_mgmt_ip }}"
        nic_gateway: "{{ mgmt_gateway }}"
        nic_netmask: "255.255.255.0"
      cloud_init_nics:
        - nic_name: "eth1"
          nic_boot_protocol_v6: none
          nic_boot_protocol: static
          nic_ip_address: "{{ ipa_ctlplane_ip }}"
          nic_netmask: "255.255.255.0"
      disk_format: cow
      graphical_console:
        protocol:
          - spice
          - vnc
      serial_console: yes
      nics:
       - name: eth0
         interface: virtio
         profile_name: "ovirtmgmt"
       - name: eth1
         interface: virtio
         profile_name: "{{ datacenter_name }}-ctlplane"
      disks:
        - name: "{{ ipa_name }}"
          bootable: true
      storage_domain: "{{ storage }}"
  - name: revoke SSO token
    ovirt_auth:
      state: absent
      ovirt_auth: "{{ ovirt_auth }}"
EOF

ansible-playbook --extra-vars="@common-env.yaml" ipa.yaml
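
After the playbook completes and the VM boots, you can optionally confirm that the IPA VM is reachable on its management IP from common-env.yaml (10.0.10.205 in this example):

# Optional check
ping -c 3 10.0.10.205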

Access to RHVM via a web browser

RHVM can be accessed only using the engine FQDN or one of the engine alternate FQDNs. For example, https://vmengine.dev.clouddomain. Please ensure that the FQDN can be resolved.

Access to VMs via serial console

To access the VMs via serial console, see the Red Hat documentation or the oVirt documentation.
