Set Up the Infrastructure (Contrail Networking Release 21.4 or Later)
SUMMARY Follow this topic to set up the infrastructure for a Contrail Networking deployment in a RHOSP 16 environment when you are using Contrail Networking Release 21.4 or later.
When to Use This Procedure
You must use this topic to set up the infrastructure for a Contrail Networking deployment in a RHOSP 16 environment when you are using Contrail Networking Release 21.4 or later.
This procedure shows how to set up the infrastructure for an installation when the hosts are using Red Hat Virtualization (RHV). Contrail Networking was enhanced to work with hosts using RHV in Release 21.4.
In Release 21.3 and earlier, this procedure is performed on hosts that use kernel-based virtual machines (KVM). See Set Up the Infrastructure (Contrail Networking Release 21.3 or Earlier).
Understanding Red Hat Virtualization
This procedure provides an example of how to set up the infrastructure for a Contrail Networking deployment in a RHOSP 16 environment when the hosts are using Red Hat Virtualization (RHV).
RHV is an enterprise virtualization platform built on Red Hat Enterprise Linux and KVM. RHV is developed and fully supported by Red Hat.
The purpose of this topic is to illustrate one method of deploying Contrail Networking in a RHOSP 16 environment. Documentation of the related RHV components is beyond the scope of this topic.
For additional information on RHV, see the Red Hat Virtualization product documentation from Red Hat.
For information on installing RHV, see the Installing Red Hat Virtualization as a self-hosted engine using the command line document from Red Hat.
Prepare the Red Hat Virtualization Manager Host
Prepare the Red Hat Virtualization Manager host using the instructions provided by Red Hat. See the Installing Red Hat Virtualization Hosts section of the Installing Red Hat Virtualization as a self-hosted engine using the command line guide from Red Hat.
Deploy the Hosts Using Red Hat Enterprise Linux
The hosts must be running Red Hat Enterprise Linux (RHEL) to enable RHV.
This section provides an example of deploying RHEL 8.
Install and Enable the Required Software
This example shows how to obtain, install, and enable the software required to operate Red Hat Enterprise Linux 8.
# Register the node with a Red Hat subscription
# (for Satellite, see the Red Hat instructions)
sudo subscription-manager register \
--username {username} \
--password {password}
# Attach pools that allow enabling all required repos
# e.g.:
sudo subscription-manager attach \
--pool {RHOSP16.2 pool ID} \
--pool {Red Hat Virtualization Manager pool ID}
# Enable repos
sudo subscription-manager repos \
--disable='*' \
--enable=rhel-8-for-x86_64-baseos-rpms \
--enable=rhel-8-for-x86_64-appstream-rpms \
--enable=rhv-4-mgmt-agent-for-rhel-8-x86_64-rpms \
--enable=fast-datapath-for-rhel-8-x86_64-rpms \
--enable=advanced-virt-for-rhel-8-x86_64-rpms \
--enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms \
--enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
# Remove cloud-init (in case this is a virt test setup and a cloud image was used for the deploy)
sudo dnf remove -y cloud-init || true
# Enable dnf modules and update system
# For Red Hat Virtualization Manager 4.4 use virt:av
# (for previous versions check RedHat documentation)
sudo dnf module reset -y virt
sudo dnf module enable -y virt:av
sudo dnf distro-sync -y --nobest
sudo dnf upgrade -y --nobest
# Enable firewall
sudo dnf install -y firewalld
sudo systemctl enable --now firewalld
# Check current active zone
sudo firewall-cmd --get-active-zones
# example of zones:
# public
# interfaces: eth0
# Add the virbr0 interface into the active zone for ovirtmgmt, e.g.
sudo firewall-cmd --zone=public --change-interface=virbr0 --permanent
sudo firewall-cmd --zone=public --add-forward --permanent
# Ensure the used interfaces are in one zone
sudo firewall-cmd --get-active-zones
# example of zones:
# [stack@node-10-0-10-147 ~]$ sudo firewall-cmd --get-active-zones
# public
# interfaces: eth0 virbr0
# Enable https and cockpit for RHVM web access and monitoring
sudo firewall-cmd --permanent \
--add-service=https \
--add-service=cockpit \
--add-service nfs
sudo firewall-cmd --permanent \
--add-port 2223/tcp \
--add-port 5900-6923/tcp \
--add-port 111/tcp --add-port 111/udp \
--add-port 2049/tcp --add-port 2049/udp \
--add-port 4045/tcp --add-port 4045/udp \
--add-port 1110/tcp --add-port 1110/udp
# Prepare NFS Storage
# adjust sysctl settings
cat << EOF | sudo tee /etc/sysctl.d/99-nfs-tf-rhv.conf
net.ipv4.tcp_mem=4096 65536 4194304
net.ipv4.tcp_rmem=4096 65536 4194304
net.ipv4.tcp_wmem=4096 65536 4194304
net.core.rmem_max=8388608
net.core.wmem_max=8388608
EOF
sudo sysctl --system
# install and enable NFS services
sudo dnf install -y nfs-utils
sudo systemctl enable --now nfs-server
sudo systemctl enable --now rpcbind
# prepare special user required by Red Hat Virtualization
getent group kvm || sudo groupadd kvm -g 36
sudo useradd vdsm -u 36 -g kvm
exports="/storage *(rw,all_squash,anonuid=36,anongid=36)\n"
for s in vmengine undercloud ipa overcloud ; do
sudo mkdir -p /storage/$s
exports+="/storage/$s *(rw,all_squash,anonuid=36,anongid=36)\n"
done
sudo chown -R 36:36 /storage
sudo chmod -R 0755 /storage
# add storage directory to exports
echo -e "$exports" | sudo tee /etc/exports
# restart NFS services
sudo systemctl restart rpcbind
sudo systemctl restart nfs-server
# check exports
sudo exportfs
# Reboot the system in case a newer kernel is available in /lib/modules
latest_kv=$(ls -1 /lib/modules | sort -V | tail -n 1)
active_kv=$(uname -r)
if [[ "$latest_kv" != "$active_kv" ]] ; then
echo "INFO: newer kernel version $latest_kv is available, active one is $active_kv"
echo "Perform reboot..."
sudo reboot
fi
Verify Domain Name Resolution
Before proceeding, ensure that the fully qualified domain names (FQDNs) can be resolved, either through DNS or through /etc/hosts on all nodes.
[stack@node-10-0-10-147 ~]$ cat /etc/hosts
# Red Hat Virtualization Manager VM
10.0.10.200 vmengine.dev.clouddomain vmengine.dev vmengine
# Red Hat Virtualization Hosts
10.0.10.147 node-10-0-10-147.dev.clouddomain node-10-0-10-147.dev node-10-0-10-147
10.0.10.148 node-10-0-10-148.dev.clouddomain node-10-0-10-148.dev node-10-0-10-148
10.0.10.149 node-10-0-10-149.dev.clouddomain node-10-0-10-149.dev node-10-0-10-149
10.0.10.150 node-10-0-10-150.dev.clouddomain node-10-0-10-150.dev node-10-0-10-150
Deploy Red Hat Virtualization Manager on the First Node
This section shows how to deploy Red Hat Virtualization Manager (RHVM):
- Enable the Red Hat Virtualization Manager appliance
- Deploy the self-hosted engine
- Enable the virsh CLI to use oVirt authentication
- Enable the Red Hat Virtualization Manager repositories
Enable the Red Hat Virtualization Manager Appliance
To enable the Red Hat Virtualization Manager appliance:
sudo dnf install -y \
tmux \
rhvm-appliance \
ovirt-hosted-engine-setup
Deploy the Self-Hosted Engine
To deploy the self-hosted engine:
# !!! During deploy you need to answer questions
sudo hosted-engine --deploy
# example of adding ansible vars into the deploy command
# sudo hosted-engine --deploy --ansible-extra-vars=he_ipv4_subnet_prefix=10.0.10
# example of an answer:
# ...
# Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]:
# Please specify the nfs version you would like to use (auto, v3, v4, v4_0, v4_1, v4_2)[auto]:
# Please specify the full shared storage connection path to use (example: host:/path): 10.0.10.147:/storage/vmengine
# ...
During the deployment, before proceeding with the NFS operations, ensure that all interfaces required for IP forwarding are in one zone:
sudo firewall-cmd --get-active-zones
# example of zones:
# [stack@node-10-0-10-147 ~]$ sudo firewall-cmd --get-active-zones
# public
#   interfaces: ovirtmgmt eth0 virbr0
Enable the virsh CLI to Use oVirt Authentication
To enable the virsh CLI to use oVirt authentication:
sudo ln -s /etc/ovirt-hosted-engine/virsh_auth.conf /etc/libvirt/auth.conf
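As a quick check (a hedged example, not part of the original procedure), verify that virsh now authenticates against libvirt with the oVirt credentials:
# The HostedEngine VM should be listed without prompting for credentials
sudo virsh list --all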
Enable the Red Hat Virtualization Manager Repositories
To enable the RHVM repositories:
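The command listing for this step is not reproduced here. The following is a minimal sketch, assuming standard subscription-manager usage; verify the pool IDs, repository names, and module streams against the Red Hat documentation for your RHV release before running it.
# Register and attach a pool that covers Red Hat Virtualization Manager
sudo subscription-manager register --username {username} --password {password}
sudo subscription-manager attach --pool {Red Hat Virtualization Manager pool ID}
# Repository names below follow the Red Hat RHV 4.4 installation guide (assumption)
sudo subscription-manager repos \
--disable='*' \
--enable=rhel-8-for-x86_64-baseos-rpms \
--enable=rhel-8-for-x86_64-appstream-rpms \
--enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \
--enable=fast-datapath-for-rhel-8-x86_64-rpms \
--enable=jb-eap-7.4-for-rhel-8-x86_64-rpms
sudo dnf module -y enable pki-deps postgresql:12 nodejs:14
sudo dnf distro-sync -y --nobest
sudo dnf upgrade -y --nobest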
Deploy the Nodes and Enable Networking
Follow the tasks in this section to deploy the nodes and enable networking.
Prepare the Ansible Environment Files
To prepare the Ansible environment files:
# Common variables
# !!! Adjust to your setup - especially undercloud_mgmt_ip and
# ipa_mgmt_ip to allow SSH to these machines (e.g. choose IPs from the ovirtmgmt network)
cat << EOF > common-env.yaml
---
ovirt_hostname: vmengine.dev.clouddomain
ovirt_user: "admin@internal"
ovirt_password: "qwe123QWE"
datacenter_name: Default
# to access hypervisors
ssh_public_key: false
ssh_root_password: "qwe123QWE"
# gateway for VMs (undercloud and ipa)
mgmt_gateway: "10.0.10.1"
# dns to be set in ipa and initial dns for UC
# k8s nodes uses ipa as dns
dns_server: "10.0.10.1"
undercloud_name: "undercloud"
undercloud_mgmt_ip: "10.0.10.201"
undercloud_ctlplane_ip: "192.168.24.1"
ipa_name: "ipa"
ipa_mgmt_ip: "10.0.10.205"
ipa_ctlplane_ip: "192.168.24.5"
overcloud_domain: "dev.clouddomain"
EOF
# Hypervisor nodes
# !! Adjust to your setup
# Important: ensure you use correct node name for already registered first hypervisor
# (it is registered at the RHVM deploy command hosted-engine --deploy)
cat << EOF > nodes.yaml
---
nodes:
# !!! Adjust networks and power management options for your needs
- name: node-10-0-10-147.dev.clouddomain
ip: 10.0.10.147
cluster: Default
comment: 10.0.10.147
networks:
- name: ctlplane
phy_dev: eth1
- name: tenant
phy_dev: eth2
# provide power management if needed (for all nodes)
# pm:
# address: 192.168.122.1
# port: 6230
# user: ipmi
# password: qwe123QWE
# type: ipmilan
# options:
# ipmilanplus: true
- name: node-10-0-10-148.dev.clouddomain
ip: 10.0.10.148
cluster: node-10-0-10-148
comment: 10.0.10.148
networks:
- name: ctlplane
phy_dev: eth1
- name: tenant
phy_dev: eth2
- name: node-10-0-10-149.dev.clouddomain
ip: 10.0.10.149
cluster: node-10-0-10-149
comment: 10.0.10.149
networks:
- name: ctlplane
phy_dev: eth1
- name: tenant
phy_dev: eth2
- name: node-10-0-10-150.dev.clouddomain
ip: 10.0.10.150
cluster: node-10-0-10-150
comment: 10.0.10.150
networks:
- name: ctlplane
phy_dev: eth1
- name: tenant
phy_dev: eth2
# !!! Adjust storages according to your setup architecture
storage:
- name: undercloud
mountpoint: "/storage/undercloud"
host: node-10-0-10-147.dev.clouddomain
address: node-10-0-10-147.dev.clouddomain
- name: ipa
mountpoint: "/storage/ipa"
host: node-10-0-10-147.dev.clouddomain
address: node-10-0-10-147.dev.clouddomain
- name: node-10-0-10-148-overcloud
mountpoint: "/storage/overcloud"
host: node-10-0-10-148.dev.clouddomain
address: node-10-0-10-148.dev.clouddomain
- name: node-10-0-10-149-overcloud
mountpoint: "/storage/overcloud"
host: node-10-0-10-149.dev.clouddomain
address: node-10-0-10-149.dev.clouddomain
- name: node-10-0-10-150-overcloud
mountpoint: "/storage/overcloud"
host: node-10-0-10-150.dev.clouddomain
address: node-10-0-10-150.dev.clouddomain
EOF
# Playbook to register hypervisor nodes in RHVM, create storage pools and networks
# Adjust values to your setup!!!
cat << EOF > infra.yaml
- hosts: localhost
tasks:
- name: Get RHVM token
ovirt_auth:
url: "https://{{ ovirt_hostname }}/ovirt-engine/api"
username: "{{ ovirt_user }}"
password: "{{ ovirt_password }}"
insecure: true
- name: Create datacenter
ovirt_datacenter:
state: present
auth: "{{ ovirt_auth }}"
name: "{{ datacenter_name }}"
local: false
- name: Create clusters {{ item.name }}
ovirt_cluster:
state: present
auth: "{{ ovirt_auth }}"
name: "{{ item.cluster }}"
data_center: "{{ datacenter_name }}"
ksm: true
ballooning: true
memory_policy: server
with_items:
- "{{ nodes }}"
- name: List host in datacenter
ovirt_host_info:
auth: "{{ ovirt_auth }}"
pattern: "datacenter={{ datacenter_name }}"
register: host_list
- set_fact:
hostnames: []
- name: List hostname
set_fact:
hostnames: "{{ hostnames + [ item.name ] }}"
with_items:
- "{{ host_list['ovirt_hosts'] }}"
- name: Register in RHVM
ovirt_host:
state: present
auth: "{{ ovirt_auth }}"
name: "{{ item.name }}"
cluster: "{{ item.cluster }}"
address: "{{ item.ip }}"
comment: "{{ item.comment | default(item.ip) }}"
power_management_enabled: "{{ item.power_management_enabled | default(false) }}"
# unsupported in rhel yet - to avoid reboot create node via web
# reboot_after_installation: "{{ item.reboot_after_installation | default(false) }}"
reboot_after_upgrade: "{{ item.reboot_after_upgrade | default(false) }}"
public_key: "{{ ssh_public_key }}"
password: "{{ ssh_root_password }}"
register: task_result
until: not task_result.failed
retries: 5
delay: 10
when: item.name not in hostnames
with_items:
- "{{ nodes }}"
- name: Register Power Management for host
ovirt_host_pm:
state: present
auth: "{{ ovirt_auth }}"
name: "{{ item.name }}"
address: "{{ item.pm.address }}"
username: "{{ item.pm.user }}"
password: "{{ item.pm.password }}"
type: "{{ item.pm.type }}"
options: "{{ item.pm.pm_options | default(omit) }}"
when: item.pm is defined
with_items:
- "{{ nodes }}"
- name: Create storage domains
ovirt_storage_domain:
state: present
auth: "{{ ovirt_auth }}"
data_center: "{{ datacenter_name }}"
name: "{{ item.name }}"
domain_function: "data"
host: "{{ item.host }}"
nfs:
address: "{{ item.address | default(item.host) }}"
path: "{{ item.mountpoint }}"
version: "auto"
register: task_result
until: not task_result.failed
retries: 5
delay: 10
with_items:
- "{{ storage }}"
- name: Create logical networks
ovirt_network:
state: present
auth: "{{ ovirt_auth }}"
data_center: "{{ datacenter_name }}"
name: "{{ datacenter_name }}-{{ item.1.name }}"
clusters:
- name: "{{ item.0.cluster }}"
vlan_tag: "{{ item.1.vlan | default(omit)}}"
vm_network: true
with_subelements:
- "{{ nodes }}"
- networks
- name: Create host networks
ovirt_host_network:
state: present
auth: "{{ ovirt_auth }}"
networks:
- name: "{{ datacenter_name }}-{{ item.1.name }}"
boot_protocol: none
name: "{{ item.0.name }}"
interface: "{{ item.1.phy_dev }}"
with_subelements:
- "{{ nodes }}"
- networks
- name: Remove vNICs network_filter
ovirt.ovirt.ovirt_vnic_profile:
state: present
auth: "{{ ovirt_auth }}"
name: "{{ datacenter_name }}-{{ item.1.name }}"
network: "{{ datacenter_name }}-{{ item.1.name }}"
data_center: "{{ datacenter_name }}"
network_filter: ""
with_subelements:
- "{{ nodes }}"
- networks
- name: Revoke SSO Token
ovirt_auth:
state: absent
ovirt_auth: "{{ ovirt_auth }}"
EOF
Deploy the Nodes and Networking
To deploy the nodes and enable networking:
ansible-playbook \
--extra-vars="@common-env.yaml" \
--extra-vars="@nodes.yaml" \
infra.yaml
Confirm the Hosts
If a host is in the rebooting state, navigate to the host's expanded menu and select Confirm host has been rebooted.
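If you prefer to check from the CLI instead of the web UI, the following is a hedged sketch that reuses the oVirt Ansible modules already used in this topic; run it with the same --extra-vars="@common-env.yaml" file. The playbook name and the reported status values are illustrative assumptions.
# check-hosts.yaml - list the hosts registered in RHVM together with their status
- hosts: localhost
  tasks:
    - name: Get RHVM token
      ovirt_auth:
        url: "https://{{ ovirt_hostname }}/ovirt-engine/api"
        username: "{{ ovirt_user }}"
        password: "{{ ovirt_password }}"
        insecure: true
    - name: List hosts in the datacenter
      ovirt_host_info:
        auth: "{{ ovirt_auth }}"
        pattern: "datacenter={{ datacenter_name }}"
      register: host_list
    - name: Show host names and statuses
      debug:
        msg: "{{ item.name }}: {{ item.status }}"
      with_items:
        - "{{ host_list.ovirt_hosts }}"
    - name: Revoke SSO Token
      ovirt_auth:
        state: absent
        ovirt_auth: "{{ ovirt_auth }}"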
Prepare the Images
To prepare the images:
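The commands for this step are not included in the source; the following is a minimal sketch (directory name and image filename assumed from the later examples) that stages the RHEL 8.4 KVM guest image and installs the tooling those examples use.
mkdir -p ~/images
# Download rhel-8.4-x86_64-kvm.qcow2 from the Red Hat Customer Portal and place it at
# ~/images/rhel-8.4-x86_64-kvm.qcow2 (the path referenced by the examples below)
# Install the tools used by the image customization steps (qemu-img, virt-resize, virt-customize)
sudo dnf install -y qemu-img libguestfs-tools-c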
Create the Overcloud VMs
Follow the instructions in this section to create the overcloud VMs.
Prepare the Image for a Kubernetes Cluster
If you are deploying the Contrail control plane in a Kubernetes cluster, prepare the image for the Contrail controllers by following this example:
cd
cloud_image=images/rhel-8.4-x86_64-kvm.qcow2
root_password=contrail123
stack_password=contrail123
export LIBGUESTFS_BACKEND=direct
qemu-img create -f qcow2 images/overcloud.qcow2 100G
virt-resize --expand /dev/sda3 ${cloud_image} images/overcloud.qcow2
virt-customize -a images/overcloud.qcow2 \
--run-command 'xfs_growfs /' \
--root-password password:${root_password} \
--run-command 'useradd stack' \
--password stack:password:${stack_password} \
--run-command 'echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack' \
--chmod 0440:/etc/sudoers.d/stack \
--run-command 'sed -i "s/PasswordAuthentication no/PasswordAuthentication yes/g" /etc/ssh/sshd_config' \
--run-command 'systemctl enable sshd' \
--selinux-relabel
Kubernetes has to be deployed on the nodes separately. This can be done in various ways. For information on performing this task using Kubespray, see this Kubespray page on GitHub.
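For illustration only, here is a hedged sketch of a Kubespray run; the inventory name and the node addresses are assumptions rather than values taken from this topic.
# Clone Kubespray and build an inventory for the Contrail controller VMs
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
pip3 install -r requirements.txt
cp -r inventory/sample inventory/contrail
# Edit inventory/contrail/inventory.ini with the controller addresses (e.g. 192.168.24.7-9),
# then deploy the cluster
ansible-playbook -i inventory/contrail/inventory.ini --become --become-user=root cluster.yml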
The Contrail controllers can be deployed on top of Kubernetes using the TF Operator. See the TF Operator page on GitHub.
Prepare the Overcloud VM Definitions
To prepare the overcloud VM definitions:
# Overcloud VMs definitions
# Adjust values to your setup!!!
# For deploying the Contrail control plane in a Kubernetes cluster,
# remove the contrail controller nodes as they are not managed by RHOSP. They are to be created in later steps.
cat << EOF > vms.yaml
---
vms:
- name: controller-0
disk_size_gb: 100
memory_gb: 16
cpu_cores: 4
nics:
- name: eth0
interface: virtio
profile_name: "{{ datacenter_name }}-ctlplane"
mac_address: "52:54:00:16:54:d8"
- name: eth1
interface: virtio
profile_name: "{{ datacenter_name }}-tenant"
cluster: node-10-0-10-148
storage: node-10-0-10-148-overcloud
- name: contrail-controller-0
disk_size_gb: 100
memory_gb: 16
cpu_cores: 4
nics:
- name: eth0
interface: virtio
profile_name: "{{ datacenter_name }}-ctlplane"
mac_address: "52:54:00:d6:2b:03"
- name: eth1
interface: virtio
profile_name: "{{ datacenter_name }}-tenant"
cluster: node-10-0-10-148
storage: node-10-0-10-148-overcloud
- name: contrail-controller-1
disk_size_gb: 100
memory_gb: 16
cpu_cores: 4
nics:
- name: eth0
interface: virtio
profile_name: "{{ datacenter_name }}-ctlplane"
mac_address: "52:54:00:d6:2b:13"
- name: eth1
interface: virtio
profile_name: "{{ datacenter_name }}-tenant"
cluster: node-10-0-10-149
storage: node-10-0-10-149-overcloud
- name: contrail-controller-2
disk_size_gb: 100
memory_gb: 16
cpu_cores: 4
nics:
- name: eth0
interface: virtio
profile_name: "{{ datacenter_name }}-ctlplane"
mac_address: "52:54:00:d6:2b:23"
- name: eth1
interface: virtio
profile_name: "{{ datacenter_name }}-tenant"
cluster: node-10-0-10-150
storage: node-10-0-10-150-overcloud
EOF
# Playbook for overcloud VMs
# !!! Adjust to your setup
cat << EOF > overcloud.yaml
- hosts: localhost
tasks:
- name: Get RHVM token
ovirt_auth:
url: "https://{{ ovirt_hostname }}/ovirt-engine/api"
username: "{{ ovirt_user }}"
password: "{{ ovirt_password }}"
insecure: true
- name: Create disks
ovirt_disk:
auth: "{{ ovirt_auth }}"
name: "{{ item.name }}"
interface: virtio
size: "{{ item.disk_size_gb }}GiB"
format: cow
image_path: "{{ item.image | default(omit) }}"
storage_domain: "{{ item.storage }}"
register: task_result
ignore_errors: yes
until: not task_result.failed
retries: 5
delay: 10
with_items:
- "{{ vms }}"
- name: Deploy VMs
ovirt.ovirt.ovirt_vm:
auth: "{{ ovirt_auth }}"
state: "{{ item.state | default('present') }}"
cluster: "{{ item.cluster }}"
name: "{{ item.name }}"
memory: "{{ item.memory_gb }}GiB"
cpu_cores: "{{ item.cpu_cores }}"
type: server
high_availability: yes
placement_policy: pinned
operating_system: rhel_8x64
disk_format: cow
graphical_console:
protocol:
- spice
- vnc
serial_console: yes
nics: "{{ item.nics | default(omit) }}"
disks:
- name: "{{ item.name }}"
bootable: True
storage_domain: "{{ item.storage }}"
cloud_init: "{{ item.cloud_init | default(omit) }}"
cloud_init_nics: "{{ item.cloud_init_nics | default(omit) }}"
retries: 5
delay: 2
with_items:
- "{{ vms }}"
- name: Revoke SSO Token
ovirt_auth:
state: absent
ovirt_auth: "{{ ovirt_auth }}"
EOF
ansible-playbook \
--extra-vars="@common-env.yaml" \
--extra-vars="@vms.yaml" \
overcloud.yaml
Create the Contrail Control Plane VMs for Kubernetes-Based Deployments
Follow the instructions in this section for parallel deployments where the Contrail control plane is deployed as a separate Kubernetes-based cluster.
Customize the VM Image for the Kubernetes VMs
To customize the VM image for the Kubernetes VMs:
cd
cloud_image=images/rhel-8.4-x86_64-kvm.qcow2
root_password=contrail123
stack_password=contrail123
export LIBGUESTFS_BACKEND=direct
qemu-img create -f qcow2 images/k8s.qcow2 100G
virt-resize --expand /dev/sda3 ${cloud_image} images/k8s.qcow2
virt-customize -a images/k8s.qcow2 \
--run-command 'xfs_growfs /' \
--root-password password:${root_password} \
--run-command 'useradd stack' \
--password stack:password:${stack_password} \
--run-command 'echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack' \
--chmod 0440:/etc/sudoers.d/stack \
--run-command 'sed -i "s/PasswordAuthentication no/PasswordAuthentication yes/g" /etc/ssh/sshd_config' \
--run-command 'systemctl enable sshd' \
--selinux-relabel
Define the Kubernetes VMs
To define the Kubernetes VMs:
# !!! Adjust to your setup (addresses in ctlplane, tenant and mgmt networks)
cat << EOF > k8s-vms.yaml
---
vms:
- name: contrail-controller-0
state: running
disk_size_gb: 100
memory_gb: 16
cpu_cores: 4
nics:
- name: eth0
interface: virtio
profile_name: "{{ datacenter_name }}-ctlplane"
mac_address: "52:54:00:16:54:d8"
- name: eth1
interface: virtio
profile_name: "{{ datacenter_name }}-tenant"
- name: eth2
interface: virtio
profile_name: "ovirtmgmt"
cluster: node-10-0-10-148
storage: node-10-0-10-148-overcloud
image: "images/k8s.qcow2"
cloud_init:
# ctlplane network
host_name: "contrail-controller-0.{{ overcloud_domain }}"
dns_search: "{{ overcloud_domain }}"
dns_servers: "{{ ipa_ctlplane_ip }}"
nic_name: "eth0"
nic_boot_protocol_v6: none
nic_boot_protocol: static
nic_ip_address: "192.168.24.7"
nic_gateway: "{{ undercloud_ctlplane_ip }}"
nic_netmask: "255.255.255.0"
cloud_init_nics:
# tenant network
- nic_name: "eth1"
nic_boot_protocol_v6: none
nic_boot_protocol: static
nic_ip_address: "10.0.0.201"
nic_netmask: "255.255.255.0"
# mgmt network
- nic_name: "eth2"
nic_boot_protocol_v6: none
nic_boot_protocol: static
nic_ip_address: "10.0.10.210"
nic_netmask: "255.255.255.0"
- name: contrail-controller-1
state: running
disk_size_gb: 100
memory_gb: 16
cpu_cores: 4
nics:
- name: eth0
interface: virtio
profile_name: "{{ datacenter_name }}-ctlplane"
mac_address: "52:54:00:d6:2b:03"
- name: eth1
interface: virtio
profile_name: "{{ datacenter_name }}-tenant"
- name: eth2
interface: virtio
profile_name: "ovirtmgmt"
cluster: node-10-0-10-149
storage: node-10-0-10-149-overcloud
image: "images/k8s.qcow2"
cloud_init:
host_name: "contrail-controller-1.{{ overcloud_domain }}"
dns_search: "{{ overcloud_domain }}"
dns_servers: "{{ ipa_ctlplane_ip }}"
nic_name: "eth0"
nic_boot_protocol_v6: none
nic_boot_protocol: static
nic_ip_address: "192.168.24.8"
nic_gateway: "{{ undercloud_ctlplane_ip }}"
nic_netmask: "255.255.255.0"
cloud_init_nics:
- nic_name: "eth1"
nic_boot_protocol_v6: none
nic_boot_protocol: static
nic_ip_address: "10.0.0.202"
nic_netmask: "255.255.255.0"
# mgmt network
- nic_name: "eth2"
nic_boot_protocol_v6: none
nic_boot_protocol: static
nic_ip_address: "10.0.10.211"
nic_netmask: "255.255.255.0"
- name: contrail-controller-2
state: running
disk_size_gb: 100
memory_gb: 16
cpu_cores: 4
nics:
- name: eth0
interface: virtio
profile_name: "{{ datacenter_name }}-ctlplane"
mac_address: "52:54:00:d6:2b:23"
- name: eth1
interface: virtio
profile_name: "{{ datacenter_name }}-tenant"
- name: eth2
interface: virtio
profile_name: "ovirtmgmt"
cluster: node-10-0-10-150
storage: node-10-0-10-150-overcloud
image: "images/k8s.qcow2"
cloud_init:
host_name: "contrail-controller-1.{{ overcloud_domain }}"
dns_search: "{{ overcloud_domain }}"
dns_servers: "{{ ipa_ctlplane_ip }}"
nic_name: "eth0"
nic_boot_protocol_v6: none
nic_boot_protocol: static
nic_ip_address: "192.168.24.9"
nic_gateway: "{{ undercloud_ctlplane_ip }}"
nic_netmask: "255.255.255.0"
cloud_init_nics:
- nic_name: "eth1"
nic_boot_protocol_v6: none
nic_boot_protocol: static
nic_ip_address: "10.0.0.203"
nic_netmask: "255.255.255.0"EOF
# mgmt network
- nic_name: "eth2"
nic_boot_protocol_v6: none
nic_boot_protocol: static
nic_ip_address: "10.0.10.212"
nic_netmask: "255.255.255.0"
EOF
ansible-playbook \
--extra-vars="@common-env.yaml" \
--extra-vars="@k8s-vms.yaml" \
overcloud.yaml
Configure VLANs for the RHOSP Internal API Network
To SSH into the Kubernetes nodes and configure the VLANs for the RHOSP internal API network:
# Example
# ssh to a node
ssh stack@192.168.24.7
# !!! Adjust to your setup and repeat for all Contrail Controller nodes
cat << EOF | sudo tee /etc/sysconfig/network-scripts/ifcfg-vlan710
ONBOOT=yes
BOOTPROTO=static
HOTPLUG=no
NM_CONTROLLED=no
PEERDNS=no
USERCTL=yes
VLAN=yes
DEVICE=vlan710
PHYSDEV=eth0
IPADDR=10.1.0.7
NETMASK=255.255.255.0
EOF
sudo ifup vlan710
# Do same for external vlan if needed
Create the Undercloud VM
Follow the instructions in this section to create the undercloud VM.
Customize the Image for the Undercloud VM
To customize the image for the undercloud VM:
cd
cloud_image=images/rhel-8.4-x86_64-kvm.qcow2
undercloud_name=undercloud
domain_name=dev.clouddomain
root_password=contrail123
stack_password=contrail123
export LIBGUESTFS_BACKEND=direct
qemu-img create -f qcow2 images/${undercloud_name}.qcow2 100G
virt-resize --expand /dev/sda3 ${cloud_image} images/${undercloud_name}.qcow2
virt-customize -a images/${undercloud_name}.qcow2 \
--run-command 'xfs_growfs /' \
--root-password password:${root_password} \
--hostname ${undercloud_name}.${domain_name} \
--run-command 'useradd stack' \
--password stack:password:${stack_password} \
--run-command 'echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack' \
--chmod 0440:/etc/sudoers.d/stack \
--run-command 'sed -i "s/PasswordAuthentication no/PasswordAuthentication yes/g" /etc/ssh/sshd_config' \
--run-command 'systemctl enable sshd' \
--selinux-relabel
Define the Undercloud VM
To define the undercloud VM:
cat << EOF > undercloud.yaml
- hosts: localhost
tasks:
- set_fact:
cluster: "Default"
storage: "undercloud"
- name: get RHVM token
ovirt_auth:
url: "https://{{ ovirt_hostname }}/ovirt-engine/api"
username: "{{ ovirt_user }}"
password: "{{ ovirt_password }}"
insecure: true
- name: create disks
ovirt_disk:
auth: "{{ ovirt_auth }}"
name: "{{ undercloud_name }}"
interface: virtio
format: cow
size: 100GiB
image_path: "images/{{ undercloud_name }}.qcow2"
storage_domain: "{{ storage }}"
register: task_result
ignore_errors: yes
until: not task_result.failed
retries: 5
delay: 10
- name: deploy vms
ovirt.ovirt.ovirt_vm:
auth: "{{ ovirt_auth }}"
state: running
cluster: "{{ cluster }}"
name: "{{ undercloud_name }}"
memory: 32GiB
cpu_cores: 8
type: server
high_availability: yes
placement_policy: pinned
operating_system: rhel_8x64
cloud_init:
host_name: "{{ undercloud_name }}.{{ overcloud_domain }}"
dns_search: "{{ overcloud_domain }}"
dns_servers: "{{ dns_server | default(mgmt_gateway) }}"
nic_name: "eth0"
nic_boot_protocol_v6: none
nic_boot_protocol: static
nic_ip_address: "{{ undercloud_mgmt_ip }}"
nic_gateway: "{{ mgmt_gateway }}"
nic_netmask: "255.255.255.0"
cloud_init_nics:
- nic_name: "eth1"
nic_boot_protocol_v6: none
nic_boot_protocol: static
nic_ip_address: "{{ undercloud_ctlplane_ip }}"
nic_netmask: "255.255.255.0"
disk_format: cow
graphical_console:
protocol:
- spice
- vnc
serial_console: yes
nics:
- name: eth0
interface: virtio
profile_name: "ovirtmgmt"
- name: eth1
interface: virtio
profile_name: "{{ datacenter_name }}-ctlplane"
disks:
- name: "{{ undercloud_name }}"
bootable: true
storage_domain: "{{ storage }}"
- name: revoke SSO token
ovirt_auth:
state: absent
ovirt_auth: "{{ ovirt_auth }}"
EOF
ansible-playbook --extra-vars="@common-env.yaml" undercloud.yaml
Create the FreeIPA VM
To create the FreeIPA VM:
- Customize the VM image for the RedHat IDM (FreeIPA) VM
- Enable the RedHat IDM (FreeIPA) VM
- Access RHVM via a web browser
- Access the VMs via the serial console
Customize the VM Image for the RedHat IDM (FreeIPA) VM
Customize the VM image for RedHat IDM by following this example.
This example is set up for a TLS Everywhere deployment.
cd
cloud_image=images/rhel-8.4-x86_64-kvm.qcow2
ipa_name=ipa
domain_name=dev.clouddomain
qemu-img create -f qcow2 images/${ipa_name}.qcow2 100G
virt-resize --expand /dev/sda3 ${cloud_image} images/${ipa_name}.qcow2
virt-customize -a images/${ipa_name}.qcow2 \
--run-command 'xfs_growfs /' \
--root-password password:${root_password} \
--hostname ${ipa_name}.${domain_name} \
--run-command 'sed -i "s/PasswordAuthentication no/PasswordAuthentication yes/g" /etc/ssh/sshd_config' \
--run-command 'systemctl enable sshd' \
--selinux-relabel
Enable the RedHat IDM (FreeIPA) VM
To enable the RedHat IDM VM:
cat << EOF > ipa.yaml
- hosts: localhost
tasks:
- set_fact:
cluster: "Default"
storage: "ipa"
- name: get RHVM token
ovirt_auth:
url: "https://{{ ovirt_hostname }}/ovirt-engine/api"
username: "{{ ovirt_user }}"
password: "{{ ovirt_password }}"
insecure: true
- name: create disks
ovirt_disk:
auth: "{{ ovirt_auth }}"
name: "{{ ipa_name }}"
interface: virtio
format: cow
size: 100GiB
image_path: "images/{{ ipa_name }}.qcow2"
storage_domain: "{{ storage }}"
register: task_result
ignore_errors: yes
until: not task_result.failed
retries: 5
delay: 10
- name: deploy vms
ovirt.ovirt.ovirt_vm:
auth: "{{ ovirt_auth }}"
state: running
cluster: "{{ cluster }}"
name: "{{ ipa_name }}"
memory: 4GiB
cpu_cores: 2
type: server
high_availability: yes
placement_policy: pinned
operating_system: rhel_8x64
cloud_init:
host_name: "{{ ipa_name }}.{{ overcloud_domain }}"
dns_search: "{{ overcloud_domain }}"
dns_servers: "{{ dns_server | default(mgmt_gateway) }}"
nic_name: "eth0"
nic_boot_protocol_v6: none
nic_boot_protocol: static
nic_ip_address: "{{ ipa_mgmt_ip }}"
nic_gateway: "{{ mgmt_gateway }}"
nic_netmask: "255.255.255.0"
cloud_init_nics:
- nic_name: "eth1"
nic_boot_protocol_v6: none
nic_boot_protocol: static
nic_ip_address: "{{ ipa_ctlplane_ip }}"
nic_netmask: "255.255.255.0"
disk_format: cow
graphical_console:
protocol:
- spice
- vnc
serial_console: yes
nics:
- name: eth0
interface: virtio
profile_name: "ovirtmgmt"
- name: eth1
interface: virtio
profile_name: "{{ datacenter_name }}-ctlplane"
disks:
- name: "{{ ipa_name }}"
bootable: true
storage_domain: "{{ storage }}"
- name: revoke SSO token
ovirt_auth:
state: absent
ovirt_auth: "{{ ovirt_auth }}"
EOF
ansible-playbook --extra-vars="@common-env.yaml" ipa.yaml
Access RHVM via a Web Browser
RHVM can be accessed only by using the engine FQDN or one of the engine alternate FQDNs, for example, https://vmengine.dev.clouddomain. Make sure that the FQDN is resolvable.
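For example (a hedged sketch; use your own DNS or hosts file entries), on the workstation that runs the browser you can map and verify the engine FQDN like this:
# Map the engine FQDN to the RHVM address used in this example, then verify resolution
echo "10.0.10.200 vmengine.dev.clouddomain" | sudo tee -a /etc/hosts
getent hosts vmengine.dev.clouddomain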
Access the VMs via the Serial Console
To access the VMs via the serial console, see the RedHat documentation or the oVirt documentation.
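As a hedged example (the VM name is a placeholder; port 2222 and the ovirt-vmconsole user follow the oVirt serial console proxy defaults), access typically looks like this once a serial console public key is configured in the RHVM web UI:
# List the VMs reachable through the serial console proxy on the Manager
ssh -p 2222 ovirt-vmconsole@vmengine.dev.clouddomain list
# Connect to the serial console of a specific VM
ssh -t -p 2222 ovirt-vmconsole@vmengine.dev.clouddomain connect --vm-name undercloud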