@jbadiapa
Last active August 31, 2018 06:50
tripleo.sa.telemetry

Telemetry Platform Deployment Over OpenStack

The instructions in this gist deploy the telemetry platform on top of OpenShift Origin v3.9. Deployment of the platform is done in two (2) steps. The first step bootstraps the virtual host and installs the virtual machines (VMs) as if they were overcloud nodes on an OpenStack deployment. The second step installs the telemetry platform with the telemetry-framework.

Prerequisites TBD

Telemetry Platform Layout TBD

Bootstrap and VM setup with tripleo-quickstart

The tripleo-quickstart project helps to deploy a full OpenStack environment. First, the Service Assurance (aka SA) infrastructure needs to be created.

cat > ~/openshift-sa.yaml <<EOF
# Deploy the nodes for the SA (Service Assurance) environment.
#
# With the flavors defined below this requires roughly 50GB for the
# overcloud nodes (one 8GB master, three 10GB workers, two 6GB infra
# nodes), plus another 12GB for the undercloud, around 62GB in total.
control_memory: 6144
compute_memory: 6144

undercloud_memory: 12288

# Giving the undercloud additional CPUs can greatly improve heat's
# performance (and result in a shorter deploy time).
undercloud_vcpu: 2

undercloud_generate_service_certificate: True

# Since this layout has several machines, we set the default vcpu
# count for the overcloud nodes to 2
default_vcpu: 2

node_count: 7

# Create one master node, three worker nodes and two infra nodes.
overcloud_nodes:
  - name: openshift_master_0
    flavor: openshift_master
    virtualbmc_port: 6231

  - name: openshift_node_0
    flavor: openshift_worker
    virtualbmc_port: 6232

  - name: openshift_node_1
    flavor: openshift_worker
    virtualbmc_port: 6233

  - name: openshift_node_2
    flavor: openshift_worker
    virtualbmc_port: 6234

  - name: openshift_infra_node_0
    flavor: openshift_infranode
    virtualbmc_port: 6235

  - name: openshift_infra_node_1
    flavor: openshift_infranode
    virtualbmc_port: 6236

# Tell tripleo which nodes to deploy.
topology: >-
  --compute-scale 0
  --control-scale 0

extradisks_size: 70G

undercloud_custom_env_files: "{{ working_dir }}/undercloud-parameter-defaults.yaml"
undercloud_cloud_domain: "localdomain"
undercloud_undercloud_hostname: "undercloud.{{ undercloud_cloud_domain }}"
undercloud_resource_registry_args:
  "OS::TripleO::Undercloud::Net::SoftwareConfig": "{{ overcloud_templates_path }}/net-config-undercloud.yaml"

flavors:
  openshift_master:
    memory: 8192
    disk: '{{openshift_disk|default(default_disk)}}'
    vcpu: 4
    extradisks: true

  openshift_worker:
    memory: 10240
    disk: '{{openshift_disk|default(default_disk)}}'
    vcpu: 2
    extradisks: true

  openshift_infranode:
    memory: 6192
    disk: '{{openshift_disk|default(default_disk)}}'
    vcpu: 1
    extradisks: true

  undercloud:
    memory: '{{undercloud_memory|default(undercloud_memory)}}'
    disk: '{{undercloud_disk|default(undercloud_disk)}}'
    vcpu: '{{undercloud_vcpu|default(undercloud_vcpu)}}'
EOF

./quickstart.sh -R master-tripleo-ci -N ~/openshift-sa.yaml $VIRTHOST

This command creates the virtual machines.

Deploy the VM and make them available

Create the my_roles.yaml file, which configures what software is going to be installed on the nodes.

cat > my_roles.yaml <<EOF
- name: OpenShiftMaster
  description: OpenShift master
  CountDefault: 1
  HostnameFormatDefault: '%stackname%-master-%index%'
  disable_upgrade_deployment: True
  tags:
    - primary
    - controller
  ServicesDefault:
    - OS::TripleO::Services::Sshd
    - OS::TripleO::Services::Ntp

- name: OpenShiftWorker
  description: OpenShift node
  disable_upgrade_deployment: True
  HostnameFormatDefault: '%stackname%-node-%index%'
  CountDefault: 3 
  ServicesDefault:
    - OS::TripleO::Services::Sshd
    - OS::TripleO::Services::Ntp

- name: OpenShiftInfranode
  description: OpenShift infra node
  disable_upgrade_deployment: True
  HostnameFormatDefault: '%stackname%-infra-node-%index%'
  CountDefault: 2
  ServicesDefault:
    - OS::TripleO::Services::Sshd
    - OS::TripleO::Services::Ntp
EOF

Then run the following command to deploy the VMs:

source stackrc
openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates/ --stack openshift --disable-validations -e /usr/share/openstack-tripleo-heat-templates/environments/networks-disable.yaml -e /home/stack/network-environment.yaml -r /home/stack/my_roles.yaml

If the above commands completed successfully, you should be able to see the baremetal nodes and their IPs:

nova list
+--------------------------------------+------------------------+--------+------------+-------------+------------------------+
| ID                                   | Name                   | Status | Task State | Power State | Networks               |
+--------------------------------------+------------------------+--------+------------+-------------+------------------------+
| 422619cd-4025-46e3-a0ea-6366d702091e | openshift-infra-node-0 | ACTIVE | -          | Running     | ctlplane=192.168.24.20 |
| 9169aa34-704c-4881-b688-7e8cf9a56d96 | openshift-infra-node-1 | ACTIVE | -          | Running     | ctlplane=192.168.24.12 |
| 73cd2bec-6792-4b93-8de5-d925908c4c04 | openshift-master-0     | ACTIVE | -          | Running     | ctlplane=192.168.24.13 |
| 4e75295a-b63f-4059-bbc0-2f8306f6f5bf | openshift-node-0       | ACTIVE | -          | Running     | ctlplane=192.168.24.26 |
| a24dcf89-08fe-4e60-b98e-a4fea6a6c980 | openshift-node-1       | ACTIVE | -          | Running     | ctlplane=192.168.24.7  |
| 46b054f8-d3f2-40d2-9dbf-ed078259728e | openshift-node-2       | ACTIVE | -          | Running     | ctlplane=192.168.24.11 |
+--------------------------------------+------------------------+--------+------------+-------------+------------------------+
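The host/IP pairs from this table are needed again below when writing the Ansible inventories. As a convenience, they can be extracted from the table output instead of copied by hand; the following helper is a sketch I am adding (it is not part of the original gist) and assumes the `|`-separated table format shown above, with the name in the second column and a `ctlplane=<IP>` entry in the Networks column.

```shell
# Hypothetical helper (not in the gist): turn `nova list` table output
# into Ansible inventory lines of the form "<name> ansible_host=<ip>".
nova_to_inventory() {
  awk -F'|' '/ctlplane=/ {
    name = $3; ip = $7
    gsub(/^[ \t]+|[ \t]+$/, "", name)   # trim whitespace around the Name column
    sub(/.*ctlplane=/, "", ip)          # keep only the IP address
    gsub(/[ \t]+$/, "", ip)
    print name " ansible_host=" ip
  }'
}

# Usage: nova list | nova_to_inventory >> openshift/inventory
```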

Install NetworkManager on the nodes

Create an Ansible playbook that installs NetworkManager on every node.

mkdir -p openshift/roles/packages/defaults
mkdir -p openshift/roles/packages/tasks
cat > openshift/roles/packages/defaults/main.yaml <<EOF 
---
packages:
  - NetworkManager
EOF
cat > openshift/roles/packages/tasks/main.yaml <<EOF 
---
# Install the packages needed
- name: Install needed packages
  package:
    name: "{{ packages }}"
    state: present
- name: Enable and start NetworkManager
  service:
    name: NetworkManager
    state: started
    enabled: yes
EOF
cat >openshift/playbook.yaml<<EOF 
---
- name: Install packages on all nodes
  hosts: all
  gather_facts: false
  roles:
  - packages 
EOF
cat > openshift/inventory <<EOF
openshift-infra-node-0 ansible_host=192.168.24.20
openshift-infra-node-1 ansible_host=192.168.24.12
openshift-master-0 ansible_host=192.168.24.13
openshift-node-0 ansible_host=192.168.24.26
openshift-node-1 ansible_host=192.168.24.7
openshift-node-2 ansible_host=192.168.24.11
[all:vars]
ansible_user=heat-admin
ansible_ssh_extra_args='-o UserKnownHostsFile=/dev/null  -o StrictHostKeyChecking=no'
ansible_become=yes
EOF

ansible-playbook -i openshift/inventory openshift/playbook.yaml
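Since a typo in the inventory only surfaces mid-play, it can help to sanity-check the file first. The following pre-flight check is an assumed helper of my own (not part of the gist): it verifies that every host line before the first `[section]` header carries an `ansible_host=` address.

```shell
# Hypothetical pre-flight check (not in the gist): fail fast if any host
# line in the inventory is missing its ansible_host=<IP> assignment.
check_inventory() {
  awk '
    /^\[/ { in_vars = 1 }               # stop checking at the first [section] header
    !in_vars && NF {
      if ($0 !~ /ansible_host=[0-9.]+/) { print "bad line: " $0; bad = 1 }
    }
    END { exit bad }
  ' "$1"
}

# Usage: check_inventory openshift/inventory && \
#          ansible-playbook -i openshift/inventory openshift/playbook.yaml
```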

Install Service Assurance

Download the telemetry framework and install it.

cd ~
git clone https://github.com/redhat-nfvpe/telemetry-framework
cd telemetry-framework/
./scripts/bootstrap.sh
cd openshift-ansible/

Create the inventory

cat > inventory/telemetry.inventory <<EOF
# vim: set ft=yaml shiftwidth=2 tabstop=2 expandtab :
openshift-infra-node-0 ansible_host=192.168.24.20
openshift-infra-node-1 ansible_host=192.168.24.12
openshift-master-0 ansible_host=192.168.24.13
openshift-node-0 ansible_host=192.168.24.26
openshift-node-1 ansible_host=192.168.24.7
openshift-node-2 ansible_host=192.168.24.11

[OSEv3:children]
masters
nodes
etcd
glusterfs

[OSEv3:vars]
ansible_become=yes
debug_level=2

# install telemetry
sa_telemetry_install=true
sa_telemetry_namespace=sa-telemetry
sa_telemetry_node_labels=[{'name': 'blue','host': 'openshift-infra-node-0'},{'name': 'green', 'host': 'openshift-infra-node-1'}]


# storage
openshift_storage_glusterfs_namespace=glusterfs
openshift_storage_glusterfs_name=storage
openshift_storage_glusterfs_storageclass_default=true

# service broker
openshift_enable_service_catalog=true
openshift_service_catalog_image_version=v3.9

# main setup
openshift_disable_check=disk_availability,memory_availability,docker_image_availability,package_version
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_deployment_type=origin
openshift_release=v3.9
openshift_pkg_version=-3.9.0-1.el7.git.0.ba7faec
enable_excluders=false
openshift_clock_enabled=true

# Native HA
openshift_master_cluster_method=native
openshift_master_cluster_hostname=master.192.168.24.20.nip.io
openshift_master_cluster_public_hostname=console.192.168.24.20.nip.io

# hostname setup
openshift_hostname_check=true
openshift_master_default_subdomain=apps.192.168.24.20.nip.io

# ansible service broker
#ansible_service_broker_registry_user=dockerhub_username
#ansible_service_broker_registry_password=dockerhub_password
ansible_service_broker_registry_organization=ansibleplaybookbundle
ansible_service_broker_registry_whitelist=[".*-apb$"]
ansible_service_broker_local_registry_whitelist=[".*"]

[masters]
openshift-master-0

[etcd]
openshift-master-0

[nodes]
openshift-master-0 openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
openshift-node-0 openshift_node_labels="{'region': 'primary', 'zone': 'default', 'node': 'blue', 'application': 'sa-telemetry'}"
openshift-node-1 openshift_node_labels="{'region': 'primary', 'zone': 'default', 'node': 'green', 'application': 'sa-telemetry'}"
openshift-node-2 openshift_node_labels="{'region': 'primary', 'zone': 'default', 'node': 'blue', 'application': 'sa-telemetry'}"
openshift-infra-node-0 openshift_node_labels="{'region': 'infra', 'zone': 'default', 'node': 'green'}"
openshift-infra-node-1 openshift_node_labels="{'region': 'infra', 'zone': 'default', 'node': 'blue'}"

[glusterfs]
openshift-node-[0:2]

[glusterfs:vars]
glusterfs_devices=[ "/dev/vdb" ]
r_openshift_storage_glusterfs_use_firewalld=false
r_openshift_storage_glusterfs_firewall_enabled=true
openshift_storage_glusterfs_timeout=900
openshift_storage_glusterfs_wipe=true

[all:vars]
ansible_user=heat-admin
ansible_ssh_extra_args='-o UserKnownHostsFile=/dev/null  -o StrictHostKeyChecking=no'
ansible_become=yes
EOF
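Note that the cluster hostnames above use nip.io wildcard DNS: any name of the form `anything.<IP>.nip.io` resolves to the IP embedded in the name, so `master.192.168.24.20.nip.io` points at openshift-infra-node-0 without any DNS setup. A quick sketch of how the embedded address can be pulled back out with shell parameter expansion:

```shell
# nip.io hostnames embed the target IP: anything.<IP>.nip.io -> <IP>.
# Extract the embedded address to confirm the inventory hostnames point
# at openshift-infra-node-0 (192.168.24.20).
host=master.192.168.24.20.nip.io
ip=${host#*.}        # drop the leading label -> 192.168.24.20.nip.io
ip=${ip%.nip.io}     # drop the suffix        -> 192.168.24.20
echo "$ip"
```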

Deploy the Telemetry Framework

ansible-playbook -i inventory/telemetry.inventory ~/telemetry-framework/working/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i inventory/telemetry.inventory ~/telemetry-framework/working/openshift-ansible/playbooks/deploy_cluster.yml
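Once deploy_cluster.yml finishes, `oc get nodes` on openshift-master-0 should list all six nodes as Ready. The small helper below is an assumption of mine (not part of the gist) that tallies Ready nodes from that output, counting only an exact `Ready` status so `NotReady` nodes are excluded.

```shell
# Hypothetical post-deploy check (not in the gist): count nodes whose
# STATUS column is exactly "Ready" in `oc get nodes` output, skipping
# the header row.
count_ready() {
  awk 'NR > 1 && $2 == "Ready" { n++ } END { print n + 0 }'
}

# Usage (on openshift-master-0): oc get nodes | count_ready   # expect 6
```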