
csiens / RKE-TF-installer.sh
Last active Feb 24, 2020
Wrapper to set up an RKE cluster with Tungsten Fabric as the CNI
#!/usr/bin/env bash
#
# Run this as root on the first master node. You must be able to ssh as the root user to each node via ssh keys
# installed at /root/.ssh/ on the first master node. The public ssh key MUST be in the /root/.ssh/authorized_keys
# file on ALL nodes including the first master. Use "ssh-keygen" to create an ssh keypair and use "ssh-copy-id NODE_IP"
# to distribute the public key to ALL nodes.
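# For example (hypothetical node IPs, not from this gist), run on the first master:
#   ssh-keygen -t ed25519 -N "" -f /root/.ssh/id_ed25519
#   # copy the key to ALL nodes, including the first master itself
#   for node in 10.0.0.10 10.0.0.11 10.0.0.12; do ssh-copy-id -i /root/.ssh/id_ed25519.pub root@"$node"; done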
#
# The following commands are used to prepare a generic EC2 or GCE instance and run the script.
# # enter an interactive sudo session
# sudo -i
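The preview cuts off here. A plausible continuation of the instance-prep commands, borrowing the package set from the RKE/Tungsten Fabric gist below (an assumption; the full script is not shown):

# disable swap, which Kubernetes requires
swapoff -a
# install time sync and Docker from the distro repositories
apt-get update && apt-get install -y ntp docker.io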
csiens / RKE_TungstenFabric_install.sh
Last active Jan 26, 2020
Script to install an RKE cluster with N masters and N workers, with Tungsten Fabric as the CNI
#!/bin/sh
#
# This script sets up an RKE Kubernetes cluster with Tungsten Fabric as the CNI, with one master and two worker nodes running Ubuntu 18.04.
#
# Run this as root from the initial control plane node.
#
# You will need to create an ssh keypair, place it in /root/.ssh/, and then use ssh-copy-id to distribute the public key to all other nodes.
#
# Set the control_plane_ip and worker_ip variables, and update the embedded cluster.yml with the correct IP addresses for your environment.
#
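A minimal sketch of the variable settings the header asks for (the example addresses are hypothetical, and the full script may split the workers across more than one variable):

# replace with the addresses of your own nodes
control_plane_ip="192.168.1.10"
worker_ip="192.168.1.11 192.168.1.12"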
csiens / AWS_Ubuntu_Nginx_Packer.json
{
  "variables": {
    "ansible_user": "ubuntu",
    "name": "ubuntu_nginx_packer",
    "source_ami": "ami-0d5d9d301c853a04a",
    "access_key": "",
    "secret_key": "",
    "region": "us-east-2"
  },
  "builders": [{
csiens / mood_ansible_playbook.yml
Created Jan 23, 2020
Mood music-on-hold updater Python script and Ansible playbook
---
- hosts: all
  become: true  # 'sudo: true' is deprecated; 'become' is the current keyword
  tasks:
    - block:
        - name: ping
          ping:
        - name: delete old moh files
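The preview is truncated. A typical way to run the playbook against an inventory (the inventory file name is an assumption):

ansible-playbook -i inventory.ini mood_ansible_playbook.yml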
csiens / Packer docker builder with ansible provisioner demo playbook.yml
Last active Jan 18, 2020
Packer docker builder with ansible provisioner demo
---
- hosts: all
  become: true  # 'sudo: true' is deprecated; 'become' is the current keyword
  tasks:
    - name: install nginx for ubuntu
      apt: name=nginx state=latest
      when: ansible_distribution == 'Debian' or ansible_distribution == 'Ubuntu'
    - name: install nginx for centos
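      # A plausible completion of this task, mirroring the apt task above
      # (an assumption; the rest of the gist is not shown in the preview):
      yum: name=nginx state=latest
      when: ansible_distribution == 'CentOS'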
csiens / Rancher Kubernetes Engine with TungstenFabric on Ubuntu.txt
Last active Jan 19, 2020
Rancher Kubernetes Engine with TungstenFabric on Ubuntu
1) Install Ubuntu on nodes and set hostname and IP on all nodes
2) Prepare the nodes. Run these commands as the root user on all nodes:
# turn off swap
swapoff -a
# install packages
apt-get install -y ntp docker.io
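The preview ends here. One step the full walkthrough likely includes is making the swap change survive reboots (an assumption, not shown above):

# comment out swap entries so swap stays off after a reboot
sed -i '/ swap / s/^/#/' /etc/fstab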
csiens / Tf-devstack on Ubuntu 16.04 with KVM and linuxbridge.txt
Last active Sep 29, 2019
Tf-devstack on Ubuntu 16.04 with KVM and linuxbridge
This will walk you through setting up KVM and a bridged network connection on an Ubuntu 16.04 host and installing a Tf-devstack guest VM.
This example uses the 192.168.1.0/24 subnet, with 192.168.1.20 for the Ubuntu host and 192.168.1.21 for the CentOS guest.
1) Install the needed virtualization and network bridge packages with
sudo apt-get install qemu-kvm libvirt-bin virtinst bridge-utils virt-viewer
2) Next, make sure the CPU and motherboard have virtualization extensions enabled with
sudo kvm-ok
3) Now we must set up a network bridge for our KVM guest. First, back up the current network config with
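The preview truncates before the command. A plausible backup step, plus a minimal bridge stanza for ifupdown, which Ubuntu 16.04 uses (the eth0 interface name and gateway address are assumptions):

sudo cp /etc/network/interfaces /etc/network/interfaces.bak

# hypothetical /etc/network/interfaces stanza bridging the host NIC
auto br0
iface br0 inet static
    address 192.168.1.20
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports eth0
    bridge_stp off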
csiens / kubespray-tf.sh
Created Jul 30, 2019
Install Kubernetes and Tungsten Fabric with Kubespray on Ubuntu 16.04 LTS or CentOS 7
#!/bin/bash
# RUN THIS AS ROOT! This script assumes you have installed Ubuntu 16.04 LTS or CentOS 7 on your nodes and can ssh to each node as root.
# scp your ssh keys to /root/.ssh/ and scp kubespray-tf.sh to /root/ on the first master.
# Edit the k8s_api_ip and master_ip_list variables, then edit inventory.ini to match your environment.
# Edit the pod and service CIDR or the k8s version in the k8s-cluster.yml file below to match your needs.
# Set this to the IP of your first k8s master. This will eventually need to be the IP of an HAProxy pointing at the master_ip_list for the K8s API.
k8s_api_ip="10.9.8.21"
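The preview stops at the first variable. A plausible shape for the companion variable the comments mention (the extra addresses are hypothetical):

# hypothetical: the masters an HAProxy in front of the K8s API would balance across
master_ip_list="10.9.8.21 10.9.8.22 10.9.8.23"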