GitHub Gists: Kendrick Coleman (kacole2)
@kacole2
kacole2 / capv.sh
Created April 8, 2020 19:51
QuickStart for using the Cluster API provider for VMware vSphere (CAPI with CAPV)
#!/bin/bash
# CAPV Bootstrapping to deploy a management-cluster and a single workload cluster (with any number of workers) with Calico.
# This script has been tested on Ubuntu 18.04 with Cluster API 0.3, kind 0.7, and Ubuntu and Photon base images built for Kubernetes v1.17.3.
# This assumes there are DHCP addresses available for the clusters being deployed in the vSphere environment. Static IPs are not supported with this quickstart.
# WE NEED A GITHUB TOKEN!! THIS BUG IS BEING WORKED ON. Get yours at https://github.com/settings/tokens
export GITHUB_TOKEN=<YOUR TOKEN>
# An SSH Public Key is Needed.
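# (Preview truncated here.) A minimal sketch of how the SSH key export likely continues;
# the key path below is an assumption, and VSPHERE_SSH_AUTHORIZED_KEY is the variable name
# the CAPV quickstart templates use (assumed, not visible in this preview):
ssh-keygen -t rsa -b 4096 -f ~/.ssh/capv -N ""
export VSPHERE_SSH_AUTHORIZED_KEY="$(cat ~/.ssh/capv.pub)"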
@kacole2
kacole2 / harbor.sh
Last active September 7, 2024 15:14
Quick Start Harbor Installation Script on Ubuntu 18.04
#!/bin/bash
#Harbor on Ubuntu 18.04
#Prompt the user to choose whether the install should use the IP address or the fully qualified domain name (FQDN) of the Harbor server
PS3='Would you like to install Harbor based on IP or FQDN? '
select option in IP FQDN
do
case $option in
IP)
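      # (Preview truncated here.) A sketch of how this select/case block typically completes;
      # HARBOR_HOST and the address-detection commands are assumptions, not from the gist:
      HARBOR_HOST=$(hostname -I | awk '{print $1}')
      break;;
    FQDN)
      HARBOR_HOST=$(hostname -f)
      break;;
    *) echo "Invalid option";;
  esac
done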
@kacole2
kacole2 / ST_Anything_Multiples_Thingshield_DoorsOnly_CLEAN.ino
Created September 30, 2019 20:28
ST_Anything example for door and window contact sensors
//******************************************************************************************
// File: ST_Anything_Multiples_Thingshield_570Doors.ino
// Authors: Dan G Ogorchock & Daniel J Ogorchock (Father and Son)
//
// Summary: This Arduino Sketch, along with the ST_Anything library and the revised SmartThings
// library, demonstrates the ability of one Arduino + SmartThings Shield to
// implement a multi input/output custom device for integration into SmartThings.
// The ST_Anything library takes care of all of the work to schedule device updates
// as well as all communications with the SmartThings Shield.
//
@kacole2
kacole2 / 1a-steps.md
Last active April 13, 2021 14:50
Kubernetes 1.14.1 Installation using kubeadm on vSphere with CentOS7

Steps to Install Kubernetes on CentOS7 with Kubeadm and vSphere

  1. On the master node: edit the vsphere.conf block inside kubeadm-master.sh to match your environment, then copy kubeadm-master.sh to the master node and run:
sudo chmod u+x kubeadm-master.sh
sudo ./kubeadm-master.sh
  2. On each worker node, copy kubeadm-worker.sh:
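The worker-side commands are cut off in this preview; presumably they mirror the master-node pattern (an assumption, not from the gist):
sudo chmod u+x kubeadm-worker.sh
sudo ./kubeadm-worker.sh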
@kacole2
kacole2 / IstioOnPKS.md
Last active July 11, 2019 02:01
How to Install Istio with Helm on PKS and VMware Cloud PKS

How to Install Istio with Helm on PKS and VMware Cloud PKS

The following guide assumes a newly created Kubernetes cluster that will use Istio for its service mesh layer. It does not cover retrofitting Istio into a cluster that already has pods running.

Pre-Requisites

These prerequisites list the resources and software versions required.

PKS

  • PKS 1.2 or later
  • NSX-T 2.3 or later
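The preview ends at the prerequisites. For context, a Helm 2-era Istio install of the kind this guide walks through looks roughly like the following; the chart paths come from the Istio 1.1/1.2 release bundle, and the exact flags are assumptions, not from the gist:

$ helm install install/kubernetes/helm/istio-init --name istio-init --namespace istio-system
$ helm install install/kubernetes/helm/istio --name istio --namespace istio-system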
@kacole2
kacole2 / README.md
Last active September 16, 2019 07:49
Installing OpenFaaS on Pivotal Container Service (PKS) with Helm

The OpenFaaS documentation for faas-netes gives a clear explanation of how to install with Helm, but Pivotal Container Service (PKS) has two caveats: provisioned Kubernetes clusters are non-RBAC but token backed, and LoadBalancer services depend on NSX-T. This is a quick streamlining of that documentation that adds out-of-the-box support for PKS.

$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
$ kubectl -n kube-system create sa tiller && kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
$ helm init --skip-refresh --upgrade --service-account tiller
$ kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml
$ helm repo add openfaas https://openfaas.github.io/faas-netes/
$ helm repo update && helm upgrade openfaa
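The last command is truncated in this preview. The full invocation from the faas-netes chart docs of that era looked roughly like this; the flag values shown (LoadBalancer for NSX-T, the openfaas-fn function namespace) are assumptions for a PKS setup, not from the gist:

$ helm upgrade openfaas --install openfaas/openfaas \
    --namespace openfaas \
    --set functionNamespace=openfaas-fn \
    --set serviceType=LoadBalancer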
@kacole2
kacole2 / install.sh
Last active February 26, 2019 08:51
Install Harbor via Bash
#!/bin/bash
# This script will pre-install everything needed to install Harbor on CentOS 7
# It will install Harbor using the Online Version which pulls images from DockerHub
# Python & Docker Pre-reqs
yum install gcc openssl-devel bzip2-devel wget yum-utils device-mapper-persistent-data lvm2 -y
# Install Python 2.7.15
cd /usr/src
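# (Preview truncated here.) The usual source-install steps for Python 2.7.15 that would
# follow cd /usr/src (assumed continuation, not from the gist):
wget https://www.python.org/ftp/python/2.7.15/Python-2.7.15.tgz
tar xzf Python-2.7.15.tgz
cd Python-2.7.15
./configure --enable-optimizations
make altinstall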
@kacole2
kacole2 / README.md
Last active April 25, 2020 11:51
EBS and EFS Volumes with Docker For AWS using REX-Ray

EBS and EFS Volumes with Docker For AWS using REX-Ray

This procedure deploys Docker for AWS and walks through the steps to build REX-Ray containers. Expect some hiccups, because Docker for AWS provisions resources across availability zones (AZs): workers/agents are spread across AZs (not regions), so a host failure can cause Swarm to restart a container in a different AZ. If a container is restarted in a different AZ, REX-Ray's preemption mechanism will not work, because it no longer has access to the volume, which remains in the former AZ.

Deploy Docker for AWS.

Launch Stack

SSH into one of your Docker Manager Nodes
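Once on a manager node, installing the REX-Ray EBS plugin typically looks like the following; rexray/ebs, REXRAY_PREEMPT, and EBS_REGION come from the plugin's documentation, while the region value and volume options here are assumptions:

$ docker plugin install rexray/ebs REXRAY_PREEMPT=true EBS_REGION=us-east-1
$ docker volume create -d rexray/ebs --name test-vol --opt size=10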

@kacole2
kacole2 / not-working.md
Last active January 11, 2017 21:12
Testing REX-Ray with Docker 1.13 Plugins

Using REX-Ray with Docker 1.13 Plugins

Current status: FAILING

The rexray.sock file is never created under the /var/run/docker/plugins directory. However, everything in /var/run/rexray and /var/run/libstorage seems to be in working order. /var/log/rexray/rexray.log has the error:

time="2017-01-11T20:46:34Z" level=panic msg="error initializing instance ID cache" inner.lsx="/var/lib/libstorage/lsx-linux" inner.args=[scaleio instanceID] inner.inner.Stderr=[101 114 114 111 114 58 32 101 114 114 111 114 32 103 101 116 116 105 110 103 32 105 110 115 116 97 110 99 101 32 73 68 58 32 112 114 111 98 108 101 109 32 103 101 116 116 105 110 103 32 115 100 99 32 103 117 105 100 10]
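For readability, the inner.inner.Stderr byte array is plain ASCII and decodes to "error: error getting instance ID: problem getting sdc guid", i.e. the libStorage executor cannot read the ScaleIO SDC GUID on this host.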

Stand up a functional ScaleIO environment