Chris McConnell cmcconnell1

@cmcconnell1
cmcconnell1 / create_rds
Created June 14, 2017 22:51
Ansible 2.3.1 create_rds module: creates an RDS instance with gp2 storage, since the stock Ansible rds module only supports magnetic or io1 with a minimum of 1,000 IOPS, which is overkill for this use case
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Ansible 2.3.1 create_rds module
# creates rds instance using gp2 disk
# Why? Because the Ansible rds module currently only supports magnetic (default) or io1 with a minimum of 1,000 IOPS.
# Note: the boto3 module emits output on stdout, which causes Ansible to treat the run as a failure.
# I tried using json and got errors that the boto3 stdout was not JSON serializable.
# I also tried following https://docs.python.org/2.7/howto/logging.html#configuring-logging-for-a-library
# using: logging.getLogger('foo').addHandler(logging.NullHandler())
# as well as numerous other methods to prevent boto3 from writing to stdout.
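A minimal sketch of the two ideas the comments above describe: forcing `StorageType: gp2` in the `create_db_instance` parameters, and attaching a `NullHandler` so boto3/botocore logging does not leak to stdout. All identifiers and values here are illustrative assumptions, not the gist's actual code.

```python
import logging

def build_rds_params(instance_id, size_gb, instance_class, engine="postgres"):
    """Return create_db_instance kwargs using gp2 (general purpose SSD).

    Hypothetical helper; argument names are assumptions for this sketch.
    """
    return {
        "DBInstanceIdentifier": instance_id,
        "AllocatedStorage": size_gb,
        "DBInstanceClass": instance_class,
        "Engine": engine,
        "StorageType": "gp2",  # the point of the module: not magnetic, not io1
    }

# Keep boto3/botocore log output away from stdout so Ansible doesn't
# misread it as a module failure (the approach the gist's comments mention).
for name in ("boto3", "botocore"):
    logging.getLogger(name).addHandler(logging.NullHandler())
    logging.getLogger(name).propagate = False

params = build_rds_params("my-db", 20, "db.t2.medium")
```

The `params` dict would then be passed to a boto3 RDS client's `create_db_instance(**params)` call.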
@cmcconnell1
cmcconnell1 / remote-ssh-kube-commands.sh
Created June 22, 2017 18:18
Run a remote command_list on all of the specified Kubernetes node roles in AWS EC2: controllers, etcd, or workers
#!/usr/bin/env bash
#
# Author: Chris McConnell
#
# Summary:
# Run a remote command_list on all of the specified Kubernetes node roles: controllers, etcd, or workers.
#
# Why:
# We have Kubernetes and want to run CM jobs/commands on the kube nodes, but CoreOS doesn't ship with Python etc., so we can't use CM tools here without hacking them up (which we shouldn't); shell always works.
# The plan is to keep building tools on top of this; we can take this script's output and slurp it into a database, feed it to Graylog, etc.
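The core of such a script is assembling one ssh invocation per node for the chosen role. A hedged sketch of that step (node IPs, user, and the command_list are made-up examples, not values from the gist):

```python
def build_ssh_commands(node_ips, command_list, user="core"):
    """Return one ssh argv per node, chaining the commands with '&&'.

    Illustrative only: the real script discovers node IPs from EC2 by role
    (controllers, etcd, workers); here they are passed in directly.
    """
    remote = " && ".join(command_list)
    return [
        ["ssh", "-o", "StrictHostKeyChecking=no", f"{user}@{ip}", remote]
        for ip in node_ips
    ]

cmds = build_ssh_commands(["10.0.1.5", "10.0.2.7"], ["uptime", "df -h /"])
# each argv could then be handed to subprocess.run(...) per node
```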
@cmcconnell1
cmcconnell1 / secure-kube-ssh-access.sh
Created June 22, 2017 20:01
Restrict SSH access on dynamically created Kubernetes AWS security groups (older kube-aws versions created SGs allowing 0.0.0.0/0 on port 22)
#!/usr/bin/env bash
# Why: within a few minutes of deploying a kube cluster, hackers start brute-forcing SSH.
# For some time, older kube-aws versions had the dynamic SG allow 0.0.0.0/0 on 22/ssh.
#
# This was used immediately after deploying fresh kube-aws clusters to restrict their ssh access to specified CIDR ranges.
# Usage:
# cd kube-aws-dir ; $path_to_script/secure-kube-ssh-access.sh
#
# Note: disregard errors like the one below; either the rule we want to remove doesn't exist (deis security groups) or the rules have already been applied by this script or another process.
# An error occurred (InvalidPermission.NotFound) when calling the RevokeSecurityGroupIngress operation: The specified rule does not exist in this security group.
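The "disregard that error" behaviour above can be made explicit by treating `InvalidPermission.NotFound` as success. A sketch of that idempotent revoke in Python (the gist itself uses the aws CLI; the client shape and rule fields here are assumptions):

```python
def revoke_ssh_ingress(ec2_client, group_id, cidr):
    """Revoke 22/tcp from cidr; treat a missing rule as already done.

    ec2_client is expected to expose revoke_security_group_ingress, as a
    boto3 EC2 client does; any client with that method works for the sketch.
    """
    try:
        ec2_client.revoke_security_group_ingress(
            GroupId=group_id,
            IpProtocol="tcp",
            FromPort=22,
            ToPort=22,
            CidrIp=cidr,
        )
        return "revoked"
    except Exception as err:
        # botocore raises ClientError with this code when the rule is absent
        if "InvalidPermission.NotFound" in str(err):
            return "already-absent"
        raise
```

Returning a status instead of raising makes the script safe to re-run after a partial apply.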
@cmcconnell1
cmcconnell1 / source-redis-cluster-vars.sh
Created September 29, 2017 23:07
Kubernetes Redis source script: sets current-shell env vars for all redis_cluster and redis_sentinel_cluster nodes and the sentinel service IP
#!/usr/bin/env bash
# author: cmcc
# Usage:
# source $0
#
# Info:
# Print current pods in both the redis_cluster and the sentinel_cluster
# echo "$redis_cluster" | xargs
# echo "$sentinel_cluster" | xargs
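The variable-setting step amounts to turning (pod, IP) pairs into shell `export` lines. A hypothetical sketch of that mapping (pod names and IPs are invented; the real script sources `kubectl` output):

```python
def build_cluster_vars(prefix, pods):
    """Return lines like 'export REDIS_CLUSTER_0=10.2.0.5' for each pod.

    pods is a list of (pod_name, pod_ip) tuples; the pod name is kept in
    the signature only to mirror what kubectl reports per pod.
    """
    return [
        f"export {prefix}_{i}={ip}" for i, (name, ip) in enumerate(pods)
    ]

lines = build_cluster_vars(
    "REDIS_CLUSTER", [("redis-0", "10.2.0.5"), ("redis-1", "10.2.1.9")]
)
```

In the shell script these lines would be evaluated in the current shell, which is why the gist must be `source`d rather than executed.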
@cmcconnell1
cmcconnell1 / idempotent-postgresql-rds-create-role.py
Last active November 9, 2021 12:58
Provides idempotent remote (RDS) PostgreSQL role/user creation from Python without CM modules, etc.
#!/usr/bin/env python3
# Overview:
# Provides idempotent remote RDS PostgreSQL (application) role/user creation from python for use outside of CM modules.
# Because PostgreSQL doesn't have something like 'CREATE ROLE IF NOT EXISTS', which would be nice.
# ref: https://stackoverflow.com/questions/8546759/how-to-check-if-a-postgres-user-exists
# Requirements:
# Python3 and psycopg2 module
# cmcc
import psycopg2
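The idempotency pattern from the StackOverflow reference above, sketched against a generic DB-API cursor so the logic can be read without a live database (function and argument names are assumptions for this sketch, not the gist's code):

```python
def ensure_role(cur, role_name, password):
    """CREATE ROLE only if it doesn't already exist; True if created.

    cur is any DB-API cursor (e.g. from a psycopg2 connection).
    """
    # pg_roles is the catalog the SO answer suggests checking
    cur.execute("SELECT 1 FROM pg_roles WHERE rolname = %s", (role_name,))
    if cur.fetchone():
        return False  # role exists; nothing to do
    # identifiers can't be parameterized, so role_name must be trusted input
    cur.execute(
        "CREATE ROLE {} LOGIN PASSWORD %s".format(role_name), (password,)
    )
    return True
```

Running it twice is safe: the second call hits the `pg_roles` check and returns without issuing DDL.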
@cmcconnell1
cmcconnell1 / dist-upated-tls-certs-etcd-nodes.sh
Created April 26, 2018 19:30
Distributes updated x509 tls certs to etcd2 kube nodes and resolves outdated cert problems
#!/usr/bin/env bash
#
# Summary:
# Distributes updated x509 TLS certs and resolves outdated-cert problems, which effectively kill your kube cluster
# ref: https://github.com/kubernetes-incubator/kube-aws/issues/1132
# ref: https://github.com/kubernetes-incubator/kube-aws/issues/1057
#
# NOTES: Ensure this is the correct process for your etcd2 kube cluster before using.
# Test on a dev/test cluster first.
# Use at own risk.
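Per node, the distribution step boils down to an scp of the new certs followed by a remote service restart. An illustrative sketch only: the destination path and the systemd unit name are assumptions, not values taken from the gist, so verify them against your own cluster first.

```python
def cert_refresh_commands(node_ip, cert_files, user="core",
                          dest="/etc/etcd2/ssl/"):
    """Return the scp argv plus the remote restart argv for one etcd node.

    dest and the etcd2 unit name below are assumptions for this sketch.
    """
    scp = ["scp"] + list(cert_files) + [f"{user}@{node_ip}:{dest}"]
    restart = ["ssh", f"{user}@{node_ip}", "sudo systemctl restart etcd2"]
    return scp, restart
```

As the notes above say: confirm this matches your etcd2 cluster's layout and test on a dev cluster before touching production.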
@cmcconnell1
cmcconnell1 / kubernetes-cluster-autoscaler-pod-logs-pr#1268
Created May 1, 2018 01:06
Fresh kube-aws cluster deploy: third restart of the cluster-autoscaler pod, with modified code from https://github.com/kubernetes-incubator/kube-aws/pull/1268
kk logs cluster-autoscaler-59998c8cbf-9hqwq
I0501 01:00:45.176755 1 flags.go:52] FLAG: --address=":8085"
I0501 01:00:45.177259 1 flags.go:52] FLAG: --alsologtostderr="false"
I0501 01:00:45.177275 1 flags.go:52] FLAG: --application-metrics-count-limit="100"
I0501 01:00:45.177280 1 flags.go:52] FLAG: --azure-container-registry-config=""
I0501 01:00:45.177286 1 flags.go:52] FLAG: --balance-similar-node-groups="false"
I0501 01:00:45.177290 1 flags.go:52] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
I0501 01:00:45.177294 1 flags.go:52] FLAG: --cloud-config=""
I0501 01:00:45.177386 1 flags.go:52] FLAG: --cloud-provider="aws"
I0501 01:00:45.177390 1 flags.go:52] FLAG: --cloud-provider-gce-lb-src-cidrs="209.85.204.0/22,130.211.0.0/22,35.191.0.0/16,209.85.152.0/22"
@cmcconnell1
cmcconnell1 / virtualbox-driver-vpn-disabled-minishift.out
Created June 26, 2018 19:40
minishift start fails with the VirtualBox driver but works with xhyve if the VPN is inactive
-- minishift version: v1.20.0+53c500a
-- Starting profile 'minishift'
-- Check if deprecated options are used ... OK
-- Checking if https://github.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.9.0' is valid ... OK
-- Checking if requested OpenShift version 'v3.9.0' is supported ... OK
-- Checking if requested hypervisor 'virtualbox' is supported on this platform ... OK
-- Checking if VirtualBox is installed ... OK
-- Checking the ISO URL ... OK
-- Downloading OpenShift binary 'oc' version 'v3.9.0'
@cmcconnell1
cmcconnell1 / cluster.yaml
Created September 14, 2018 18:50
kube-aws latest version v0.10.2 cluster.yaml: kube2iam CrashLoopBackOff error: level=fatal msg="route ip+net: no such network interface"
clusterName: opsinfra
s3URI: s3://my-bucket-kube-aws-us-west-1/
releaseChannel: stable
amiId: "ami-0a86d340ea7fde077"
disableContainerLinuxAutomaticUpdates: true
apiEndpoints:
  - # The unique name of this API endpoint used to identify it inside CloudFormation stacks
    name: default
    dnsName: opsinfra.myfoo.com
    loadBalancer:
@cmcconnell1
cmcconnell1 / kube-aws-v0.10.2-autoscaler-node-pools-kiam-cluster.yaml
Created September 24, 2018 17:00
Proposal for the kube-aws docs to include working examples of kube-aws cluster.yaml files with working node pools, autoscaler, kiam, etc.
# this is an example offered as proposal to include in kube-aws docs/examples
# ref: https://github.com/kubernetes-incubator/kube-aws/issues/1050
clusterName: opsinfra
s3URI: s3://my-bucket-kube-aws-us-west-1/
releaseChannel: stable
amiId: "ami-0a86d340ea7fde077"
disableContainerLinuxAutomaticUpdates: true
apiEndpoints:
  - # The unique name of this API endpoint used to identify it inside CloudFormation stacks
    name: default