
@cmcconnell1
Created June 22, 2017 18:18
Run a remote command_list on all nodes of a given type in the specified Kubernetes cluster in AWS EC2: controllers, etcd, or workers
#!/usr/bin/env bash
#
# Author: Chris McConnell
#
# Summary:
# Run remote command_list on all of the specified kubernetes clusters: controllers, etcd, or workers.
#
# Why:
# We have Kubernetes and want to run CM jobs/commands on the kube nodes, but CoreOS doesn't ship with Python etc., so we can't use CM tools here without hacking them up (which we shouldn't); shell always works.
# We plan to keep building tools on this: the output of this script can be slurped into a database, fed to Graylog, etc.
# Initially we'll use this script to keep track of the Kubernetes/docker ps bug on the workers.
# Requirements:
# kube-aws provisioned Kubernetes clusters in AWS; otherwise modify the 'for private_ip' loop below with the appropriate logic to grab the tag and metadata from your cloud provider.
#
# Usage Examples:
# $0 -n foo-dev -c "systemctl status docker.service" -t worker # show docker status on all workers
# $0 -n foo-stage -c "etcdctl cluster-health" -t etcd # etcdctl check on all etcd nodes in cluster
# $0 -n foo-prod -c "hostname; docker ps | wc -l" -t worker # how many docker processes are running on your workers--helpful with docker bugs freezing nodes
#
# for kube_cluster in $cluster_list; do $0 -n $kube_cluster -c "docker ps | wc -l" -t worker; done # check docker ps on all clusters' workers
usage() {
  printf '\nUsage: %s [-n <cluster_name>] [-c <command_list>] [-t <target_type> (controller, etcd, or worker)]\n\n' "$0"
  exit 1
}
while getopts ":n:c:t:" o; do
  case "${o}" in
    n)
      n=${OPTARG}
      export cluster_name=${OPTARG}
      ;;
    c)
      c=${OPTARG}
      export command_list=${OPTARG}
      ;;
    t)
      t=${OPTARG}
      export target_type=${OPTARG}
      ;;
    *)
      usage
      ;;
  esac
done
shift $((OPTIND-1))
# Bail and bark if we don't provide cluster_name, command_list, and target_type
if [ -z "${n}" ] || [ -z "${c}" ] || [ -z "${t}" ]; then
  usage
fi
# target_type must be one of three options, else bail and bark:
[ "$target_type" = "controller" ] || [ "$target_type" = "etcd" ] || [ "$target_type" = "worker" ] || usage
# Run the remote ssh command_list on every matching node in the cluster
for private_ip in $(aws ec2 describe-instances --query 'Reservations[*].Instances[*].[InstanceId,Tags[?Key==`Name`].Value|[0],State.Name,PrivateIpAddress,PublicIpAddress]' --output text | column -t | grep -i "$cluster_name" | grep 'running' | grep "$target_type" | awk '{print $4}');
do
  printf '\n=======================\nPRIV IP: %s\n' "$private_ip"
  ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no core@"$private_ip" "${command_list}"
  echo ""
done
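The grep/awk pipeline above assumes the column order produced by the `--query` expression (InstanceId, Name tag, state, private IP, public IP). A minimal sketch against canned sample lines (hypothetical instance data, not real AWS output) shows how the filters narrow down to the private IPs of running workers:

```shell
#!/usr/bin/env bash
# Hypothetical sample of the `--output text` columns selected by the --query above:
# InstanceId  Name-tag                     state    private-ip  public-ip
sample='i-0abc123  foo-dev-kube-aws-worker      running  10.0.1.10  54.1.2.3
i-0def456  foo-dev-kube-aws-controller  running  10.0.1.20  54.1.2.4
i-0ghi789  foo-dev-kube-aws-worker      stopped  10.0.1.30  None'

cluster_name=foo-dev
target_type=worker

# Same pipeline as the script: align columns, match the cluster name,
# keep only running instances of the target type, print the private-IP column.
echo "$sample" | column -t | grep -i "$cluster_name" | grep 'running' \
  | grep "$target_type" | awk '{print $4}'
# Prints: 10.0.1.10  (the stopped worker and the controller are filtered out)
```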