@deinarson
deinarson / AzCLI_PythonKeyVault.sh
Last active December 13, 2018 18:19
Azure's web portal is a punishment to use. This is me trying not to use it - but even then this does not work. I can't wait for Microsoft to make their examples work. If I can type this then I am sure they can find someone to place something like this in a doc.
#!/bin/bash
# This is meant to be used with a modified version of this
# https://github.com/Azure-Samples/app-service-msi-keyvault-python
API_KEYNAME=        # name of the secret to store in the vault
API_TOKEN=          # the secret value itself
vault_name=         # Key Vault name (must be globally unique)
vault_rg=           # resource group for the vault
vault_rg_location=  # Azure region, e.g. eastus
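A hedged sketch of the CLI steps that would consume these variables - the `az` subcommands are real, but the sequence is my guess at what the truncated gist goes on to do, and it assumes the variables above are filled in and `az login` has already run:

```shell
# Sketch only: create the resource group and vault, then store the token
# as a secret for the Python MSI sample to read back.
az group create --name "$vault_rg" --location "$vault_rg_location"
az keyvault create --name "$vault_name" --resource-group "$vault_rg" \
    --location "$vault_rg_location"
az keyvault secret set --vault-name "$vault_name" \
    --name "$API_KEYNAME" --value "$API_TOKEN"
```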
@deinarson
deinarson / remote_lvsync_example.sh
Last active October 20, 2016 20:07
I was asked to investigate snapshots on LVM. I still prefer ZFS over LVM, but if you only have LVM you at least have lvmsync (thanks to Matt for his fine work on https://github.com/mpalmer/lvmsync ).
#!/bin/bash
# this must exit on any error
set -e
initiate_pull_snapshots(){
ssh ${DEST_HOST} lvcreate --size 10 --snapshot --name ${LOGICAL_VOL}-current ${VOL_NAME}/${LOGICAL_VOL}
ssh ${DEST_HOST} dd if=/dev/${VOL_NAME}/${LOGICAL_VOL}-current bs=1M | dd of=/dev/${CLOUD_VOLUME}/${LOGICAL_VOL} bs=1M
}
# NB: The disk must have ${CHANGE_SIZE} space free to store all of the changes until the next snapshot
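The function above only does the initial full `dd` copy; the point of lvmsync is the incremental follow-up. A sketch using lvmsync's documented `lvmsync <snapshot device> <host>:<dest device>` form - `$LOCAL_HOST` is a hypothetical name for the machine holding `${CLOUD_VOLUME}`:

```shell
# Sketch: after the initial dd copy, ship only the blocks that changed.
# lvmsync reads the snapshot's copy-on-write table, so only deltas cross the wire.
# $LOCAL_HOST is an illustrative stand-in, not a variable from the script above.
ssh ${DEST_HOST} lvmsync /dev/${VOL_NAME}/${LOGICAL_VOL}-current \
    ${LOCAL_HOST}:/dev/${CLOUD_VOLUME}/${LOGICAL_VOL}
# Then drop and re-create the snapshot to start a fresh change window.
ssh ${DEST_HOST} lvremove -f ${VOL_NAME}/${LOGICAL_VOL}-current
```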
@deinarson
deinarson / 00.dedup_largefiles_rsync.sh
Last active October 11, 2016 20:28
dedup rsync transfers. This probably won't work, but might help if the hash has already been transferred
#!/bin/bash
#
# This script will probably only work with fixed-size DBs, and probably won't be ideal on any
# DB with compression on :\
#
# BUT!
# Here we take large files, split them into several pieces, and then sync them by hash.
# This should almost always be quicker than rsync by itself, since rsync
# sends everything
#
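The preview cuts off before the script body, but the comments describe the whole idea. A self-contained sketch of it, with illustrative file and directory names (a local `remote/` directory stands in for the already-transferred chunk store):

```shell
#!/bin/bash
# Hypothetical sketch of the split-and-dedup idea; names are illustrative.
set -e
workdir=$(mktemp -d)
# Stand-in "large file": two identical 1 MiB zero chunks plus one random chunk.
dd if=/dev/zero of="$workdir/bigfile.db" bs=1M count=2 2>/dev/null
dd if=/dev/urandom bs=1M count=1 2>/dev/null >> "$workdir/bigfile.db"
mkdir -p "$workdir/chunks" "$workdir/remote"  # "remote" = destination chunk store
# 1. Split the large file into fixed-size pieces.
split -b 1M -d "$workdir/bigfile.db" "$workdir/chunks/chunk."
# 2. Transfer a chunk only if its content hash is not already at the destination.
for chunk in "$workdir"/chunks/chunk.*; do
  hash=$(sha256sum "$chunk" | cut -d' ' -f1)
  [ -e "$workdir/remote/$hash" ] || cp "$chunk" "$workdir/remote/$hash"
done
# Three chunks were produced, but the two zero chunks share a hash,
# so only two unique blobs cross the wire.
ls "$workdir/remote" | wc -l
```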
Host proxy
    HostName jumphost.domain.com
    IdentityFile ~/.ssh/key
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
    User username
    LogLevel ERROR
    DynamicForward 18080
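A hedged usage sketch for the config above: `DynamicForward 18080` opens a local SOCKS5 proxy on that port, so traffic can be pushed through the jump host (the target URL is illustrative):

```shell
# Start the tunnel in the background: -N runs no remote command, -f forks after auth.
ssh -f -N proxy
# Route a request through the SOCKS5 proxy; --socks5-hostname also resolves
# DNS on the far side of the tunnel.
curl --socks5-hostname localhost:18080 https://internal.example.com/
```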
@deinarson
deinarson / get_latest_ami.py
Last active February 12, 2016 12:32
I have needed this for a while, and I would have thought someone would have done this by now, yet I have never seen any code that does it. Get the latest AMI. -- In this case we are hard-coding everything, but merely change the `parse` and `filters` phrases and you can grep for the latest of whatever you want.
#!/usr/bin/env python
import boto
import sys
from parse import *
import datetime
ec2 = boto.connect_ec2(debug=0)
image_meta = ec2.get_all_images(filters={'name': 'amzn-ami-hvm-*gp2'})
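The preview cuts off before the selection step, but once the names are filtered, one simple way to pick the latest is a lexicographic sort, since Amazon Linux AMI names embed the release version. A standalone illustration (sample names are real Amazon Linux naming, not live API output):

```python
# Amazon Linux AMI names embed the release (e.g. 2016.03.0), so for this
# naming scheme a plain string sort matches release order.
names = [
    "amzn-ami-hvm-2015.09.1.x86_64-gp2",
    "amzn-ami-hvm-2016.03.0.x86_64-gp2",
    "amzn-ami-hvm-2014.09.2.x86_64-gp2",
]
latest = sorted(names)[-1]
print(latest)  # amzn-ami-hvm-2016.03.0.x86_64-gp2
```

The same idea applies to the `image_meta` list above by sorting on each image's `name` attribute.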
############################################
# ECS and S3
############################################
resource "aws_elb" "s3-registry-elb" {
  name               = "s3-registry-elb"
  availability_zones = ["${split(",", var.availability_zones)}"]
  security_groups    = ["${aws_security_group.ecs.id}"]

takeover states

  1. Rolling take-over; swap out one node at a time
  • pro: seamless migration of node replacement
  • con: no rollback
  • depending on LB config can produce end-user hiccups (round-robin vs load-based)
  2. Full takeover, aka LB swap-out; using DNS to swap HA/LB
  • pro: quickly swap from cluster A to cluster B - e.g. a usage schema change or breaking change
  • con: DNS TTL lag time to changeover
  3. Populate to replace; add nodes until the LB starts using them, then kill old nodes
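On a classic AWS ELB (matching the `aws_elb` resource above), the populate-to-replace pattern can be sketched with real `aws elb` subcommands; the instance IDs are made up, and the load-balancer name is borrowed from the Terraform snippet:

```shell
# Sketch: populate-to-replace on a classic ELB (instance IDs are illustrative).
# 1. Attach the new nodes so the LB starts sending them traffic.
aws elb register-instances-with-load-balancer \
    --load-balancer-name s3-registry-elb \
    --instances i-0new00000000000a i-0new00000000000b
# 2. Once the new nodes pass health checks, drain the old ones.
aws elb deregister-instances-from-load-balancer \
    --load-balancer-name s3-registry-elb \
    --instances i-0old00000000000a i-0old00000000000b
```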

Password-Store

This example is to point out that password-store facilitates

  1. The encryption of files for one or a list of users
  2. The use of git
    • once you have initialized git with `pass git init`, everything is automatically tracked in the local git repo
    • once you have added a remote git repo, you still have to push manually when desired
  3. Auto-generation of passwords when creating a file
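The three points above can be sketched with pass's documented subcommands; the GPG identities, remote URL, and entry name are all illustrative:

```shell
# 1. Encrypt entries for one or more GPG identities (IDs are made up).
pass init alice@example.com bob@example.com
# 2. Track changes in git; every pass operation now auto-commits locally.
pass git init
pass git remote add origin git@git.example.com:secrets/pass.git
# 3. Generate a 20-character password, stored encrypted as a new file.
pass generate web/example.com 20
# Pushing to the remote stays manual.
pass git push -u origin master
```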