Useful scripts, patterns and stuff

Bash programming patterns

Create an array of "dictionaries" ("objects", "hashes") and iterate over it

DIR_TO_SHOW="$HOME"

ITEMS=(
    " ITEM_NAME='files'  ; ITEM_DESCRIPTION='Show me the files!'             ; ITEM_CMD='ls -t ${DIR_TO_SHOW}'  "
    " ITEM_NAME='day'    ; ITEM_DESCRIPTION='What day is it, $(whoami)?'     ; ITEM_CMD='date'                  "
    " ITEM_NAME='uptime' ; ITEM_DESCRIPTION='How long have I been working?!' ; ITEM_CMD='uptime'                "
)

for I in "${!ITEMS[@]}" ; do
    eval "${ITEMS[$I]}"

    echo "[$((I + 1))/${#ITEMS[@]}] Processing $ITEM_NAME: $ITEM_DESCRIPTION"

    $ITEM_CMD
done
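
On bash 4+ (not the stock macOS bash 3), an associative array per item avoids the eval step; a minimal sketch of the same idea:

declare -A ITEM_FILES=( [name]='files' [description]='Show me the files!' [cmd]="ls -t ${DIR_TO_SHOW}" )

echo "Processing ${ITEM_FILES[name]}: ${ITEM_FILES[description]}"
${ITEM_FILES[cmd]}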

Closures in bash?

#!/usr/bin/env bash

set -e

function multiplyBy() {
	X="$1"

	cat <<-EOF
		Y="\$1"
		echo "$X * \$Y = \$(( $X * \$Y ))"
	EOF
}

function callFunc() {
	CODE="$1"
	shift

	eval "$CODE"
}

MULT_BY_2=`multiplyBy 2`
MULT_BY_4=`multiplyBy 4`

callFunc "$MULT_BY_2" 10
callFunc "$MULT_BY_4" 10

Bash configuration and other things to make CLI usage better

Some helpers for bashrc

Enter a clean (unmodified) environment with the original bash

I use it on macOS for testing scripts that should work on a clean macOS system (which ships bash 3)

In .bashrc:

# First line in file, store original env
export CLEAN_ENV="$(env | xargs)"

# [...] Do whatever customizations you normally do [...]
# Sometimes I need to get rid of any customizations (GNU coreutils especially)
# for testing stuff that needs to be portable (usually CI automation).
# Also use the built-in old-as-hell bash :/
function clean-env() {
  env -i $CLEAN_ENV 'PS1=\[\e[0;31m\](clean)\[\e[0m\] \[\e[0;33m\]\u\[\e[0;32m\]@\h\[\e[0m\] \[\e[0;34m\]\W\[\e[0m\] $ ' /bin/bash --norc --noprofile
}

Make ls better

By using aliases

alias ls='ls --color=auto -N'
alias ll='ls -lh --group-directories-first'
alias l='ls -lAh --group-directories-first'
alias lst='ls -lht'
alias lss='ls -lhS'
alias l.='l -d .*' 
alias la='l'
alias lla='l'

And customizing the date/time format

export TIME_STYLE="+%Y-%m-%d %H:%M"

Bash snippets for various things

Generic

Exit if not running as root

! (( ${EUID:-0} || $(id -u) )) || { echo "Run this script as root / with sudo!" >&2; exit 1; }

Check if PID is running

function is_pid_running() {
    ps -p "$1" > /dev/null 2>&1 && echo "yes" || echo "no"
}
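
A roughly equivalent check is to send signal 0 (works only for processes you are allowed to signal); here $PID stands for the PID to test:

kill -0 "$PID" 2> /dev/null && echo "yes" || echo "no"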

Format mount output for better readability

Especially useful with complicated opts (overlayfs! docker!).

mount | sed -e 's/$/\n/g' | sed -e 's/(\([^)]\+\))/(\n \1\n)/g' | sed -e 's/,/,\n /g' | sed -e 's/:/:\n  /g' | sed -e 's/=/=\n  /g'

The same but in a one-line function that allows grepping...

function mntgrep() { mount | grep "$@" | sed -e 's/$/\n/g' | sed -e 's/(\([^)]\+\))/(\n  \1\n)/g' | sed -e 's/,/,\n  /g' | sed -e 's/:/:\n    /g' | sed -e 's/=/=\n    /g'; }

Example output

overlay on /var/lib/docker/overlay2/3437d9dbad97a86246dd19286d642ea99150625bb636ba699ff8780bdfa83621/merged type overlay (
 rw,
 relatime,
 lowerdir=
  /var/lib/docker/overlay2/l/AU2SFSYBUK7MAXSPLMTE7WDPCL:
  /var/lib/docker/overlay2/l/WYLIWNY3O6ORPQRGRW2WV5N45S:
  /var/lib/docker/overlay2/l/OSTUXN4WFDPDMSCR2WAJIUWRHL:
  /var/lib/docker/overlay2/l/KAX5BB3XQFMRKHSYOQT6NMYYBZ:
  /var/lib/docker/overlay2/l/OTUEIXT3M7PINGAD5KNCCWLRW7:
  /var/lib/docker/overlay2/l/GGANPDIO7LWJAMPXFV65PI7CBB:
  /var/lib/docker/overlay2/l/EZLAO3H6BUP66KALK77SFP6GMG:
  /var/lib/docker/overlay2/l/R4TPZEFLP24Q2FCSSTFZBMZIKC,
 upperdir=
  /var/lib/docker/overlay2/3437d9dbad97a86246dd19286d642ea99150625bb636ba699ff8780bdfa83621/diff,
 workdir=
  /var/lib/docker/overlay2/3437d9dbad97a86246dd19286d642ea99150625bb636ba699ff8780bdfa83621/work
)

shm on /var/lib/docker/containers/9f35262ee7bc2e0781e55c1bf8755cbca9190d92675073305416f20d4cf8be91/mounts/shm type tmpfs (
 rw,
 nosuid,
 nodev,
 noexec,
 relatime,
 size=
  65536k
)

nsfs on /run/desktop/docker/netns/d2cbd479cc09 type nsfs (
 rw
)

dd copy an OS image with a progress bar

Install pv first

dd bs=4m if=Downloads/Fedora-Minimal-29-1.2.aarch64.raw | pv -s `stat -f%z Downloads/Fedora-Minimal-29-1.2.aarch64.raw` | dd bs=4m of=/dev/disk6
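
The command above uses BSD/macOS conventions (stat -f%z, lowercase bs=4m). On Linux with GNU coreutils the equivalent should look roughly like this (/dev/sdX standing in for the target device):

dd bs=4M if=Downloads/Fedora-Minimal-29-1.2.aarch64.raw | pv -s $(stat -c%s Downloads/Fedora-Minimal-29-1.2.aarch64.raw) | dd bs=4M of=/dev/sdX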

SSH

Start a global ssh agent

export SSH_AGENT_ENV="$HOME/.ssh/agent-env"
export SSH_AUTH_SOCK="$HOME/.ssh/auth-sock"

if [[ ! -f "$SSH_AGENT_ENV" || ! -S "$SSH_AUTH_SOCK" ]] ; then
    ssh-agent -s -a "$SSH_AUTH_SOCK" | sed 's/^echo/#echo/' > "$SSH_AGENT_ENV"
    chmod 600 "$SSH_AGENT_ENV" "$SSH_AUTH_SOCK"

    ssh-add -q -A -K
    ssh-add -q ~/.ssh/id_rsa
    ssh-add -q ~/.ssh/id_rsa_work
else
    source "$SSH_AGENT_ENV"
fi

Fast Directory Copy Over SSH by tarring on the fly

#!/usr/bin/env bash

set -e

if [[ $# -lt 2 ]] ; then
   echo "Usage: $0 <local source path> <ssh-host>[:<remote dest path>] [extra ssh args]"
   exit 1
fi


SRCDIR="$1"
DEST="$2"
shift 2
SSHARGS="$@"

HOST=`echo "$DEST" | cut -d':' -f1`
DESTDIR=`echo "$DEST" | cut -s -d':' -f2`

[ -z "$DESTDIR" ] && DESTDIR="."

echo -e "On the fly ssh tar copying $SRCDIR -> $DESTDIR on $HOST...\n\n"

tar -C "$(dirname "$SRCDIR")" -zc "$(basename "$SRCDIR")" | ssh "$HOST" $SSHARGS "tar -zxv -C '$DESTDIR'"
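
Assuming the script above is saved as ssh-tar-copy (the name is up to you), usage looks like:

./ssh-tar-copy ./my-project deploy-host:/var/www -p 2222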

Remote file editing with a local GUI (TextMate via rmate)

#!/usr/bin/env bash

pgrep -xq -- "TextMate" || open -a TextMate -gj -F

ssh "$@" '[[ -e "$HOME/.local/bin-rmate/rmate" ]] || (echo "Installing rmate..." && mkdir -p ~/.local/bin-rmate && curl -s https://raw.githubusercontent.com/sclukey/rmate-python/master/bin/rmate -o ~/.local/bin-rmate/rmate && chmod +x ~/.local/bin-rmate/rmate && echo -e export PATH=\"\$HOME/.local/bin-rmate:\$PATH\" >> .profile >> .zprofile)'
ssh -R 52698:localhost:52698 -A "$@"

Network

Get IP for host (without using dig, just nslookup, OSX)

nslookup loft.creativestyle.pl | awk -F': ' 'NR==6 { print $2 } '

Check if we're connected to VPN (OSX)

ifconfig ppp0 > /dev/null 2>&1 && echo "VPN is connected"

HTTP

Continuously monitor a URL's status code in a terminal tab

while true; do echo "$(date +%H:%M:%S) $(curl -sI -o /dev/null https://target-host.com/path/ -w 'Status: %{http_code} Connect: %{time_connect} TTFB: %{time_starttransfer} Total time: %{time_total}')"; sleep 1s; done

AWS

Get AWS subnets for a specific region

For example, this can be used for routing AWS traffic through a VPN...

php -r 'foreach (json_decode(file_get_contents("https://ip-ranges.amazonaws.com/ip-ranges.json"), true)["prefixes"] as $prefixData) if (trim($argv[1]) === $prefixData["region"]) echo $prefixData["ip_prefix"] . "\n";' eu-central-1
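
An alternative without PHP, assuming jq is installed:

curl -s https://ip-ranges.amazonaws.com/ip-ranges.json | jq -r '.prefixes[] | select(.region == "eu-central-1") | .ip_prefix'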

GIT

Remove remote branches matching a pattern

git branch -a | grep 'remotes/origin/branch-name-pattern' | sed -E 's~remotes/origin/~~' | xargs -I{} git push origin :{}

Various server maintenance helpers

Reclaim space

See what's taking the space (on Jenkins)

This will take a looong time - go grab a coffee or something!

(It's divided into steps so you can see the progress and not get bored.)

echo 'This will take a long time, be patient...' \
    && (echo -e '\n--- Counting logs ---\n';            du -sh -t 5M   --time /var/log/*                   | tee -a /tmp/var_log_sizes) \
    && (echo -e '\n--- Counting workspaces ---\n';      du -sh -t 100M --time /var/lib/jenkins/workspace/* | tee -a /tmp/jenkins_workspace_sizes) \
    && (echo -e '\n--- Counting jobs ---\n';            du -sh -t 20M  --time /var/lib/jenkins/jobs/*      | tee -a /tmp/jenkins_job_sizes) \
    && (echo -e '\n--- Biggest system logs ---\n';      cat /tmp/var_log_sizes                             | sort -hrs ) \
    && (echo -e '\n--- Biggest workspaces ---\n';       cat /tmp/jenkins_workspace_sizes                   | sort -hrs ) \
    && (echo -e '\n--- Biggest job archives ---\n';     cat /tmp/jenkins_job_sizes                         | sort -hrs )

Note: In most cases you can safely remove workspaces (unless a job is running!) because a workspace is rebuilt the next time the job runs (it will just take a bit longer). But YMMV, so exercise caution.

I recommend removing workspaces that were last used a long time ago (hence the --time switch); they probably won't be needed anytime soon. A sketch of such a cleanup is shown below.
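
A minimal sketch of that kind of cleanup, assuming workspaces untouched for 60+ days are fair game (run it as-is for a dry run, then uncomment the -exec part):

find /var/lib/jenkins/workspace/ -mindepth 1 -maxdepth 1 -type d -mtime +60 # -exec rm -rf {} +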

Reclaim space used by docker

CAUTION! This will remove all Docker data. Generally all images will be fetched again from the registries the next day, so it's usually no big deal.

docker system prune -a -f && docker volume prune -f

Reclaim space used by archived builds of some type (build-app-* as example)

find /var/lib/jenkins/jobs/ -type d -path '*/build-app-*' -name 'builds' | xargs rm -rf
#!/usr/bin/env bash
# Install goaccess first - https://goaccess.io/
# Examples:
#
# Show all
# $ analyze-logs target-host.com
#
# Grep by path
# $ analyze-logs target-host.com 'customer/section/load'
#
# Grep by IP and path
# $ analyze-logs target-host.com '134\.169\.31\.42.*customer/section/load'
if [ "$#" -lt 1 ]; then
echo "Usage: $0 <hostname-or-ip> [grep-filter-pattern] [additional-goaccess-opts]"
exit 1
fi
HOST="$1"
PATTERN="$2"
GOACCESSOPTS="$3"
USER="ec2-user"
LOGFILE="/var/log/nginx/access.log"
TIMESTAMP=`date '+%Y-%m-%d-%H-%M-%S'`
TMPDIR="/tmp/goaccess"
HTTPPORT=$(perl -e 'print int(rand(65000-2000)) + 2000')
WSPORT=$(perl -e 'print int(rand(65000-2000)) + 2000')
REPORTFILE="$HOST-$TIMESTAMP.html"
REPORTPATH="$TMPDIR/$REPORTFILE"
REPORTURL="http://localhost:$HTTPPORT/$REPORTFILE"
GOACCESPREFS='{"perPage": 20, "layout": "vertical"}'
function title() {
    if [ -z "$PATTERN" ] ; then
        echo "$HOST"
    else
        echo "$HOST | grep '$PATTERN'"
    fi
}

function analyze() {
    goaccess --color-scheme=3 "--port=$WSPORT" --agent-list --with-output-resolver --real-time-html --log-format=COMBINED "--output=$REPORTPATH" --date-spec=hr --hour-spec=min "--html-report-title=$(title)" "--html-prefs=$GOACCESPREFS" $GOACCESSOPTS -
}

function filter() {
    if [ -z "$PATTERN" ] ; then
        cat -
    else
        grep -e "$PATTERN"
    fi
}

function sshtail() {
    ssh -o StrictHostKeyChecking=no -t "${USER}@${HOST}" "sudo tail -n +1 -f $LOGFILE"
}
echo "Will attempt to show live stats using $USER@$HOST from log $LOGFILE!"
echo "---"
echo "Report: $REPORTPATH | $REPORTURL"
echo "Websocket: localhost:$WSPORT"
echo "---"
echo "*** Press CTRL+C (Twice!) to stop ***"
echo "---"
echo "Setup temps..."
mkdir -p "$TMPDIR"
echo "<html><head><meta http-equiv="refresh" content="2"></head><center><h1>Loading report for <br/><em>$(title)</em></h1> <br/><br/>Refresh in a moment...<center></html>" > $REPORTPATH
echo "Starting PHP webserver..."
php -S "127.0.0.1:$HTTPPORT" -t "$TMPDIR" &
echo "Open report in browser..."
open "$REPORTURL"
echo "Starting goaccess parser with logs tailed live via SSH..."
set -e -x
sshtail | filter | analyze
rm -f "$REPORTPATH"
- hosts: localhost
  connection: local
  vars:
    test_list:
      - a: a1
        b: b1
      - a: a2
        c: b2
      - a: a3
        b: b3
        c: c3
    test_list_nested:
      - a: a1
        l:
          - one1
          - two1
          - three1
      - a: a2
      - a: a3
        l:
          - one3
          - three3
    test_dict:
      one:
        x: eks1
        y: uaj1
        z: zet1
      two:
        x: eks2
        y: uaj2
        z: zet3
  tasks:
    - debug: var=_test
      vars:
        _test: "{{ test_dict|flatten(levels=1) }}"
    - debug: msg="{{ index }}={{ item }}"
      loop: "{{ simple_list }}"
      loop_control:
        index_var: index
      vars:
        simple_list: [a, b, c]
    - debug: msg="{{ item.key }}={{ item.value }}"
      loop: "{{ simple_dict|dict2items }}"
      vars:
        simple_dict:
          one: a
          two: b
          three: c
    - set_fact:
        "global_{{ item.key }}": "{{ item.value }}"
      loop: "{{ to_global|dict2items }}"
      vars:
        to_global:
          string: I am a global prefixed string
          number: 12345
    - debug: var=global_string
    - set_fact:
        prefixed_string_list: "{{ list_prefix|map('regex_replace', '^', 'prefix___')|list }}"
      vars:
        list_prefix:
          - 'stringone'
          - 'stringtwo'
          - 'stringthree'
    - set_fact:
        prefixed_string_list: "{{ list_prefix|map('regex_replace', '^', 'https://')|list + list_prefix|map('regex_replace', '^', 'http://')|list }}"
      vars:
        list_prefix:
          - 'stringone'
          - 'stringtwo'
          - 'stringthree'
    - debug: var=prefixed_string_list
    # - set_fact:
    #     collected: []
    # - set_fact:
    #     collected: "{{ collected + [item.b] }}"
    #   when: item.b is defined
    #   loop: "{{ test_list }}"
    # - set_fact:
    #     collected: "{{ collected + [item] }}"
    #   loop: "{{ test_list_nested|selectattr('l', 'defined')|map(attribute='l')|flatten(levels=1) }}"
    # - set_fact:
    #     tests:
    #       collected: "{{ collected }}"
    #       collected2: |
    #         {{
    #           test_list|selectattr('b', 'defined')|map(attribute='b')|list +
    #           test_list_nested|selectattr('l', 'defined')|map(attribute='l')|flatten(levels=1) +
    #           ['manual_element']
    #         }}
    #       sa: "{{ test_list_nested|selectattr('l', 'defined')|list }}"
    #       mp: "{{ test_list_nested|map(attribute='l')|list }}"
    # - debug: var=tests
# Split CIDR into address and prefix
- hosts: localhost
  connection: local
  vars:
    cidrs:
      - '192.168.0.0/24'
      - '10.13.37.5/16'
      - '192.168.23.5/32'
      - '82.35.56.23/32'
      - '82.35.56.0/32'
  tasks:
    - debug: var=mask
      vars:
        mask:
          # The `ipaddr` filter outputs an empty string if CIDR has /32 prefix.
          network: "{% if item|ipaddr('network') %}{{ item|ipaddr('network') }}{% else %}{{ item|ipaddr('address') }}{% endif %}"
          prefix: "{{ item|ipaddr('prefix') }}"
      loop: "{{ cidrs }}"
# Extract vars based on key prefix
- hosts: localhost
  connection: local
  tasks:
    - debug: var=item
      loop: "{{ vars|dict2items|selectattr('key', 'regex', '^ansible_')|list }}"
#!/usr/bin/env bash
# An old convenience script for running various ansible env tasks;
# might still be useful as boilerplate
set -e
if [ $# -ne 2 ] || [ $1 == '--help' ] || [ $1 == '-h' ] ; then
echo -e "Usage:\n ${0} {stage_name} {action/playbook}"
cat << HELPEND
Available actions:
* update-cron - updates cronjobs
* update-workers - updates supervisord worker configs
* update-vhosts - updates nginx vhosts
* update-firewall - updates firewall rules
* update-users - updates interactive users
* update-email - updates postfix spool setup
* restart-fpm
* restart-nginx
* restart-supervisord
* restart-elasticsearch
* maintenance - perform various maintenance tasks
* deploy - deploys current build
* site - runs full provisioning
* upgrade - upgrades system on all nodes
HELPEND
exit 1
fi
STAGE="$1"
ACTION="$2"
function play() {
INVENTORY="inventories/${STAGE}.ini"
STAGE_VARS="stage_vars/${STAGE}.yml"
COMMAND="$1"
shift
if [ ! -f "$INVENTORY" ] ; then
echo "Could not find inventory file for the selected stage: $INVENTORY" 1>&2
exit 1
fi
if [ ! -f "$STAGE_VARS" ] ; then
echo "Could not find stage vars file for the selected stage: $STAGE_VARS" 1>&2
exit 1
fi
set -x
${COMMAND} -i "${INVENTORY}" -e "env=${STAGE}" "$@"
}
function playbook() {
play "ansible-playbook" "$@"
}
function adhoc() {
play "ansible" "$@"
}
function restart() {
GROUP="$1"
SERVICE="$2"
ACTION="state=restarted name=${SERVICE}"
adhoc "${GROUP}" -u root -m service -a "$ACTION"
}
case "$ACTION" in
"update-cron") playbook "site.yml" --tags "stage_vars,cron" ;;
"update-workers") playbook "site.yml" --tags "stage_vars,supervisord" ;;
"update-vhosts") playbook "site.yml" --tags "stage_vars,nginx" ;;
"update-firewall") playbook "site.yml" --tags "stage_vars,firewall" ;;
"update-users") playbook "site.yml" --tags "stage_vars,users" ;;
"update-email") playbook "site.yml" --tags "stage_vars,email" ;;
"maintenance") playbook "site.yml" --tags "stage_vars,maintenance" -e "force_perform_maintenance=True" --limit worker;;
"restart-fpm") restart "web" "php-fpm" ;;
"restart-nginx") restart "web" "nginx" ;;
"restart-supervisord") restart "worker" "supervisord" ;;
"restart-elasticsearch") restart "search" "elasticsearch" ;;
"upgrade") adhoc all -a "yum -y upgrade" -u root ;;
*) playbook "${ACTION}.yml" ;;
esac
/* AWS Lambda source for notifying Slack on CloudWatch Logs entries */
var zlib = require('zlib'),
https = require('https');
function postToSlack(payload) {
payload = JSON.stringify(payload);
return new Promise((resolve, reject) => {
var req = https.request({
hostname: 'hooks.slack.com',
path: '/services/path/to/your/slack/hook',
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Content-Length': Buffer.byteLength(payload)
}
}).on('response', (res) => {
if (res.statusCode !== 200) {
return reject(new Error('Error ' + res.statusCode));
}
console.log('Posted to slack: ' + JSON.stringify(payload));
resolve(res);
});
req.write(payload);
req.end();
});
}
function parseGroup(group) {
var elements = group.split('/');
if (elements.length < 4) {
return {
project: 'unknown',
env: 'unknown'
}
}
return {
project: elements[1],
env: elements[2],
};
}
function handleLogItem(data, project, env, link) {
var message = data.message;
if (message.length > 600) {
message = message.substring(0, 600) + '\n[...]';
}
return postToSlack({
"text": `:skull_and_crossbones: CRITICAL error on *${project}*-_${env}_: <${link}|See details here>`,
"attachments": [
{
"color": "#D00000",
"fields":[
{
"title": "Exception",
"value": message,
"short": false
}
]
}
]
});
}
function handleData(data) {
var handled = [],
group = data.logGroup,
stream = data.logStream,
groupData = parseGroup(group),
project = groupData.project,
env = groupData.env,
region = process.env.AWS_DEFAULT_REGION,
link = `https://${region}.console.aws.amazon.com/cloudwatch/home?region=${region}#logEventViewer:group=${encodeURIComponent(group)};stream=${encodeURIComponent(stream)}`
data.logEvents.forEach((item) => {
handled.push(
handleLogItem(
item,
project,
env,
link
)
);
})
return Promise.all(handled);
}
exports.handler = (event, context, done) => {
var payload = new Buffer(event.awslogs.data, 'base64');
zlib.gunzip(payload, (e, result) => {
handleData(JSON.parse(result.toString('utf-8')), done).then(() => {
done(null);
}, (e) => {
done(new Error(e));
})
});
};
<?php
class OOMDebugger
{
/**
* @var string
*/
private $reserved;
/**
* @var string
*/
private $logFname;
/**
* @var resource
*/
private $logFile;
public function __construct(string $logPath = '.', int $reserveSz = 1024)
{
$this->reserved = str_repeat('☠', $reserveSz * 1024);
$this->logFname = rtrim($logPath, '/') . '/oom-report-' . date('d-m-Y.H-i-s.') . uniqid() . '.log';
register_shutdown_function([$this, 'onShutdown']);
}
private function log(string $message)
{
if (!$this->logFile) {
$this->logFile = fopen($this->logFname, 'w');
}
fwrite($this->logFile, $message . "\n");
fflush($this->logFile);
}
public function __destruct()
{
if ($this->logFile) {
fclose($this->logFile);
}
}
public function onShutdown()
{
unset($this->reserved);
$error = error_get_last();
if ($error['type'] !== E_ERROR || strpos($error['message'], 'Allowed memory size of') !== 0) {
/* Catch only fatal memory limit errors */
return;
}
$this->log(sprintf('Caught OOM, peak mem usage %.2fMiB: %s', memory_get_peak_usage(true) / 0x100000, $error['message']));
if (function_exists('xdebug_time_index')) {
$this->log(sprintf('Script crashed after %.2f sec', xdebug_time_index()));
}
$this->log(sprintf("> _SERVER\n\n%s\n\n", json_encode($_SERVER, JSON_PRETTY_PRINT)));
$this->log(sprintf("> _GET\n\n%s\n\n", json_encode($_GET, JSON_PRETTY_PRINT)));
$this->log(sprintf("> _POST\n\n%s\n\n", json_encode($_POST, JSON_PRETTY_PRINT)));
if (function_exists('xdebug_get_function_stack')) {
$this->log(sprintf("> Stack Trace\n\n%s\n\n", json_encode(xdebug_get_function_stack(), JSON_PRETTY_PRINT)));
}
}
}
$debugger = new OOMDebugger('.');
function fillItBaby(int $crapCount, string $testParam)
{
$arr = [];
for ($i = 0; $i < $crapCount; ++$i) {
$arr[] = str_repeat(chr($i % 256), $i);
}
}
// Go BUM!
fillItBaby(1000000, 'just a test');
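
To see the debugger in action, save the snippet above as e.g. oom-test.php (hypothetical name) and run it with a low memory limit:

php -d memory_limit=64M oom-test.php
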
#!/bin/bash
set -e
USER_NAME="app"
GROUP_NAME="app"
NEW_UID="$1"
NEW_GID="$2"
shift 2
if grep -q "$GROUP_NAME:" /etc/group ; then
    groupmod --non-unique --gid "$NEW_GID" "$GROUP_NAME"
else
    groupadd --non-unique --gid "$NEW_GID" "$GROUP_NAME"
fi

if id "$USER_NAME" >/dev/null 2>&1; then
    usermod --gid "$NEW_GID" --shell /bin/bash --uid "$NEW_UID" --non-unique "$USER_NAME"
else
    adduser --gid "$NEW_GID" --shell /bin/bash --uid "$NEW_UID" --non-unique "$USER_NAME"
fi

if [[ $# -gt 0 ]] ; then
    sudo -E -u "$USER_NAME" -g "$GROUP_NAME" -- "$@"
fi
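
Typical usage inside a container entrypoint, assuming the script above is saved as fix-user.sh (hypothetical name): pass the desired UID and GID, followed by an optional command to run as that user.

./fix-user.sh 1000 1000 whoami
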
#!/usr/bin/env bash
set -e
function get_preset_name() { xidel "$@" --ignore-namespaces --silent --extract '//Name'; }
function get_file_ext() { echo "${@##*.}"; }
function strip_file_ext() { echo "${@%.*}"; }
function escape_for_sed() { echo "$@" | sed -e 's/[\/&]/\\&/g'; }
function stream_normalize_charset() { gsed -E 's/[^-_A-Za-z0-9 ]+//g'; }
function stream_normalize_delimiters() { gsed -r 's/([-_ ]+)/ /g'; }
function stream_camelcase_to_delimiter() { gsed -r 's/([a-z])([A-Z])/\1 \2/g'; }
function stream_delimiters_to_camelcase() { gsed -r 's/(^|[-_ ]+)([0-9a-zA-Z])/\U\2/g' | gsed -r 's/([A-Z])([A-Z]+)/\1\L\2/g'; }
function transform_preset_name_to_file_name() {
echo "$@" | stream_normalize_charset | stream_normalize_delimiters | stream_delimiters_to_camelcase;
}
function transform_file_name_to_preset_name() {
transform_preset_name_to_file_name "$@" | stream_camelcase_to_delimiter;
}
if [[ $# -lt 2 ]] ; then
echo "Usage: $0 <command> [filename] [filename2] [filename3] ..."
echo
echo "Available commands:"
echo " rename-file renames file name to preset name"
echo " rename-preset renames preset name to file name"
echo " deduplicate removes duplicate presets"
echo " dry-run just show what would happen"
exit 1
fi
function log_rename_file() {
echo "[F] Renaming file: $FILE_PATH => $RENAMED_FILE_PATH"
}
function log_rename_preset() {
echo "[P] Renaming preset: $PRESET_NAME => $RENAMED_PRESET_NAME"
}
function cmd_rename_file() {
log_rename_file
mv "$FILE_PATH" "$RENAMED_FILE_PATH"
}
function cmd_rename_preset() {
log_rename_preset
local PRESET_NAME_ESCAPED=`escape_for_sed "$PRESET_NAME"`
local RENAMED_PRESET_NAME_ESCAPED=`escape_for_sed "$RENAMED_PRESET_NAME"`
gsed -i -e "s/${PRESET_NAME_ESCAPED}/${RENAMED_PRESET_NAME_ESCAPED}/g" "$FILE_PATH"
}
function cmd_dry_run() {
log_rename_file
log_rename_preset
}
CMD="$1"; shift
case "$CMD" in
"rename-file") FUNC="cmd_rename_file" ;;
"rename-preset") FUNC="cmd_rename_preset" ;;
"deduplicate") FUNC="cmd_deduplicate" ;;
"dry-run") FUNC="cmd_dry_run" ;;
*) echo "Unknown command: $1"; exit 2
esac
for FILE_PATH in "$@" ; do
FILE_DIR=`dirname "$FILE_PATH"`
FILE_BASE=`basename "$FILE_PATH"`
FILE_BASE_BARE=`strip_file_ext "$FILE_BASE"`
FILE_EXT=`get_file_ext "$FILE_BASE"`
PRESET_NAME="$(get_preset_name "$FILE_PATH" || true)"
RENAMED_FILE_PATH="${FILE_DIR}/$(transform_preset_name_to_file_name "$PRESET_NAME").${FILE_EXT}"
RENAMED_PRESET_NAME=`transform_file_name_to_preset_name "$FILE_BASE_BARE"`
if [[ -z "$PRESET_NAME" ]] ; then
echo "[-] No preset found in: $FILE_PATH"
continue
fi
echo "[+] Found preset '$PRESET_NAME' in: $FILE_PATH"
"$FUNC"
done
#!/usr/bin/env bash
set -e
TARGET_VPN_IFACE="ppp0"
TARGET_VPN_NETWORK="10.10.0.0/22"
TARGET_GATEWAY_PREFIX="10.10.0"
TARGET_EC2_REGION="eu-central-1"
TARGET_UP_SCRIPT="/etc/ppp/ip-up"
function gen_nets() {
echo "$TARGET_VPN_NETWORK"
# Route traffic to some host throught the VPN
echo "$(nslookup some-domain.com | awk -F': ' 'NR==6 { print $2 } ')/32"
# Route AWS ips to some host
php -r 'foreach (json_decode(file_get_contents("https://ip-ranges.amazonaws.com/ip-ranges.json"), true)["prefixes"] as $prefixData) if (trim($argv[1]) === $prefixData["region"]) echo $prefixData["ip_prefix"] . "\n";' "$TARGET_EC2_REGION"
}
function gen_routes() {
gen_nets | while read NET ; do echo "/sbin/route add -net '$NET' -interface '$TARGET_VPN_IFACE'" ; done
}
! (( ${EUID:-0} || $(id -u) )) || { echo "Run this script as root / with sudo!" >&2; exit 1; }
_ROUTE_CMDS="$(gen_routes)"
_SELF="$0"
_TSTAMP="$(date)"
cat > "$TARGET_UP_SCRIPT" <<-EOF
#!/bin/sh
# --- Generated automatically by "$_SELF" on ${_TSTAMP} ---
if (echo "\$4" | grep '$TARGET_GATEWAY_PREFIX' > /dev/null 2>&1) ; then
echo "Detected target VPN connection..."
echo "nameserver 10.10.1.1" >> /etc/resolv.conf
${_ROUTE_CMDS}
fi
EOF
chmod 755 "$TARGET_UP_SCRIPT"
ifconfig "$TARGET_VPN_IFACE" 2>&1 > /dev/null && (echo "Detected VPN is already connected, adding routes immediately..."; sh "$TARGET_UP_SCRIPT")
#!/usr/bin/env bash
# This will start a local temporary instance of `ssh-agent` whose lifetime is tied
# to the lifetime of the script.
#
# This might be useful for scripts that need to:
# - Make sure the agent is always started
# - Need custom socket location (e.g. to forward it to docker)
# - Need to add custom keys
set -e
function sshas_start() {
[[ $# -gt 0 ]] && echo "SSHAS_TMP_DIR='$1'"
cat <<-'SSHAS'
if [[ -z "$SSHAS_PARENT_LVL" ]] || (( $SSHAS_PARENT_LVL != ($SHLVL - 1) )) ; then
SSHAS_PARENT_LVL="$SHLVL"
SSHAS_SCRIPT_ARGS="$@"
SSHAS_SCRIPT_NAME=`basename "$0"`
SSHAS_SCRIPT_PATH=`realpath "$0"`
SSHAS_SCRIPT_DIR=`dirname "$SSHAS_SCRIPT_PATH"`
SSHAS_SUBSHELL_CMD="$SSHAS_SCRIPT_PATH $SSHAS_SCRIPT_ARGS"
[[ -z "$SSHAS_TMP_DIR" ]] && SSHAS_TMP_DIR="$(mktemp -d "$SSHAS_SCRIPT_DIR/.tmp-${SSHAS_SCRIPT_NAME}-ssh-agent.$SHLVL.XXXXXX")"
(SSHAS_PARENT_LVL=$SSHAS_PARENT_LVL SSHAS_TMP_DIR="$SSHAS_TMP_DIR" ssh-agent -a "$SSHAS_TMP_DIR/auth-sock" "$SHELL" -c "$SSHAS_SUBSHELL_CMD"; SSHAS_EXIT_CODE="$?"; rm -rf "$SSHAS_TMP_DIR"; exit $SSHAS_EXIT_CODE)
fi
SSHAS
}
function context_info() {
echo "---------- $@ [$SHLVL] -------------"
echo "SSHAS_PARENT_LVL: $SSHAS_PARENT_LVL"
echo "TEMPDIR: $SSHAS_TMP_DIR"
echo "PID: $SSH_AGENT_PID"
echo "SOCK: $SSH_AUTH_SOCK"
echo "Keys in the agent: $(ssh-add -l)"
}
context_info "Before start"
# This must be executed after any other function definitions but before any global vars are defined!
eval "$(sshas_start)"
ssh-add
context_info "After start"
exit $(( 39 + $SHLVL ))
#!/usr/bin/env bash
set -e
PRV="$1"
PUB="$2"
CSR="$3"
if [ $# -lt 2 ] ; then
echo "Usage: $0 <priv key file> <pub key file> [csr file]"
exit 1
fi
openssl rsa -in "$PRV" -check > /dev/null && echo "[OK] Private key '$PRV' is valid" || (echo "[ERROR] Something is wrong with private key '$PRV'!" >&2 && exit 10)
openssl x509 -in "$PUB" -noout > /dev/null && echo "[OK] Certificate '$PUB' is valid" || (echo "[ERROR] Something is wrong with certificate '$PUB'!" >&2 && exit 10)
if [ ! -z "$CSR" ] ; then
openssl req -text -noout -verify -in "$CSR" > /dev/null 2>&1 && echo "[OK] Signing request '$CSR' is valid" || (echo "[ERROR] Something is wrong with signing request '$CSR'!" >&2 && exit 10)
CSR_MOD=$(openssl req -noout -modulus -in "$CSR" | openssl md5)
# echo "Signing request modulus: $CSR_MOD"
fi
PUB_MOD=$(openssl x509 -noout -modulus -in "$PUB" | openssl md5)
PRV_MOD=$(openssl rsa -noout -modulus -in "$PRV" | openssl md5)
# echo "Certificate modulus: $PUB_MOD"
# echo "Private key modulus: $PRV_MOD"
[ "$PUB_MOD" = "$PRV_MOD" ] && echo '[OK] Cert matches the priv key' || (echo '[ERROR] Key does not match the certificate!' >&2 && exit 10)
[ "$CSR_MOD" = "$PRV_MOD" ] && echo '[OK] Signing request matches the priv key' || (echo '[ERROR] Key does not match the CSR!' >&2 && exit 10)
openssl x509 -noout -text -in "$PUB" | sed -ne '
s/^\( *\)Subject:/\1/p;
/X509v3 Subject Alternative Name/{
N;
s/^.*\n//;
:a;
s/^\( *\)\(.*\), /\1\2\n\1/;
ta;
p;
q;
}' | tr -s " "
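
Assuming the script above is saved as check-cert.sh (hypothetical name), usage looks like:

./check-cert.sh domain.key domain.crt domain.csr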