View sort_zset_cols.py
'''
sort_zset_cols.py
Copyright 2013 Josiah Carlson
Released into the public domain.
'''
'''
Let's imagine that there are 3 restaurants with price, score, distance info
being:
View transform.js
const transform = require('camaro')
const fs = require('fs')
const xml = fs.readFileSync('./test.xml', 'utf8')
const template = {
  responses: ['//responses[@type="C-FIND"]/data-set', {
    AccessionNumber: 'element[1]/text()',
    PatientName: 'element[2]/text()',
    PatientID: 'element[3]/text()'
    // ...remaining fields are cut off in the original
  }]
}
// camaro v2 API (synchronous); newer camaro versions export { transform } and return a Promise
console.log(transform(xml, template))
View 1README.md

To test whether DataLoader is working correctly, we are going to turn on server logging so we can see how many queries are actually being made to our db. If DataLoader is set up correctly, we should see only one hit on our db per request, even if duplicate data is being requested. Here's how to enable logging on PostgreSQL (see the sketch after these steps). Note - this is the Mac way to enable logging.

  • subl /usr/local/var/postgres/postgresql.conf

  • around line 434, uncomment log_statement and set it to 'all': log_statement = 'all'

  • then brew services restart postgresql
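
A quick way to watch the log while testing - the log path, endpoint, and query below are assumptions to adapt to your own setup:

    # Homebrew installs commonly log here; adjust the path if yours differs
    tail -f /usr/local/var/log/postgres.log | grep --line-buffered SELECT

    # in another terminal, ask for the same record twice in one request; with
    # DataLoader batching you should see a single SELECT serving both
    curl -s localhost:4000/graphql \
      -H 'Content-Type: application/json' \
      -d '{"query":"{ a: user(id: 1) { name } b: user(id: 1) { name } }"}'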

View enable_aptx_aac_macos.sh
# (c) 2018 Marcelo Novaes
# License - MIT
# Enable AptX and AAC codecs on bluetooth connections on macOS
sudo defaults write bluetoothaudiod "Enable AptX codec" -bool true
sudo defaults write bluetoothaudiod "Enable AAC codec" -bool true
# Reads set values, should return something like:
# {
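# A quick way to verify (assumes the same bluetoothaudiod defaults domain used above):
sudo defaults read bluetoothaudiod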
View check-mime-type-from-base64-string-node.js
const fileType = require('file-type')
const base64String = 'iVBORw0KGgoAAAANSUhEUgAAAC0AAAAtCAYAAAA6GuKaAAAAAXNSR0IArs4c6QAAAAlwSFlzAAAhOAAAITgBRZYxYAAAABxpRE9UAAAAAgAAAAAAAAAXAAAAKAAAABcAAAAWAAABW85tpoQAAAEnSURBVGgF7JbLDYMwDIYZoSN0pI7QUdikI+QESAXJozACR6SCRO2IHIBgnIRHKhUpQsSO/fFjnCRJwDWAun3exbOvsldX5oBjYAaQH/nTuoC0fkv7Kn8gnGIAOXhjUxTHj8BhFSXBUQfCGmh9p3iHwLdFcUdQ2BPWEgsoj4OG665Ug5igsSSZKLaTvaF86zQCS1dm6U4wji+YpQK8pcvYERyTsd3DKRblX1IxM9cpPH9poeJjDTupcmQJbdb42CXO+umkwjRsV0HF4EjVAmKDtZqpwQcElarm7WfdgGhnihy6nqgdu8pGzInaOBl6+PH+7AZIeFdabTomChecBcbm0cfa2PryloC6b1+9XW9Bzu16e8dJmBsif4YkckBrff+hz/pqP6n0FwAA//85LScuAAAA+0lEQVTV1u0NgyAQBuAboSN0pI7QUdykI/ALSaoJozgCP5tUE8tL0FiQSNNWORM/IEQfLucB9U09cjuJGxhetmjNLNqahlbeOKHhpeddXTmh4aVRixMnNLyEw6IFE7hwYFyGtr5wQMM5oz28Kxluwd0b2KOLjnYU5WkGNtKl1mw9GaP7Q6mzhZvC0sTAFWGXHaXVbVeXl8DUc9/IqoxoyyplXO0/enl3y/WqbKPzuIh/GOFwHj7H9/o5TXYOh9Cw7avKv8uh3qwSISynjQKPlemXPynel1w4clC5Y/ARC/92kyV2wYaTwjYROegrzVb6aIzD+Hl7Gb4ws/0Cqd8IYgt7isgAAAAASUVORK5CYII='
const mimeInfo = fileType(Buffer.from(base64String, 'base64'))
console.log(mimeInfo) // e.g. { ext: 'png', mime: 'image/png' } for a PNG payload like this one
View create.sh
#!/usr/bin/env bash
export CLUSTER_NAME=${CLUSTER_NAME:-example.cluster.k8s.local}
export KUBERNETES_VERSION=${KUBERNETES_VERSION:-https://storage.googleapis.com/kubernetes-release/release/v1.9.0/}
export AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION:-us-west-2}
# Get all availability zones in the region as a comma-separated list
export AWS_AVAILABILITY_ZONES="$(aws ec2 describe-availability-zones --query 'AvailabilityZones[].ZoneName' --output text | awk -v OFS="," '$1=$1')"
# Create a unique s3 bucket name, or use an existing S3_BUCKET environment variable
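# A hedged sketch of how these variables are typically consumed next - the bucket
# name and the kops invocation below are assumptions, not this script's actual continuation:
export S3_BUCKET=${S3_BUCKET:-example-kops-state-store}
aws s3 mb "s3://${S3_BUCKET}" --region "${AWS_DEFAULT_REGION}"
export KOPS_STATE_STORE="s3://${S3_BUCKET}"
kops create cluster \
  --name "${CLUSTER_NAME}" \
  --zones "${AWS_AVAILABILITY_ZONES}" \
  --kubernetes-version "${KUBERNETES_VERSION}" \
  --yes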
View gist:22e0fec5e43f6f56190a10df4822ed00
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cheese
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: stilton.example.com
    http:
View install.rabbitmq.sh
#!/bin/sh
# Variables
USER="admin"
PASS="password"
# Assert Root User
SCRIPTUSER=`whoami`
if [ "$SCRIPTUSER" != "root" ]
then
  echo "This script must be run as root" >&2
  exit 1
fi
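# A hedged sketch of what a script like this typically does next with the variables
# above (Debian/Ubuntu package name assumed; adjust for your distro):
apt-get install -y rabbitmq-server
rabbitmq-plugins enable rabbitmq_management
rabbitmqctl add_user "$USER" "$PASS"
rabbitmqctl set_user_tags "$USER" administrator
rabbitmqctl set_permissions -p / "$USER" ".*" ".*" ".*"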
View README.md

Backup MySQL to Amazon S3

This is a simple way to back up your MySQL tables to Amazon S3 for a nightly backup - this is all to be done on your server :-)

Sister Document - Restore MySQL from Amazon S3 - read that next

1 - Install s3cmd

This is for CentOS 5.6; see http://s3tools.org/repositories for other systems such as Ubuntu.
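
For orientation, the nightly job the remaining steps build toward looks roughly like this (user, password, bucket name, and paths are placeholders, not values from this guide):

    mysqldump --all-databases --single-transaction -u backup_user -p'secret' \
      | gzip > "/tmp/mysql-$(date +%F).sql.gz"
    s3cmd put "/tmp/mysql-$(date +%F).sql.gz" s3://my-backup-bucket/mysql/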

View git-cheat-list.md

Git cheat list

  • all commits that your branch has that are not yet in master

    git log master..<HERE_COMES_YOUR_BRANCH_NAME>
    
  • setting the character used for comments

    git config core.commentchar <HERE_COMES_YOUR_COMMENT_CHAR>
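
For example, with a hypothetical branch name and ';' as the comment character:

    git log master..feature/login --oneline
    git config core.commentChar ';'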