farahfa / gist:fcfc2f613e9fc38bb41bf06f130bdb25
Last active May 8, 2020 07:10 — forked from trongthanh/gist:2779392
How to move a folder from one repo to another and keep its commit history
# source: http://st-on-it.blogspot.com/2010/01/how-to-move-folders-between-git.html
# First of all, you need a clean clone of the source repository so we don't screw things up.
git clone git://server.com/my-repo1.git
# After that, prepare the source repository by nuking all the entries except the folder you need to move. Use the following command:
git filter-branch --subdirectory-filter your_dir -- --all
# For multiple dirs
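The snippet stops here. For multiple directories, the same job can be done with an `--index-filter` that wipes the index and restores only the wanted paths in each commit. A self-contained sketch on a throwaway repo (the directory names `keep1`/`keep2`/`drop` are placeholders, not from the original gist):

```shell
#!/usr/bin/env sh
set -e

# Build a throwaway repo with three top-level directories
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email you@example.com
git config user.name you
mkdir -p keep1 keep2 drop
echo a > keep1/a && echo b > keep2/b && echo c > drop/c
git add . && git commit -qm 'initial commit'

# Rewrite every commit so only keep1/ and keep2/ survive:
# clear the index, then restore just the wanted paths from each commit.
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch --index-filter \
  'git rm --cached -qr --ignore-unmatch -- . && git reset -q $GIT_COMMIT -- keep1 keep2' \
  --prune-empty -- --all
```

After the rewrite, `git ls-tree -r HEAD` lists only files under `keep1/` and `keep2/`; commits that touched nothing else are dropped by `--prune-empty`.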
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cluster-${TEAM_NAME}-${BUILD_NUMBER}
  labels:
    app: redis-cluster-${TEAM_NAME}-${BUILD_NUMBER}
data:
  redis.conf: |+
    cluster-enabled yes
    cluster-require-full-coverage no
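To actually use this ConfigMap, a Redis pod mounts `redis.conf` and points the server at it. A minimal sketch (the pod name, image tag, and mount path are assumptions, not part of the gist):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-cluster-node            # hypothetical name
spec:
  containers:
    - name: redis
      image: redis:6.2                # any cluster-capable Redis image
      command: ["redis-server", "/conf/redis.conf"]
      volumeMounts:
        - name: conf
          mountPath: /conf
  volumes:
    - name: conf
      configMap:
        name: redis-cluster-${TEAM_NAME}-${BUILD_NUMBER}
```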
farahfa / sinatra-server.rb
Created February 12, 2018 18:49 — forked from jeffjohnson9046/sinatra-server.rb
A simple sinatra server that accepts a POST with JSON content.
# To make this server publicly available on the inter-webs while running from localhost, use ngrok, which can be found here:
# https://ngrok.com/download. Follow the installation instructions for ngrok and start it up:
#
# ./ngrok http 4567 # (or whatever port you want to listen on; current ngrok versions need the "http" subcommand).
#
# ngrok will spit out an ugly but unique URL. After ngrok starts up, you should be able to POST to the sinatra server:
#
# http://6eee766f.ngrok.com/payload
require 'sinatra'
require 'json'
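The fork is truncated after the requires. A minimal handler matching the description might look like this (the `/payload` route comes from the ngrok comment above; the response shape is an assumption, not the original gist's code):

```ruby
require 'sinatra'
require 'json'

# Accept a JSON POST and echo the parsed payload back
post '/payload' do
  payload = JSON.parse(request.body.read)
  content_type :json
  { received: payload }.to_json
end
```

Exercise it with e.g. `curl -X POST -H 'Content-Type: application/json' -d '{"hello":"world"}' http://localhost:4567/payload`.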
- name: register nginx service with curl check
  consul:
    host: 10.96.2.123
    service_name: nginx
    service_port: 80
    script: "curl http://localhost > /dev/null 2>&1"
    interval: 60s
- name: fetch Elasticsearch from S3
  s3:
    mode: get
    region: "{{ aws_region }}"
    aws_access_key: "{{ myaccesskey }}"
    aws_secret_key: "{{ mysecretkey }}"
    s3_url: "{{ s3_url }}"
    bucket: "{{ es_bucket_name }}"
    dest: /tmp/
    object: "/elasticsearch-{{ version }}.tar.gz"
- name: Create a new Elasticsearch load balancer if it doesn't exist
  ec2_elb_lb:
    name: "{{ elb_name }}"
    state: present
    region: "{{ aws_region }}"
    scheme: internal
    subnets:
      - "{{ elb_subnet_1 }}"
      - "{{ elb_subnet_2 }}"
      - "{{ elb_subnet_3 }}"
farahfa / pipeline2.groovy
Created June 27, 2017 05:25
Ansible jenkins pipeline 2
stage('Switch to master') {
    agent {
        node {
            label 'ansible'
        }
    }
    when {
        environment name: "SERVER", value: "master"
    }
    steps {
farahfa / pipeline1.groovy
Created June 27, 2017 05:24
Ansible jenkins pipeline
stage('Switch to backup') {
    agent {
        node {
            label 'ansible'
        }
    }
    when {
        environment name: "SERVER", value: "backup-master"
    }
    environment {
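Both pipeline snippets cut off before their bodies. A complete stage following the same pattern might look like this (the playbook path, inventory, and extra vars are placeholders, not taken from the gists):

```groovy
stage('Switch to backup') {
    agent {
        node {
            label 'ansible'
        }
    }
    when {
        environment name: "SERVER", value: "backup-master"
    }
    steps {
        // Hypothetical: run the playbook that flips traffic to the backup master
        sh 'ansible-playbook -i inventory/production switch-master.yml -e target=backup'
    }
}
```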
farahfa / pre-commit.sh
Created March 7, 2016 20:34 — forked from czardoz/pre-commit.sh
Git pre-commit hook that checks for AWS keys
#!/usr/bin/env bash
if git rev-parse --verify HEAD >/dev/null 2>&1
then
    against=HEAD
else
    # Initial commit: diff against an empty tree object
    EMPTY_TREE=$(git hash-object -t tree /dev/null)
    against=$EMPTY_TREE
fi
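The fork is truncated after the `against` setup. The scanning step that would follow might look like this; the `AKIA[0-9A-Z]{16}` pattern is the well-known shape of AWS access key IDs, but the exact regex in czardoz's original is an assumption here:

```shell
# Scan the staged diff for strings shaped like AWS access key IDs.
# NOTE: hedged sketch of the truncated remainder, not czardoz's exact code.
if git diff --cached "$against" | grep -qE 'AKIA[0-9A-Z]{16}'; then
    echo "Possible AWS access key found in staged changes; aborting commit." >&2
    exit 1
fi
```

Dropped into `.git/hooks/pre-commit` (and made executable), this blocks any commit whose staged diff contains a matching string.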