John Benninghoff jbenninghoff

  • Ventura, CA, United States
jbenninghoff / EMR-HUE-SAML-conf.md
Last active April 3, 2020 22:53
Hue SAML configuration on EMR

To enable Hue to use SAML authentication, the Service Provider (Hue) and the Identity Provider (samltest.id) must exchange metadata so that each can verify the other's identity. The procedure on the EMR master node is outlined below.

  1. Hue is the Service Provider and http://samltest.id is the Identity Provider in this example
  2. Install the tools to enable Hue to handle SAML:
    1. yum install git gcc python-devel swig openssl
    2. yum install --enablerepo=epel xmlsec1 xmlsec1-openssl
  3. Acquire the IdP metadata from http://samltest.id and save it as samltest.xml
  4. Put the xml file in /etc/hue/conf/security/samltest.xml
  5. Verify key and cert files exist.
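Once the metadata, key, and cert are in place, they are wired into Hue via `hue.ini`. A minimal sketch of the relevant sections, assuming the metadata path from step 4; the `key_file`/`cert_file` paths are placeholders for wherever step 5's files actually live:

```ini
# Sketch of hue.ini SAML settings; key_file/cert_file paths are placeholders
[desktop]
  [[auth]]
  backend=libsaml.backend.SAML2Backend

[libsaml]
  xmlsec_binary=/usr/bin/xmlsec1
  metadata_file=/etc/hue/conf/security/samltest.xml
  key_file=/etc/hue/conf/security/key.pem
  cert_file=/etc/hue/conf/security/cert.pem
```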
jbenninghoff / note1.md
Last active March 13, 2020 06:17
Ad hoc MD

Text or markdown

code

  • A
  • B
  • C
jbenninghoff / FIOnotes.txt
Created June 22, 2017 17:05
FIO 4K vs 64K throughput
FIO 4K vs 64K file size throughput. 64 and 512 file counts. Loopback NFS mount on single node cluster. fio and NFS/MFS on same machine. 16 threads in fio client.
rm smallFiles.*; fio --name=smallFiles --numjobs=16 --nrfiles=$[8*8] --filesize=4K --bs=4k --thread=1 --direct=1 --rw=write > fio-smallFiles-64x4K-16T-nfs-mfs.log
rm smallFiles.*; fio --name=smallFiles --numjobs=16 --nrfiles=$[8*8*8] --filesize=4K --bs=4k --thread=1 --direct=1 --rw=write > fio-smallFiles-512x4K-16T-nfs-mfs.log
rm smallFiles.*; fio --name=smallFiles --numjobs=16 --nrfiles=$[8*8*8] --filesize=64K --bs=4k --thread=1 --direct=1 --rw=write > fio-smallFiles-512x64K-16T-nfs-mfs.log
rm smallFiles.*; fio --name=smallFiles --numjobs=16 --nrfiles=$[8*8] --filesize=64K --bs=4k --thread=1 --direct=1 --rw=write > fio-smallFiles-64x64K-16T-nfs-mfs.log
==> fio-smallFiles-512x4K-16T-nfs-mfs.log <==
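Each run's total footprint can be sanity-checked from the parameters. Note that fio's `--nrfiles` is per job, so the total file count is numjobs × nrfiles; a quick sketch for the 512x64K run:

```shell
# Total data written per run = numjobs * nrfiles (per job) * filesize
numjobs=16
nrfiles=$[8*8*8]    # 512 files per job
filesize_kb=64
echo "files: $(( numjobs * nrfiles ))"                       # 8192 files
echo "total: $(( numjobs * nrfiles * filesize_kb / 1024 )) MiB"  # 512 MiB
```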
jbenninghoff / aws-cluster.sh
Last active February 10, 2017 14:51
Create AWS nodes for MapR cluster
#!/bin/bash
# jbenninghoff 2015-Nov-13 vi: set ai et sw=3 tabstop=3 retab:
cnt=5 #cluster node count, does not include edge node
keyname=xxx #replace with your AWS Key Name
# Make sure your AWS firewall (security-group) does not block DNS (UDP)
# Launch AWS server(VM) to host Mesos Master, DNS, Marathon, Myriad host (edge node)
edgeID=$(aws ec2 run-instances --image-id ami-d2c924b2 --count 1 --instance-type m4.xlarge --key-name $keyname --security-group-ids sg-6730dd03 --subnet-id subnet-8deb24fa --block-device-mapping "DeviceName=/dev/sda1,Ebs={VolumeSize=300}" --query 'Instances[0].InstanceId')
jbenninghoff / fixes-via-clush.txt
Last active November 22, 2016 19:22
Fix list and clush fixes
Fix list from cluster-audit.sh findings:
Push MapR repos from node1 to all other nodes.
yum install dstat jq nmap nc tmux tuned vim xml2 zsh
Set SELINUX=disabled in /etc/selinux/config and run setenforce Permissive
chkconfig iptables off
echo 'vm.swappiness = 1' >> /etc/sysctl.conf
Exclude /tmp/hadoop-mapr/nm-local-dir from tmpwatch
Set the new hostname in /etc/sysconfig/network
Add all cluster hosts to /etc/hosts
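Most of the list above can be fanned out with clush; the sysctl line benefits from a grep guard so reruns don't append duplicate entries. A sketch, assuming clush is configured to reach all nodes (the guard itself is demonstrated on a temp file and runs anywhere):

```shell
# Fan the fixes out with clush (-a = all nodes, -b = gather identical output); assumed configured:
#   clush -ab 'yum -y install dstat jq nmap nc tmux tuned vim xml2 zsh'
#   clush -ab 'chkconfig iptables off'
# Idempotent append guard for the sysctl line, demonstrated on a temp file:
f=$(mktemp)
grep -q 'vm.swappiness' "$f" || echo 'vm.swappiness = 1' >> "$f"
grep -q 'vm.swappiness' "$f" || echo 'vm.swappiness = 1' >> "$f"   # second run adds nothing
cat "$f"
rm -f "$f"
```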
jbenninghoff / LVM mods
Last active February 10, 2023 23:16
Linux LVM modifications for MapR
#!/bin/bash
umount /home
lsblk -P /dev/sdb | grep -o MOUNTPOINT.*
lvremove -f vg_$(hostname -s|tr A-Z a-z)/lv_home
parted /dev/sdb -- rm 1
grep home /etc/fstab
sed -i.bak '/home/d' /etc/fstab
vgreduce -f vg_$(hostname -s|tr A-Z a-z) --removemissing
vgreduce -f vg_${HOSTNAME,,} --removemissing
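The last two vgreduce lines do the same thing two ways: `${HOSTNAME,,}` (bash 4+) lowercases just like the `tr A-Z a-z` pipeline, as long as the hostname carries no domain part. A quick check with a made-up hostname:

```shell
# ${VAR,,} (bash 4+) and `tr A-Z a-z` produce the same lowercase volume-group name
HOSTNAME=Node01.Example.COM
short=${HOSTNAME%%.*}                     # strip domain, like `hostname -s`
echo "vg_$(echo "$short" | tr A-Z a-z)"   # vg_node01
echo "vg_${short,,}"                      # vg_node01
```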
jbenninghoff / sh
Last active July 13, 2017 01:59
Bash Idioms template
#!/bin/bash
#jbenninghoff 2015-Dec-28 vi: set ai et sw=3 tabstop=3 retab:
: << '--BLOCK-COMMENT--'
Bash idioms template
Save as ~/.vim/templates/sh
Above requires vim templates plugin: https://github.com/ap/vim-templates
Useful site for lots of Bash info: http://wiki.bash-hackers.org/
--BLOCK-COMMENT--
jbenninghoff / ycsbtest.sh
Created December 3, 2015 21:18
YCSB test run script
#!/bin/bash
# jbenninghoff 2013-Sep-13 vi: set ai et sw=3 tabstop=3:
# Assumes MapR YCSB branch to handle large tables: https://github.com/mapr/YCSB
# Assumes MapR HBase client software installed. Can be an edge/gateway node
export HBASE_CLASSPATH=core/lib/core-0.1.4.jar:hbase-binding/lib/hbase-binding-0.1.4.jar
table=/benchmarks/usertable #YCSB uses table named 'usertable' by default
thrds=4
count=$[100*1000*1000] #table row count
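The variables above would feed a YCSB load run. A sketch of assembling the command from them; the flag names (`-P` workload file, `-p recordcount`, `-threads`) follow the stock YCSB CLI and are assumptions here, not taken from the script:

```shell
# Build the load command from the script's variables; flag names assume the stock YCSB CLI
table=/benchmarks/usertable
thrds=4
count=$[100*1000*1000]    # 100 million rows
cmd="bin/ycsb load hbase -P workloads/workloada -p table=$table -p recordcount=$count -threads $thrds"
echo "$cmd"
```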
jbenninghoff / mesos-install.sh
Created December 2, 2015 00:00
Mesos install steps
#!/bin/bash
echo 'Script not ready for execution. Copy and paste line by line into a shell instead'
echo 'Assumes clush installed and /etc/hosts propagated to all nodes'
exit 1
#Configure edge node as MapR client
vi /etc/yum.repos.d/maprtech.repo # We should have rpm to install+enable like EPEL rpm
yum clean all
#Ensure iptables (firewall) is off and disabled everywhere
jbenninghoff / TestHBase.java
Created August 18, 2015 19:00
HBase Test Case
/*
* Compile and run with:
* javac -cp $(hbase classpath) TestHBase.java
* java -cp .:$(hbase classpath) TestHBase
*/
import java.net.*;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.*;
import org.apache.hadoop.hbase.client.*;