Stefan Dunkler (Condla)
kittyshop-mouse.js
import { sleep, group, check } from 'k6'
import http from 'k6/http'
import { chromium } from 'k6/x/browser';
import exec from 'k6/execution';
import { SharedArray } from 'k6/data';
import { vu } from 'k6/execution';
let user = "Stefan";
export const options = {
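  // hypothetical continuation: the gist preview is truncated at the open
  // brace above, so this is only a sketch of a minimal single-VU scenario
  scenarios: {
    ui: {
      executor: 'shared-iterations',
      vus: 1,
      iterations: 1,
    },
  },
};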
METRON_ES_TEMPLATE.md
Condla / METRON_SOLR_COLLECTION_README.md
Last active July 26, 2018 21:19
Solr default schema for onboarding a new data source in Metron
./bin/solr create -c <collection-name> -d <path/to/directory> 
  • The referenced directory must contain schema.xml and solrconfig.xml.
  • Note that I also added schema.xml.j2 and solrconfig.xml.j2.
  • These templates contain the variable {{ item.solr_collection_name }} (see the sketch below).
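A hypothetical end-to-end example, assuming the .j2 templates have already been rendered (e.g. by Ansible, which substitutes {{ item.solr_collection_name }}); the collection name and directory are illustrative, not from the original README:

./bin/solr create -c bro -d /opt/solr/metron_configs/bro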

Condla / kafka-cheat-sheet.md
Last active July 18, 2018 08:51 — forked from ursuad/kafka-cheat-sheet.md
Quick command reference for Apache Kafka

Kafka Topics

List existing topics

bin/kafka-topics.sh --zookeeper localhost:2181 --list

Describe a topic

bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic mytopic

Purge a topic

bin/kafka-configs.sh --zookeeper localhost:2181 --alter --entity-name mytopic --entity-type topics --config retention.ms=1000

... wait a minute ...
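... then remove the retention override once the purge has taken effect. The preview is truncated before this step; a hedged sketch using the same CLI and the mytopic example from above:

bin/kafka-configs.sh --zookeeper localhost:2181 --alter --entity-name mytopic --entity-type topics --delete-config retention.ms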

hdfs-distcp-diff.sh
#!/bin/bash
# specify the full source and target paths and don't forget the trailing "/" (!)
#export FULL_PATH1="hdfs://cluster1:8020/path/to/source/dir/"
#export FULL_PATH2="hdfs://cluster2:8020/target/dir/"
# count the slashes in each base path: slashes + 1 is the number of
# "/"-separated fields, i.e. the field where the relative path starts
slash="/"
i1=$(( $(grep -o "$slash" <<< "$FULL_PATH1" | wc -l) + 1 ))
i2=$(( $(grep -o "$slash" <<< "$FULL_PATH2" | wc -l) + 1 ))
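A hypothetical continuation (the preview is truncated here): with i1 and i2, cut can strip each base prefix so the two listings can be diffed by relative path. The temp-file names are illustrative, not from the original gist.

hdfs dfs -ls -R "$FULL_PATH1" | awk '{print $NF}' | cut -d'/' -f"$i1"- | sort > /tmp/list1
hdfs dfs -ls -R "$FULL_PATH2" | awk '{print $NF}' | cut -d'/' -f"$i2"- | sort > /tmp/list2
diff /tmp/list1 /tmp/list2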
hive-schema-copy.sh
#!/usr/bin/bash
# dump the Hive metastore DB from the source MySQL host
export DUMP_PATH=/tmp
mysqldump -u"$HIVE_USER" -p"$HIVE_PASSWORD" -h "$MYSQL_HOST" hive > $DUMP_PATH/hive.dump
# rewrite the cluster name embedded in the metastore (e.g. in HDFS URIs)
sed -i "s/$CLUSTERNAME/$CLUSTERNAME2/g" $DUMP_PATH/hive.dump
# recreate the hive database on the target before loading the dump
echo 'DROP DATABASE hive; CREATE DATABASE hive; USE hive;' | cat - $DUMP_PATH/hive.dump > $DUMP_PATH/temp && mv $DUMP_PATH/temp $DUMP_PATH/hive.dump
mysql -u"$HIVE_USER2" -p"$HIVE_PASSWORD2" -h "$MYSQL_HOST2" hive < $DUMP_PATH/hive.dump
rm $DUMP_PATH/hive.dump
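A hypothetical invocation; the variable names are the ones the script reads, the values are placeholders:

export HIVE_USER=hive HIVE_PASSWORD=secret MYSQL_HOST=source-db.example.com CLUSTERNAME=cluster1
export HIVE_USER2=hive HIVE_PASSWORD2=secret MYSQL_HOST2=target-db.example.com CLUSTERNAME2=cluster2
./hive-schema-copy.sh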
Condla / .vimrc
Created February 7, 2018 15:53
my vimrc :-)
set nocompatible " be iMproved, required
filetype off " required
" set the runtime path to include Vundle and initialize
set rtp+=~/.vim/bundle/Vundle.vim
call vundle#begin()
" alternatively, pass a path where Vundle should install plugins
"call vundle#begin('~/some/path/here')
" let Vundle manage Vundle, required
Condla / hadoop-workshop-nodes.md
Last active December 12, 2017 07:02
List the nodes of all Participants

Nodes

Participant 1:

hdp-utility 1   52.215.53.220
hdp-worker 1    34.242.81.246
hdp-worker 1
keybase.md

Keybase proof

I hereby claim:

  • I am condla on github.
  • I am condla (https://keybase.io/condla) on keybase.
  • I have a public key ASBd4hdolDFQWih1NIKLAX4JuTy5Xv9YYgSuMkdQjuqqXQo

To claim this, I am signing this object:

Condla / 00_Pig_Examples.md
Last active March 30, 2018 11:11
An Apache Pig script that shows how to read data from Apache HBase, sort it by some value and store it as CSV.

Pig Examples

You can run the Pig examples below with the following commands. Note: you need Pig, Tez, HDFS, and YARN set up, and the HBase and Hive tables referenced in the scripts must already exist.

hive_to_hbase.pig

Run:

pig -Dtez.queue.name=myQueue -x tez -useHCatalog -param "my_datetime=2018-03-30_13:05:21" -f hive_to_hbase.pig
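A hypothetical variant of the same invocation that substitutes the current timestamp for the hard-coded one:

pig -Dtez.queue.name=myQueue -x tez -useHCatalog -param "my_datetime=$(date +%Y-%m-%d_%H:%M:%S)" -f hive_to_hbase.pig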