Keybase proof
I hereby claim:
- I am condla on github.
- I am condla (https://keybase.io/condla) on keybase.
- I have a public key ASBd4hdolDFQWih1NIKLAX4JuTy5Xv9YYgSuMkdQjuqqXQo
To claim this, I am signing this object:
hdp-utility 1 52.215.53.220
hdp-worker 1 34.242.81.246
hdp-worker 1
set nocompatible              " be iMproved, required
filetype off                  " required

" set the runtime path to include Vundle and initialize
set rtp+=~/.vim/bundle/Vundle.vim
call vundle#begin()
" alternatively, pass a path where Vundle should install plugins
"call vundle#begin('~/some/path/here')

" let Vundle manage Vundle, required
Plugin 'VundleVim/Vundle.vim'

" all plugins must be added before the following lines
call vundle#end()             " required
filetype plugin indent on     " required
#!/usr/bin/bash
# Dump the Hive metastore database from the source MySQL host, rewrite the
# cluster name, and load the result into the target metastore database.
export DUMP_PATH=/tmp
mysqldump -u$HIVE_USER -p$HIVE_PASSWORD -h $MYSQL_HOST hive > $DUMP_PATH/hive.dump
# replace all references to the old cluster name with the new one
sed -i s/$CLUSTERNAME/$CLUSTERNAME2/g $DUMP_PATH/hive.dump
# recreate the hive database on the target before importing the dump
echo 'DROP DATABASE hive; CREATE DATABASE hive; USE hive;' | cat - $DUMP_PATH/hive.dump > $DUMP_PATH/temp && mv $DUMP_PATH/temp $DUMP_PATH/hive.dump
mysql -u$HIVE_USER2 -p$HIVE_PASSWORD2 -h $MYSQL_HOST2 hive < $DUMP_PATH/hive.dump
rm $DUMP_PATH/hive.dump
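The script above assumes a handful of environment variables; a minimal sketch of the required setup with placeholder values (the variable names come from the script, the hosts and credentials below are made up):
export HIVE_USER=hive HIVE_PASSWORD=secret MYSQL_HOST=source-db.example.com
export HIVE_USER2=hive HIVE_PASSWORD2=secret MYSQL_HOST2=target-db.example.com
export CLUSTERNAME=cluster1 CLUSTERNAME2=cluster2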
#!/bin/bash
# specify the source and target paths and don't forget the trailing "/" (!)
#export FULL_PATH1="hdfs://cluster1:8020/path/to/source/dir/"
#export FULL_PATH2="hdfs://cluster2:8020/target/dir/"
# count the "/" separators in each path (plus one) to get the field index
dash="/"
i1=$(( $(grep -o "$dash" <<< "$FULL_PATH1" | wc -l) + 1 ))
i2=$(( $(grep -o "$dash" <<< "$FULL_PATH2" | wc -l) + 1 ))
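A hypothetical continuation (not part of the original gist) showing how the computed field index can be used, for example to grab the last non-empty path component; with a trailing "/" it sits at field i1 - 1:
export FULL_PATH1="hdfs://cluster1:8020/path/to/source/dir/"
i1=$(( $(grep -o "/" <<< "$FULL_PATH1" | wc -l) + 1 ))      # 7 slashes + 1 = 8
last_dir1=$(cut -d"/" -f$((i1 - 1)) <<< "$FULL_PATH1")      # field 7 -> "dir"
echo "$last_dir1"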
You can run the Pig examples below with the following commands. Note: you need to have Pig, Tez, HDFS, and YARN set up, and the HBase and Hive tables referenced in the scripts must already exist.
Run:
pig -Dtez.queue.name=myQueue -x tez -useHCatalog -param "my_datetime=2018-03-30_13:05:21" -f hive_to_hbase.pig
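Inside hive_to_hbase.pig the value is available as $my_datetime through Pig's parameter substitution. A hedged variant (not from the original gist) that passes the current timestamp instead of a hard-coded one:
pig -Dtez.queue.name=myQueue -x tez -useHCatalog -param "my_datetime=$(date +%Y-%m-%d_%H:%M:%S)" -f hive_to_hbase.pig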
bin/kafka-topics.sh --zookeeper localhost:2181 --list
bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic mytopic
bin/kafka-configs.sh --zookeeper localhost:2181 --alter --entity-name mytopic --entity-type topics --config retention.ms=1000
... wait a minute ...
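The retention.ms=1000 override makes Kafka delete the topic's old log segments almost immediately, which effectively purges the topic. Once that has happened, remove the override again so the topic falls back to the broker default (same tool as above; mytopic is the example topic name):
bin/kafka-configs.sh --zookeeper localhost:2181 --alter --entity-name mytopic --entity-type topics --delete-config retention.ms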
./bin/solr create -c <collection-name> -d <path/to/directory>
The directory referenced above must contain a schema.xml and a solrconfig.xml.
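A hypothetical example invocation (the collection name and config path are placeholders, not taken from the gist):
./bin/solr create -c squid -d /opt/solr/server/solr/configsets/squid_configs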
Note that I also added schema.xml.j2 and solrconfig.xml.j2
These templates contain variables such as:
{{ item.solr_collection_name }}
More details on how to create a template here: https://datahovel.com/2018/11/27/how-to-define-elastic-search-templates-for-apache-metron/
Command to upload the template:
export ELASTICSEARCH_MASTER=condla0.field.hortonworks.com:9200
export PARSER_NAME=squid
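The actual upload command is cut off here; a hedged sketch of a typical Elasticsearch template upload using the two variables above (the template name and file name are assumptions):
curl -XPUT "http://$ELASTICSEARCH_MASTER/_template/${PARSER_NAME}_index" -H 'Content-Type: application/json' -d @${PARSER_NAME}_index.template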
// k6 imports: core helpers, HTTP client, browser extension, execution info, shared data
import { sleep, group, check } from 'k6';
import http from 'k6/http';
import { chromium } from 'k6/x/browser';   // requires the xk6-browser extension
import exec from 'k6/execution';
import { SharedArray } from 'k6/data';
import { vu } from 'k6/execution';

let user = "Stefan";

export const options = {