Stefan List (Condla)
Condla / collector.yaml
Created December 1, 2023 11:01
Example configuration of an AWS Lambda ADOT OTel Collector to ingest traces of AWS Lambda functions into Grafana Cloud
#collector.yaml in the root directory
#Set an environment variable 'OPENTELEMETRY_COLLECTOR_CONFIG_FILE' to '/var/task/collector.yaml'
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "localhost:4317"
      http:
        endpoint: "localhost:4318"
Condla / nginx-otel.conf
Last active November 27, 2023 12:14
This is a working example config of an nginx server emitting OTel traces. To test this setup, run curl localhost:8080/produce_200 or any of the other included proxy_pass locations.
load_module modules/;
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/;
events {
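    # (gist preview is truncated here; the rest below is a guessed minimal
    # sketch of the OTel-relevant parts, not the original file)
    worker_connections 1024;
}

http {
    # ngx_otel_module directives: export spans to a local collector
    # (endpoint assumed)
    otel_exporter {
        endpoint localhost:4317;
    }
    otel_service_name nginx;

    server {
        listen 8080;

        location /produce_200 {
            otel_trace on;
            otel_trace_context inject;
            # hypothetical upstream; the original proxies to several such routes
            proxy_pass;
        }
    }
}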
Condla / audit-dashboard.json
Created November 13, 2023 13:53
Audit Dashboard
"annotations": {
"list": [
"builtIn": 1,
"datasource": {
"type": "grafana",
"uid": "-- Grafana --"
"enable": true,
import { sleep, group, check } from 'k6'
import http from 'k6/http'
import { chromium } from 'k6/x/browser';
import { SharedArray } from 'k6/data';
import exec from 'k6/execution';

let user = "Stefan";
export const options = {
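  // (gist preview is truncated here; what follows is a guessed minimal
  // completion, not the original code -- values and URL are assumptions)
  vus: 1,
  duration: '30s',
};

// hypothetical use of the imports above: pick a user per VU and hit a test URL
const users = new SharedArray('users', () => [user, 'Maria', 'Alex']);

export default function () {
  group('smoke', () => {
    const name = users[exec.vu.idInTest % users.length];
    const res = http.get(`https://test.k6.io/?user=${encodeURIComponent(name)}`);
    check(res, { 'status is 200': (r) => r.status === 200 });
    sleep(1);
  });
}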
Condla /
Last active July 26, 2018 21:19
Solr default schema for onboarding a new data source in Metron
./bin/solr create -c <collection-name> -d <path/to/directory>

  • The referenced directory must contain schema.xml and solrconfig.xml (example invocation below)
  • Note that I also added schema.xml.j2 and solrconfig.xml.j2
  • These templates contain the variables:
  • {{ item.solr_collection_name }}
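A hypothetical invocation for a Metron sensor collection named bro, assuming the rendered configs live under ./metron_solr/bro (both names are placeholders, not from the gist):

./bin/solr create -c bro -d ./metron_solr/bro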

Condla /
Last active July 18, 2018 08:51 — forked from ursuad/
Quick command reference for Apache Kafka

Kafka Topics

List existing topics

bin/ --zookeeper localhost:2181 --list

Describe a topic

bin/ --zookeeper localhost:2181 --describe --topic mytopic

Purge a topic

bin/ --zookeeper localhost:2181 --alter --entity-name mytopic --entity-type topics --config retention.ms=1000

... wait a minute ...
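The preview cuts off here. In the standard version of this recipe, the follow-up step removes the temporary retention override once the messages have been deleted (reconstructed from the usual pattern, not visible in the preview):

bin/ --zookeeper localhost:2181 --alter --entity-name mytopic --entity-type topics --delete-config retention.ms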

# specify the cluster names and don't forget the last "/" (!)
#export FULL_PATH1="hdfs://cluster1:8020/path/to/source/dir/"
#export FULL_PATH2="hdfs://cluster2:8020/target/dir/"
# count path separators to get the field index of the first path
# component below each base directory ($dash is not defined in the
# preview; "/" is the assumed value)
dash="/"
i1=$(( $(grep -o "$dash" <<< "$FULL_PATH1" | wc -l) + 1 ))
i2=$(( $(grep -o "$dash" <<< "$FULL_PATH2" | wc -l) + 1 ))
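A guess at how these indices might be used, assuming the goal is to compare the two directory trees by their paths relative to each base directory (hypothetical continuation, not in the preview):

# list both trees, keep only the path column, strip the cluster prefix,
# and diff the relative paths
hdfs dfs -ls -R "$FULL_PATH1" | awk '{print $NF}' | cut -d/ -f${i1}- | sort > /tmp/list1
hdfs dfs -ls -R "$FULL_PATH2" | awk '{print $NF}' | cut -d/ -f${i2}- | sort > /tmp/list2
diff /tmp/list1 /tmp/list2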
# dump the Hive metastore DB from the source MySQL host
export DUMP_PATH=/tmp
mysqldump -u$HIVE_USER -p$HIVE_PASSWORD -h $MYSQL_HOST hive > $DUMP_PATH/hive.dump
# prepend a drop/recreate so the target database starts clean
echo 'DROP DATABASE hive; CREATE DATABASE hive; USE hive;' | cat - $DUMP_PATH/hive.dump > $DUMP_PATH/temp && mv $DUMP_PATH/temp $DUMP_PATH/hive.dump
# load the dump into the target metastore and clean up
mysql -u$HIVE_USER2 -p$HIVE_PASSWORD2 -h $MYSQL_HOST2 hive < $DUMP_PATH/hive.dump
rm $DUMP_PATH/hive.dump
Condla / .vimrc
Created February 7, 2018 15:53
my vimrc :-)
set nocompatible " be iMproved, required
filetype off " required
" set the runtime path to include Vundle and initialize
set rtp+=~/.vim/bundle/Vundle.vim
call vundle#begin()
" alternatively, pass a path where Vundle should install plugins
"call vundle#begin('~/some/path/here')
" let Vundle manage Vundle, required