Stefan List (Condla)

🎯
Focusing
View GitHub Profile
Condla / collector.yaml
Created December 1, 2023 11:01
Example configuration of an AWS Lambda ADOT OTel Collector to ingest traces from AWS Lambda functions into Grafana Cloud
# collector.yaml in the root directory
# Set the environment variable 'OPENTELEMETRY_COLLECTOR_CONFIG_FILE' to '/var/task/collector.yaml'
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "localhost:4317"
      http:
        endpoint: "localhost:4318"
Condla / nginx-otel.conf
Last active November 27, 2023 12:14
This is a working example config of an nginx server emitting OTel traces. To test this setup, run curl localhost:8080/produce_200 or any of the other proxy_pass locations included in the config.
load_module modules/ngx_otel_module.so;
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
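The preview cuts off at the events block; the tracing-relevant part of such a config lives in the http block. A rough sketch follows, using the port 8080 route mentioned in the description; the collector endpoint, service name, and upstream are assumptions, not taken from the gist:

http {
    otel_exporter {
        endpoint localhost:4317;           # assumed OTLP/gRPC collector endpoint
    }
    otel_service_name nginx-demo;          # hypothetical service name
    otel_trace on;

    server {
        listen 8080;
        location /produce_200 {
            otel_trace_context inject;
            proxy_pass http://localhost:9090/;   # hypothetical upstream returning 200
        }
    }
}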
Condla / audit-dashboard.json
Created November 13, 2023 13:53
Audit Dashboard
{
  "annotations": {
    "list": [
      {
        "builtIn": 1,
        "datasource": {
          "type": "grafana",
          "uid": "-- Grafana --"
        },
        "enable": true,
import { sleep, group, check } from 'k6'
import http from 'k6/http'
import { chromium } from 'k6/x/browser';
import exec from 'k6/execution';
import { SharedArray } from 'k6/data';
import { vu } from 'k6/execution';
let user = "Stefan";
export const options = {
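// The preview stops as the options object opens; a minimal sketch of how the
// script might continue. Everything below (vus, duration, target URL, check)
// is an assumption for illustration, not taken from the gist.
  vus: 1,
  duration: '30s',
};

export default function () {
  group('homepage', () => {
    const res = http.get('http://localhost:8080/'); // hypothetical target URL
    check(res, { 'status is 200': (r) => r.status === 200 });
  });
  sleep(1);
}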
Condla / METRON_SOLR_COLLECTION_README.md
Last active July 26, 2018 21:19
Solr default schema for onboarding a new data source in Metron
./bin/solr create -c <collection-name> -d <path/to/directory> 
  • The referenced directory above must contain schema.xml and solrconfig.xml

  • Note that I also added schema.xml.j2 and solrconfig.xml.j2

  • These contain the variables:

  • {{ item.solr_collection_name }}
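As a concrete (and purely hypothetical) instance of the create command above, onboarding a data source named squid from a directory containing the rendered schema.xml and solrconfig.xml could look like:

./bin/solr create -c squid -d /opt/metron/schemas/squid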

Condla / kafka-cheat-sheet.md
Last active July 18, 2018 08:51 — forked from ursuad/kafka-cheat-sheet.md
Quick command reference for Apache Kafka

Kafka Topics

List existing topics

bin/kafka-topics.sh --zookeeper localhost:2181 --list

Describe a topic

bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic mytopic

Purge a topic

bin/kafka-configs.sh --zookeeper localhost:2181 --alter --entity-name mytopic --entity-type topics --config retention.ms=1000

... wait a minute ...
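The preview stops at the wait step; the usual follow-up in this purge procedure is to remove the temporary retention override once the messages have been deleted. This command is my addition, not part of the gist preview:

bin/kafka-configs.sh --zookeeper localhost:2181 --alter --entity-name mytopic --entity-type topics --delete-config retention.ms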

Condla / 00_Pig_Examples.md
Last active March 30, 2018 11:11
An Apache Pig script that shows how to read data from Apache HBase, sort it by some value and store it as CSV.

Pig Examples

You can run the Pig examples below with the following commands. Note: you need Pig, Tez, HDFS, and YARN set up, and the HBase and Hive tables must exist with the names used in the scripts.

hive_to_hbase.pig

Run:

pig -Dtez.queue.name=myQueue -x tez -useHCatalog -param "my_datetime=2018-03-30_13:05:21" -f hive_to_hbase.pig 
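The script itself is not shown in the preview. Based only on the gist's top-level description (read from HBase, sort by some value, store as CSV), a rough sketch of such a Pig script, with the table name, column family/qualifiers, and output path all hypothetical:

-- hypothetical table, columns and output path
events = LOAD 'hbase://my_table'
         USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('cf:name cf:value', '-loadKey true')
         AS (rowkey:chararray, name:chararray, value:long);
sorted = ORDER events BY value DESC;
STORE sorted INTO '/tmp/my_table_sorted' USING PigStorage(',');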
#!/bin/bash
# specify the cluster names and don't forget the last "/" (!)
#export FULL_PATH1="hdfs://cluster1:8020/path/to/source/dir/"
#export FULL_PATH2="hdfs://cluster2:8020/target/dir/"
# compute the number of path components by counting "/" characters
slash="/"
i1=$(( $(grep -o "$slash" <<< "$FULL_PATH1" | wc -l) + 1 ))
i2=$(( $(grep -o "$slash" <<< "$FULL_PATH2" | wc -l) + 1 ))
#!/usr/bin/bash
# Dump the Hive Metastore MySQL database from the source cluster, rewrite the
# cluster name inside the dump, and load it into the target cluster's MySQL.
export DUMP_PATH=/tmp
mysqldump -u"$HIVE_USER" -p"$HIVE_PASSWORD" -h "$MYSQL_HOST" hive > "$DUMP_PATH/hive.dump"
sed -i "s/$CLUSTERNAME/$CLUSTERNAME2/g" "$DUMP_PATH/hive.dump"
# recreate the database on the target so the dump is loaded into a clean schema
echo 'DROP DATABASE hive; CREATE DATABASE hive; USE hive;' | cat - "$DUMP_PATH/hive.dump" > "$DUMP_PATH/temp" && mv "$DUMP_PATH/temp" "$DUMP_PATH/hive.dump"
mysql -u"$HIVE_USER2" -p"$HIVE_PASSWORD2" -h "$MYSQL_HOST2" hive < "$DUMP_PATH/hive.dump"
rm "$DUMP_PATH/hive.dump"
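The script reads its connection details from environment variables that are never set in the preview; a hypothetical invocation (all values made up, including the script name) could look like:

export HIVE_USER=hive HIVE_PASSWORD=secret MYSQL_HOST=db1.example.com
export HIVE_USER2=hive HIVE_PASSWORD2=secret2 MYSQL_HOST2=db2.example.com
export CLUSTERNAME=cluster1 CLUSTERNAME2=cluster2
bash migrate_hive_metastore.sh   # hypothetical file name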