Yuvaraj L (skyrocknroll)
skyrocknroll / connect.md
Last active September 18, 2017 14:39
kafka connect debezium setup
# config.storage.topic=connect-configs
$ bin/kafka-topics --create --zookeeper localhost:2181 --topic connect-configs --replication-factor 3 --partitions 1 --config cleanup.policy=compact

# offset.storage.topic=connect-offsets
$ bin/kafka-topics --create --zookeeper localhost:2181 --topic connect-offsets --replication-factor 3 --partitions 50 --config cleanup.policy=compact

# status.storage.topic=connect-status
$ bin/kafka-topics --create --zookeeper localhost:2181 --topic connect-status --replication-factor 3 --partitions 50 --config cleanup.policy=compact
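The topic names above correspond to the Kafka Connect distributed worker's storage settings. A minimal worker properties file referencing them might look like this (a sketch — the bootstrap servers, group id, and converter choices are assumptions, not from the original gist):

```properties
# connect-distributed.properties (sketch; adjust bootstrap.servers and group.id)
bootstrap.servers=localhost:9092
group.id=connect-cluster
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-status
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
```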

Job Anti-Affinity in Action

Overview

This guide walks you through creating and executing a job that demonstrates Nomad's job anti-affinity rules and, on clusters with memory-limited Nomad clients, filtering based on resource exhaustion.

Sample Environment

  • One Nomad Server Node
  • Three Nomad Client Nodes
    • 768 MB RAM total (providing 761 MB RAM in nomad node-status -self)
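Anti-affinity can be exercised by asking for more allocations than the cluster can comfortably spread: with a count higher than the node count, the scheduler penalizes co-locating allocations of the same job, and the memory-limited clients eventually exhaust. A sketch of such a job (job, group, and task names are made up for illustration):

```hcl
job "anti-affinity-demo" {
  datacenters = ["dc1"]
  type        = "service"

  group "cache" {
    # More instances than client nodes: the scheduler scores
    # co-located allocations of the same job lower.
    count = 4

    task "redis" {
      driver = "docker"
      config {
        image = "redis:3.2"
      }
      resources {
        # MB; three of these nearly exhaust a 768 MB client
        memory = 256
      }
    }
  }
}
```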
skyrocknroll / help.md
Created July 11, 2017 18:53
setting up email server smtp
skyrocknroll / envoy.json
Created June 11, 2017 20:24
envoy grpc bridge
{
  "listeners": [
    {
      "address": "tcp://0.0.0.0:5050",
      "filters": [
        {
          "type": "read",
          "name": "http_connection_manager",
          "config": {
            "codec_type": "auto",
skyrocknroll / envoy-r.json
Last active June 11, 2017 18:27
envoy (envoy.json) ==== service envoy (envoy-r.json)
{
  "listeners": [
    {
      "address": "tcp://0.0.0.0:4321",
      "filters": [
        {
          "type": "read",
          "name": "http_connection_manager",
          "config": {
            "codec_type": "auto",
skyrocknroll / nginx-tuning.md
Created June 9, 2017 18:50 — forked from denji/nginx-tuning.md
NGINX tuning for best performance

Moved to git repository: https://github.com/denji/nginx-tuning

NGINX Tuning For Best Performance

For this configuration you can use any web server you like; I chose nginx because it is the one I work with most.

Generally, a properly configured nginx can handle up to 400K to 500K requests per second (clustered); the most I have seen is 50K to 80K requests per second (non-clustered) at about 30% CPU load. That was on 2 x Intel Xeon with HyperThreading enabled, but it should work without problems on slower machines.

Keep in mind that this config was used in a testing environment, not in production, so you will need to adapt most of these settings to suit your own servers.

skyrocknroll / opensslcheck.md
Created May 22, 2017 10:37
openssl ssl verify

openssl s_client -connect "<host>:<port>" -showcerts -servername "<host>"

skyrocknroll / telegraf.conf
Last active May 23, 2017 17:26
nomad consul telemetry telegraf template
templates = [
"consul.consul.* measurement.field*",
"consul.raft.* measurement.field*",
"consul.memberlist.* measurement.field*",
"consul.runtime.* measurement.field*",
"consul.serf.* measurement.field*",
"consul.*.*.*.*.memberlist.* measurement.hostname.hostname.hostname.hostname.field*",
"consul.*.*.*.*.runtime.* measurement.hostname.hostname.hostname.hostname.field*",
"consul.*.*.*.*.consul.* measurement.hostname.hostname.hostname.hostname.field*",
"nomad.runtime.* measurement.field*",
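These are Graphite parser templates: the left side of each entry filters a dot-separated metric name, and the right side maps its segments to measurement, tags, and field. As I read Telegraf's graphite template rules, repeated tag names (the four `hostname` entries here, covering dotted hostnames like EC2 internal DNS names) are joined with the separator, and `field*` collapses all remaining segments into one field name. The sample metric below is made up for illustration:

```toml
# Hypothetical metric emitted by consul telemetry:
#   consul.ip-10-0-0-1.us-west-2.compute.internal.runtime.alloc_bytes
# matched by the template:
#   "consul.*.*.*.*.runtime.* measurement.hostname.hostname.hostname.hostname.field*"
# yields roughly:
#   measurement: consul
#   tag:         hostname=ip-10-0-0-1_us-west-2_compute_internal
#   field:       runtime_alloc_bytes
```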
skyrocknroll / telegraf.conf
Created May 21, 2017 06:39 — forked from burdandrei/telegraf.conf
Receive consul and nomad telemetry in influx in usable form
[agent]
interval = "10s"
round_interval = true
metric_batch_size = 1000
metric_buffer_limit = 10000
collection_jitter = "0s"
flush_interval = "10s"
flush_jitter = "3s"
precision = ""
debug = false
from locust.stats import RequestStats
from locust import Locust, TaskSet, task, events
import os
import sys, getopt, argparse
from random import randint,random
import json
from locust.events import EventHook
import requests
import re
import grpc