aspirina765 / benchmarking-tools.md
Created December 11, 2024 15:51 — forked from aliesbelik/benchmarking-tools.md
Benchmarking & load testing tools

wrk is a modern HTTP benchmarking tool capable of generating significant load when run on a single multi-core CPU. It combines a multithreaded design with scalable event notification systems such as epoll and kqueue.

Basic Usage

wrk -t12 -c400 -d30s http://127.0.0.1:8080/index.html
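
To also report a latency distribution, the same run can be given wrk's stock --latency flag:

wrk -t12 -c400 -d30s --latency http://127.0.0.1:8080/index.html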

aspirina765 / openshift-cheatsheet.md
Created August 28, 2023 10:20 — forked from rafaeltuelho/openshift-cheatsheet.md
My Openshift Cheatsheet

Project Quotas, Limits and Templates

  • Cluster Quota

oc create clusterquota env-qa \
    --project-label-selector environment=qa \
    --hard pods=10,services=5

oc create clusterquota user-qa \
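
A created cluster quota and its current usage can be inspected afterwards (using the env-qa name from above):

oc describe clusterresourcequota env-qa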
aspirina765 / stress.sh
Created July 13, 2023 15:09 — forked from mikepfeiffer/stress.sh
Install Stress Utility on Amazon Linux 2
sudo amazon-linux-extras install epel -y
sudo yum install stress -y
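
Once installed, a quick sanity check; these flags are standard options from the stress man page:

# 4 CPU spinners and 2 memory workers of 256 MB each, stopping after 60 seconds
stress --cpu 4 --vm 2 --vm-bytes 256M --timeout 60s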
aspirina765 / remote_crc.md
Created April 29, 2023 00:08 — forked from tmckayus/remote_crc.md
Running 'crc' on a remote server

Overview: running crc on a remote server

This document shows how to use CodeReady Containers (crc) to deploy an OpenShift instance on a server that can then be accessed remotely from one or more client machines (sometimes called a "headless" instance). This provides a low-cost test and development platform that can be shared by developers. Deploying this way also lets a user create an instance that uses more CPU and memory than may be available on their laptop.

While there are benefits to this type of deployment, please note that the primary use case for crc is to deploy a local OpenShift instance on a workstation or laptop and access it directly from the same machine. The headless setup is configured completely outside of crc itself, and supporting it is beyond the mission of the crc development team. Please do not ask for changes to crc to support this type of deployment; it will only cost the team time as they politely decline :)

The instructions here were tested with F
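
As one illustration of the client side (an assumption for context here, not necessarily the method this gist uses), the OpenShift API that crc exposes on the server could be reached through an SSH tunnel, with api.crc.testing pointed at 127.0.0.1 in the client's /etc/hosts:

# crc-server is a hypothetical hostname; 6443 is the OpenShift API port crc serves
ssh -L 6443:api.crc.testing:6443 user@crc-server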

aspirina765 / benchmark-commands.txt
Created January 1, 2023 19:17 — forked from jkreps/benchmark-commands.txt
Kafka Benchmark Commands
Producer

Setup

bin/kafka-topics.sh --zookeeper esv4-hcl197.grid.linkedin.com:2181 --create --topic test-rep-one --partitions 6 --replication-factor 1
bin/kafka-topics.sh --zookeeper esv4-hcl197.grid.linkedin.com:2181 --create --topic test --partitions 6 --replication-factor 3

Single thread, no replication

bin/kafka-run-class.sh org.apache.kafka.clients.tools.ProducerPerformance test7 50000000 100 -1 acks=1 bootstrap.servers=esv4-hcl198.grid.linkedin.com:9092 buffer.memory=67108864 batch.size=8196
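
Newer Kafka distributions wrap this same benchmark in a dedicated script; an equivalent run (same numbers as above, flag names per stock kafka-producer-perf-test.sh) should look roughly like:

bin/kafka-producer-perf-test.sh --topic test7 --num-records 50000000 --record-size 100 --throughput -1 \
    --producer-props bootstrap.servers=esv4-hcl198.grid.linkedin.com:9092 acks=1 buffer.memory=67108864 batch.size=8196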
aspirina765 / 1_Partitioner.scala
Created December 11, 2022 01:45 — forked from sriggin/1_Partitioner.scala
Spark-compatible Kafka DefaultPartitioner
// This is directly, lazily translated to Scala from the original Java source from the Kafka Clients lib
def murmur2(data: Array[Byte]): Int = {
  val length = data.length
  val seed = 0x9747b28c
  // 'm' and 'r' are mixing constants generated offline.
  // They're not really 'magic', they just happen to work well.
  val m = 0x5bd1e995
  val r = 24
  // Initialize the hash to a 'random' value
  var h = seed ^ length
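
Kafka's DefaultPartitioner then maps this hash to a partition as toPositive(murmur2(keyBytes)) % numPartitions, where toPositive simply masks off the sign bit; a Spark-compatible reimplementation must reproduce both the hash and that mapping so records land on the same partitions the Kafka producer would choose.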
aspirina765 / kafka.md
Created September 14, 2022 21:00 — forked from DevoKun/kafka.md
How to operate Kafka, mostly using Docker

Kafka Distributed Streaming Platform

Publish and Subscribe / Process / Store

Start Kafka

  • Kafka uses ZooKeeper as a distributed backend.

Start ZooKeeper
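
As a minimal sketch of the Docker route the title refers to (the image name and port are assumptions based on the official zookeeper image, not necessarily this gist's exact commands):

# run ZooKeeper detached, exposing its default client port 2181
docker run -d --name zookeeper -p 2181:2181 zookeeper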