@VaughnVernon
VaughnVernon / CONTENTS.md
Last active June 21, 2024 18:06
Hexagonal / Ports and Adapters Is Just This Simple
@samgj18
samgj18 / EventStoreDB.yml
Last active March 1, 2024 23:18
EventStoreDB docker compose for M1 Mac
version: "3.8"
services:
eventstore.db:
restart: always
image: "ghcr.io/eventstore/eventstore:23.6.0-alpha-arm64v8"
environment:
- EVENTSTORE_CLUSTER_SIZE=1
- EVENTSTORE_RUN_PROJECTIONS=All
- EVENTSTORE_START_STANDARD_PROJECTIONS=true
- EVENTSTORE_EXT_TCP_PORT=1113
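Once the remaining keys of the compose file (port mappings, volumes) are filled in, the stack can be brought up and inspected with the usual Docker Compose commands; the service name below simply matches the snippet above:

$ docker compose up -d
$ docker compose logs -f eventstore.db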
@phortuin
phortuin / postgres.md
Last active July 16, 2024 04:19
Set up postgres + database on macOS (M1)

Based on this blog post.

Install with Homebrew:

$ brew install postgresql@14

(The version number must be stated explicitly, and the @ sign marks where the version is specified. If you need an older version of Postgres, use postgresql@13, for example.)
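After installation, the service can be started and a database created. This is a sketch of the typical next steps, not part of the original gist: the database name is a placeholder, and the PATH line assumes the default Apple Silicon Homebrew prefix, since versioned formulae like postgresql@14 are keg-only and may not be on your PATH:

$ brew services start postgresql@14
$ export PATH="/opt/homebrew/opt/postgresql@14/bin:$PATH"
$ createdb mydatabase
$ psql mydatabase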

@rponte
rponte / using-uuid-as-pk.md
Last active July 24, 2024 18:42
Don't use UUID as the PK in your database tables

Planning to use a UUID as the PK instead of an Int/BigInt in your database? Think again...

TL;DR

Don't use UUID as the PK in your database tables.

A bit more detail

Scaling CockroachDB to 200k writes per second

I really like my job; I get to work with interesting use cases. The following investigation is the result of one of those conversations. A customer was evaluating a write-heavy use case with a KV workload, where each row is very small, consisting of just a key and a value. CockroachDB has an equivalent workload conveniently named kv. The gist of the challenge was scaling writes to 200,000 rows/sec. I've seen good performance from CockroachDB on various workloads, but had never evaluated a write-heavy, write-only workload. I decided to investigate the feasibility of this and set out to scale CRDB to reach that target. After several attempts, I settled on the following architecture:

  • AWS us-east-2 region
  • 30 c5d.4xl CockroachDB nodes
  • 1 c5d.4xl client machine
  • CockroachDB 20.1.8

I conducted this test using the RocksDB storage engine as well as our new engine called [Pebble](https:/
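For reference, the kv workload mentioned above is driven from the cockroach binary itself. A run against a cluster could be kicked off roughly like this; the connection string, duration, and concurrency here are illustrative, not the values used in the test:

$ cockroach workload init kv 'postgresql://root@<node-address>:26257?sslmode=disable'
$ cockroach workload run kv --duration=5m --concurrency=512 --read-percent=0 \
    'postgresql://root@<node-address>:26257?sslmode=disable'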

@cescoffier
cescoffier / Retry.java
Last active March 31, 2023 06:45
Various examples of retries with Mutiny
//usr/bin/env jbang "$0" "$@" ; exit $?
//DEPS io.smallrye.reactive:smallrye-mutiny-vertx-web-client:1.1.0
//DEPS io.smallrye.reactive:mutiny:0.7.0
//DEPS org.slf4j:slf4j-nop:1.7.30
package io.vertx.mutiny.retry;
import io.smallrye.mutiny.Uni;
import io.vertx.mutiny.core.Vertx;
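The file above is truncated in this excerpt. As a self-contained sketch of the kind of retry the gist covers, a minimal Mutiny example could look like the following; the class name and the failure logic are made up for illustration:

import io.smallrye.mutiny.Uni;
import java.util.concurrent.atomic.AtomicInteger;

public class RetrySketch {
    public static void main(String[] args) {
        AtomicInteger attempts = new AtomicInteger();

        // A Uni whose supplier fails on the first two subscriptions and then succeeds.
        Uni<String> flaky = Uni.createFrom().item(() -> {
            if (attempts.incrementAndGet() < 3) {
                throw new IllegalStateException("boom #" + attempts.get());
            }
            return "ok after " + attempts.get() + " attempts";
        });

        // On failure, re-subscribe (which re-runs the supplier) at most 3 times.
        String result = flaky
                .onFailure().retry().atMost(3)
                .await().indefinitely();

        System.out.println(result);
    }
}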

I will continue to use a docker-compose environment for the following tutorial as it fits nicely with the iterative model of development and deployment with schema migration tools. We will need a recent CockroachDB image. My current folder tree looks like so:

crdb-flyway
└── docker-compose.yml

0 directories, 1 file

My docker-compose file looks like so:
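A minimal single-node compose file for this layout could look like the sketch below; the service name, image tag, and volume name are my assumptions, not the original gist's file:

version: "3.8"
services:
  crdb:
    image: cockroachdb/cockroach:v20.1.8   # pin a concrete tag rather than latest; this version is illustrative
    command: start-single-node --insecure
    ports:
      - "26257:26257"   # SQL
      - "8080:8080"     # DB Console
    volumes:
      - crdb-data:/cockroach/cockroach-data
volumes:
  crdb-data: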

CockroachDB and Docker Compose

This is the first in a series of tutorials on CockroachDB and Docker Compose.

  • Information on CockroachDB can be found here.
  • Information on Docker Compose can be found here.
  1. Install Docker Desktop

Because there is already an official CockroachDB Docker image, we will use it in our docker-compose.yml file. We recommend using one of the current tags instead of latest.

I've been working with Apache Kafka for over 7 years. I inevitably find myself doing the same set of activities while I'm developing or working with someone else's system. Here's a set of Kafka productivity hacks for doing a few things way faster than you're probably doing them now. 🔥

Get the tools

@colomboe
colomboe / fx-test.kt
Created July 4, 2019 19:25
Porting of "Simple example of testing with ZIO environment" to Kotlin
// Porting of https://gist.github.com/jdegoes/dd66656382247dc5b7228fb0f2cb97c8
typealias UserID = String
data class UserProfile(val name: String)

// The database module:
interface DatabaseService {
    suspend fun dbLookup(id: UserID): UserProfile
    suspend fun dbUpdate(id: UserID, profile: UserProfile)
}
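The gist continues beyond this excerpt. As a quick illustration of why the database is modelled as an interface, a test can swap in an in-memory fake; the class and the runBlocking usage below are my sketch, assuming kotlinx-coroutines is on the classpath:

// In-memory fake of the DatabaseService port for tests (illustrative, not from the original gist).
class InMemoryDatabaseService : DatabaseService {
    private val profiles = mutableMapOf<UserID, UserProfile>()

    override suspend fun dbLookup(id: UserID): UserProfile =
        profiles[id] ?: error("no profile for $id")

    override suspend fun dbUpdate(id: UserID, profile: UserProfile) {
        profiles[id] = profile
    }
}

fun main() = kotlinx.coroutines.runBlocking {
    val db: DatabaseService = InMemoryDatabaseService()
    db.dbUpdate("user-1", UserProfile("Alice"))
    println(db.dbLookup("user-1")) // UserProfile(name=Alice)
}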