Johannes Nicolai jonico

🪐
@ home
View GitHub Profile
@jonico
jonico / github-collaborators.sh
Last active April 24, 2023 19:33 — forked from muhammaddadu/github-add-colaborator
List, add and remove multiple collaborators from multiple repositories
#!/bin/bash
function help {
  echo "Add collaborators to one or more repositories on github"
  echo ""
  echo "Syntax: $0 -u user [-l] [-D] -r repo1,repo2 <collaborator id>"
  echo ""
  echo " -u  OAuth token to access github"
  echo " -l  list collaborators"
  echo " -r  repositories, list as owner/repo[,owner/repo,...]"
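A hedged sketch of how the flags listed above could be parsed with POSIX `getopts`; the variable names and the wrapper function are my assumptions, not from the gist:

```shell
# Hypothetical flag parsing for the script above (names are assumptions).
parse_args() {
  token="" list=false delete=false repos="" collaborator=""
  local OPTIND opt
  while getopts "u:lDr:" opt; do
    case "$opt" in
      u) token="$OPTARG" ;;   # OAuth token used for API calls
      l) list=true ;;         # only list current collaborators
      D) delete=true ;;       # remove instead of add collaborators
      r) repos="$OPTARG" ;;   # comma-separated owner/repo list
    esac
  done
  shift $((OPTIND - 1))
  collaborator="$1"           # remaining argument: the collaborator id
}

parse_args -u mytoken -l -r octo/repo1,octo/repo2 octocat
echo "$token $list $repos $collaborator"
```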
jonico / index.js
Last active September 20, 2022 11:44
How to debug PlanetScale database-js API with a proxy hooked into node-js 18's fetch API (based on undici)
import { connect } from '@planetscale/database'
import dotenv from 'dotenv'
import express from 'express'
import { ProxyAgent } from 'undici';
const agent = new ProxyAgent('http://localhost:5555');
// route Node 18's built-in fetch (backed by undici) through the proxy
global[Symbol.for('undici.globalDispatcher.1')] = agent;
// accept the proxy's self-signed certificate (debugging only, never in production)
process.env['NODE_TLS_REJECT_UNAUTHORIZED'] = '0';
jonico / README.md
Last active July 5, 2022 11:01
Docker compose files for temporal.io with external MySQL databases for temporal and temporal_visibility tables (using PlanetScale as example)

As the docker-compose files 👇 use PlanetScale's MySQL-compatible Vitess database as an example, the two databases (temporal and temporal_internal) use different keyspaces and connection strings. Unfortunately, temporalio/auto-setup does not seem to support multiple connection strings for database creation and schema updates (using temporal-sql-tool), so the following commands need to be run manually before starting docker-compose:

./temporal-sql-tool --ep $TEMPORAL_PSCALE_HOSTSTRING --user $TEMPORAL_PSCALE_USER --tls --password $TEMPORAL_PASSWORD -p 3306 --plugin mysql --db temporal setup-schema -v 0.0
./temporal-sql-tool --ep $TEMPORAL_PSCALE_HOSTSTRING --user $TEMPORAL_PSCALE_USER --tls --password $TEMPORAL_PASSWORD -p 3306 --plugin mysql --db temporal update-schema -d ./schema/mysql/v57/temporal/versioned
./temporal
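Since each database needs the same pair of `setup-schema`/`update-schema` invocations, the repetition can be scripted. This sketch only assembles and echoes the commands (so the hosts and credentials stay placeholders); the `temporal_visibility` database name and its `./schema/mysql/v57/visibility/versioned` schema directory are my assumptions based on the gist title, not shown in the excerpt:

```shell
# Sketch: build both temporal-sql-tool invocations for a given database.
# Echoes the commands instead of executing them.
build_schema_cmds() {
  local db="$1" schema_dir="$2"
  echo "./temporal-sql-tool --ep \$TEMPORAL_PSCALE_HOSTSTRING --user \$TEMPORAL_PSCALE_USER --tls --password \$TEMPORAL_PASSWORD -p 3306 --plugin mysql --db $db setup-schema -v 0.0"
  echo "./temporal-sql-tool --ep \$TEMPORAL_PSCALE_HOSTSTRING --user \$TEMPORAL_PSCALE_USER --tls --password \$TEMPORAL_PASSWORD -p 3306 --plugin mysql --db $db update-schema -d $schema_dir"
}

build_schema_cmds temporal ./schema/mysql/v57/temporal/versioned
build_schema_cmds temporal_visibility ./schema/mysql/v57/visibility/versioned
```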
jonico / counting-affected-rows-in-potsgresql.md
Last active June 7, 2022 17:30
Counting processed rows (read/written) in PostgreSQL

Scripts to determine PostgreSQL database size and rows processed

Differences between PostgreSQL and MySQL storage format and why this matters for billing estimations

tl;dr: Rows-processed numbers and database size will differ between PostgreSQL and MySQL because the two store rows and indexes differently.

PostgreSQL and MySQL are both relational databases with strong transactional capabilities. The way their storage engines store rows and corresponding indexes, and how those indexes are used during queries, differs significantly though. Check out this article from Uber Engineering for the technical details behind those differences.

Due to those index and row storage format differences, any numbers about rows read/written and database size from PostgreSQL will differ from the numbers you can expect once migrated to MySQL. If you are using similar indexes for your queries, the numbers should be pretty similar, but they may vary depending on your exact queries and read/write patterns.
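A minimal sketch of the kind of queries such scripts would run, assuming `psql` access: `pg_database_size()`, `pg_size_pretty()`, and the `tup_*` counters in `pg_stat_database` are standard PostgreSQL features, but the function names and the `mydb` database are placeholders of mine:

```shell
# Build the SQL for database size and rows processed; run each with:
#   psql -c "$(size_query mydb)"
size_query() {
  printf "SELECT pg_size_pretty(pg_database_size('%s'));" "$1"
}
rows_query() {
  # tup_returned/tup_fetched approximate rows read; the others count writes
  printf "SELECT tup_returned, tup_fetched, tup_inserted, tup_updated, tup_deleted FROM pg_stat_database WHERE datname = '%s';" "$1"
}
size_query mydb
```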

#!/bin/bash
zero_commit="0000000000000000000000000000000000000000"
# we have to change the home directory of GPG
# as in the default environment, /root/.gnupg is not writeable
export GNUPGHOME=/tmp/
# Do not traverse over commits that are already in the repository
# (e.g. in a different branch)
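The zero-commit check above typically decides which revision range a pre-receive hook traverses; a sketch of that logic (the function name and stdin handling are my assumptions, not shown in the excerpt):

```shell
# Pre-receive hooks read "<oldrev> <newrev> <refname>" tuples on stdin.
# When oldrev is the zero commit the ref is newly created, so there is
# no old..new range to walk; otherwise only the pushed commits are checked.
zero_commit="0000000000000000000000000000000000000000"
range_for_push() {
  local oldrev="$1" newrev="$2"
  if [ "$oldrev" = "$zero_commit" ]; then
    echo "$newrev"            # new ref: traverse from the new tip
  else
    echo "$oldrev..$newrev"   # existing ref: only the pushed commits
  fi
}
range_for_push "$zero_commit" abc123
```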
jonico / query_github_audit_log.graphql
Created June 6, 2019 22:11
How to query GitHub's audit log with GraphQL
query {
  organization(login: "se-saml") {
    auditLog(first: 50) {
      edges {
        node {
          ... on RepositoryAuditEntryData {
            repository {
              name
            }
          }
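To actually send the query above, it has to be wrapped in the JSON body GitHub's GraphQL endpoint expects. A hedged sketch (the `read:audit_log` token scope and the `curl` invocation are my assumptions; the organization login `se-saml` comes from the gist):

```shell
# JSON body for POST https://api.github.com/graphql
payload='{"query":"query { organization(login: \"se-saml\") { auditLog(first: 50) { edges { node { ... on RepositoryAuditEntryData { repository { name } } } } } } }"}'
echo "$payload"
# send it with a personal access token (read:audit_log scope assumed):
# curl -s -H "Authorization: bearer $GITHUB_TOKEN" -d "$payload" https://api.github.com/graphql
```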
jonico / Jenkinsfile
Last active January 7, 2022 04:37
Jenkins in Kubernetes cluster
#!groovy
import groovy.json.JsonOutput
import groovy.json.JsonSlurper
/*
Environment variables: these should be set in Jenkins at `https://github-demo.ci.cloudbees.com/job/<org name>/configure`:
For deployment purposes:
- HEROKU_PREVIEW=<your heroku preview app>
- HEROKU_PREPRODUCTION=<your heroku pre-production app>
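When testing the pipeline outside Jenkins, the same variables can simply be exported in the shell; the values here are placeholders, not real app names:

```shell
# Placeholder values: in Jenkins these live on the job's configure page.
export HEROKU_PREVIEW="my-app-preview"
export HEROKU_PREPRODUCTION="my-app-preprod"
echo "$HEROKU_PREVIEW"
```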
jonico / wait-for-ps-branch-readiness.sh
Created October 4, 2021 21:36
shell script that waits for a PlanetScale branch to be ready for use and increases retry times exponentially
#!/bin/bash
# shell script that waits for a PlanetScale branch to be ready for use and increases retry times exponentially
# usage: wait-for-ps-branch-readiness.sh <db name> <branch name> <max retries>
# (bash shebang: the `function` keyword below is not POSIX sh)
function wait_for_branch_readiness {
  local retries=$1
  local db=$2
  local branch=$3
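The exponential retry schedule the script describes can be sketched as follows; the 1-second base delay and doubling factor are my assumptions, not taken from the gist:

```shell
# Exponential backoff: the wait time doubles on every retry.
backoff_seconds() {
  local n=$1 delay=1
  while [ "$n" -gt 1 ]; do
    delay=$((delay * 2))
    n=$((n - 1))
  done
  echo "$delay"   # retry 1 -> 1s, 2 -> 2s, 3 -> 4s, ...
}
backoff_seconds 5
```

In the gist's loop this would be used as `sleep "$(backoff_seconds "$attempt")"` between readiness checks.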
jonico / Jenkinsfile.groovy
Created February 19, 2018 16:37
Abbreviated Jenkinsfile to build on multiple archs (from conan.io project)
for (x in slaves) {
    def slave = x
    for (y in pyvers) {
        def pyver = y
        builders["${slave} - ${pyver}"] = {
            node(slave) {
                stage("${slave} - ${pyver}") {
                    step([$class: 'WsCleanup'])
                    checkout scm
                    def bn = env.BUILD_NUMBER
jonico / migrate_from_azure_blob_to_s3.sh
Last active August 20, 2021 03:08
Co-pilot examples (only comments have been written by me)
# copy Azure blob storage files to S3 buckets
#
# Usage:
# ./copy-azure-blob-storage-to-s3.sh <blob storage container> <s3 bucket>
#
# Example:
# ./copy-azure-blob-storage-to-s3.sh my-container s3://my-bucket
#
# Note:
# This script requires a working Azure CLI.
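A hypothetical helper for the copy loop such a script would contain: the real transfer would pair `az storage blob download` with `aws s3 cp` per blob, but only the path mapping is shown here so it runs without either CLI installed. The function name is mine, not from the gist:

```shell
# Map a blob name to its target S3 URI, e.g. for use as the second
# argument of `aws s3 cp` after `az storage blob download`.
blob_to_s3_uri() {
  printf '%s/%s\n' "${1%/}" "$2"   # strip any trailing slash from the bucket
}
blob_to_s3_uri s3://my-bucket logs/app.log
```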