Johannes Nicolai jonico

🪐
@ home
View GitHub Profile
@jonico
jonico / modify-examples.js
Created December 12, 2023 14:39
Duplicate Postman collection responses
var fs = require('fs'), // needed to read the JSON file from disk
    Collection = require('postman-collection').Collection,
    Response = require('postman-collection').Response,
    myCollection;

// Load a collection into memory from a JSON file on disk (say, sample-collection.json)
myCollection = new Collection(JSON.parse(fs.readFileSync('sample-collection.json').toString()));

// Iterate through all requests in the collection and duplicate each saved example response
// (completion sketch: the original gist's exact duplication logic may differ)
myCollection.forEachItem(function (item) {
    var copies = item.responses.map(function (response) {
        return new Response(response.toJSON());
    });
    copies.forEach(function (copy) { item.responses.add(copy); });
});
@jonico
jonico / postman-issue-ops.yml
Created June 20, 2023 13:44
GitHub Actions-based IssueOps workflow to create Postman releases and a tag directly from a pull request by issuing a comment starting with /pm-release [<release-name>] ["release notes"]
name: Postman IssueOps commands
on:
  issue_comment:
    types: [created]
jobs:
  prechecks:
    name: Permission pre-check
    if: github.event.issue.pull_request != null && (startsWith(github.event.comment.body, '/pm-release') || startsWith(github.event.comment.body, '/pm-publish'))
@jonico
jonico / Jenkinsfile
Last active January 17, 2024 14:12
Jenkinsfile showing advanced Postman CLI, Portman (contract-test generation), and Newman integration, including custom reporters and reporting status back to Postman
def checkout () {
    context = "continuous-integration/jenkins/"
    context += isPRMergeBuild() ? "pr-merge/checkout" : "branch/checkout"
    def scmVars = checkout scm
    setBuildStatus("${context}", 'Checking out completed', 'SUCCESS')
    if (isPRMergeBuild()) {
        prMergeRef = "refs/pull/${getPRNumber()}/merge"
        mergeCommit = sh(returnStdout: true, script: "git show-ref ${prMergeRef} | cut -f 1 -d' '")
        echo "Merge commit: ${mergeCommit}"
        return [prMergeRef, mergeCommit]
    }
}
@jonico
jonico / index.js
Last active September 20, 2022 11:44
How to debug the PlanetScale database-js API with a proxy hooked into Node.js 18's fetch API (based on undici)
import { connect } from '@planetscale/database'
import dotenv from 'dotenv'
import express from 'express'
import { ProxyAgent } from 'undici';

// Route Node.js 18's built-in fetch (backed by undici) through a local debugging proxy
const agent = new ProxyAgent('http://localhost:5555');
global[Symbol.for('undici.globalDispatcher.1')] = agent;

// Accept the proxy's self-signed certificate (debugging only; never do this in production)
process.env['NODE_TLS_REJECT_UNAUTHORIZED'] = '0';
@jonico
jonico / README.md
Last active July 5, 2022 11:01
Docker compose files for temporal.io with external MySQL databases for temporal and temporal_visibility tables (using PlanetScale as example)


As the docker-compose files 👇 use PlanetScale's MySQL-compatible Vitess database as an example, each database (temporal and temporal_internal) uses a different keyspace and connection string. Unfortunately, temporalio/auto-setup does not seem to support multiple connection strings for database creation and schema updates (using temporal-sql-tool), so the following commands would need to be run manually before starting up docker-compose:

./temporal-sql-tool --ep $TEMPORAL_PSCALE_HOSTSTRING --user $TEMPORAL_PSCALE_USER --tls --password $TEMPORAL_PASSWORD -p 3306 --plugin mysql --db temporal setup-schema -v 0.0
./temporal-sql-tool --ep $TEMPORAL_PSCALE_HOSTSTRING --user $TEMPORAL_PSCALE_USER --tls --password $TEMPORAL_PASSWORD -p 3306 --plugin mysql --db temporal update-schema -d ./schema/mysql/v57/temporal/versioned
./temporal
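For the visibility database, the remaining commands would plausibly mirror the ones above (a sketch, not taken verbatim from the gist: the `temporal_visibility` database name and the `schema/mysql/v57/visibility/versioned` path follow Temporal's standard schema layout):

```
./temporal-sql-tool --ep $TEMPORAL_PSCALE_HOSTSTRING --user $TEMPORAL_PSCALE_USER --tls --password $TEMPORAL_PASSWORD -p 3306 --plugin mysql --db temporal_visibility setup-schema -v 0.0
./temporal-sql-tool --ep $TEMPORAL_PSCALE_HOSTSTRING --user $TEMPORAL_PSCALE_USER --tls --password $TEMPORAL_PASSWORD -p 3306 --plugin mysql --db temporal_visibility update-schema -d ./schema/mysql/v57/visibility/versioned
```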
@jonico
jonico / MySQLBYOD.py
Last active July 12, 2023 21:20
Copying from one PlanetScale table to another using AWS Glue (and MySQL 8.0 JDBC driver)
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext, SparkConf
from awsglue.context import GlueContext
from awsglue.job import Job
import time
from pyspark.sql.types import StructType, StructField, IntegerType, StringType
sc = SparkContext()
@jonico
jonico / counting-affected-rows-in-potsgresql.md
Last active June 7, 2022 17:30
Counting processed rows (read/written) in PostgreSQL

Scripts to determine PostgreSQL database size and rows processed

Differences between PostgreSQL and MySQL storage format and why this matters for billing estimations

tl;dr: rows-processed counts and database size may differ between PostgreSQL and MySQL due to different index and row storage

PostgreSQL and MySQL are both relational databases with strong transactional capabilities. However, the way their storage engines store rows and corresponding indexes, and how those indexes are used during queries, differs significantly. Check out this article from Uber Engineering for the technical details behind those differences.

Due to those index and row storage format differences, any numbers about rows read/written and database size from PostgreSQL will differ from the numbers you can expect once migrated to MySQL. If you are using similar indexes for your queries, the numbers should be pretty similar, but depending on your exact queries and read/write patterns they can still diverge.
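To get concrete read/write counters on the PostgreSQL side, the cumulative statistics views are a natural starting point (a sketch; the gist itself may use different queries):

```sql
-- Cumulative tuples read and written for the current database,
-- plus its on-disk size, from PostgreSQL's statistics collector
SELECT tup_returned, tup_fetched,
       tup_inserted, tup_updated, tup_deleted,
       pg_size_pretty(pg_database_size(current_database())) AS db_size
FROM pg_stat_database
WHERE datname = current_database();
```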

@jonico
jonico / README.md
Last active September 14, 2023 15:13
How to create an RDS database that is suitable for PlanetScale's import feature

MySQL on RDS configured for import to PlanetScale example

This folder contains an example Terraform configuration that deploys a MySQL database - using RDS in an Amazon Web Services (AWS) account - that is properly configured for PlanetScale's DB import feature.

It will make sure that the RDS database has the binlog exposed, gtid-mode set to ON, and is using ROW-based replication. All of these are prerequisites for PlanetScale's import to work without downtime.

If you are going to write a lot of data in your database in a very short amount of time, don't forget the only manual step after the Terraform setup.
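The three replication prerequisites above map onto an RDS parameter group. A minimal Terraform sketch (the resource name and the MySQL 8.0 parameter-group family are assumptions for illustration, not taken from the gist):

```hcl
resource "aws_db_parameter_group" "planetscale_import" {
  name   = "planetscale-import" # hypothetical name
  family = "mysql8.0"           # assumed engine family

  parameter {
    name  = "binlog_format"
    value = "ROW" # ROW-based replication
  }
  parameter {
    name         = "gtid-mode"
    value        = "ON"
    apply_method = "pending-reboot" # takes effect after an instance reboot
  }
  parameter {
    name         = "enforce_gtid_consistency"
    value        = "ON" # required when gtid-mode is ON
    apply_method = "pending-reboot"
  }
}
```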

@jonico
jonico / .env
Last active June 1, 2023 09:04
How to run PlanetScale alongside a MySQL-enabled app that does not have any other internet access
PLANETSCALE_DB=brandnewdb
PLANETSCALE_BRANCH=mybranch
PLANETSCALE_ORG=jonico
PLANETSCALE_SERVICE_TOKEN=pscale_tkn_loCzIH7NktDK-GWJ71eX97Qr5D3a9iEO_pgHCSHUtw
PLANETSCALE_SERVICE_TOKEN_NAME=69xrlIwgs4ms
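With the variables above, the usual pattern is to open a local tunnel with `pscale connect` so the app only ever talks to 127.0.0.1 (a sketch; the service-token environment variable names follow this gist's `.env`, and exact auth flag spellings may vary by pscale version):

```
# Load the .env above, then expose the branch on a local MySQL port;
# the app connects to 127.0.0.1:3306 and needs no other internet access.
set -a; . ./.env; set +a
pscale connect "$PLANETSCALE_DB" "$PLANETSCALE_BRANCH" \
  --org "$PLANETSCALE_ORG" --port 3306
```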
@jonico
jonico / wait-for-ps-branch-readiness.sh
Created October 4, 2021 21:36
shell script that waits for a PlanetScale branch to be ready for use and increases retry times exponentially
#!/bin/sh
# shell script that waits for a PlanetScale branch to be ready for use and increases retry times exponentially
# usage: wait-for-ps-branch-readiness.sh <db name> <branch name> <max retries>
function wait_for_branch_readiness {
    local retries=$1
    local db=$2
    local branch=$3 wait=1
    # completion sketch: poll until pscale reports the branch ready, doubling the wait each attempt
    for i in $(seq 1 "$retries"); do
        pscale branch show "$db" "$branch" --format json | grep -q '"ready":true' && return 0
        sleep "$wait"; wait=$((wait * 2))
    done
    return 1
}
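Doubling the wait between retries means the total sleep budget grows quickly with the retry count. A tiny standalone arithmetic sketch (independent of pscale) shows the cumulative wait for five attempts starting at one second:

```shell
# Sum the doubling delays 1+2+4+8+16 for five attempts
delay=1
total=0
for attempt in 1 2 3 4 5; do
  total=$((total + delay))
  delay=$((delay * 2))
done
echo "total sleep budget: ${total}s"   # 31 seconds
```

So five retries cost at most 31 seconds of waiting, and each extra retry roughly doubles that budget.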