Aaron ("AJ") Steers (aaronsteers)

@alxthm
alxthm / meltano_make_crontab.py
Last active August 12, 2022 20:34
Python script to transform Meltano schedules from JSON into a crontab file
"""Transform Meltano schedules from JSON into a crontab file."""
import json
import logging
import os
import sys
from pathlib import Path
from typing import List
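The gist's code is cut off above. As a rough sketch of the transformation it describes, assuming the JSON has a top-level "schedules" list whose entries carry "name" and "interval" fields (the field names, the project path, and the generated command are illustrative assumptions, not taken from the gist):

import json
import sys

def schedules_to_crontab(raw: str, project_dir: str = "/project") -> str:
    # Assumed input shape: {"schedules": [{"name": ..., "interval": "0 0 * * *"}, ...]}
    schedules = json.loads(raw).get("schedules", [])
    lines = []
    for s in schedules:
        # One crontab entry per schedule: "<cron expression> <command>".
        lines.append(f"{s['interval']} cd {project_dir} && meltano schedule run {s['name']}")
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    sys.stdout.write(schedules_to_crontab(sys.stdin.read()))

Feeding it the JSON output of `meltano schedule list` on stdin would print lines ready to append to a crontab.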
@td-shi
td-shi / ulid.sh
Last active May 7, 2024 17:55
ULID in shell script: a Universally Unique Lexicographically Sortable Identifier generator. Spec: [ULID](https://github.com/ulid/spec)
#!/bin/sh
# -*- coding:utf-8 posix -*-
# === Initialize shell environment =============================================
#set -u # Stop on use of undefined variables.
#set -e # Stop on the first error.
#set -x # Trace commands while debugging.
umask 0022
export LC_ALL=C
export LANG=C
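The script above is truncated. To make the spec concrete, here is a minimal Python sketch of ULID generation (my illustration, not part of the gist): a 48-bit millisecond timestamp followed by 80 random bits, encoded as 26 characters of Crockford base32.

import os
import time

# Crockford base32 alphabet: digits plus letters, excluding I, L, O, and U.
ALPHABET = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"

def ulid():
    # 48-bit millisecond timestamp in the high bits, 80 random bits below it.
    value = (int(time.time() * 1000) << 80) | int.from_bytes(os.urandom(10), "big")
    # 26 characters x 5 bits = 130 bits; the top two bits of the encoding are zero.
    return "".join(ALPHABET[(value >> shift) & 0x1F] for shift in range(125, -1, -5))

print(ulid())

Because the timestamp occupies the most significant bits, ULIDs generated later sort lexicographically after earlier ones, which is the property the spec is named for.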
@cerebrate
cerebrate / README.md
Last active December 2, 2023 08:17
Recompile your WSL2 kernel - support for snaps, apparmor, lxc, etc.

WARNING

THIS GIST IS EXTREMELY OBSOLETE. DO NOT FOLLOW THESE INSTRUCTIONS. SERIOUSLY.

IF YOU IGNORE THE ABOVE WARNING, YOU AGREE IN ADVANCE THAT YOU DIDN'T GET THESE INSTRUCTIONS FROM ME, THAT I WARNED YOU, AND THAT I RESERVE THE RIGHT TO POINT AND LAUGH MOCKINGLY IF AND WHEN SOMETHING BREAKS HORRIBLY.

I'll do a write-up of current custom-kernel procedures over on Random Bytes ( https://randombytes.substack.com/ ) one day soon.

NOTE

@justinclayton
justinclayton / taint_module.sh
Created January 19, 2016 18:48
Terraform: taint all resources from one module
#!/bin/bash
# Usage: ./taint_module.sh <module-name>
module=$1
# List the module's resources, strip the "module.<name>." prefix, and taint each.
# Double quotes are required so ${module} expands inside the grep and sed patterns.
for resource in $(terraform show -module-depth=1 | grep "module.${module}" | tr -d ':' | sed -e "s/module.${module}.//"); do
  terraform taint -module "${module}" "${resource}"
done
@lmatthieu
lmatthieu / spark_read_csv.py
Last active October 5, 2016 15:09
Load a CSV file, infer column types, and save the result as a Spark SQL Parquet file
from pyspark import SparkContext, SparkConf
from pyspark.sql import HiveContext, SQLContext
import pandas as pd
# sc: Spark context
# file_name: input CSV file name
# table_name: output table name
# sep: CSV field separator
# infer_limit: number of rows pandas reads when inferring column types
def read_csv(sc, file_name, table_name, sep=",", infer_limit=10000):
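The function body is cut off above. A hedged sketch of how such a function might continue, assuming pandas infers the column types from a sample and Spark writes the Parquet output (this completion is an assumption, not the gist's actual body):

def read_csv(sc, file_name, table_name, sep=",", infer_limit=10000):
    sql_context = SQLContext(sc)
    # Let pandas infer column types from the first `infer_limit` rows only.
    sample = pd.read_csv(file_name, sep=sep, nrows=infer_limit)
    # Re-read the full file with the inferred dtypes pinned, so both passes agree.
    full = pd.read_csv(file_name, sep=sep, dtype=sample.dtypes.to_dict())
    # Convert to a Spark DataFrame and persist it as a Parquet file.
    df = sql_context.createDataFrame(full)
    df.write.parquet(table_name)

Sampling keeps the type-inference pass cheap; the trade-off is that a column whose first `infer_limit` rows look numeric can still fail on later rows, so the limit should be sized to the data.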
@rduplain
rduplain / README.md
Created October 17, 2011 20:04
Connect to MSSQL using FreeTDS / ODBC in Python.

Goal: Connect to MSSQL using FreeTDS / ODBC in Python.

Host: Ubuntu 11.10 x86_64

Install:

sudo apt-get install freetds-dev freetds-bin unixodbc-dev tdsodbc
pip install pyodbc sqlalchemy

In /etc/odbcinst.ini:
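The README is cut off here. The usual FreeTDS stanza for /etc/odbcinst.ini looks like the following; the driver paths match the stock Ubuntu tdsodbc package of that era, but verify them on your system (newer multiarch releases use /usr/lib/x86_64-linux-gnu/odbc/):

[FreeTDS]
Description = TDS driver (Sybase/MS SQL)
Driver = /usr/lib/odbc/libtdsodbc.so
Setup = /usr/lib/odbc/libtdsS.so

With the driver registered, a DSN-less pyodbc connection is enough to test the stack. A minimal sketch, where every connection detail is a placeholder:

import pyodbc

# Server, database, and credentials are placeholders; substitute your own.
conn = pyodbc.connect(
    "DRIVER={FreeTDS};SERVER=mssql.example.com;PORT=1433;"
    "DATABASE=exampledb;UID=user;PWD=secret;TDS_Version=7.0;"
)
cursor = conn.cursor()
cursor.execute("SELECT 1")
print(cursor.fetchone())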