@jonlachmann
jonlachmann / common.py
Created February 3, 2022 08:40
Many to many LSTM in both keras and pytorch
from numpy import array
from numpy import linspace
from numpy import random
from numpy import zeros
from numpy import vstack
import torch
# Split a multivariate sequence into samples (body reconstructed from the
# standard many-to-many split pattern; the exact target slicing is an assumption)
def split_sequences(sequences, n_steps):
    X, y = [], []
    for i in range(len(sequences)):
        end_ix = i + n_steps
        if end_ix > len(sequences):
            break
        X.append(sequences[i:end_ix, :-1])  # inputs: all columns but the last
        y.append(sequences[i:end_ix, -1])   # target: last column over the same window
    return array(X), array(y)
@btskinner
btskinner / .Renviron
Last active May 3, 2023 03:35
Makevars and Renviron to use with Homebrew + OpenBLAS + OpenMP
# ---------
# .Renviron
# ---------
PKG_CONFIG_PATH=/opt/X11/lib/pkgconfig
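
Only the .Renviron half of the pair is shown in the preview; a minimal ~/.R/Makevars sketch for Homebrew's OpenBLAS + libomp might look like the following (the /usr/local prefixes are assumptions and differ on Apple Silicon):

# --------------------
# ~/.R/Makevars (sketch)
# --------------------
# Apple clang has no bundled OpenMP runtime, so point at Homebrew's libomp
CPPFLAGS += -I/usr/local/opt/openblas/include -I/usr/local/opt/libomp/include -Xclang -fopenmp
LDFLAGS += -L/usr/local/opt/openblas/lib -L/usr/local/opt/libomp/lib -lomp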
@merkushin
merkushin / mysql_uuid.sql
Last active October 15, 2023 14:06
MySQL UUID v5 Stored Functions
DROP FUNCTION IF EXISTS uuid_from_bin;
DROP FUNCTION IF EXISTS uuid_to_bin;
DROP FUNCTION IF EXISTS uuid_v5;
DROP FUNCTION IF EXISTS uuid_v4;
DELIMITER //
CREATE FUNCTION uuid_from_bin(b BINARY(16))
RETURNS CHAR(36)
DETERMINISTIC
BEGIN
  -- body reconstructed as a sketch (the preview cuts off at BEGIN):
  -- re-insert the dashes into the hex form of the 16 bytes
  DECLARE hex CHAR(32);
  SET hex = LOWER(HEX(b));
  RETURN CONCAT(SUBSTR(hex, 1, 8), '-', SUBSTR(hex, 9, 4), '-',
                SUBSTR(hex, 13, 4), '-', SUBSTR(hex, 17, 4), '-',
                SUBSTR(hex, 21, 12));
END//
@brendanzab
brendanzab / reactive_systems_bibliography.md
Last active October 10, 2022 06:36
A reading list that I'm collecting while building my Rust ES+CQRS framework: https://github.com/brendanzab/chronicle

Functional, Reactive, and Distributed Systems Bibliography

Books

#!/usr/bin/env ruby
# This script only supports the ASCII format
# and tick downloads.
require 'uri'
require 'net/http'
require 'mechanize'
agent = Mechanize.new

Alternative Method for Component Library Theming

I really like the idea of styled components as the lowest-level visual primitive, so theming via passing around color strings and pixel values (or worse, 😨 css snippets 😨) makes me sad 😢.

Here's an alternative. Instead of passing in theme variables, which requires the library author to explicitly allow certain css properties to be overridden, we pass styled components as the theme 😃. Now, styled components are the lowest-level visual primitive that a user works with. Plus, it allows for much, much more powerful extension: a user can decorate, wrap, or replace the primitives themselves.
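
A minimal sketch of the idea (the component and prop names here are hypothetical, not from the gist):

import * as React from "react";
import styled from "styled-components";

// The library's default visual primitive is itself a styled component
const DefaultSurface = styled.div`
  background: #fff;
  border-radius: 4px;
  padding: 8px;
`;

type CardProps = {
  Surface?: React.ComponentType<{ children?: React.ReactNode }>;
  children?: React.ReactNode;
};

// Library component: the "theme" is a styled component passed in as a prop
const Card = ({ Surface = DefaultSurface, children }: CardProps) => (
  <Surface>{children}</Surface>
);

// User-side theming: decorate/extend the primitive itself rather than
// overriding a whitelisted set of css properties
const DarkSurface = styled(DefaultSurface)`
  background: #222;
  color: #eee;
`;

const Example = () => <Card Surface={DarkSurface}>Hello</Card>;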

@gavinsimpson
gavinsimpson / modelled-nuuk-rainfall.png
Last active April 28, 2021 15:43
R code to download, extract, and fit a Tweedie GAM to monthly rainfall total time series from Nuuk, Greenland, using mgcv
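The gist preview shows only the output figure (modelled-nuuk-rainfall.png) rather than the code; the core model fit presumably resembles this sketch (the data frame and column names are assumptions):

library(mgcv)
## monthly totals are nonnegative and right-skewed, hence a Tweedie response;
## a cyclic smooth captures the within-year seasonal cycle
m <- gam(rain ~ s(month, bs = "cc", k = 12) + s(year),
         data = nuuk, family = tw(), method = "REML")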
@pesterhazy
pesterhazy / datomic-entity-history.clj
Last active December 7, 2020 11:13
Inspect a datomic entity's history
;; Show history of an entity
;;
;; useful for interactively inspecting what happened to a datomic entity in its lifetime
;;
;; use `entity-history` to get a list of transactions that have touched the entity (assertions, retractions)
;;
;; use `explain-tx` to find out what else was transacted in the txs
(require '[datomic.api :as d])

(defn entity-history
  "Takes an entity and shows all the transactions that touched this entity."
  [entity]
  ;; body reconstructed as a sketch; the original query is cut off in the preview
  (->> (d/q '[:find ?tx :in $ ?e :where [?e _ _ ?tx]]
            (d/history (d/entity-db entity))
            (:db/id entity))
       (map first)
       sort))
@lornajane
lornajane / mac.md
Last active April 21, 2024 15:04
Keyboard Only OS X

Keyboard-only Mac Cheatsheet

Hi, I'm Lorna and I don't use a mouse. I have had RSI issues since a bad workstation setup at work in 2006. I've tried a number of extra hardware modifications, but what works best for me is to use the keyboard and only the keyboard, so I'm in a good position and never reaching for anything else (except my coffee cup!). I rather unwisely took a job which required me to use a Mac (I'd been a Linux user until then and also had the ability to choose my tools carefully), so here is my cheatsheet of the apps, tricks and keyboard shortcuts I'm using, mostly for my own reference. Since keyboard-only use is also great for productivity, you may also find some of these ideas useful, in which case at least something good has come of this :)

Apps List

There's more detail on a few of these apps below, but here is a quick overview of the tools I've installed and found helpful:

| Tool | Link | Comments |
| ---- | ---- | -------- |
@karpathy
karpathy / pg-pong.py
Created May 30, 2016 22:50
Training a Neural Network ATARI Pong agent with Policy Gradients from raw pixels
""" Trains an agent with (stochastic) Policy Gradients on Pong. Uses OpenAI Gym. """
import numpy as np
import pickle  # the original uses Python 2's cPickle; plain pickle on Python 3
import gym
# hyperparameters
H = 200 # number of hidden layer neurons
batch_size = 10 # every how many episodes to do a param update?
learning_rate = 1e-4
gamma = 0.99 # discount factor for reward