#!/bin/sh
set -x
# == Swarm training (alpha release) ==
# Setup:
#
# git clone https://github.com/shawwn/gpt-2
# cd gpt-2
# git checkout dev-shard
@velicast
velicast / stdc++.h
Created July 17, 2013 17:56
Linux GCC 4.8.0 /bits/stdc++.h header definition.
// C++ includes used for precompiling -*- C++ -*-
// Copyright (C) 2003-2013 Free Software Foundation, Inc.
//
// This file is part of the GNU ISO C++ Library. This library is free
// software; you can redistribute it and/or modify it under the
// terms of the GNU General Public License as published by the
// Free Software Foundation; either version 3, or (at your option)
// any later version.
@bennuttall
bennuttall / remotegpio.md
Last active July 8, 2021 18:22
Set up a Pi and host PC for remote GPIO access using gpiozero

Remote GPIO

GPIO Zero allows you to create objects representing GPIO devices. As well as running it on a Raspberry Pi, you can also install GPIO Zero on a PC and create objects referencing GPIO pins on a Pi over the network.

To do this, you'll need to do a few things to get set up:

  1. Enable Remote GPIO on the Pi in the Raspberry Pi Configuration Tool.

  2. Run the pigpio daemon on the Pi: sudo pigpiod (a minimal PC-side example follows this list).
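
For the PC side, here is a minimal sketch of what the GPIO Zero code might look like, assuming the pigpio pin factory is used and the Pi is reachable at 192.168.1.42 (a placeholder address, not part of the gist):

from gpiozero import LED
from gpiozero.pins.pigpio import PiGPIOFactory

# Point GPIO Zero at the pigpio daemon running on the remote Pi
# (192.168.1.42 is a placeholder; use your Pi's actual IP address or hostname)
factory = PiGPIOFactory(host='192.168.1.42')

# This LED object controls GPIO 17 on the remote Pi, not a local pin
led = LED(17, pin_factory=factory)
led.on()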

@spalladino
spalladino / Loans.sol
Created March 20, 2021 16:57
Strawman for flashloans for flashbots
pragma solidity ^0.7.0;

// Each mining pool that intends to provide flash loans deploys a Loaner contract and transfers ETH to it.
// When testing each bundle, the diff in this contract's balance is taken into account when calculating the effective gas price.
// The contract loans funds only in blocks mined by the miner and only in zero-gas-price txs.
contract Loaner {
    address immutable owner;

    constructor(address _owner) {
        owner = _owner;
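
As a rough illustration of the scoring rule sketched in the comments above (not code from the gist; the function name and the exact formula are assumptions), the bundle's effective gas price would credit the change in the Loaner contract's balance alongside the direct payment to the miner:

def effective_gas_price(coinbase_payment_wei, loaner_balance_diff_wei, gas_used):
    # Hypothetical scoring: a bundle that repays the Loaner with a premium
    # shows up as a positive balance diff, which counts toward its bid.
    return (coinbase_payment_wei + loaner_balance_diff_wei) // gas_used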
@shagunsodhani
shagunsodhani / Batch Normalization.md
Last active July 25, 2023 18:07
Notes for "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift" paper

The Batch Normalization paper describes a method for addressing several issues that arise when training deep neural networks. It makes normalization a part of the architecture itself and reports significant reductions in the number of iterations required to train the network.

Issues With Training Deep Neural Networks

Internal Covariate Shift

Covariate shift refers to a change in the input distribution of a learning system. In a deep network, the input to each layer is affected by the parameters of all the preceding layers, so even small changes to those parameters get amplified as they propagate through the network. The resulting change in the input distribution of the network's internal layers is known as internal covariate shift.

It is well established that networks converge faster if their inputs are whitened (i.e. zero mean, unit variance) and uncorrelated; internal covariate shift leads to just the opposite.
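
As a rough NumPy sketch of the transform the paper applies to each mini-batch (illustrative only, not the paper's pseudocode; gamma and beta are the learned scale and shift parameters):

import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # x: mini-batch activations of shape (batch_size, num_features)
    mean = x.mean(axis=0)                      # per-feature mini-batch mean
    var = x.var(axis=0)                        # per-feature mini-batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)    # normalize to zero mean, unit variance
    return gamma * x_hat + beta                # learned scale and shift restore expressiveness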

@beesandbombs
beesandbombs / slicedCube.pde
Created April 30, 2018 20:52
sliced cube
int[][] result;
float t, c;

// smoothstep easing: maps [0,1] to [0,1] with zero slope at both endpoints
float ease(float p) {
  return 3*p*p - 2*p*p*p;
}

// ease-in/out with adjustable sharpness g
float ease(float p, float g) {
  if (p < 0.5)
    return 0.5 * pow(2*p, g);
// stics, or “sticks”
int[][] result;
float t, c;

float ease(float p) {
  return 3*p*p - 2*p*p*p;
}

float ease(float p, float g) {